From: Kito Cheng
To: gcc-cvs@gcc.gnu.org
Subject: [gcc r13-5781] RISC-V: Add vmul.vx C API tests
X-Act-Checkin: gcc
X-Git-Author: Ju-Zhe Zhong
X-Git-Refname: refs/heads/master
X-Git-Oldrev: 76cd8e80058df1d349d88103a0ab73ec0dec29b6
X-Git-Newrev: ac843ce70e695959a2f3652c55449421f4958c64
Message-Id: <20230210112908.9F4FA385B501@sourceware.org>
Date: Fri, 10 Feb 2023 11:29:08 +0000 (GMT)

https://gcc.gnu.org/g:ac843ce70e695959a2f3652c55449421f4958c64

commit r13-5781-gac843ce70e695959a2f3652c55449421f4958c64
Author: Ju-Zhe Zhong
Date:   Fri Feb 3 15:15:08 2023 +0800

    RISC-V: Add vmul.vx C API tests

    gcc/testsuite/ChangeLog:

            * gcc.target/riscv/rvv/base/vmul_vx_m_rv32-1.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_m_rv32-2.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_m_rv32-3.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_m_rv64-1.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_m_rv64-2.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_m_rv64-3.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-1.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-2.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-3.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-1.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-2.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-3.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_rv32-1.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_rv32-2.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_rv32-3.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_rv64-1.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_rv64-2.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_rv64-3.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-1.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-2.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-3.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-1.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-2.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-3.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-1.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-2.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-3.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-1.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-2.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-3.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tumu_rv32-1.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tumu_rv32-2.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tumu_rv32-3.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tumu_rv64-1.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tumu_rv64-2.c: New test.
            * gcc.target/riscv/rvv/base/vmul_vx_tumu_rv64-3.c: New test.
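Each new file follows the same pattern: one small wrapper function per element type and LMUL combination that calls the corresponding __riscv_vmul_vx_* intrinsic, built at -O3 with instruction scheduling disabled so the vsetvli/vmul.vx sequences stay adjacent for the scan-assembler-times patterns. The -1 variants pass the runtime vl through, the -2 variants hard-code a vl of 31 (small enough for vsetivli), and the -3 variants hard-code 32 (which needs vsetvli). A minimal sketch of the masked form exercised by the diff below -- the file and function names here are illustrative only and are not part of the commit:

  /* example.c -- sketch only; compile with flags mirroring the tests'
     dg-options, e.g. -march=rv64gcv -mabi=lp64d -O3.  */
  #include "riscv_vector.h"

  /* Masked vector-scalar multiply: for each element selected by MASK,
     the result is op1[i] * op2 under the default policy of the plain
     _m intrinsic; the _tu/_tum/_tumu/_mu siblings added alongside make
     the tail/mask policies explicit.  */
  vint32m1_t
  example_vmul_i32m1_m (vbool32_t mask, vint32m1_t op1, int32_t op2, size_t vl)
  {
    return __riscv_vmul_vx_i32m1_m (mask, op1, op2, vl);
  }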
Diff: --- .../gcc.target/riscv/rvv/base/vmul_vx_m_rv32-1.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_m_rv32-2.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_m_rv32-3.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_m_rv64-1.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_m_rv64-2.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_m_rv64-3.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-1.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-2.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-3.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-1.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-2.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-3.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_rv32-1.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_rv32-2.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_rv32-3.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_rv64-1.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_rv64-2.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_rv64-3.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-1.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-2.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-3.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-1.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-2.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-3.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-1.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-2.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-3.c | 289 ++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-1.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-2.c | 292 +++++++++++++++++++++ .../gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-3.c | 292 +++++++++++++++++++++ .../riscv/rvv/base/vmul_vx_tumu_rv32-1.c | 289 ++++++++++++++++++++ .../riscv/rvv/base/vmul_vx_tumu_rv32-2.c | 289 ++++++++++++++++++++ .../riscv/rvv/base/vmul_vx_tumu_rv32-3.c | 289 ++++++++++++++++++++ .../riscv/rvv/base/vmul_vx_tumu_rv64-1.c | 292 +++++++++++++++++++++ .../riscv/rvv/base/vmul_vx_tumu_rv64-2.c | 292 +++++++++++++++++++++ .../riscv/rvv/base/vmul_vx_tumu_rv64-3.c | 292 +++++++++++++++++++++ 36 files changed, 10458 insertions(+) diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv32-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv32-1.c new file mode 100644 index 00000000000..054b1fb86a5 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv32-1.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_m(vbool64_t mask,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_m(mask,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_m(vbool32_t mask,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_m(mask,op1,op2,vl); +} + + +vint8mf2_t 
test___riscv_vmul_vx_i8mf2_m(vbool16_t mask,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_m(mask,op1,op2,vl); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_m(vbool8_t mask,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_m(mask,op1,op2,vl); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_m(vbool4_t mask,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_m(mask,op1,op2,vl); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_m(vbool2_t mask,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_m(mask,op1,op2,vl); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_m(vbool1_t mask,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_m(mask,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_m(vbool64_t mask,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_m(mask,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_m(vbool32_t mask,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_m(mask,op1,op2,vl); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_m(vbool16_t mask,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_m(mask,op1,op2,vl); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_m(vbool8_t mask,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_m(mask,op1,op2,vl); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_m(vbool4_t mask,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_m(mask,op1,op2,vl); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_m(vbool2_t mask,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_m(mask,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_m(vbool64_t mask,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_m(mask,op1,op2,vl); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_m(vbool32_t mask,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_m(mask,op1,op2,vl); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_m(vbool16_t mask,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_m(mask,op1,op2,vl); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_m(vbool8_t mask,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_m(mask,op1,op2,vl); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_m(vbool4_t mask,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_m(mask,op1,op2,vl); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_m(vbool64_t mask,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_m(mask,op1,op2,vl); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_m(vbool32_t mask,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_m(mask,op1,op2,vl); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_m(vbool16_t mask,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_m(mask,op1,op2,vl); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_m(vbool8_t mask,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_m(mask,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_m(vbool64_t mask,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_m(mask,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_m(vbool32_t mask,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_m(mask,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_m(vbool16_t mask,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_m(mask,op1,op2,vl); +} + + +vuint8m1_t 
test___riscv_vmul_vx_u8m1_m(vbool8_t mask,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_m(mask,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_m(vbool4_t mask,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_m(mask,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_m(vbool2_t mask,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_m(mask,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_m(vbool1_t mask,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_m(mask,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_m(vbool64_t mask,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_m(mask,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_m(vbool32_t mask,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_m(mask,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_m(vbool16_t mask,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_m(mask,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_m(vbool8_t mask,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_m(mask,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_m(vbool4_t mask,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_m(mask,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_m(vbool2_t mask,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_m(mask,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_m(vbool64_t mask,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_m(mask,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_m(vbool32_t mask,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_m(mask,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_m(vbool16_t mask,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_m(mask,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_m(vbool8_t mask,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_m(mask,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_m(vbool4_t mask,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_m(mask,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_m(vbool64_t mask,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_m(mask,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_m(vbool32_t mask,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_m(mask,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_m(vbool16_t mask,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_m(mask,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_m(vbool8_t mask,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_m(mask,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv32-2.c new file mode 100644 index 00000000000..79c7a5016ba --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_m(vbool64_t mask,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_m(mask,op1,op2,31); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_m(vbool32_t mask,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_m(mask,op1,op2,31); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_m(vbool16_t mask,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_m(mask,op1,op2,31); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_m(vbool8_t mask,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_m(mask,op1,op2,31); +} + + +vint8m2_t 
test___riscv_vmul_vx_i8m2_m(vbool4_t mask,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_m(mask,op1,op2,31); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_m(vbool2_t mask,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_m(mask,op1,op2,31); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_m(vbool1_t mask,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_m(mask,op1,op2,31); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_m(vbool64_t mask,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_m(mask,op1,op2,31); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_m(vbool32_t mask,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_m(mask,op1,op2,31); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_m(vbool16_t mask,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_m(mask,op1,op2,31); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_m(vbool8_t mask,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_m(mask,op1,op2,31); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_m(vbool4_t mask,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_m(mask,op1,op2,31); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_m(vbool2_t mask,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_m(mask,op1,op2,31); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_m(vbool64_t mask,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_m(mask,op1,op2,31); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_m(vbool32_t mask,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_m(mask,op1,op2,31); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_m(vbool16_t mask,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_m(mask,op1,op2,31); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_m(vbool8_t mask,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_m(mask,op1,op2,31); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_m(vbool4_t mask,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_m(mask,op1,op2,31); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_m(vbool64_t mask,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_m(mask,op1,op2,31); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_m(vbool32_t mask,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_m(mask,op1,op2,31); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_m(vbool16_t mask,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_m(mask,op1,op2,31); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_m(vbool8_t mask,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_m(mask,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_m(vbool64_t mask,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_m(mask,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_m(vbool32_t mask,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_m(mask,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_m(vbool16_t mask,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_m(mask,op1,op2,31); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_m(vbool8_t mask,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_m(mask,op1,op2,31); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_m(vbool4_t mask,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_m(mask,op1,op2,31); +} + + +vuint8m4_t 
test___riscv_vmul_vx_u8m4_m(vbool2_t mask,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_m(mask,op1,op2,31); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_m(vbool1_t mask,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_m(mask,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_m(vbool64_t mask,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_m(mask,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_m(vbool32_t mask,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_m(mask,op1,op2,31); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_m(vbool16_t mask,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_m(mask,op1,op2,31); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_m(vbool8_t mask,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_m(mask,op1,op2,31); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_m(vbool4_t mask,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_m(mask,op1,op2,31); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_m(vbool2_t mask,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_m(mask,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_m(vbool64_t mask,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_m(mask,op1,op2,31); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_m(vbool32_t mask,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_m(mask,op1,op2,31); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_m(vbool16_t mask,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_m(mask,op1,op2,31); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_m(vbool8_t mask,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_m(mask,op1,op2,31); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_m(vbool4_t mask,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_m(mask,op1,op2,31); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_m(vbool64_t mask,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_m(mask,op1,op2,31); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_m(vbool32_t mask,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_m(mask,op1,op2,31); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_m(vbool16_t mask,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_m(mask,op1,op2,31); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_m(vbool8_t mask,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_m(mask,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv32-3.c new file mode 100644 index 00000000000..e2e47047d9d --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_m(vbool64_t mask,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_m(mask,op1,op2,32); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_m(vbool32_t mask,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_m(mask,op1,op2,32); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_m(vbool16_t mask,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_m(mask,op1,op2,32); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_m(vbool8_t mask,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_m(mask,op1,op2,32); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_m(vbool4_t mask,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_m(mask,op1,op2,32); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_m(vbool2_t mask,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_m(mask,op1,op2,32); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_m(vbool1_t mask,vint8m8_t op1,int8_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_i8m8_m(mask,op1,op2,32); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_m(vbool64_t mask,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_m(mask,op1,op2,32); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_m(vbool32_t mask,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_m(mask,op1,op2,32); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_m(vbool16_t mask,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_m(mask,op1,op2,32); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_m(vbool8_t mask,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_m(mask,op1,op2,32); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_m(vbool4_t mask,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_m(mask,op1,op2,32); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_m(vbool2_t mask,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_m(mask,op1,op2,32); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_m(vbool64_t mask,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_m(mask,op1,op2,32); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_m(vbool32_t mask,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_m(mask,op1,op2,32); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_m(vbool16_t mask,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_m(mask,op1,op2,32); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_m(vbool8_t mask,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_m(mask,op1,op2,32); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_m(vbool4_t mask,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_m(mask,op1,op2,32); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_m(vbool64_t mask,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_m(mask,op1,op2,32); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_m(vbool32_t mask,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_m(mask,op1,op2,32); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_m(vbool16_t mask,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_m(mask,op1,op2,32); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_m(vbool8_t mask,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_m(mask,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_m(vbool64_t mask,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_m(mask,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_m(vbool32_t mask,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_m(mask,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_m(vbool16_t mask,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_m(mask,op1,op2,32); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_m(vbool8_t mask,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_m(mask,op1,op2,32); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_m(vbool4_t mask,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_m(mask,op1,op2,32); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_m(vbool2_t mask,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_m(mask,op1,op2,32); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_m(vbool1_t mask,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_m(mask,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_m(vbool64_t mask,vuint16mf4_t op1,uint16_t op2,size_t 
vl) +{ + return __riscv_vmul_vx_u16mf4_m(mask,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_m(vbool32_t mask,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_m(mask,op1,op2,32); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_m(vbool16_t mask,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_m(mask,op1,op2,32); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_m(vbool8_t mask,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_m(mask,op1,op2,32); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_m(vbool4_t mask,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_m(mask,op1,op2,32); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_m(vbool2_t mask,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_m(mask,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_m(vbool64_t mask,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_m(mask,op1,op2,32); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_m(vbool32_t mask,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_m(mask,op1,op2,32); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_m(vbool16_t mask,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_m(mask,op1,op2,32); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_m(vbool8_t mask,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_m(mask,op1,op2,32); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_m(vbool4_t mask,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_m(mask,op1,op2,32); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_m(vbool64_t mask,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_m(mask,op1,op2,32); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_m(vbool32_t mask,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_m(mask,op1,op2,32); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_m(vbool16_t mask,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_m(mask,op1,op2,32); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_m(vbool8_t mask,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_m(mask,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv64-1.c new file mode 100644 index 00000000000..ea52ca8df9a --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_m(vbool64_t mask,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_m(mask,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_m(vbool32_t mask,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_m(mask,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_m(vbool16_t mask,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_m(mask,op1,op2,vl); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_m(vbool8_t mask,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_m(mask,op1,op2,vl); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_m(vbool4_t mask,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_m(mask,op1,op2,vl); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_m(vbool2_t mask,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_m(mask,op1,op2,vl); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_m(vbool1_t mask,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_m(mask,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_m(vbool64_t mask,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_m(mask,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_m(vbool32_t mask,vint16mf2_t 
op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_m(mask,op1,op2,vl); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_m(vbool16_t mask,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_m(mask,op1,op2,vl); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_m(vbool8_t mask,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_m(mask,op1,op2,vl); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_m(vbool4_t mask,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_m(mask,op1,op2,vl); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_m(vbool2_t mask,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_m(mask,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_m(vbool64_t mask,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_m(mask,op1,op2,vl); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_m(vbool32_t mask,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_m(mask,op1,op2,vl); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_m(vbool16_t mask,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_m(mask,op1,op2,vl); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_m(vbool8_t mask,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_m(mask,op1,op2,vl); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_m(vbool4_t mask,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_m(mask,op1,op2,vl); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_m(vbool64_t mask,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_m(mask,op1,op2,vl); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_m(vbool32_t mask,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_m(mask,op1,op2,vl); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_m(vbool16_t mask,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_m(mask,op1,op2,vl); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_m(vbool8_t mask,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_m(mask,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_m(vbool64_t mask,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_m(mask,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_m(vbool32_t mask,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_m(mask,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_m(vbool16_t mask,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_m(mask,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_m(vbool8_t mask,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_m(mask,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_m(vbool4_t mask,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_m(mask,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_m(vbool2_t mask,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_m(mask,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_m(vbool1_t mask,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_m(mask,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_m(vbool64_t mask,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_m(mask,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_m(vbool32_t mask,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_m(mask,op1,op2,vl); +} + + +vuint16m1_t 
test___riscv_vmul_vx_u16m1_m(vbool16_t mask,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_m(mask,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_m(vbool8_t mask,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_m(mask,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_m(vbool4_t mask,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_m(mask,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_m(vbool2_t mask,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_m(mask,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_m(vbool64_t mask,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_m(mask,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_m(vbool32_t mask,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_m(mask,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_m(vbool16_t mask,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_m(mask,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_m(vbool8_t mask,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_m(mask,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_m(vbool4_t mask,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_m(mask,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_m(vbool64_t mask,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_m(mask,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_m(vbool32_t mask,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_m(mask,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_m(vbool16_t mask,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_m(mask,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_m(vbool8_t mask,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_m(mask,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv64-2.c new file mode 100644 index 00000000000..5fd8cb50dd6 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_m(vbool64_t mask,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_m(mask,op1,op2,31); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_m(vbool32_t mask,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_m(mask,op1,op2,31); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_m(vbool16_t mask,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_m(mask,op1,op2,31); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_m(vbool8_t mask,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_m(mask,op1,op2,31); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_m(vbool4_t mask,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_m(mask,op1,op2,31); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_m(vbool2_t mask,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_m(mask,op1,op2,31); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_m(vbool1_t mask,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_m(mask,op1,op2,31); +} + + 
+vint16mf4_t test___riscv_vmul_vx_i16mf4_m(vbool64_t mask,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_m(mask,op1,op2,31); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_m(vbool32_t mask,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_m(mask,op1,op2,31); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_m(vbool16_t mask,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_m(mask,op1,op2,31); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_m(vbool8_t mask,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_m(mask,op1,op2,31); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_m(vbool4_t mask,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_m(mask,op1,op2,31); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_m(vbool2_t mask,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_m(mask,op1,op2,31); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_m(vbool64_t mask,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_m(mask,op1,op2,31); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_m(vbool32_t mask,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_m(mask,op1,op2,31); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_m(vbool16_t mask,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_m(mask,op1,op2,31); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_m(vbool8_t mask,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_m(mask,op1,op2,31); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_m(vbool4_t mask,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_m(mask,op1,op2,31); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_m(vbool64_t mask,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_m(mask,op1,op2,31); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_m(vbool32_t mask,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_m(mask,op1,op2,31); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_m(vbool16_t mask,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_m(mask,op1,op2,31); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_m(vbool8_t mask,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_m(mask,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_m(vbool64_t mask,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_m(mask,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_m(vbool32_t mask,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_m(mask,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_m(vbool16_t mask,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_m(mask,op1,op2,31); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_m(vbool8_t mask,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_m(mask,op1,op2,31); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_m(vbool4_t mask,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_m(mask,op1,op2,31); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_m(vbool2_t mask,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_m(mask,op1,op2,31); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_m(vbool1_t mask,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_m(mask,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_m(vbool64_t mask,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_u16mf4_m(mask,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_m(vbool32_t mask,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_m(mask,op1,op2,31); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_m(vbool16_t mask,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_m(mask,op1,op2,31); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_m(vbool8_t mask,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_m(mask,op1,op2,31); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_m(vbool4_t mask,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_m(mask,op1,op2,31); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_m(vbool2_t mask,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_m(mask,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_m(vbool64_t mask,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_m(mask,op1,op2,31); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_m(vbool32_t mask,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_m(mask,op1,op2,31); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_m(vbool16_t mask,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_m(mask,op1,op2,31); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_m(vbool8_t mask,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_m(mask,op1,op2,31); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_m(vbool4_t mask,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_m(mask,op1,op2,31); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_m(vbool64_t mask,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_m(mask,op1,op2,31); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_m(vbool32_t mask,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_m(mask,op1,op2,31); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_m(vbool16_t mask,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_m(mask,op1,op2,31); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_m(vbool8_t mask,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_m(mask,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv64-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv64-3.c new file mode 100644 index 00000000000..218de6546bf --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_m_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_m(vbool64_t mask,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_m(mask,op1,op2,32); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_m(vbool32_t mask,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_m(mask,op1,op2,32); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_m(vbool16_t mask,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_m(mask,op1,op2,32); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_m(vbool8_t mask,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_m(mask,op1,op2,32); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_m(vbool4_t mask,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_m(mask,op1,op2,32); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_m(vbool2_t mask,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_m(mask,op1,op2,32); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_m(vbool1_t mask,vint8m8_t op1,int8_t 
op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_m(mask,op1,op2,32); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_m(vbool64_t mask,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_m(mask,op1,op2,32); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_m(vbool32_t mask,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_m(mask,op1,op2,32); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_m(vbool16_t mask,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_m(mask,op1,op2,32); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_m(vbool8_t mask,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_m(mask,op1,op2,32); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_m(vbool4_t mask,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_m(mask,op1,op2,32); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_m(vbool2_t mask,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_m(mask,op1,op2,32); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_m(vbool64_t mask,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_m(mask,op1,op2,32); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_m(vbool32_t mask,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_m(mask,op1,op2,32); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_m(vbool16_t mask,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_m(mask,op1,op2,32); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_m(vbool8_t mask,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_m(mask,op1,op2,32); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_m(vbool4_t mask,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_m(mask,op1,op2,32); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_m(vbool64_t mask,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_m(mask,op1,op2,32); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_m(vbool32_t mask,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_m(mask,op1,op2,32); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_m(vbool16_t mask,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_m(mask,op1,op2,32); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_m(vbool8_t mask,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_m(mask,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_m(vbool64_t mask,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_m(mask,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_m(vbool32_t mask,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_m(mask,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_m(vbool16_t mask,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_m(mask,op1,op2,32); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_m(vbool8_t mask,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_m(mask,op1,op2,32); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_m(vbool4_t mask,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_m(mask,op1,op2,32); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_m(vbool2_t mask,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_m(mask,op1,op2,32); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_m(vbool1_t mask,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_m(mask,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_m(vbool64_t 
mask,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_m(mask,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_m(vbool32_t mask,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_m(mask,op1,op2,32); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_m(vbool16_t mask,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_m(mask,op1,op2,32); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_m(vbool8_t mask,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_m(mask,op1,op2,32); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_m(vbool4_t mask,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_m(mask,op1,op2,32); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_m(vbool2_t mask,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_m(mask,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_m(vbool64_t mask,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_m(mask,op1,op2,32); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_m(vbool32_t mask,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_m(mask,op1,op2,32); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_m(vbool16_t mask,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_m(mask,op1,op2,32); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_m(vbool8_t mask,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_m(mask,op1,op2,32); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_m(vbool4_t mask,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_m(mask,op1,op2,32); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_m(vbool64_t mask,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_m(mask,op1,op2,32); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_m(vbool32_t mask,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_m(mask,op1,op2,32); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_m(vbool16_t mask,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_m(mask,op1,op2,32); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_m(vbool8_t mask,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_m(mask,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-1.c new file mode 100644 index 00000000000..2848fab0fd1 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-1.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_mu(mask,merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_mu(mask,merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_mu(mask,merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_mu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_mu(mask,merge,op1,op2,vl); +} + + +vint8m2_t 
test___riscv_vmul_vx_i8m2_mu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_mu(mask,merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_mu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_mu(mask,merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_mu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_mu(mask,merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_mu(mask,merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_mu(mask,merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_mu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_mu(mask,merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_mu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_mu(mask,merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_mu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_mu(mask,merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_mu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_mu(mask,merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_mu(mask,merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_mu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_mu(mask,merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_mu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_mu(mask,merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_mu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_mu(mask,merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_mu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_mu(mask,merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_mu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_mu(mask,merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_mu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_mu(mask,merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_mu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_mu(mask,merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_mu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_mu(mask,merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_mu(mask,merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_u8mf4_mu(mask,merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_mu(mask,merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_mu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_mu(mask,merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_mu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_mu(mask,merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_mu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_mu(mask,merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_mu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_mu(mask,merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_mu(mask,merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_mu(mask,merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_mu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_mu(mask,merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_mu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_mu(mask,merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_mu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_mu(mask,merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_mu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_mu(mask,merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_mu(mask,merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_mu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_mu(mask,merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_mu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_mu(mask,merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_mu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_mu(mask,merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_mu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_mu(mask,merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_mu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_mu(mask,merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_mu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_mu(mask,merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_mu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_mu(mask,merge,op1,op2,vl); +} + + +vuint64m8_t 
test___riscv_vmul_vx_u64m8_mu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_mu(mask,merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-2.c new file mode 100644 index 00000000000..f954a6a0d56 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + 
+vint8mf8_t test___riscv_vmul_vx_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_mu(mask,merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_mu(mask,merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_mu(mask,merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_mu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_mu(mask,merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_mu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_mu(mask,merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_mu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_mu(mask,merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_mu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_mu(mask,merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_mu(mask,merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_mu(mask,merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_mu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_mu(mask,merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_mu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_mu(mask,merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_mu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_mu(mask,merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_mu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_mu(mask,merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_mu(mask,merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_mu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_mu(mask,merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_mu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_mu(mask,merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_mu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_mu(mask,merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_mu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_mu(mask,merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_mu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_mu(mask,merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_mu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_i64m2_mu(mask,merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_mu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_mu(mask,merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_mu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_mu(mask,merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_mu(mask,merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_mu(mask,merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_mu(mask,merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_mu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_mu(mask,merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_mu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_mu(mask,merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_mu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_mu(mask,merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_mu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_mu(mask,merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_mu(mask,merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_mu(mask,merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_mu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_mu(mask,merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_mu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_mu(mask,merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_mu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_mu(mask,merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_mu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_mu(mask,merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_mu(mask,merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_mu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_mu(mask,merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_mu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_mu(mask,merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_mu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_mu(mask,merge,op1,op2,31); +} + + +vuint32m8_t 
test___riscv_vmul_vx_u32m8_mu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_mu(mask,merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_mu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_mu(mask,merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_mu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_mu(mask,merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_mu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_mu(mask,merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_mu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_mu(mask,merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-3.c new file mode 100644 index 00000000000..354d221a12c --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_mu(mask,merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_mu(mask,merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_mu(mask,merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_mu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_mu(mask,merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_mu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_mu(mask,merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_mu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_mu(mask,merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_mu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_mu(mask,merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_mu(mask,merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_mu(mask,merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_mu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_mu(mask,merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_mu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_mu(mask,merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_mu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_mu(mask,merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_mu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_mu(mask,merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_mu(mask,merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_mu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_mu(mask,merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_mu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_mu(mask,merge,op1,op2,32); +} + + +vint32m4_t 
test___riscv_vmul_vx_i32m4_mu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_mu(mask,merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_mu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_mu(mask,merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_mu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_mu(mask,merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_mu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_mu(mask,merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_mu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_mu(mask,merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_mu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_mu(mask,merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_mu(mask,merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_mu(mask,merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_mu(mask,merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_mu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_mu(mask,merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_mu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_mu(mask,merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_mu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_mu(mask,merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_mu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_mu(mask,merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_mu(mask,merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_mu(mask,merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_mu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_mu(mask,merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_mu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_mu(mask,merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_mu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_mu(mask,merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_mu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_mu(mask,merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) 
+{ + return __riscv_vmul_vx_u32mf2_mu(mask,merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_mu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_mu(mask,merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_mu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_mu(mask,merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_mu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_mu(mask,merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_mu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_mu(mask,merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_mu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_mu(mask,merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_mu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_mu(mask,merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_mu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_mu(mask,merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_mu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_mu(mask,merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-1.c new file mode 100644 index 00000000000..c7ea49ff3e8 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_mu(mask,merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_mu(mask,merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_mu(mask,merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_mu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_mu(mask,merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_mu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_mu(mask,merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_mu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_mu(mask,merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_mu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_mu(mask,merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_mu(mask,merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_mu(mask,merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_mu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_mu(mask,merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_mu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_mu(mask,merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_mu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_i16m4_mu(mask,merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_mu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_mu(mask,merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_mu(mask,merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_mu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_mu(mask,merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_mu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_mu(mask,merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_mu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_mu(mask,merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_mu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_mu(mask,merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_mu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_mu(mask,merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_mu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_mu(mask,merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_mu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_mu(mask,merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_mu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_mu(mask,merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_mu(mask,merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_mu(mask,merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_mu(mask,merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_mu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_mu(mask,merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_mu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_mu(mask,merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_mu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_mu(mask,merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_mu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_mu(mask,merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_mu(mask,merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_mu(mask,merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_mu(vbool16_t 
mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_mu(mask,merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_mu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_mu(mask,merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_mu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_mu(mask,merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_mu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_mu(mask,merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_mu(mask,merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_mu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_mu(mask,merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_mu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_mu(mask,merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_mu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_mu(mask,merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_mu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_mu(mask,merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_mu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_mu(mask,merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_mu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_mu(mask,merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_mu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_mu(mask,merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_mu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_mu(mask,merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-2.c new file mode 100644 index 00000000000..5ebae1b209c --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_mu(mask,merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_mu(mask,merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_mu(mask,merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_mu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_mu(mask,merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_mu(vbool4_t mask,vint8m2_t 
merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_mu(mask,merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_mu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_mu(mask,merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_mu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_mu(mask,merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_mu(mask,merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_mu(mask,merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_mu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_mu(mask,merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_mu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_mu(mask,merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_mu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_mu(mask,merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_mu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_mu(mask,merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_mu(mask,merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_mu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_mu(mask,merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_mu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_mu(mask,merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_mu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_mu(mask,merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_mu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_mu(mask,merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_mu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_mu(mask,merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_mu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_mu(mask,merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_mu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_mu(mask,merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_mu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_mu(mask,merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_mu(mask,merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_mu(mask,merge,op1,op2,31); +} + + +vuint8mf2_t 
test___riscv_vmul_vx_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_mu(mask,merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_mu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_mu(mask,merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_mu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_mu(mask,merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_mu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_mu(mask,merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_mu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_mu(mask,merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_mu(mask,merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_mu(mask,merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_mu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_mu(mask,merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_mu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_mu(mask,merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_mu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_mu(mask,merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_mu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_mu(mask,merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_mu(mask,merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_mu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_mu(mask,merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_mu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_mu(mask,merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_mu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_mu(mask,merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_mu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_mu(mask,merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_mu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_mu(mask,merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_mu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_mu(mask,merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_mu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_mu(mask,merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_mu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t 
op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_mu(mask,merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-3.c 
b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-3.c new file mode 100644 index 00000000000..8b5729059ae --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_mu_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_mu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_mu(mask,merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_mu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_mu(mask,merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_mu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_mu(mask,merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_mu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_mu(mask,merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_mu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_mu(mask,merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_mu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_mu(mask,merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_mu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_mu(mask,merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_mu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_mu(mask,merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_mu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_mu(mask,merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_mu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_mu(mask,merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_mu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_mu(mask,merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_mu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_mu(mask,merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_mu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_mu(mask,merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_mu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_mu(mask,merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_mu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_mu(mask,merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_mu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_mu(mask,merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_mu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_mu(mask,merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_mu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_i32m8_mu(mask,merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_mu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_mu(mask,merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_mu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_mu(mask,merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_mu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_mu(mask,merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_mu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_mu(mask,merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_mu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_mu(mask,merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_mu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_mu(mask,merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_mu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_mu(mask,merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_mu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_mu(mask,merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_mu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_mu(mask,merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_mu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_mu(mask,merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_mu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_mu(mask,merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_mu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_mu(mask,merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_mu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_mu(mask,merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_mu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_mu(mask,merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_mu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_mu(mask,merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_mu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_mu(mask,merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_mu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_mu(mask,merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_mu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_mu(mask,merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_mu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_mu(mask,merge,op1,op2,32); +} + + +vuint32m2_t 
test___riscv_vmul_vx_u32m2_mu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_mu(mask,merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_mu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_mu(mask,merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_mu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_mu(mask,merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_mu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_mu(mask,merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_mu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_mu(mask,merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_mu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_mu(mask,merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_mu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_mu(mask,merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv32-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv32-1.c new file mode 100644 index 00000000000..0b8a9bc2ae9 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv32-1.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8(vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8(op1,op2,vl); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4(vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4(op1,op2,vl); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2(vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2(op1,op2,vl); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1(vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1(op1,op2,vl); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2(vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2(op1,op2,vl); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4(vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4(op1,op2,vl); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8(vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8(op1,op2,vl); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4(vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4(op1,op2,vl); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2(vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2(op1,op2,vl); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1(vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1(op1,op2,vl); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2(vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2(op1,op2,vl); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4(vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4(op1,op2,vl); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8(vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8(op1,op2,vl); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2(vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2(op1,op2,vl); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1(vint32m1_t op1,int32_t op2,size_t vl) 
+{ + return __riscv_vmul_vx_i32m1(op1,op2,vl); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2(vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2(op1,op2,vl); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4(vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4(op1,op2,vl); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8(vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8(op1,op2,vl); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1(vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1(op1,op2,vl); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2(vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2(op1,op2,vl); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4(vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4(op1,op2,vl); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8(vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8(op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8(vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8(op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4(vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4(op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2(vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2(op1,op2,vl); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1(vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1(op1,op2,vl); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2(vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2(op1,op2,vl); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4(vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4(op1,op2,vl); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8(vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8(op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4(vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4(op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2(vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2(op1,op2,vl); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1(vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1(op1,op2,vl); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2(vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2(op1,op2,vl); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4(vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4(op1,op2,vl); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8(vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8(op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2(vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2(op1,op2,vl); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1(vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1(op1,op2,vl); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2(vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2(op1,op2,vl); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4(vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4(op1,op2,vl); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8(vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8(op1,op2,vl); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1(vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1(op1,op2,vl); +} + + +vuint64m2_t 
test___riscv_vmul_vx_u64m2(vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2(op1,op2,vl); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4(vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4(op1,op2,vl); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8(vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8(op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv32-2.c new file mode 100644 index 00000000000..465c09a8edc --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options 
"-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8(vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8(op1,op2,31); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4(vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4(op1,op2,31); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2(vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2(op1,op2,31); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1(vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1(op1,op2,31); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2(vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2(op1,op2,31); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4(vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4(op1,op2,31); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8(vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8(op1,op2,31); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4(vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4(op1,op2,31); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2(vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2(op1,op2,31); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1(vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1(op1,op2,31); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2(vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2(op1,op2,31); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4(vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4(op1,op2,31); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8(vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8(op1,op2,31); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2(vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2(op1,op2,31); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1(vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1(op1,op2,31); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2(vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2(op1,op2,31); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4(vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4(op1,op2,31); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8(vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8(op1,op2,31); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1(vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1(op1,op2,31); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2(vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2(op1,op2,31); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4(vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4(op1,op2,31); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8(vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8(op1,op2,31); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8(vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8(op1,op2,31); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4(vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4(op1,op2,31); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2(vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2(op1,op2,31); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1(vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1(op1,op2,31); +} + + +vuint8m2_t 
test___riscv_vmul_vx_u8m2(vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2(op1,op2,31); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4(vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4(op1,op2,31); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8(vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8(op1,op2,31); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4(vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4(op1,op2,31); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2(vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2(op1,op2,31); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1(vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1(op1,op2,31); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2(vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2(op1,op2,31); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4(vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4(op1,op2,31); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8(vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8(op1,op2,31); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2(vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2(op1,op2,31); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1(vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1(op1,op2,31); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2(vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2(op1,op2,31); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4(vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4(op1,op2,31); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8(vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8(op1,op2,31); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1(vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1(op1,op2,31); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2(vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2(op1,op2,31); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4(vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4(op1,op2,31); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8(vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8(op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv32-3.c new file mode 100644 index 00000000000..e561a039961 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8(vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8(op1,op2,32); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4(vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4(op1,op2,32); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2(vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2(op1,op2,32); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1(vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1(op1,op2,32); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2(vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2(op1,op2,32); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4(vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4(op1,op2,32); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8(vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8(op1,op2,32); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4(vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4(op1,op2,32); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2(vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2(op1,op2,32); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1(vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1(op1,op2,32); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2(vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2(op1,op2,32); +} + + +vint16m4_t 
test___riscv_vmul_vx_i16m4(vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4(op1,op2,32); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8(vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8(op1,op2,32); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2(vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2(op1,op2,32); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1(vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1(op1,op2,32); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2(vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2(op1,op2,32); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4(vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4(op1,op2,32); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8(vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8(op1,op2,32); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1(vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1(op1,op2,32); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2(vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2(op1,op2,32); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4(vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4(op1,op2,32); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8(vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8(op1,op2,32); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8(vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8(op1,op2,32); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4(vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4(op1,op2,32); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2(vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2(op1,op2,32); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1(vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1(op1,op2,32); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2(vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2(op1,op2,32); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4(vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4(op1,op2,32); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8(vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8(op1,op2,32); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4(vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4(op1,op2,32); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2(vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2(op1,op2,32); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1(vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1(op1,op2,32); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2(vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2(op1,op2,32); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4(vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4(op1,op2,32); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8(vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8(op1,op2,32); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2(vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2(op1,op2,32); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1(vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1(op1,op2,32); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2(vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_u32m2(op1,op2,32); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4(vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4(op1,op2,32); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8(vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8(op1,op2,32); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1(vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1(op1,op2,32); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2(vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2(op1,op2,32); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4(vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4(op1,op2,32); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8(vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8(op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv64-1.c new file mode 100644 index 00000000000..dc05162309d --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8(vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8(op1,op2,vl); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4(vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4(op1,op2,vl); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2(vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2(op1,op2,vl); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1(vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1(op1,op2,vl); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2(vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2(op1,op2,vl); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4(vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4(op1,op2,vl); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8(vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8(op1,op2,vl); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4(vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4(op1,op2,vl); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2(vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2(op1,op2,vl); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1(vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1(op1,op2,vl); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2(vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2(op1,op2,vl); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4(vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4(op1,op2,vl); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8(vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8(op1,op2,vl); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2(vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2(op1,op2,vl); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1(vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1(op1,op2,vl); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2(vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2(op1,op2,vl); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4(vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4(op1,op2,vl); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8(vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8(op1,op2,vl); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1(vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1(op1,op2,vl); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2(vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2(op1,op2,vl); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4(vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4(op1,op2,vl); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8(vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8(op1,op2,vl); +} + + +vuint8mf8_t 
test___riscv_vmul_vx_u8mf8(vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8(op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4(vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4(op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2(vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2(op1,op2,vl); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1(vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1(op1,op2,vl); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2(vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2(op1,op2,vl); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4(vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4(op1,op2,vl); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8(vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8(op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4(vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4(op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2(vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2(op1,op2,vl); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1(vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1(op1,op2,vl); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2(vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2(op1,op2,vl); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4(vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4(op1,op2,vl); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8(vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8(op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2(vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2(op1,op2,vl); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1(vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1(op1,op2,vl); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2(vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2(op1,op2,vl); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4(vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4(op1,op2,vl); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8(vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8(op1,op2,vl); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1(vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1(op1,op2,vl); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2(vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2(op1,op2,vl); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4(vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4(op1,op2,vl); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8(vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8(op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { 
scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv64-2.c new file mode 100644 index 00000000000..a75e275df84 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8(vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8(op1,op2,31); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4(vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4(op1,op2,31); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2(vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2(op1,op2,31); +} + + 
+vint8m1_t test___riscv_vmul_vx_i8m1(vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1(op1,op2,31); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2(vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2(op1,op2,31); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4(vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4(op1,op2,31); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8(vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8(op1,op2,31); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4(vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4(op1,op2,31); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2(vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2(op1,op2,31); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1(vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1(op1,op2,31); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2(vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2(op1,op2,31); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4(vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4(op1,op2,31); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8(vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8(op1,op2,31); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2(vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2(op1,op2,31); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1(vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1(op1,op2,31); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2(vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2(op1,op2,31); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4(vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4(op1,op2,31); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8(vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8(op1,op2,31); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1(vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1(op1,op2,31); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2(vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2(op1,op2,31); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4(vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4(op1,op2,31); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8(vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8(op1,op2,31); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8(vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8(op1,op2,31); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4(vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4(op1,op2,31); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2(vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2(op1,op2,31); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1(vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1(op1,op2,31); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2(vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2(op1,op2,31); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4(vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4(op1,op2,31); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8(vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8(op1,op2,31); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4(vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4(op1,op2,31); +} + + +vuint16mf2_t 
test___riscv_vmul_vx_u16mf2(vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2(op1,op2,31); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1(vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1(op1,op2,31); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2(vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2(op1,op2,31); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4(vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4(op1,op2,31); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8(vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8(op1,op2,31); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2(vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2(op1,op2,31); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1(vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1(op1,op2,31); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2(vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2(op1,op2,31); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4(vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4(op1,op2,31); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8(vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8(op1,op2,31); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1(vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1(op1,op2,31); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2(vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2(op1,op2,31); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4(vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4(op1,op2,31); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8(vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8(op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv64-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv64-3.c new file mode 100644 index 00000000000..618b885a953 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8(vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8(op1,op2,32); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4(vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4(op1,op2,32); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2(vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2(op1,op2,32); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1(vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1(op1,op2,32); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2(vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2(op1,op2,32); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4(vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4(op1,op2,32); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8(vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8(op1,op2,32); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4(vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4(op1,op2,32); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2(vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2(op1,op2,32); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1(vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1(op1,op2,32); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2(vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2(op1,op2,32); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4(vint16m4_t op1,int16_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_i16m4(op1,op2,32); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8(vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8(op1,op2,32); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2(vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2(op1,op2,32); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1(vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1(op1,op2,32); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2(vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2(op1,op2,32); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4(vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4(op1,op2,32); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8(vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8(op1,op2,32); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1(vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1(op1,op2,32); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2(vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2(op1,op2,32); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4(vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4(op1,op2,32); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8(vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8(op1,op2,32); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8(vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8(op1,op2,32); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4(vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4(op1,op2,32); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2(vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2(op1,op2,32); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1(vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1(op1,op2,32); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2(vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2(op1,op2,32); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4(vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4(op1,op2,32); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8(vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8(op1,op2,32); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4(vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4(op1,op2,32); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2(vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2(op1,op2,32); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1(vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1(op1,op2,32); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2(vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2(op1,op2,32); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4(vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4(op1,op2,32); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8(vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8(op1,op2,32); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2(vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2(op1,op2,32); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1(vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1(op1,op2,32); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2(vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2(op1,op2,32); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4(vuint32m4_t op1,uint32_t 
op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4(op1,op2,32); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8(vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8(op1,op2,32); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1(vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1(op1,op2,32); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2(vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2(op1,op2,32); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4(vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4(op1,op2,32); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8(vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8(op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*t[au],\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-1.c new file mode 100644 index 00000000000..18d8e9af624 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-1.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tu(vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tu(merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tu(vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tu(merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tu(vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tu(merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tu(vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tu(merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tu(vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tu(merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tu(vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tu(merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tu(vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tu(merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tu(vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tu(merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tu(vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tu(merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tu(vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tu(merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tu(vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tu(merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tu(vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tu(merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tu(vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tu(merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tu(vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tu(merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tu(vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tu(merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tu(vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tu(merge,op1,op2,vl); 
+} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tu(vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tu(merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tu(vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tu(merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tu(vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tu(merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tu(vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tu(merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tu(vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tu(merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tu(vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tu(merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tu(vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tu(merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tu(vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tu(merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tu(vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tu(merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tu(vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tu(merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tu(vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tu(merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tu(vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tu(merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tu(vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tu(merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tu(vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tu(merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tu(vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tu(merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tu(vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tu(merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tu(vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tu(merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tu(vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tu(merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tu(vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tu(merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tu(vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tu(merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tu(vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tu(merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tu(vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_u32m2_tu(merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tu(vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tu(merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tu(vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tu(merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tu(vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tu(merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tu(vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tu(merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tu(vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tu(merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tu(vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tu(merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-2.c new file mode 100644 index 00000000000..eabe4b85348 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tu(vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tu(merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tu(vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tu(merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tu(vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tu(merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tu(vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tu(merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tu(vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tu(merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tu(vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tu(merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tu(vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tu(merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tu(vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tu(merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tu(vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tu(merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tu(vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tu(merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tu(vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tu(merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tu(vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tu(merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tu(vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tu(merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tu(vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tu(merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tu(vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tu(merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tu(vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tu(merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tu(vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tu(merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tu(vint32m8_t 
merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tu(merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tu(vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tu(merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tu(vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tu(merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tu(vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tu(merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tu(vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tu(merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tu(vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tu(merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tu(vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tu(merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tu(vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tu(merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tu(vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tu(merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tu(vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tu(merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tu(vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tu(merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tu(vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tu(merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tu(vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tu(merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tu(vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tu(merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tu(vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tu(merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tu(vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tu(merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tu(vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tu(merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tu(vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tu(merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tu(vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tu(merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tu(vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tu(merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tu(vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tu(merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tu(vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tu(merge,op1,op2,31); +} + + +vuint32m8_t 
test___riscv_vmul_vx_u32m8_tu(vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tu(merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tu(vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tu(merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tu(vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tu(merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tu(vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tu(merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tu(vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tu(merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git 
a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-3.c new file mode 100644 index 00000000000..c7c64ef21c4 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tu(vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tu(merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tu(vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tu(merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tu(vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tu(merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tu(vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tu(merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tu(vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tu(merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tu(vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tu(merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tu(vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tu(merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tu(vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tu(merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tu(vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tu(merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tu(vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tu(merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tu(vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tu(merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tu(vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tu(merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tu(vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tu(merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tu(vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tu(merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tu(vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tu(merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tu(vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tu(merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tu(vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tu(merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tu(vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tu(merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tu(vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tu(merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tu(vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ 
+ return __riscv_vmul_vx_i64m2_tu(merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tu(vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tu(merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tu(vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tu(merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tu(vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tu(merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tu(vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tu(merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tu(vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tu(merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tu(vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tu(merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tu(vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tu(merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tu(vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tu(merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tu(vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tu(merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tu(vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tu(merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tu(vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tu(merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tu(vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tu(merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tu(vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tu(merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tu(vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tu(merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tu(vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tu(merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tu(vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tu(merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tu(vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tu(merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tu(vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tu(merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tu(vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tu(merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tu(vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tu(merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tu(vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tu(merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tu(vuint64m2_t 
merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tu(merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tu(vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tu(merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tu(vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tu(merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-1.c new file mode 100644 index 00000000000..da877e10cde --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options 
"-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tu(vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tu(merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tu(vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tu(merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tu(vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tu(merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tu(vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tu(merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tu(vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tu(merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tu(vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tu(merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tu(vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tu(merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tu(vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tu(merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tu(vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tu(merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tu(vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tu(merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tu(vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tu(merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tu(vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tu(merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tu(vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tu(merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tu(vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tu(merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tu(vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tu(merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tu(vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tu(merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tu(vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tu(merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tu(vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tu(merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tu(vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tu(merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tu(vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tu(merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tu(vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tu(merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tu(vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) 
+{ + return __riscv_vmul_vx_i64m8_tu(merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tu(vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tu(merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tu(vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tu(merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tu(vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tu(merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tu(vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tu(merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tu(vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tu(merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tu(vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tu(merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tu(vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tu(merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tu(vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tu(merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tu(vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tu(merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tu(vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tu(merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tu(vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tu(merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tu(vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tu(merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tu(vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tu(merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tu(vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tu(merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tu(vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tu(merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tu(vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tu(merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tu(vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tu(merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tu(vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tu(merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tu(vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tu(merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tu(vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tu(merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tu(vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tu(merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tu(vuint64m8_t 
merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tu(merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-2.c new file 
mode 100644 index 00000000000..2f5617fc7df --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tu(vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tu(merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tu(vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tu(merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tu(vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tu(merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tu(vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tu(merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tu(vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tu(merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tu(vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tu(merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tu(vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tu(merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tu(vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tu(merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tu(vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tu(merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tu(vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tu(merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tu(vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tu(merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tu(vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tu(merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tu(vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tu(merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tu(vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tu(merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tu(vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tu(merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tu(vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tu(merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tu(vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tu(merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tu(vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tu(merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tu(vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tu(merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tu(vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tu(merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tu(vint64m4_t merge,vint64m4_t 
op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tu(merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tu(vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tu(merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tu(vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tu(merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tu(vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tu(merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tu(vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tu(merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tu(vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tu(merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tu(vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tu(merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tu(vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tu(merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tu(vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tu(merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tu(vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tu(merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tu(vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tu(merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tu(vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tu(merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tu(vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tu(merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tu(vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tu(merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tu(vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tu(merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tu(vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tu(merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tu(vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tu(merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tu(vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tu(merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tu(vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tu(merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tu(vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tu(merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tu(vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tu(merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tu(vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tu(merge,op1,op2,31); +} + + +vuint64m4_t 
test___riscv_vmul_vx_u64m4_tu(vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tu(merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tu(vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tu(merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ diff --git 
a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-3.c new file mode 100644 index 00000000000..aba66651686 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tu_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tu(vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tu(merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tu(vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tu(merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tu(vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tu(merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tu(vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tu(merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tu(vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tu(merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tu(vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tu(merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tu(vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tu(merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tu(vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tu(merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tu(vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tu(merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tu(vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tu(merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tu(vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tu(merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tu(vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tu(merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tu(vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tu(merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tu(vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tu(merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tu(vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tu(merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tu(vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tu(merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tu(vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tu(merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tu(vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tu(merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tu(vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tu(merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tu(vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ 
+ return __riscv_vmul_vx_i64m2_tu(merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tu(vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tu(merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tu(vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tu(merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tu(vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tu(merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tu(vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tu(merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tu(vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tu(merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tu(vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tu(merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tu(vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tu(merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tu(vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tu(merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tu(vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tu(merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tu(vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tu(merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tu(vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tu(merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tu(vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tu(merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tu(vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tu(merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tu(vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tu(merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tu(vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tu(merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tu(vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tu(merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tu(vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tu(merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tu(vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tu(merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tu(vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tu(merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tu(vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tu(merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tu(vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tu(merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tu(vuint64m2_t 
merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tu(merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tu(vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tu(merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tu(vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tu(merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-1.c new file mode 100644 index 00000000000..33eeeca5379 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-1.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tum(mask,merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tum(mask,merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tum(mask,merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tum(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tum(mask,merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tum(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tum(mask,merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tum(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tum(mask,merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tum(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tum(mask,merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tum(mask,merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tum(mask,merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tum(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tum(mask,merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tum(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tum(mask,merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tum(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tum(mask,merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tum(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tum(mask,merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tum(mask,merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tum(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tum(mask,merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tum(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_i32m2_tum(mask,merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tum(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tum(mask,merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tum(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tum(mask,merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tum(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tum(mask,merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tum(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tum(mask,merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tum(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tum(mask,merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tum(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tum(mask,merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tum(mask,merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tum(mask,merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tum(mask,merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tum(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tum(mask,merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tum(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tum(mask,merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tum(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tum(mask,merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tum(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tum(mask,merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tum(mask,merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tum(mask,merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tum(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tum(mask,merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tum(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tum(mask,merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tum(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tum(mask,merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tum(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tum(mask,merge,op1,op2,vl); +} + + +vuint32mf2_t 
test___riscv_vmul_vx_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tum(mask,merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tum(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tum(mask,merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tum(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tum(mask,merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tum(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tum(mask,merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tum(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tum(mask,merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tum(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tum(mask,merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tum(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tum(mask,merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tum(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tum(mask,merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tum(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tum(mask,merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } 
*/ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-2.c new file mode 100644 index 00000000000..ce85103b857 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tum(mask,merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tum(mask,merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tum(mask,merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tum(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tum(mask,merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tum(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tum(mask,merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tum(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tum(mask,merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tum(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tum(mask,merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tum(mask,merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tum(mask,merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tum(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tum(mask,merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tum(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tum(mask,merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tum(vbool4_t mask,vint16m4_t 
merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tum(mask,merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tum(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tum(mask,merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tum(mask,merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tum(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tum(mask,merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tum(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tum(mask,merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tum(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tum(mask,merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tum(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tum(mask,merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tum(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tum(mask,merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tum(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tum(mask,merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tum(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tum(mask,merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tum(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tum(mask,merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tum(mask,merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tum(mask,merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tum(mask,merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tum(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tum(mask,merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tum(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tum(mask,merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tum(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tum(mask,merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tum(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tum(mask,merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tum(mask,merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_u16mf2_tum(mask,merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tum(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tum(mask,merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tum(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tum(mask,merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tum(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tum(mask,merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tum(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tum(mask,merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tum(mask,merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tum(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tum(mask,merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tum(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tum(mask,merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tum(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tum(mask,merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tum(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tum(mask,merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tum(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tum(mask,merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tum(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tum(mask,merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tum(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tum(mask,merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tum(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tum(mask,merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { 
scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-3.c new file mode 100644 index 00000000000..9e4c9ac0ae8 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tum(mask,merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tum(mask,merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tum(mask,merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tum(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tum(mask,merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tum(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tum(mask,merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tum(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tum(mask,merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tum(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tum(mask,merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t 
vl) +{ + return __riscv_vmul_vx_i16mf4_tum(mask,merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tum(mask,merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tum(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tum(mask,merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tum(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tum(mask,merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tum(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tum(mask,merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tum(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tum(mask,merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tum(mask,merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tum(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tum(mask,merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tum(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tum(mask,merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tum(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tum(mask,merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tum(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tum(mask,merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tum(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tum(mask,merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tum(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tum(mask,merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tum(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tum(mask,merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tum(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tum(mask,merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tum(mask,merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tum(mask,merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tum(mask,merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tum(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tum(mask,merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tum(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tum(mask,merge,op1,op2,32); +} + + +vuint8m4_t 
test___riscv_vmul_vx_u8m4_tum(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tum(mask,merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tum(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tum(mask,merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tum(mask,merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tum(mask,merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tum(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tum(mask,merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tum(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tum(mask,merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tum(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tum(mask,merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tum(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tum(mask,merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tum(mask,merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tum(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tum(mask,merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tum(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tum(mask,merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tum(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tum(mask,merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tum(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tum(mask,merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tum(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tum(mask,merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tum(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tum(mask,merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tum(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tum(mask,merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tum(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tum(mask,merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-1.c new file mode 100644 index 00000000000..63c8ef72d97 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tum(mask,merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tum(mask,merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_i8mf2_tum(mask,merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tum(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tum(mask,merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tum(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tum(mask,merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tum(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tum(mask,merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tum(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tum(mask,merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tum(mask,merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tum(mask,merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tum(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tum(mask,merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tum(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tum(mask,merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tum(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tum(mask,merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tum(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tum(mask,merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tum(mask,merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tum(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tum(mask,merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tum(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tum(mask,merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tum(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tum(mask,merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tum(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tum(mask,merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tum(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tum(mask,merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tum(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tum(mask,merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tum(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tum(mask,merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tum(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tum(mask,merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tum(vbool64_t 
mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tum(mask,merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tum(mask,merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tum(mask,merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tum(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tum(mask,merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tum(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tum(mask,merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tum(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tum(mask,merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tum(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tum(mask,merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tum(mask,merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tum(mask,merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tum(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tum(mask,merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tum(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tum(mask,merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tum(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tum(mask,merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tum(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tum(mask,merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tum(mask,merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tum(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tum(mask,merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tum(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tum(mask,merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tum(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tum(mask,merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tum(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tum(mask,merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tum(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tum(mask,merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tum(vbool32_t mask,vuint64m2_t merge,vuint64m2_t 
op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tum(mask,merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tum(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tum(mask,merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tum(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tum(mask,merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-2.c new file mode 100644 index 00000000000..6835b128848 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tum(mask,merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tum(mask,merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tum(mask,merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tum(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tum(mask,merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tum(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tum(mask,merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tum(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tum(mask,merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tum(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tum(mask,merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tum(mask,merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tum(mask,merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tum(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tum(mask,merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tum(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tum(mask,merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tum(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tum(mask,merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tum(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tum(mask,merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tum(mask,merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tum(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_i32m1_tum(mask,merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tum(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tum(mask,merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tum(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tum(mask,merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tum(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tum(mask,merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tum(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tum(mask,merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tum(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tum(mask,merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tum(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tum(mask,merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tum(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tum(mask,merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tum(mask,merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tum(mask,merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tum(mask,merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tum(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tum(mask,merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tum(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tum(mask,merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tum(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tum(mask,merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tum(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tum(mask,merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tum(mask,merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tum(mask,merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tum(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tum(mask,merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tum(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tum(mask,merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tum(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tum(mask,merge,op1,op2,31); +} + + +vuint16m8_t 
test___riscv_vmul_vx_u16m8_tum(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tum(mask,merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tum(mask,merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tum(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tum(mask,merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tum(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tum(mask,merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tum(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tum(mask,merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tum(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tum(mask,merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tum(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tum(mask,merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tum(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tum(mask,merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tum(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tum(mask,merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tum(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tum(mask,merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { 
scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-3.c new file mode 100644 index 00000000000..6c357f87d15 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tum_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tum(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tum(mask,merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tum(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tum(mask,merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tum(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tum(mask,merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tum(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tum(mask,merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tum(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tum(mask,merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tum(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tum(mask,merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tum(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tum(mask,merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tum(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tum(mask,merge,op1,op2,32); +} + + +vint16mf2_t 
test___riscv_vmul_vx_i16mf2_tum(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tum(mask,merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tum(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tum(mask,merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tum(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tum(mask,merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tum(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tum(mask,merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tum(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tum(mask,merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tum(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tum(mask,merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tum(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tum(mask,merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tum(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tum(mask,merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tum(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tum(mask,merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tum(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tum(mask,merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tum(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tum(mask,merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tum(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tum(mask,merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tum(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tum(mask,merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tum(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tum(mask,merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tum(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tum(mask,merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tum(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tum(mask,merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tum(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tum(mask,merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tum(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tum(mask,merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tum(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tum(mask,merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tum(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t 
op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tum(mask,merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tum(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tum(mask,merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tum(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tum(mask,merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tum(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tum(mask,merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tum(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tum(mask,merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tum(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tum(mask,merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tum(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tum(mask,merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tum(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tum(mask,merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tum(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tum(mask,merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tum(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tum(mask,merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tum(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tum(mask,merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tum(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tum(mask,merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tum(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tum(mask,merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tum(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tum(mask,merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tum(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tum(mask,merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tum(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tum(mask,merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tum(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tum(mask,merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { 
scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*m[au]\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv32-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv32-1.c new file mode 100644 index 00000000000..86a69d5786c --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv32-1.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_i8mf8_tumu(mask,merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tumu(mask,merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tumu(mask,merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tumu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tumu(mask,merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tumu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tumu(mask,merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tumu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tumu(mask,merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tumu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tumu(mask,merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tumu(mask,merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tumu(mask,merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tumu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tumu(mask,merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tumu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tumu(mask,merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tumu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tumu(mask,merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tumu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tumu(mask,merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tumu(mask,merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tumu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tumu(mask,merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tumu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tumu(mask,merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tumu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tumu(mask,merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tumu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tumu(mask,merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tumu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tumu(mask,merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tumu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tumu(mask,merge,op1,op2,vl); +} + + +vint64m4_t 
test___riscv_vmul_vx_i64m4_tumu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tumu(mask,merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tumu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tumu(mask,merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tumu(mask,merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tumu(mask,merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m8_t 
test___riscv_vmul_vx_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tumu(mask,merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 
} } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv32-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv32-2.c new file mode 100644 index 00000000000..bb06f05b17e --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv32-2.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tumu(mask,merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tumu(mask,merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tumu(mask,merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tumu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tumu(mask,merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tumu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tumu(mask,merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tumu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tumu(mask,merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tumu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tumu(mask,merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tumu(mask,merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tumu(mask,merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tumu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tumu(mask,merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tumu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tumu(mask,merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tumu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tumu(mask,merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tumu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tumu(mask,merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tumu(mask,merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tumu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tumu(mask,merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tumu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t 
vl) +{ + return __riscv_vmul_vx_i32m2_tumu(mask,merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tumu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tumu(mask,merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tumu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tumu(mask,merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tumu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tumu(mask,merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tumu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tumu(mask,merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tumu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tumu(mask,merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tumu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tumu(mask,merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tumu(mask,merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tumu(mask,merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tumu(mask,merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tumu(mask,merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tumu(mask,merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tumu(mask,merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tumu(mask,merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tumu(mask,merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tumu(mask,merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tumu(mask,merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tumu(mask,merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tumu(mask,merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_u16m8_tumu(mask,merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tumu(mask,merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tumu(mask,merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tumu(mask,merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tumu(mask,merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tumu(mask,merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tumu(mask,merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tumu(mask,merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tumu(mask,merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tumu(mask,merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final 
{ scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv32-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv32-3.c new file mode 100644 index 00000000000..0376886a7c8 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv32-3.c @@ -0,0 +1,289 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv32gcv -mabi=ilp32d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tumu(mask,merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tumu(mask,merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tumu(mask,merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tumu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tumu(mask,merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tumu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tumu(mask,merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tumu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tumu(mask,merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tumu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tumu(mask,merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tumu(mask,merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tumu(mask,merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tumu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tumu(mask,merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tumu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tumu(mask,merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tumu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_i16m4_tumu(mask,merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tumu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tumu(mask,merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tumu(mask,merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tumu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tumu(mask,merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tumu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tumu(mask,merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tumu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tumu(mask,merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tumu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tumu(mask,merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tumu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tumu(mask,merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tumu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tumu(mask,merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tumu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tumu(mask,merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tumu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tumu(mask,merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tumu(mask,merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tumu(mask,merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tumu(mask,merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tumu(mask,merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tumu(mask,merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tumu(mask,merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tumu(mask,merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tumu(mask,merge,op1,op2,32); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_u16mf2_tumu(mask,merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tumu(mask,merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tumu(mask,merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tumu(mask,merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tumu(mask,merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tumu(mask,merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tumu(mask,merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tumu(mask,merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tumu(mask,merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tumu(mask,merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tumu(mask,merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tumu(mask,merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tumu(mask,merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tumu(mask,merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vmul\.vv\s+v[0-9]+,\s*v[0-9]+,\s*v[0-9]+,\s*v0.t} 8 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv64-1.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv64-1.c new file mode 100644 index 00000000000..9363a313a85 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv64-1.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tumu(mask,merge,op1,op2,vl); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tumu(mask,merge,op1,op2,vl); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tumu(mask,merge,op1,op2,vl); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tumu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tumu(mask,merge,op1,op2,vl); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tumu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tumu(mask,merge,op1,op2,vl); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tumu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tumu(mask,merge,op1,op2,vl); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tumu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) 
+{ + return __riscv_vmul_vx_i8m8_tumu(mask,merge,op1,op2,vl); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tumu(mask,merge,op1,op2,vl); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tumu(mask,merge,op1,op2,vl); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tumu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tumu(mask,merge,op1,op2,vl); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tumu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tumu(mask,merge,op1,op2,vl); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tumu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tumu(mask,merge,op1,op2,vl); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tumu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tumu(mask,merge,op1,op2,vl); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tumu(mask,merge,op1,op2,vl); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tumu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tumu(mask,merge,op1,op2,vl); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tumu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tumu(mask,merge,op1,op2,vl); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tumu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tumu(mask,merge,op1,op2,vl); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tumu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tumu(mask,merge,op1,op2,vl); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tumu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tumu(mask,merge,op1,op2,vl); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tumu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tumu(mask,merge,op1,op2,vl); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tumu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tumu(mask,merge,op1,op2,vl); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tumu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tumu(mask,merge,op1,op2,vl); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tumu(mask,merge,op1,op2,vl); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_u8m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tumu(mask,merge,op1,op2,vl); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tumu(mask,merge,op1,op2,vl); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tumu(mask,merge,op1,op2,vl); +} + + + +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv64-2.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv64-2.c new file mode 100644 index 
00000000000..2021fc17557 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv64-2.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tumu(mask,merge,op1,op2,31); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tumu(mask,merge,op1,op2,31); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tumu(mask,merge,op1,op2,31); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tumu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tumu(mask,merge,op1,op2,31); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tumu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tumu(mask,merge,op1,op2,31); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tumu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tumu(mask,merge,op1,op2,31); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tumu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tumu(mask,merge,op1,op2,31); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tumu(mask,merge,op1,op2,31); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tumu(mask,merge,op1,op2,31); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tumu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tumu(mask,merge,op1,op2,31); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tumu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m2_tumu(mask,merge,op1,op2,31); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tumu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tumu(mask,merge,op1,op2,31); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tumu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tumu(mask,merge,op1,op2,31); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tumu(mask,merge,op1,op2,31); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tumu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tumu(mask,merge,op1,op2,31); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tumu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tumu(mask,merge,op1,op2,31); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tumu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tumu(mask,merge,op1,op2,31); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tumu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_i32m8_tumu(mask,merge,op1,op2,31); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tumu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tumu(mask,merge,op1,op2,31); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tumu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tumu(mask,merge,op1,op2,31); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tumu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tumu(mask,merge,op1,op2,31); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tumu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tumu(mask,merge,op1,op2,31); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tumu(mask,merge,op1,op2,31); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tumu(mask,merge,op1,op2,31); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tumu(mask,merge,op1,op2,31); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tumu(mask,merge,op1,op2,31); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tumu(mask,merge,op1,op2,31); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tumu(mask,merge,op1,op2,31); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tumu(mask,merge,op1,op2,31); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tumu(mask,merge,op1,op2,31); +} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tumu(mask,merge,op1,op2,31); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tumu(mask,merge,op1,op2,31); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tumu(mask,merge,op1,op2,31); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tumu(mask,merge,op1,op2,31); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tumu(mask,merge,op1,op2,31); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tumu(mask,merge,op1,op2,31); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_u32m1_tumu(mask,merge,op1,op2,31); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tumu(mask,merge,op1,op2,31); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tumu(mask,merge,op1,op2,31); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tumu(mask,merge,op1,op2,31); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tumu(mask,merge,op1,op2,31); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tumu(mask,merge,op1,op2,31); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tumu(mask,merge,op1,op2,31); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tumu(mask,merge,op1,op2,31); +} + + + +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e8,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e16,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetivli\s+zero,\s*31,\s*e32,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e32,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetivli\s+zero,\s*31,\s*e64,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv64-3.c b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv64-3.c new file mode 100644 index 00000000000..0b5e2da4884 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/vmul_vx_tumu_rv64-3.c @@ -0,0 +1,292 @@ +/* { dg-do compile } */ +/* { dg-options "-march=rv64gcv -mabi=lp64d -O3 -fno-schedule-insns -fno-schedule-insns2" } */ + +#include "riscv_vector.h" + +vint8mf8_t test___riscv_vmul_vx_i8mf8_tumu(vbool64_t mask,vint8mf8_t merge,vint8mf8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf8_tumu(mask,merge,op1,op2,32); +} + + +vint8mf4_t test___riscv_vmul_vx_i8mf4_tumu(vbool32_t mask,vint8mf4_t merge,vint8mf4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf4_tumu(mask,merge,op1,op2,32); +} + + +vint8mf2_t test___riscv_vmul_vx_i8mf2_tumu(vbool16_t mask,vint8mf2_t merge,vint8mf2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8mf2_tumu(mask,merge,op1,op2,32); +} + + +vint8m1_t test___riscv_vmul_vx_i8m1_tumu(vbool8_t mask,vint8m1_t merge,vint8m1_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m1_tumu(mask,merge,op1,op2,32); +} + + +vint8m2_t test___riscv_vmul_vx_i8m2_tumu(vbool4_t mask,vint8m2_t merge,vint8m2_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m2_tumu(mask,merge,op1,op2,32); +} + + +vint8m4_t test___riscv_vmul_vx_i8m4_tumu(vbool2_t mask,vint8m4_t merge,vint8m4_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m4_tumu(mask,merge,op1,op2,32); +} + + +vint8m8_t test___riscv_vmul_vx_i8m8_tumu(vbool1_t mask,vint8m8_t merge,vint8m8_t op1,int8_t op2,size_t vl) +{ + return __riscv_vmul_vx_i8m8_tumu(mask,merge,op1,op2,32); +} + + +vint16mf4_t test___riscv_vmul_vx_i16mf4_tumu(vbool64_t mask,vint16mf4_t merge,vint16mf4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf4_tumu(mask,merge,op1,op2,32); +} + + +vint16mf2_t test___riscv_vmul_vx_i16mf2_tumu(vbool32_t mask,vint16mf2_t merge,vint16mf2_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16mf2_tumu(mask,merge,op1,op2,32); +} + + +vint16m1_t test___riscv_vmul_vx_i16m1_tumu(vbool16_t mask,vint16m1_t merge,vint16m1_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m1_tumu(mask,merge,op1,op2,32); +} + + +vint16m2_t test___riscv_vmul_vx_i16m2_tumu(vbool8_t mask,vint16m2_t merge,vint16m2_t op1,int16_t op2,size_t vl) +{ + return 
__riscv_vmul_vx_i16m2_tumu(mask,merge,op1,op2,32); +} + + +vint16m4_t test___riscv_vmul_vx_i16m4_tumu(vbool4_t mask,vint16m4_t merge,vint16m4_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m4_tumu(mask,merge,op1,op2,32); +} + + +vint16m8_t test___riscv_vmul_vx_i16m8_tumu(vbool2_t mask,vint16m8_t merge,vint16m8_t op1,int16_t op2,size_t vl) +{ + return __riscv_vmul_vx_i16m8_tumu(mask,merge,op1,op2,32); +} + + +vint32mf2_t test___riscv_vmul_vx_i32mf2_tumu(vbool64_t mask,vint32mf2_t merge,vint32mf2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32mf2_tumu(mask,merge,op1,op2,32); +} + + +vint32m1_t test___riscv_vmul_vx_i32m1_tumu(vbool32_t mask,vint32m1_t merge,vint32m1_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m1_tumu(mask,merge,op1,op2,32); +} + + +vint32m2_t test___riscv_vmul_vx_i32m2_tumu(vbool16_t mask,vint32m2_t merge,vint32m2_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m2_tumu(mask,merge,op1,op2,32); +} + + +vint32m4_t test___riscv_vmul_vx_i32m4_tumu(vbool8_t mask,vint32m4_t merge,vint32m4_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m4_tumu(mask,merge,op1,op2,32); +} + + +vint32m8_t test___riscv_vmul_vx_i32m8_tumu(vbool4_t mask,vint32m8_t merge,vint32m8_t op1,int32_t op2,size_t vl) +{ + return __riscv_vmul_vx_i32m8_tumu(mask,merge,op1,op2,32); +} + + +vint64m1_t test___riscv_vmul_vx_i64m1_tumu(vbool64_t mask,vint64m1_t merge,vint64m1_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m1_tumu(mask,merge,op1,op2,32); +} + + +vint64m2_t test___riscv_vmul_vx_i64m2_tumu(vbool32_t mask,vint64m2_t merge,vint64m2_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m2_tumu(mask,merge,op1,op2,32); +} + + +vint64m4_t test___riscv_vmul_vx_i64m4_tumu(vbool16_t mask,vint64m4_t merge,vint64m4_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m4_tumu(mask,merge,op1,op2,32); +} + + +vint64m8_t test___riscv_vmul_vx_i64m8_tumu(vbool8_t mask,vint64m8_t merge,vint64m8_t op1,int64_t op2,size_t vl) +{ + return __riscv_vmul_vx_i64m8_tumu(mask,merge,op1,op2,32); +} + + +vuint8mf8_t test___riscv_vmul_vx_u8mf8_tumu(vbool64_t mask,vuint8mf8_t merge,vuint8mf8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf8_tumu(mask,merge,op1,op2,32); +} + + +vuint8mf4_t test___riscv_vmul_vx_u8mf4_tumu(vbool32_t mask,vuint8mf4_t merge,vuint8mf4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf4_tumu(mask,merge,op1,op2,32); +} + + +vuint8mf2_t test___riscv_vmul_vx_u8mf2_tumu(vbool16_t mask,vuint8mf2_t merge,vuint8mf2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8mf2_tumu(mask,merge,op1,op2,32); +} + + +vuint8m1_t test___riscv_vmul_vx_u8m1_tumu(vbool8_t mask,vuint8m1_t merge,vuint8m1_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m1_tumu(mask,merge,op1,op2,32); +} + + +vuint8m2_t test___riscv_vmul_vx_u8m2_tumu(vbool4_t mask,vuint8m2_t merge,vuint8m2_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m2_tumu(mask,merge,op1,op2,32); +} + + +vuint8m4_t test___riscv_vmul_vx_u8m4_tumu(vbool2_t mask,vuint8m4_t merge,vuint8m4_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m4_tumu(mask,merge,op1,op2,32); +} + + +vuint8m8_t test___riscv_vmul_vx_u8m8_tumu(vbool1_t mask,vuint8m8_t merge,vuint8m8_t op1,uint8_t op2,size_t vl) +{ + return __riscv_vmul_vx_u8m8_tumu(mask,merge,op1,op2,32); +} + + +vuint16mf4_t test___riscv_vmul_vx_u16mf4_tumu(vbool64_t mask,vuint16mf4_t merge,vuint16mf4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf4_tumu(mask,merge,op1,op2,32); 
+} + + +vuint16mf2_t test___riscv_vmul_vx_u16mf2_tumu(vbool32_t mask,vuint16mf2_t merge,vuint16mf2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16mf2_tumu(mask,merge,op1,op2,32); +} + + +vuint16m1_t test___riscv_vmul_vx_u16m1_tumu(vbool16_t mask,vuint16m1_t merge,vuint16m1_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m1_tumu(mask,merge,op1,op2,32); +} + + +vuint16m2_t test___riscv_vmul_vx_u16m2_tumu(vbool8_t mask,vuint16m2_t merge,vuint16m2_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m2_tumu(mask,merge,op1,op2,32); +} + + +vuint16m4_t test___riscv_vmul_vx_u16m4_tumu(vbool4_t mask,vuint16m4_t merge,vuint16m4_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m4_tumu(mask,merge,op1,op2,32); +} + + +vuint16m8_t test___riscv_vmul_vx_u16m8_tumu(vbool2_t mask,vuint16m8_t merge,vuint16m8_t op1,uint16_t op2,size_t vl) +{ + return __riscv_vmul_vx_u16m8_tumu(mask,merge,op1,op2,32); +} + + +vuint32mf2_t test___riscv_vmul_vx_u32mf2_tumu(vbool64_t mask,vuint32mf2_t merge,vuint32mf2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32mf2_tumu(mask,merge,op1,op2,32); +} + + +vuint32m1_t test___riscv_vmul_vx_u32m1_tumu(vbool32_t mask,vuint32m1_t merge,vuint32m1_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m1_tumu(mask,merge,op1,op2,32); +} + + +vuint32m2_t test___riscv_vmul_vx_u32m2_tumu(vbool16_t mask,vuint32m2_t merge,vuint32m2_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m2_tumu(mask,merge,op1,op2,32); +} + + +vuint32m4_t test___riscv_vmul_vx_u32m4_tumu(vbool8_t mask,vuint32m4_t merge,vuint32m4_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m4_tumu(mask,merge,op1,op2,32); +} + + +vuint32m8_t test___riscv_vmul_vx_u32m8_tumu(vbool4_t mask,vuint32m8_t merge,vuint32m8_t op1,uint32_t op2,size_t vl) +{ + return __riscv_vmul_vx_u32m8_tumu(mask,merge,op1,op2,32); +} + + +vuint64m1_t test___riscv_vmul_vx_u64m1_tumu(vbool64_t mask,vuint64m1_t merge,vuint64m1_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m1_tumu(mask,merge,op1,op2,32); +} + + +vuint64m2_t test___riscv_vmul_vx_u64m2_tumu(vbool32_t mask,vuint64m2_t merge,vuint64m2_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m2_tumu(mask,merge,op1,op2,32); +} + + +vuint64m4_t test___riscv_vmul_vx_u64m4_tumu(vbool16_t mask,vuint64m4_t merge,vuint64m4_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m4_tumu(mask,merge,op1,op2,32); +} + + +vuint64m8_t test___riscv_vmul_vx_u64m8_tumu(vbool8_t mask,vuint64m8_t merge,vuint64m8_t op1,uint64_t op2,size_t vl) +{ + return __riscv_vmul_vx_u64m8_tumu(mask,merge,op1,op2,32); +} + + + +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times 
{vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e8,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e16,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*mf2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e32,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m1,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m2,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m4,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */ +/* { dg-final { scan-assembler-times {vsetvli\s+zero,\s*[a-x0-9]+,\s*e64,\s*m8,\s*tu,\s*mu\s+vmul\.vx\s+v[0-9]+,\s*v[0-9]+,\s*[a-x0-9]+,\s*v0.t} 2 } } */