From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20230412110846.308184-1-juzhe.zhong@rivai.ai>
In-Reply-To: <20230412110846.308184-1-juzhe.zhong@rivai.ai>
From: Kito Cheng
Date: Wed, 12 Apr 2023 20:49:29 +0800
Subject: Re: [PATCH] RISC-V: Fix PR109479
To: juzhe.zhong@rivai.ai
Cc: gcc-patches@gcc.gnu.org, palmer@dabbelt.com, jeffreyalaw@gmail.com
Content-Type: text/plain; charset="UTF-8"

OK for trunk, but please improve the coverage of the testcase: vint16mf4_t, for example, is also fixed but is not exercised by the testcase (a possible follow-up is sketched at the end of this mail).

On Wed, Apr 12, 2023 at 7:09 PM wrote:
>
> From: Ju-Zhe Zhong
>
> Fix the supported data types according to the RVV ISA.
> For vint64m*_t, we should only allow them with zve64* instead of zve32*_zvl64b (VLEN >= 64).
> Ideally, we should make the error message more friendly, as Clang does
> (https://godbolt.org/z/f9GMv4dMo), i.e. report the extension name required by the RVV type.
> However, I failed to find a way to do that, so current GCC can only report an "unknown" type.
> I added comments to remind us to do this in the future.
>
> Tested; all regression tests passed. OK for GCC 13 trunk?
>
>         PR 109479
>
> gcc/ChangeLog:
>
>         * config/riscv/riscv-vector-builtins-types.def (vint8mf8_t): Fix predicate.
>         (vint16mf4_t): Ditto.
>         (vint32mf2_t): Ditto.
>         (vint64m1_t): Ditto.
>         (vint64m2_t): Ditto.
>         (vint64m4_t): Ditto.
>         (vint64m8_t): Ditto.
>         (vuint8mf8_t): Ditto.
>         (vuint16mf4_t): Ditto.
>         (vuint32mf2_t): Ditto.
>         (vuint64m1_t): Ditto.
>         (vuint64m2_t): Ditto.
>         (vuint64m4_t): Ditto.
>         (vuint64m8_t): Ditto.
>         (vfloat32mf2_t): Ditto.
>         (vbool64_t): Ditto.
>         * config/riscv/riscv-vector-builtins.cc (register_builtin_type): Add comments.
>         (register_vector_type): Ditto.
>         (check_required_extensions): Fix predicate.
>         * config/riscv/riscv-vector-builtins.h (RVV_REQUIRE_ZVE64): Ditto.
>         (RVV_REQUIRE_ELEN_64): Ditto.
>         (RVV_REQUIRE_MIN_VLEN_64): Ditto.
>         * config/riscv/riscv-vector-switch.def (TARGET_VECTOR_FP32): Remove it.
>         (TARGET_VECTOR_FP64): Ditto.
>         (ENTRY): Fix predicate.
>
> gcc/testsuite/ChangeLog:
>
>         * gcc.target/riscv/rvv/base/pr109479.c: New test.
>
> ---
>  .../riscv/riscv-vector-builtins-types.def | 348 +++++++++---------
>  gcc/config/riscv/riscv-vector-builtins.cc |  14 +-
>  gcc/config/riscv/riscv-vector-builtins.h  |   3 +-
>  gcc/config/riscv/riscv-vector-switch.def  |  56 ++-
>  .../gcc.target/riscv/rvv/base/pr109479.c  |  11 +
>  5 files changed, 220 insertions(+), 212 deletions(-)
>  create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/pr109479.c
>
> diff --git a/gcc/config/riscv/riscv-vector-builtins-types.def b/gcc/config/riscv/riscv-vector-builtins-types.def
> index a55d494f1d9..a74df066521 100644
> --- a/gcc/config/riscv/riscv-vector-builtins-types.def
> +++ b/gcc/config/riscv/riscv-vector-builtins-types.def
> @@ -235,53 +235,53 @@ along with GCC; see the file COPYING3.
If not see > #define DEF_RVV_LMUL4_OPS(TYPE, REQUIRE) > #endif > > -DEF_RVV_I_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_I_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_I_OPS (vint8mf4_t, 0) > DEF_RVV_I_OPS (vint8mf2_t, 0) > DEF_RVV_I_OPS (vint8m1_t, 0) > DEF_RVV_I_OPS (vint8m2_t, 0) > DEF_RVV_I_OPS (vint8m4_t, 0) > DEF_RVV_I_OPS (vint8m8_t, 0) > -DEF_RVV_I_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_I_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_I_OPS (vint16mf2_t, 0) > DEF_RVV_I_OPS (vint16m1_t, 0) > DEF_RVV_I_OPS (vint16m2_t, 0) > DEF_RVV_I_OPS (vint16m4_t, 0) > DEF_RVV_I_OPS (vint16m8_t, 0) > -DEF_RVV_I_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_I_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_I_OPS (vint32m1_t, 0) > DEF_RVV_I_OPS (vint32m2_t, 0) > DEF_RVV_I_OPS (vint32m4_t, 0) > DEF_RVV_I_OPS (vint32m8_t, 0) > -DEF_RVV_I_OPS (vint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_I_OPS (vint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_I_OPS (vint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_I_OPS (vint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_I_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64) > > -DEF_RVV_U_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_U_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_U_OPS (vuint8mf4_t, 0) > DEF_RVV_U_OPS (vuint8mf2_t, 0) > DEF_RVV_U_OPS (vuint8m1_t, 0) > DEF_RVV_U_OPS (vuint8m2_t, 0) > DEF_RVV_U_OPS (vuint8m4_t, 0) > DEF_RVV_U_OPS (vuint8m8_t, 0) > -DEF_RVV_U_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_U_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_U_OPS (vuint16mf2_t, 0) > DEF_RVV_U_OPS (vuint16m1_t, 0) > DEF_RVV_U_OPS (vuint16m2_t, 0) > DEF_RVV_U_OPS (vuint16m4_t, 0) > DEF_RVV_U_OPS (vuint16m8_t, 0) > -DEF_RVV_U_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_U_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_U_OPS (vuint32m1_t, 0) > DEF_RVV_U_OPS (vuint32m2_t, 0) > DEF_RVV_U_OPS (vuint32m4_t, 0) > DEF_RVV_U_OPS (vuint32m8_t, 0) > -DEF_RVV_U_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_U_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_U_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_U_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_U_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64) > > -DEF_RVV_F_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE64= ) > +DEF_RVV_F_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_V= LEN_64) > DEF_RVV_F_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_F_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_F_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32) > @@ -291,7 +291,7 @@ DEF_RVV_F_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64) > DEF_RVV_F_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64) > DEF_RVV_F_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64) > > -DEF_RVV_B_OPS (vbool64_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_B_OPS (vbool64_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_B_OPS (vbool32_t, 0) > DEF_RVV_B_OPS (vbool16_t, 0) > DEF_RVV_B_OPS (vbool8_t, 0) > @@ -299,82 +299,82 @@ DEF_RVV_B_OPS (vbool4_t, 0) > DEF_RVV_B_OPS (vbool2_t, 0) > DEF_RVV_B_OPS (vbool1_t, 0) > > -DEF_RVV_WEXTI_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_WEXTI_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_WEXTI_OPS (vint16mf2_t, 0) > DEF_RVV_WEXTI_OPS (vint16m1_t, 0) > DEF_RVV_WEXTI_OPS (vint16m2_t, 0) > DEF_RVV_WEXTI_OPS 
(vint16m4_t, 0) > DEF_RVV_WEXTI_OPS (vint16m8_t, 0) > -DEF_RVV_WEXTI_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_WEXTI_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_WEXTI_OPS (vint32m1_t, 0) > DEF_RVV_WEXTI_OPS (vint32m2_t, 0) > DEF_RVV_WEXTI_OPS (vint32m4_t, 0) > DEF_RVV_WEXTI_OPS (vint32m8_t, 0) > -DEF_RVV_WEXTI_OPS (vint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_WEXTI_OPS (vint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_WEXTI_OPS (vint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_WEXTI_OPS (vint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_WEXTI_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_WEXTI_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_WEXTI_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_WEXTI_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64) > > -DEF_RVV_QEXTI_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_QEXTI_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_QEXTI_OPS (vint32m1_t, 0) > DEF_RVV_QEXTI_OPS (vint32m2_t, 0) > DEF_RVV_QEXTI_OPS (vint32m4_t, 0) > DEF_RVV_QEXTI_OPS (vint32m8_t, 0) > -DEF_RVV_QEXTI_OPS (vint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_QEXTI_OPS (vint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_QEXTI_OPS (vint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_QEXTI_OPS (vint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_QEXTI_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_QEXTI_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_QEXTI_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_QEXTI_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64) > > -DEF_RVV_OEXTI_OPS (vint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_OEXTI_OPS (vint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_OEXTI_OPS (vint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_OEXTI_OPS (vint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_OEXTI_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_OEXTI_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_OEXTI_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_OEXTI_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64) > > -DEF_RVV_WEXTU_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_WEXTU_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_WEXTU_OPS (vuint16mf2_t, 0) > DEF_RVV_WEXTU_OPS (vuint16m1_t, 0) > DEF_RVV_WEXTU_OPS (vuint16m2_t, 0) > DEF_RVV_WEXTU_OPS (vuint16m4_t, 0) > DEF_RVV_WEXTU_OPS (vuint16m8_t, 0) > -DEF_RVV_WEXTU_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_WEXTU_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_WEXTU_OPS (vuint32m1_t, 0) > DEF_RVV_WEXTU_OPS (vuint32m2_t, 0) > DEF_RVV_WEXTU_OPS (vuint32m4_t, 0) > DEF_RVV_WEXTU_OPS (vuint32m8_t, 0) > -DEF_RVV_WEXTU_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_WEXTU_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_WEXTU_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_WEXTU_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_WEXTU_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_WEXTU_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_WEXTU_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_WEXTU_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64) > > -DEF_RVV_QEXTU_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_QEXTU_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_QEXTU_OPS (vuint32m1_t, 0) > DEF_RVV_QEXTU_OPS (vuint32m2_t, 0) > DEF_RVV_QEXTU_OPS (vuint32m4_t, 0) > DEF_RVV_QEXTU_OPS (vuint32m8_t, 0) > -DEF_RVV_QEXTU_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_QEXTU_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_QEXTU_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_QEXTU_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_QEXTU_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_QEXTU_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_QEXTU_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_QEXTU_OPS 
(vuint64m8_t, RVV_REQUIRE_ELEN_64) > > -DEF_RVV_OEXTU_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_OEXTU_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_OEXTU_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_OEXTU_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_OEXTU_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_OEXTU_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_OEXTU_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_OEXTU_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64) > > -DEF_RVV_FULL_V_I_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_FULL_V_I_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_FULL_V_I_OPS (vint8mf4_t, 0) > DEF_RVV_FULL_V_I_OPS (vint8mf2_t, 0) > DEF_RVV_FULL_V_I_OPS (vint8m1_t, 0) > DEF_RVV_FULL_V_I_OPS (vint8m2_t, 0) > DEF_RVV_FULL_V_I_OPS (vint8m4_t, 0) > DEF_RVV_FULL_V_I_OPS (vint8m8_t, 0) > -DEF_RVV_FULL_V_I_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_FULL_V_I_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_FULL_V_I_OPS (vint16mf2_t, 0) > DEF_RVV_FULL_V_I_OPS (vint16m1_t, 0) > DEF_RVV_FULL_V_I_OPS (vint16m2_t, 0) > DEF_RVV_FULL_V_I_OPS (vint16m4_t, 0) > DEF_RVV_FULL_V_I_OPS (vint16m8_t, 0) > -DEF_RVV_FULL_V_I_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_FULL_V_I_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_FULL_V_I_OPS (vint32m1_t, 0) > DEF_RVV_FULL_V_I_OPS (vint32m2_t, 0) > DEF_RVV_FULL_V_I_OPS (vint32m4_t, 0) > @@ -384,20 +384,20 @@ DEF_RVV_FULL_V_I_OPS (vint64m2_t, RVV_REQUIRE_FULL_= V) > DEF_RVV_FULL_V_I_OPS (vint64m4_t, RVV_REQUIRE_FULL_V) > DEF_RVV_FULL_V_I_OPS (vint64m8_t, RVV_REQUIRE_FULL_V) > > -DEF_RVV_FULL_V_U_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_FULL_V_U_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_FULL_V_U_OPS (vuint8mf4_t, 0) > DEF_RVV_FULL_V_U_OPS (vuint8mf2_t, 0) > DEF_RVV_FULL_V_U_OPS (vuint8m1_t, 0) > DEF_RVV_FULL_V_U_OPS (vuint8m2_t, 0) > DEF_RVV_FULL_V_U_OPS (vuint8m4_t, 0) > DEF_RVV_FULL_V_U_OPS (vuint8m8_t, 0) > -DEF_RVV_FULL_V_U_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_FULL_V_U_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_FULL_V_U_OPS (vuint16mf2_t, 0) > DEF_RVV_FULL_V_U_OPS (vuint16m1_t, 0) > DEF_RVV_FULL_V_U_OPS (vuint16m2_t, 0) > DEF_RVV_FULL_V_U_OPS (vuint16m4_t, 0) > DEF_RVV_FULL_V_U_OPS (vuint16m8_t, 0) > -DEF_RVV_FULL_V_U_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_FULL_V_U_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_FULL_V_U_OPS (vuint32m1_t, 0) > DEF_RVV_FULL_V_U_OPS (vuint32m2_t, 0) > DEF_RVV_FULL_V_U_OPS (vuint32m4_t, 0) > @@ -412,128 +412,128 @@ DEF_RVV_WEXTF_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN= _FP_64) > DEF_RVV_WEXTF_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64) > DEF_RVV_WEXTF_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64) > > -DEF_RVV_CONVERT_I_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_CONVERT_I_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_CONVERT_I_OPS (vint32m1_t, 0) > DEF_RVV_CONVERT_I_OPS (vint32m2_t, 0) > DEF_RVV_CONVERT_I_OPS (vint32m4_t, 0) > DEF_RVV_CONVERT_I_OPS (vint32m8_t, 0) > -DEF_RVV_CONVERT_I_OPS (vint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_CONVERT_I_OPS (vint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_CONVERT_I_OPS (vint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_CONVERT_I_OPS (vint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_CONVERT_I_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_CONVERT_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_CONVERT_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_CONVERT_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64) > > -DEF_RVV_CONVERT_U_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_CONVERT_U_OPS (vuint32mf2_t, 
RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_CONVERT_U_OPS (vuint32m1_t, 0) > DEF_RVV_CONVERT_U_OPS (vuint32m2_t, 0) > DEF_RVV_CONVERT_U_OPS (vuint32m4_t, 0) > DEF_RVV_CONVERT_U_OPS (vuint32m8_t, 0) > -DEF_RVV_CONVERT_U_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_CONVERT_U_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_CONVERT_U_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_CONVERT_U_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_CONVERT_U_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_CONVERT_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_CONVERT_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_CONVERT_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64) > > -DEF_RVV_WCONVERT_I_OPS (vint64m1_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE= _ZVE64) > -DEF_RVV_WCONVERT_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE= _ZVE64) > -DEF_RVV_WCONVERT_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE= _ZVE64) > -DEF_RVV_WCONVERT_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE= _ZVE64) > +DEF_RVV_WCONVERT_I_OPS (vint64m1_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE= _ELEN_64) > +DEF_RVV_WCONVERT_I_OPS (vint64m2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE= _ELEN_64) > +DEF_RVV_WCONVERT_I_OPS (vint64m4_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE= _ELEN_64) > +DEF_RVV_WCONVERT_I_OPS (vint64m8_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE= _ELEN_64) > > -DEF_RVV_WCONVERT_U_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIR= E_ZVE64) > -DEF_RVV_WCONVERT_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIR= E_ZVE64) > -DEF_RVV_WCONVERT_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIR= E_ZVE64) > -DEF_RVV_WCONVERT_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIR= E_ZVE64) > +DEF_RVV_WCONVERT_U_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIR= E_ELEN_64) > +DEF_RVV_WCONVERT_U_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIR= E_ELEN_64) > +DEF_RVV_WCONVERT_U_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIR= E_ELEN_64) > +DEF_RVV_WCONVERT_U_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIR= E_ELEN_64) > > DEF_RVV_WCONVERT_F_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64) > DEF_RVV_WCONVERT_F_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64) > DEF_RVV_WCONVERT_F_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64) > DEF_RVV_WCONVERT_F_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64) > > -DEF_RVV_WI_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_WI_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_WI_OPS (vint8mf4_t, 0) > DEF_RVV_WI_OPS (vint8mf2_t, 0) > DEF_RVV_WI_OPS (vint8m1_t, 0) > DEF_RVV_WI_OPS (vint8m2_t, 0) > DEF_RVV_WI_OPS (vint8m4_t, 0) > DEF_RVV_WI_OPS (vint8m8_t, 0) > -DEF_RVV_WI_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_WI_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_WI_OPS (vint16mf2_t, 0) > DEF_RVV_WI_OPS (vint16m1_t, 0) > DEF_RVV_WI_OPS (vint16m2_t, 0) > DEF_RVV_WI_OPS (vint16m4_t, 0) > DEF_RVV_WI_OPS (vint16m8_t, 0) > -DEF_RVV_WI_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_WI_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_WI_OPS (vint32m1_t, 0) > DEF_RVV_WI_OPS (vint32m2_t, 0) > DEF_RVV_WI_OPS (vint32m4_t, 0) > DEF_RVV_WI_OPS (vint32m8_t, 0) > > -DEF_RVV_WU_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_WU_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_WU_OPS (vuint8mf4_t, 0) > DEF_RVV_WU_OPS (vuint8mf2_t, 0) > DEF_RVV_WU_OPS (vuint8m1_t, 0) > DEF_RVV_WU_OPS (vuint8m2_t, 0) > DEF_RVV_WU_OPS (vuint8m4_t, 0) > DEF_RVV_WU_OPS (vuint8m8_t, 0) > -DEF_RVV_WU_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_WU_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > 
DEF_RVV_WU_OPS (vuint16mf2_t, 0) > DEF_RVV_WU_OPS (vuint16m1_t, 0) > DEF_RVV_WU_OPS (vuint16m2_t, 0) > DEF_RVV_WU_OPS (vuint16m4_t, 0) > DEF_RVV_WU_OPS (vuint16m8_t, 0) > -DEF_RVV_WU_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_WU_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_WU_OPS (vuint32m1_t, 0) > DEF_RVV_WU_OPS (vuint32m2_t, 0) > DEF_RVV_WU_OPS (vuint32m4_t, 0) > DEF_RVV_WU_OPS (vuint32m8_t, 0) > > -DEF_RVV_WF_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZVE6= 4) > +DEF_RVV_WF_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MIN_= VLEN_64) > DEF_RVV_WF_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_WF_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_WF_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_WF_OPS (vfloat32m8_t, RVV_REQUIRE_ELEN_FP_32) > > -DEF_RVV_EI16_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EI16_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_EI16_OPS (vint8mf4_t, 0) > DEF_RVV_EI16_OPS (vint8mf2_t, 0) > DEF_RVV_EI16_OPS (vint8m1_t, 0) > DEF_RVV_EI16_OPS (vint8m2_t, 0) > DEF_RVV_EI16_OPS (vint8m4_t, 0) > -DEF_RVV_EI16_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EI16_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_EI16_OPS (vint16mf2_t, 0) > DEF_RVV_EI16_OPS (vint16m1_t, 0) > DEF_RVV_EI16_OPS (vint16m2_t, 0) > DEF_RVV_EI16_OPS (vint16m4_t, 0) > DEF_RVV_EI16_OPS (vint16m8_t, 0) > -DEF_RVV_EI16_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EI16_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_EI16_OPS (vint32m1_t, 0) > DEF_RVV_EI16_OPS (vint32m2_t, 0) > DEF_RVV_EI16_OPS (vint32m4_t, 0) > DEF_RVV_EI16_OPS (vint32m8_t, 0) > -DEF_RVV_EI16_OPS (vint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EI16_OPS (vint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EI16_OPS (vint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EI16_OPS (vint64m8_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EI16_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EI16_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EI16_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EI16_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EI16_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EI16_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_EI16_OPS (vuint8mf4_t, 0) > DEF_RVV_EI16_OPS (vuint8mf2_t, 0) > DEF_RVV_EI16_OPS (vuint8m1_t, 0) > DEF_RVV_EI16_OPS (vuint8m2_t, 0) > DEF_RVV_EI16_OPS (vuint8m4_t, 0) > -DEF_RVV_EI16_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EI16_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_EI16_OPS (vuint16mf2_t, 0) > DEF_RVV_EI16_OPS (vuint16m1_t, 0) > DEF_RVV_EI16_OPS (vuint16m2_t, 0) > DEF_RVV_EI16_OPS (vuint16m4_t, 0) > DEF_RVV_EI16_OPS (vuint16m8_t, 0) > -DEF_RVV_EI16_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EI16_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_EI16_OPS (vuint32m1_t, 0) > DEF_RVV_EI16_OPS (vuint32m2_t, 0) > DEF_RVV_EI16_OPS (vuint32m4_t, 0) > DEF_RVV_EI16_OPS (vuint32m8_t, 0) > -DEF_RVV_EI16_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EI16_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EI16_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EI16_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EI16_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_ZV= E64) > +DEF_RVV_EI16_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EI16_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EI16_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EI16_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EI16_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_REQUIRE_MI= N_VLEN_64) > DEF_RVV_EI16_OPS (vfloat32m1_t, 
RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_EI16_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_EI16_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32) > @@ -543,36 +543,36 @@ DEF_RVV_EI16_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP= _64) > DEF_RVV_EI16_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64) > DEF_RVV_EI16_OPS (vfloat64m8_t, RVV_REQUIRE_ELEN_FP_64) > > -DEF_RVV_EEW8_INTERPRET_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EEW8_INTERPRET_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_EEW8_INTERPRET_OPS (vint16mf2_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vint16m1_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vint16m2_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vint16m4_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vint16m8_t, 0) > -DEF_RVV_EEW8_INTERPRET_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EEW8_INTERPRET_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_EEW8_INTERPRET_OPS (vint32m1_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vint32m2_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vint32m4_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vint32m8_t, 0) > -DEF_RVV_EEW8_INTERPRET_OPS (vint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW8_INTERPRET_OPS (vint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW8_INTERPRET_OPS (vint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW8_INTERPRET_OPS (vint64m8_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW8_INTERPRET_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EEW8_INTERPRET_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW8_INTERPRET_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW8_INTERPRET_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW8_INTERPRET_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW8_INTERPRET_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_EEW8_INTERPRET_OPS (vuint16mf2_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vuint16m1_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vuint16m2_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vuint16m4_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vuint16m8_t, 0) > -DEF_RVV_EEW8_INTERPRET_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EEW8_INTERPRET_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_EEW8_INTERPRET_OPS (vuint32m1_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vuint32m2_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vuint32m4_t, 0) > DEF_RVV_EEW8_INTERPRET_OPS (vuint32m8_t, 0) > -DEF_RVV_EEW8_INTERPRET_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW8_INTERPRET_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW8_INTERPRET_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW8_INTERPRET_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EEW8_INTERPRET_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW8_INTERPRET_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW8_INTERPRET_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW8_INTERPRET_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64) > > DEF_RVV_EEW16_INTERPRET_OPS (vint8mf4_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vint8mf2_t, 0) > @@ -580,30 +580,30 @@ DEF_RVV_EEW16_INTERPRET_OPS (vint8m1_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vint8m2_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vint8m4_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vint8m8_t, 0) > -DEF_RVV_EEW16_INTERPRET_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EEW16_INTERPRET_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_EEW16_INTERPRET_OPS (vint32m1_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vint32m2_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vint32m4_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vint32m8_t, 0) > -DEF_RVV_EEW16_INTERPRET_OPS (vint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW16_INTERPRET_OPS (vint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW16_INTERPRET_OPS (vint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW16_INTERPRET_OPS 
(vint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EEW16_INTERPRET_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW16_INTERPRET_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW16_INTERPRET_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW16_INTERPRET_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64) > DEF_RVV_EEW16_INTERPRET_OPS (vuint8mf4_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vuint8mf2_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vuint8m1_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vuint8m2_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vuint8m4_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vuint8m8_t, 0) > -DEF_RVV_EEW16_INTERPRET_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EEW16_INTERPRET_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_EEW16_INTERPRET_OPS (vuint32m1_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vuint32m2_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vuint32m4_t, 0) > DEF_RVV_EEW16_INTERPRET_OPS (vuint32m8_t, 0) > -DEF_RVV_EEW16_INTERPRET_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW16_INTERPRET_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW16_INTERPRET_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW16_INTERPRET_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EEW16_INTERPRET_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW16_INTERPRET_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW16_INTERPRET_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW16_INTERPRET_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64) > > DEF_RVV_EEW32_INTERPRET_OPS (vint8mf2_t, 0) > DEF_RVV_EEW32_INTERPRET_OPS (vint8m1_t, 0) > @@ -615,10 +615,10 @@ DEF_RVV_EEW32_INTERPRET_OPS (vint16m1_t, 0) > DEF_RVV_EEW32_INTERPRET_OPS (vint16m2_t, 0) > DEF_RVV_EEW32_INTERPRET_OPS (vint16m4_t, 0) > DEF_RVV_EEW32_INTERPRET_OPS (vint16m8_t, 0) > -DEF_RVV_EEW32_INTERPRET_OPS (vint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW32_INTERPRET_OPS (vint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW32_INTERPRET_OPS (vint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW32_INTERPRET_OPS (vint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EEW32_INTERPRET_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW32_INTERPRET_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW32_INTERPRET_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW32_INTERPRET_OPS (vint64m8_t, RVV_REQUIRE_ELEN_64) > DEF_RVV_EEW32_INTERPRET_OPS (vuint8mf2_t, 0) > DEF_RVV_EEW32_INTERPRET_OPS (vuint8m1_t, 0) > DEF_RVV_EEW32_INTERPRET_OPS (vuint8m2_t, 0) > @@ -629,10 +629,10 @@ DEF_RVV_EEW32_INTERPRET_OPS (vuint16m1_t, 0) > DEF_RVV_EEW32_INTERPRET_OPS (vuint16m2_t, 0) > DEF_RVV_EEW32_INTERPRET_OPS (vuint16m4_t, 0) > DEF_RVV_EEW32_INTERPRET_OPS (vuint16m8_t, 0) > -DEF_RVV_EEW32_INTERPRET_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW32_INTERPRET_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW32_INTERPRET_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_EEW32_INTERPRET_OPS (vuint64m8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_EEW32_INTERPRET_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW32_INTERPRET_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW32_INTERPRET_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_EEW32_INTERPRET_OPS (vuint64m8_t, RVV_REQUIRE_ELEN_64) > > DEF_RVV_EEW64_INTERPRET_OPS (vint8m1_t, 0) > DEF_RVV_EEW64_INTERPRET_OPS (vint8m2_t, 0) > @@ -659,43 +659,43 @@ DEF_RVV_EEW64_INTERPRET_OPS (vuint32m2_t, 0) > DEF_RVV_EEW64_INTERPRET_OPS (vuint32m4_t, 0) > DEF_RVV_EEW64_INTERPRET_OPS (vuint32m8_t, 0) > > -DEF_RVV_X2_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X2_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X2_VLMUL_EXT_OPS (vint8mf4_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS 
(vint8mf2_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vint8m1_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vint8m2_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vint8m4_t, 0) > -DEF_RVV_X2_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X2_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X2_VLMUL_EXT_OPS (vint16mf2_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vint16m1_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vint16m2_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vint16m4_t, 0) > -DEF_RVV_X2_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X2_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X2_VLMUL_EXT_OPS (vint32m1_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vint32m2_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vint32m4_t, 0) > -DEF_RVV_X2_VLMUL_EXT_OPS (vint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X2_VLMUL_EXT_OPS (vint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X2_VLMUL_EXT_OPS (vint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X2_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X2_VLMUL_EXT_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_X2_VLMUL_EXT_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_X2_VLMUL_EXT_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_X2_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X2_VLMUL_EXT_OPS (vuint8mf4_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vuint8mf2_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vuint8m1_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vuint8m2_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vuint8m4_t, 0) > -DEF_RVV_X2_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X2_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X2_VLMUL_EXT_OPS (vuint16mf2_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vuint16m1_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vuint16m2_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vuint16m4_t, 0) > -DEF_RVV_X2_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X2_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X2_VLMUL_EXT_OPS (vuint32m1_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vuint32m2_t, 0) > DEF_RVV_X2_VLMUL_EXT_OPS (vuint32m4_t, 0) > -DEF_RVV_X2_VLMUL_EXT_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X2_VLMUL_EXT_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X2_VLMUL_EXT_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_RE= QUIRE_ZVE64) > +DEF_RVV_X2_VLMUL_EXT_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_X2_VLMUL_EXT_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_X2_VLMUL_EXT_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_RE= QUIRE_MIN_VLEN_64) > DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_X2_VLMUL_EXT_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32) > @@ -703,118 +703,118 @@ DEF_RVV_X2_VLMUL_EXT_OPS (vfloat64m1_t, RVV_REQUI= RE_ELEN_FP_64) > DEF_RVV_X2_VLMUL_EXT_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64) > DEF_RVV_X2_VLMUL_EXT_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64) > > -DEF_RVV_X4_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X4_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X4_VLMUL_EXT_OPS (vint8mf4_t, 0) > DEF_RVV_X4_VLMUL_EXT_OPS (vint8mf2_t, 0) > DEF_RVV_X4_VLMUL_EXT_OPS (vint8m1_t, 0) > DEF_RVV_X4_VLMUL_EXT_OPS (vint8m2_t, 0) > -DEF_RVV_X4_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X4_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X4_VLMUL_EXT_OPS (vint16mf2_t, 0) > DEF_RVV_X4_VLMUL_EXT_OPS (vint16m1_t, 0) > DEF_RVV_X4_VLMUL_EXT_OPS (vint16m2_t, 0) > -DEF_RVV_X4_VLMUL_EXT_OPS 
(vint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X4_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X4_VLMUL_EXT_OPS (vint32m1_t, 0) > DEF_RVV_X4_VLMUL_EXT_OPS (vint32m2_t, 0) > -DEF_RVV_X4_VLMUL_EXT_OPS (vint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X4_VLMUL_EXT_OPS (vint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X4_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X4_VLMUL_EXT_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_X4_VLMUL_EXT_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_X4_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X4_VLMUL_EXT_OPS (vuint8mf4_t, 0) > DEF_RVV_X4_VLMUL_EXT_OPS (vuint8mf2_t, 0) > DEF_RVV_X4_VLMUL_EXT_OPS (vuint8m1_t, 0) > DEF_RVV_X4_VLMUL_EXT_OPS (vuint8m2_t, 0) > -DEF_RVV_X4_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X4_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X4_VLMUL_EXT_OPS (vuint16mf2_t, 0) > DEF_RVV_X4_VLMUL_EXT_OPS (vuint16m1_t, 0) > DEF_RVV_X4_VLMUL_EXT_OPS (vuint16m2_t, 0) > -DEF_RVV_X4_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X4_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X4_VLMUL_EXT_OPS (vuint32m1_t, 0) > DEF_RVV_X4_VLMUL_EXT_OPS (vuint32m2_t, 0) > -DEF_RVV_X4_VLMUL_EXT_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X4_VLMUL_EXT_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X4_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_RE= QUIRE_ZVE64) > +DEF_RVV_X4_VLMUL_EXT_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_X4_VLMUL_EXT_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_X4_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_RE= QUIRE_MIN_VLEN_64) > DEF_RVV_X4_VLMUL_EXT_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_X4_VLMUL_EXT_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_X4_VLMUL_EXT_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64) > DEF_RVV_X4_VLMUL_EXT_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64) > > -DEF_RVV_X8_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X8_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X8_VLMUL_EXT_OPS (vint8mf4_t, 0) > DEF_RVV_X8_VLMUL_EXT_OPS (vint8mf2_t, 0) > DEF_RVV_X8_VLMUL_EXT_OPS (vint8m1_t, 0) > -DEF_RVV_X8_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X8_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X8_VLMUL_EXT_OPS (vint16mf2_t, 0) > DEF_RVV_X8_VLMUL_EXT_OPS (vint16m1_t, 0) > -DEF_RVV_X8_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X8_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X8_VLMUL_EXT_OPS (vint32m1_t, 0) > -DEF_RVV_X8_VLMUL_EXT_OPS (vint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X8_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X8_VLMUL_EXT_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) > +DEF_RVV_X8_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X8_VLMUL_EXT_OPS (vuint8mf4_t, 0) > DEF_RVV_X8_VLMUL_EXT_OPS (vuint8mf2_t, 0) > DEF_RVV_X8_VLMUL_EXT_OPS (vuint8m1_t, 0) > -DEF_RVV_X8_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X8_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X8_VLMUL_EXT_OPS (vuint16mf2_t, 0) > DEF_RVV_X8_VLMUL_EXT_OPS (vuint16m1_t, 0) > -DEF_RVV_X8_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X8_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X8_VLMUL_EXT_OPS (vuint32m1_t, 0) > -DEF_RVV_X8_VLMUL_EXT_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X8_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_RE= QUIRE_ZVE64) > +DEF_RVV_X8_VLMUL_EXT_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) > 
+DEF_RVV_X8_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_RE= QUIRE_MIN_VLEN_64) > DEF_RVV_X8_VLMUL_EXT_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_X8_VLMUL_EXT_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64) > > -DEF_RVV_X16_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X16_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X16_VLMUL_EXT_OPS (vint8mf4_t, 0) > DEF_RVV_X16_VLMUL_EXT_OPS (vint8mf2_t, 0) > -DEF_RVV_X16_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X16_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X16_VLMUL_EXT_OPS (vint16mf2_t, 0) > -DEF_RVV_X16_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X16_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X16_VLMUL_EXT_OPS (vint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > +DEF_RVV_X16_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X16_VLMUL_EXT_OPS (vuint8mf4_t, 0) > DEF_RVV_X16_VLMUL_EXT_OPS (vuint8mf2_t, 0) > -DEF_RVV_X16_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X16_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X16_VLMUL_EXT_OPS (vuint16mf2_t, 0) > -DEF_RVV_X16_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X16_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_R= EQUIRE_ZVE64) > +DEF_RVV_X16_VLMUL_EXT_OPS (vuint32mf2_t, RVV_REQUIRE_MIN_VLEN_64) > +DEF_RVV_X16_VLMUL_EXT_OPS (vfloat32mf2_t, RVV_REQUIRE_ELEN_FP_32 | RVV_R= EQUIRE_MIN_VLEN_64) > > -DEF_RVV_X32_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X32_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X32_VLMUL_EXT_OPS (vint8mf4_t, 0) > -DEF_RVV_X32_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X32_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X32_VLMUL_EXT_OPS (vint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > +DEF_RVV_X32_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > DEF_RVV_X32_VLMUL_EXT_OPS (vuint8mf4_t, 0) > -DEF_RVV_X32_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X32_VLMUL_EXT_OPS (vuint16mf4_t, RVV_REQUIRE_MIN_VLEN_64) > > -DEF_RVV_X64_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_ZVE64) > -DEF_RVV_X64_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_X64_VLMUL_EXT_OPS (vint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > +DEF_RVV_X64_VLMUL_EXT_OPS (vuint8mf8_t, RVV_REQUIRE_MIN_VLEN_64) > > DEF_RVV_LMUL1_OPS (vint8m1_t, 0) > DEF_RVV_LMUL1_OPS (vint16m1_t, 0) > DEF_RVV_LMUL1_OPS (vint32m1_t, 0) > -DEF_RVV_LMUL1_OPS (vint64m1_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_LMUL1_OPS (vint64m1_t, RVV_REQUIRE_ELEN_64) > DEF_RVV_LMUL1_OPS (vuint8m1_t, 0) > DEF_RVV_LMUL1_OPS (vuint16m1_t, 0) > DEF_RVV_LMUL1_OPS (vuint32m1_t, 0) > -DEF_RVV_LMUL1_OPS (vuint64m1_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_LMUL1_OPS (vuint64m1_t, RVV_REQUIRE_ELEN_64) > DEF_RVV_LMUL1_OPS (vfloat32m1_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_LMUL1_OPS (vfloat64m1_t, RVV_REQUIRE_ELEN_FP_64) > > DEF_RVV_LMUL2_OPS (vint8m2_t, 0) > DEF_RVV_LMUL2_OPS (vint16m2_t, 0) > DEF_RVV_LMUL2_OPS (vint32m2_t, 0) > -DEF_RVV_LMUL2_OPS (vint64m2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_LMUL2_OPS (vint64m2_t, RVV_REQUIRE_ELEN_64) > DEF_RVV_LMUL2_OPS (vuint8m2_t, 0) > DEF_RVV_LMUL2_OPS (vuint16m2_t, 0) > DEF_RVV_LMUL2_OPS (vuint32m2_t, 0) > -DEF_RVV_LMUL2_OPS (vuint64m2_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_LMUL2_OPS (vuint64m2_t, RVV_REQUIRE_ELEN_64) > DEF_RVV_LMUL2_OPS (vfloat32m2_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_LMUL2_OPS (vfloat64m2_t, RVV_REQUIRE_ELEN_FP_64) > > DEF_RVV_LMUL4_OPS (vint8m4_t, 0) > DEF_RVV_LMUL4_OPS (vint16m4_t, 0) > DEF_RVV_LMUL4_OPS (vint32m4_t, 
0) > -DEF_RVV_LMUL4_OPS (vint64m4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_LMUL4_OPS (vint64m4_t, RVV_REQUIRE_ELEN_64) > DEF_RVV_LMUL4_OPS (vuint8m4_t, 0) > DEF_RVV_LMUL4_OPS (vuint16m4_t, 0) > DEF_RVV_LMUL4_OPS (vuint32m4_t, 0) > -DEF_RVV_LMUL4_OPS (vuint64m4_t, RVV_REQUIRE_ZVE64) > +DEF_RVV_LMUL4_OPS (vuint64m4_t, RVV_REQUIRE_ELEN_64) > DEF_RVV_LMUL4_OPS (vfloat32m4_t, RVV_REQUIRE_ELEN_FP_32) > DEF_RVV_LMUL4_OPS (vfloat64m4_t, RVV_REQUIRE_ELEN_FP_64) > > diff --git a/gcc/config/riscv/riscv-vector-builtins.cc b/gcc/config/riscv= /riscv-vector-builtins.cc > index bd16fe9db7d..01cea23d3e6 100644 > --- a/gcc/config/riscv/riscv-vector-builtins.cc > +++ b/gcc/config/riscv/riscv-vector-builtins.cc > @@ -2312,6 +2312,10 @@ register_builtin_type (vector_type_index type, tre= e eltype, machine_mode mode) > builtin_types[type].scalar =3D eltype; > builtin_types[type].scalar_ptr =3D build_pointer_type (eltype); > builtin_types[type].scalar_const_ptr =3D build_const_pointer (eltype); > + /* TODO: We currently just skip the register of the illegal RVV type. > + Ideally, we should report error message more friendly instead of > + reporting "unknown" type. Support more friendly error message in > + the future. */ > if (!riscv_v_ext_vector_mode_p (mode)) > return; > > @@ -2362,6 +2366,10 @@ register_vector_type (vector_type_index type) > > /* When vectype is NULL, the corresponding builtin type > is disabled according to '-march'. */ > + /* TODO: We currently just skip the register of the illegal RVV type. > + Ideally, we should report error message more friendly instead of > + reporting "unknown" type. Support more friendly error message in > + the future. */ > if (!vectype) > return; > tree id =3D get_identifier (vector_types[type].name); > @@ -2452,12 +2460,14 @@ check_required_extensions (const function_instanc= e &instance) > riscv_isa_flags |=3D RVV_REQUIRE_ELEN_FP_32; > if (TARGET_VECTOR_ELEN_FP_64) > riscv_isa_flags |=3D RVV_REQUIRE_ELEN_FP_64; > - if (TARGET_MIN_VLEN > 32) > - riscv_isa_flags |=3D RVV_REQUIRE_ZVE64; > + if (TARGET_VECTOR_ELEN_64) > + riscv_isa_flags |=3D RVV_REQUIRE_ELEN_64; > if (TARGET_64BIT) > riscv_isa_flags |=3D RVV_REQUIRE_RV64BIT; > if (TARGET_FULL_V) > riscv_isa_flags |=3D RVV_REQUIRE_FULL_V; > + if (TARGET_MIN_VLEN > 32) > + riscv_isa_flags |=3D RVV_REQUIRE_MIN_VLEN_64; > > uint64_t missing_extensions =3D required_extensions & ~riscv_isa_flags= ; > if (missing_extensions !=3D 0) > diff --git a/gcc/config/riscv/riscv-vector-builtins.h b/gcc/config/riscv/= riscv-vector-builtins.h > index 84dfe676773..8ffb9d33e33 100644 > --- a/gcc/config/riscv/riscv-vector-builtins.h > +++ b/gcc/config/riscv/riscv-vector-builtins.h > @@ -103,10 +103,11 @@ static const unsigned int CP_WRITE_CSR =3D 1U << 5; > > /* Bit values used to identify required extensions for RVV intrinsics. = */ > #define RVV_REQUIRE_RV64BIT (1 << 0) /* Require RV64. */ > -#define RVV_REQUIRE_ZVE64 (1 << 1) /* Require TARGET_MIN_VLEN > 32. = */ > +#define RVV_REQUIRE_ELEN_64 (1 << 1) /* Require TARGET_VECTOR_ELEN_64.= */ > #define RVV_REQUIRE_ELEN_FP_32 (1 << 2) /* Require FP ELEN >=3D 32. */ > #define RVV_REQUIRE_ELEN_FP_64 (1 << 3) /* Require FP ELEN >=3D 64. */ > #define RVV_REQUIRE_FULL_V (1 << 4) /* Require Full 'V' extension. */ > +#define RVV_REQUIRE_MIN_VLEN_64 (1 << 5) /* Require TARGET_MIN_VLE= N >=3D 64. */ > > /* Enumerates the RVV operand types. 
*/ > enum operand_type_index > diff --git a/gcc/config/riscv/riscv-vector-switch.def b/gcc/config/riscv/= riscv-vector-switch.def > index 8e4aed40338..3b944547b49 100644 > --- a/gcc/config/riscv/riscv-vector-switch.def > +++ b/gcc/config/riscv/riscv-vector-switch.def > @@ -83,16 +83,6 @@ TODO: FP16 vector needs support of 'zvfh', we don't su= pport it yet. */ > #define ENTRY(MODE, REQUIREMENT, VLMUL_FOR_MIN_VLEN32, RATIO_FOR_MIN_VLE= N32, \ > VLMUL_FOR_MIN_VLEN64, RATIO_FOR_MIN_VLEN64) > #endif > -/* Flag of FP32 vector. */ > -#ifndef TARGET_VECTOR_FP32 > -#define TARGET_VECTOR_FP32 = \ > - (TARGET_HARD_FLOAT && (TARGET_VECTOR_ELEN_FP_32 || TARGET_VECTOR_ELEN_= FP_64)) > -#endif > -/* Flag of FP64 vector. */ > -#ifndef TARGET_VECTOR_FP64 > -#define TARGET_VECTOR_FP64 = \ > - (TARGET_DOUBLE_FLOAT && TARGET_VECTOR_ELEN_FP_64 && (TARGET_MIN_VLEN >= 32)) > -#endif > > /* Mask modes. Disable VNx64BImode when TARGET_MIN_VLEN =3D=3D 32. */ > ENTRY (VNx64BI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 1) > @@ -129,35 +119,31 @@ ENTRY (VNx2HF, false, LMUL_1, 16, LMUL_F2, 32) > ENTRY (VNx1HF, false, LMUL_F2, 32, LMUL_F4, 64) > > /* SEW =3D 32. Disable VNx16SImode when TARGET_MIN_VLEN =3D=3D 32. > - For single-precision floating-point, we need TARGET_VECTOR_FP32 =3D= =3D > - RVV_ENABLE. */ > + For single-precision floating-point, we need TARGET_VECTOR_ELEN_FP_32= to be > + true. */ > ENTRY (VNx16SI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 4) > ENTRY (VNx8SI, true, LMUL_8, 4, LMUL_4, 8) > ENTRY (VNx4SI, true, LMUL_4, 8, LMUL_2, 16) > ENTRY (VNx2SI, true, LMUL_2, 16, LMUL_1, 32) > ENTRY (VNx1SI, true, LMUL_1, 32, LMUL_F2, 64) > > -ENTRY (VNx16SF, TARGET_VECTOR_FP32 && (TARGET_MIN_VLEN > 32), LMUL_RESER= VED, 0, > - LMUL_8, 4) > -ENTRY (VNx8SF, TARGET_VECTOR_FP32, LMUL_8, 4, LMUL_4, 8) > -ENTRY (VNx4SF, TARGET_VECTOR_FP32, LMUL_4, 8, LMUL_2, 16) > -ENTRY (VNx2SF, TARGET_VECTOR_FP32, LMUL_2, 16, LMUL_1, 32) > -ENTRY (VNx1SF, TARGET_VECTOR_FP32, LMUL_1, 32, LMUL_F2, 64) > - > -/* SEW =3D 64. Enable when TARGET_MIN_VLEN > 32. > - For double-precision floating-point, we need TARGET_VECTOR_FP64 =3D= =3D > - RVV_ENABLE. */ > -ENTRY (VNx8DI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_8, 8) > -ENTRY (VNx4DI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_4, 16) > -ENTRY (VNx2DI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_2, 32) > -ENTRY (VNx1DI, TARGET_MIN_VLEN > 32, LMUL_RESERVED, 0, LMUL_1, 64) > - > -ENTRY (VNx8DF, TARGET_VECTOR_FP64 && (TARGET_MIN_VLEN > 32), LMUL_RESERV= ED, 0, > - LMUL_8, 8) > -ENTRY (VNx4DF, TARGET_VECTOR_FP64, LMUL_RESERVED, 0, LMUL_4, 16) > -ENTRY (VNx2DF, TARGET_VECTOR_FP64, LMUL_RESERVED, 0, LMUL_2, 32) > -ENTRY (VNx1DF, TARGET_VECTOR_FP64, LMUL_RESERVED, 0, LMUL_1, 64) > - > -#undef TARGET_VECTOR_FP32 > -#undef TARGET_VECTOR_FP64 > +ENTRY (VNx16SF, TARGET_VECTOR_ELEN_FP_32, LMUL_RESERVED, 0, LMUL_8, 4) > +ENTRY (VNx8SF, TARGET_VECTOR_ELEN_FP_32, LMUL_8, 4, LMUL_4, 8) > +ENTRY (VNx4SF, TARGET_VECTOR_ELEN_FP_32, LMUL_4, 8, LMUL_2, 16) > +ENTRY (VNx2SF, TARGET_VECTOR_ELEN_FP_32, LMUL_2, 16, LMUL_1, 32) > +ENTRY (VNx1SF, TARGET_VECTOR_ELEN_FP_32, LMUL_1, 32, LMUL_F2, 64) > + > +/* SEW =3D 64. Enable when TARGET_VECTOR_ELEN_64 is true. > + For double-precision floating-point, we need TARGET_VECTOR_ELEN_FP_64= to be > + true. 
*/
> +ENTRY (VNx8DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_8, 8)
> +ENTRY (VNx4DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_4, 16)
> +ENTRY (VNx2DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_2, 32)
> +ENTRY (VNx1DI, TARGET_VECTOR_ELEN_64, LMUL_RESERVED, 0, LMUL_1, 64)
> +
> +ENTRY (VNx8DF, TARGET_VECTOR_ELEN_FP_64, LMUL_RESERVED, 0, LMUL_8, 8)
> +ENTRY (VNx4DF, TARGET_VECTOR_ELEN_FP_64, LMUL_RESERVED, 0, LMUL_4, 16)
> +ENTRY (VNx2DF, TARGET_VECTOR_ELEN_FP_64, LMUL_RESERVED, 0, LMUL_2, 32)
> +ENTRY (VNx1DF, TARGET_VECTOR_ELEN_FP_64, LMUL_RESERVED, 0, LMUL_1, 64)
> +
> #undef ENTRY
> diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/pr109479.c b/gcc/testsuite/gcc.target/riscv/rvv/base/pr109479.c
> new file mode 100644
> index 00000000000..6c8498af02b
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/pr109479.c
> @@ -0,0 +1,11 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -march=rv32gc_zve32x_zvl64b -mabi=ilp32d" } */
> +
> +void foo0 () {__rvv_int64m1_t t;} /* { dg-error {unknown type name '__rvv_int64m1_t'} } */
> +void foo1 () {__rvv_uint64m1_t t;} /* { dg-error {unknown type name '__rvv_uint64m1_t'} } */
> +void foo2 () {__rvv_int64m2_t t;} /* { dg-error {unknown type name '__rvv_int64m2_t'} } */
> +void foo3 () {__rvv_uint64m2_t t;} /* { dg-error {unknown type name '__rvv_uint64m2_t'} } */
> +void foo4 () {__rvv_int64m4_t t;} /* { dg-error {unknown type name '__rvv_int64m4_t'} } */
> +void foo5 () {__rvv_uint64m4_t t;} /* { dg-error {unknown type name '__rvv_uint64m4_t'} } */
> +void foo6 () {__rvv_int64m8_t t;} /* { dg-error {unknown type name '__rvv_int64m8_t'} } */
> +void foo7 () {__rvv_uint64m8_t t;} /* { dg-error {unknown type name '__rvv_uint64m8_t'} } */
> --
> 2.36.1
>
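Regarding the testcase coverage: something along the following lines might work as an additional test for the other fixed types. This is only an untested sketch; it assumes that plain zve32x (without zvl64b) keeps the minimum VLEN at 32, and that the __rvv_*mf*_t builtin type names follow the same naming pattern as the ones already used in pr109479.c.

/* { dg-do compile } */
/* { dg-options "-O3 -march=rv32gc_zve32x -mabi=ilp32d" } */

/* With a minimum VLEN of 32, neither the small fractional-LMUL types nor the
   64-bit-element types should be registered, so every declaration below is
   expected to produce an "unknown type name" diagnostic.  */
void foo0 () {__rvv_int8mf8_t t;}   /* { dg-error {unknown type name '__rvv_int8mf8_t'} } */
void foo1 () {__rvv_uint8mf8_t t;}  /* { dg-error {unknown type name '__rvv_uint8mf8_t'} } */
void foo2 () {__rvv_int16mf4_t t;}  /* { dg-error {unknown type name '__rvv_int16mf4_t'} } */
void foo3 () {__rvv_uint16mf4_t t;} /* { dg-error {unknown type name '__rvv_uint16mf4_t'} } */
void foo4 () {__rvv_int32mf2_t t;}  /* { dg-error {unknown type name '__rvv_int32mf2_t'} } */
void foo5 () {__rvv_uint32mf2_t t;} /* { dg-error {unknown type name '__rvv_uint32mf2_t'} } */
void foo6 () {__rvv_int64m1_t t;}   /* { dg-error {unknown type name '__rvv_int64m1_t'} } */
void foo7 () {__rvv_uint64m1_t t;}  /* { dg-error {unknown type name '__rvv_uint64m1_t'} } */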