From: Kito Cheng
Date: Fri, 28 Apr 2023 14:40:50 +0800
Subject: Re: [PATCH v2] RISC-V: Allow RVV VMS{Compare}(V1, V1) simplify to VMCLR
To: pan2.li@intel.com
Cc: gcc-patches@gcc.gnu.org, juzhe.zhong@rivai.ai, yanzhang.wang@intel.com
In-Reply-To: <20230428024641.3757002-1-pan2.li@intel.com>
References: <20230427143005.1781966-1-pan2.li@intel.com> <20230428024641.3757002-1-pan2.li@intel.com>

LGTM.

I thought it could optimize __riscv_vmseq_vv_i8m8_b1(v1, v1, vl) too,
but I don't know why (eq:VNx128BI (reg/v:VNx128QI 137 [ v1 ])
(reg/v:VNx128QI 137 [ v1 ])) is not evaluated to true; anyway, I guess
that should be your next step to investigate :)

On Fri, Apr 28, 2023 at 10:46 AM wrote:
>
> From: Pan Li
>
> When some RVV integer compare operators act on the same vector
> registers without a mask, they can be simplified to VMCLR.
>
> This patch allows ne, lt, ltu, gt and gtu to perform this kind
> of simplification by adding one new define_split.
>
> Given we have:
> vbool1_t test_shortcut_for_riscv_vmslt_case_0(vint8m8_t v1, size_t vl) {
>   return __riscv_vmslt_vv_i8m8_b1(v1, v1, vl);
> }
>
> Before this patch:
> vsetvli  zero,a2,e8,m8,ta,ma
> vl8re8.v v24,0(a1)
> vmslt.vv v8,v24,v24
> vsetvli  a5,zero,e8,m8,ta,ma
> vsm.v    v8,0(a0)
> ret
>
> After this patch:
> vsetvli zero,a2,e8,mf8,ta,ma
> vmclr.m v24                      <- optimized to vmclr.m
> vsetvli zero,a5,e8,mf8,ta,ma
> vsm.v   v24,0(a0)
> ret
>
> As above, we may have one instruction eliminated and require fewer
> vector registers.
>
> gcc/ChangeLog:
>
>         * config/riscv/vector.md: Add new define_split to perform
>         the simplification.
>
> gcc/testsuite/ChangeLog:
>
>         * gcc.target/riscv/rvv/base/integer_compare_insn_shortcut.c: New test.
>
> Signed-off-by: Pan Li
> Co-authored-by: kito-cheng
> ---
>  gcc/config/riscv/vector.md                    |  32 ++
>  .../rvv/base/integer_compare_insn_shortcut.c  | 291 ++++++++++++++++++
>  2 files changed, 323 insertions(+)
>  create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/base/integer_compare_insn_shortcut.c
>
> diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
> index b3d23441679..1642822d098 100644
> --- a/gcc/config/riscv/vector.md
> +++ b/gcc/config/riscv/vector.md
> @@ -7689,3 +7689,35 @@ (define_insn "@pred_fault_load<mode>"
>    "vleff.v\t%0,%3%p1"
>    [(set_attr "type" "vldff")
>     (set_attr "mode" "<MODE>")])
> +
> +;; -----------------------------------------------------------------------------
> +;; ---- Integer Compare Instructions Simplification
> +;; -----------------------------------------------------------------------------
> +;; Simplify to VMCLR.m.  Includes:
> +;; - 1. VMSNE
> +;; - 2. VMSLT
> +;; - 3. VMSLTU
> +;; - 4. VMSGT
> +;; - 5. VMSGTU
> +;; -----------------------------------------------------------------------------
> +(define_split
> +  [(set (match_operand:VB 0 "register_operand")
> +       (if_then_else:VB
> +         (unspec:VB
> +           [(match_operand:VB 1 "vector_all_trues_mask_operand")
> +            (match_operand 4 "vector_length_operand")
> +            (match_operand 5 "const_int_operand")
> +            (match_operand 6 "const_int_operand")
> +            (reg:SI VL_REGNUM)
> +            (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> +         (match_operand:VB 3 "vector_move_operand")
> +         (match_operand:VB 2 "vector_undef_operand")))]
> +  "TARGET_VECTOR"
> +  [(const_int 0)]
> +  {
> +    emit_insn (gen_pred_mov (<MODE>mode, operands[0], CONST1_RTX (<MODE>mode),
> +                            RVV_VUNDEF (<MODE>mode), operands[3],
> +                            operands[4], operands[5]));
> +    DONE;
> +  }
> +)
> diff --git a/gcc/testsuite/gcc.target/riscv/rvv/base/integer_compare_insn_shortcut.c b/gcc/testsuite/gcc.target/riscv/rvv/base/integer_compare_insn_shortcut.c
> new file mode 100644
> index 00000000000..8954adad09d
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/rvv/base/integer_compare_insn_shortcut.c
> @@ -0,0 +1,291 @@
> +/* { dg-do compile } */
> +/* { dg-options "-march=rv64gcv -mabi=lp64 -O3" } */
> +
> +#include "riscv_vector.h"
> +
> +vbool1_t test_shortcut_for_riscv_vmseq_case_0(vint8m8_t v1, size_t vl) {
> +  return __riscv_vmseq_vv_i8m8_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmseq_case_1(vint8m4_t v1, size_t vl) {
> +  return __riscv_vmseq_vv_i8m4_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmseq_case_2(vint8m2_t v1, size_t vl) {
> +  return __riscv_vmseq_vv_i8m2_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmseq_case_3(vint8m1_t v1, size_t vl) {
> +  return __riscv_vmseq_vv_i8m1_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmseq_case_4(vint8mf2_t v1, size_t vl) {
> +  return __riscv_vmseq_vv_i8mf2_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmseq_case_5(vint8mf4_t v1, size_t vl) {
> +  return __riscv_vmseq_vv_i8mf4_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmseq_case_6(vint8mf8_t v1, size_t vl) {
> +  return __riscv_vmseq_vv_i8mf8_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmsne_case_0(vint8m8_t v1, size_t vl) {
> +  return __riscv_vmsne_vv_i8m8_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmsne_case_1(vint8m4_t v1, size_t vl) {
> +  return __riscv_vmsne_vv_i8m4_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmsne_case_2(vint8m2_t v1, size_t vl) {
> +  return __riscv_vmsne_vv_i8m2_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmsne_case_3(vint8m1_t v1, size_t vl) {
> +  return __riscv_vmsne_vv_i8m1_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmsne_case_4(vint8mf2_t v1, size_t vl) {
> +  return __riscv_vmsne_vv_i8mf2_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmsne_case_5(vint8mf4_t v1, size_t vl) {
> +  return __riscv_vmsne_vv_i8mf4_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmsne_case_6(vint8mf8_t v1, size_t vl) {
> +  return __riscv_vmsne_vv_i8mf8_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmslt_case_0(vint8m8_t v1, size_t vl) {
> +  return __riscv_vmslt_vv_i8m8_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmslt_case_1(vint8m4_t v1, size_t vl) {
> +  return __riscv_vmslt_vv_i8m4_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmslt_case_2(vint8m2_t v1, size_t vl) {
> +  return __riscv_vmslt_vv_i8m2_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmslt_case_3(vint8m1_t v1, size_t vl) {
> +  return __riscv_vmslt_vv_i8m1_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmslt_case_4(vint8mf2_t v1, size_t vl) {
> +  return __riscv_vmslt_vv_i8mf2_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmslt_case_5(vint8mf4_t v1, size_t vl) {
> +  return __riscv_vmslt_vv_i8mf4_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmslt_case_6(vint8mf8_t v1, size_t vl) {
> +  return __riscv_vmslt_vv_i8mf8_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmsltu_case_0(vuint8m8_t v1, size_t vl) {
> +  return __riscv_vmsltu_vv_u8m8_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmsltu_case_1(vuint8m4_t v1, size_t vl) {
> +  return __riscv_vmsltu_vv_u8m4_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmsltu_case_2(vuint8m2_t v1, size_t vl) {
> +  return __riscv_vmsltu_vv_u8m2_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmsltu_case_3(vuint8m1_t v1, size_t vl) {
> +  return __riscv_vmsltu_vv_u8m1_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmsltu_case_4(vuint8mf2_t v1, size_t vl) {
> +  return __riscv_vmsltu_vv_u8mf2_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmsltu_case_5(vuint8mf4_t v1, size_t vl) {
> +  return __riscv_vmsltu_vv_u8mf4_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmsltu_case_6(vuint8mf8_t v1, size_t vl) {
> +  return __riscv_vmsltu_vv_u8mf8_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmsle_case_0(vint8m8_t v1, size_t vl) {
> +  return __riscv_vmsle_vv_i8m8_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmsle_case_1(vint8m4_t v1, size_t vl) {
> +  return __riscv_vmsle_vv_i8m4_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmsle_case_2(vint8m2_t v1, size_t vl) {
> +  return __riscv_vmsle_vv_i8m2_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmsle_case_3(vint8m1_t v1, size_t vl) {
> +  return __riscv_vmsle_vv_i8m1_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmsle_case_4(vint8mf2_t v1, size_t vl) {
> +  return __riscv_vmsle_vv_i8mf2_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmsle_case_5(vint8mf4_t v1, size_t vl) {
> +  return __riscv_vmsle_vv_i8mf4_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmsle_case_6(vint8mf8_t v1, size_t vl) {
> +  return __riscv_vmsle_vv_i8mf8_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmsleu_case_0(vuint8m8_t v1, size_t vl) {
> +  return __riscv_vmsleu_vv_u8m8_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmsleu_case_1(vuint8m4_t v1, size_t vl) {
> +  return __riscv_vmsleu_vv_u8m4_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmsleu_case_2(vuint8m2_t v1, size_t vl) {
> +  return __riscv_vmsleu_vv_u8m2_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmsleu_case_3(vuint8m1_t v1, size_t vl) {
> +  return __riscv_vmsleu_vv_u8m1_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmsleu_case_4(vuint8mf2_t v1, size_t vl) {
> +  return __riscv_vmsleu_vv_u8mf2_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmsleu_case_5(vuint8mf4_t v1, size_t vl) {
> +  return __riscv_vmsleu_vv_u8mf4_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmsleu_case_6(vuint8mf8_t v1, size_t vl) {
> +  return __riscv_vmsleu_vv_u8mf8_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmsgt_case_0(vint8m8_t v1, size_t vl) {
> +  return __riscv_vmsgt_vv_i8m8_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmsgt_case_1(vint8m4_t v1, size_t vl) {
> +  return __riscv_vmsgt_vv_i8m4_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmsgt_case_2(vint8m2_t v1, size_t vl) {
> +  return __riscv_vmsgt_vv_i8m2_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmsgt_case_3(vint8m1_t v1, size_t vl) {
> +  return __riscv_vmsgt_vv_i8m1_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmsgt_case_4(vint8mf2_t v1, size_t vl) {
> +  return __riscv_vmsgt_vv_i8mf2_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmsgt_case_5(vint8mf4_t v1, size_t vl) {
> +  return __riscv_vmsgt_vv_i8mf4_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmsgt_case_6(vint8mf8_t v1, size_t vl) {
> +  return __riscv_vmsgt_vv_i8mf8_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmsgtu_case_0(vuint8m8_t v1, size_t vl) {
> +  return __riscv_vmsgtu_vv_u8m8_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmsgtu_case_1(vuint8m4_t v1, size_t vl) {
> +  return __riscv_vmsgtu_vv_u8m4_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmsgtu_case_2(vuint8m2_t v1, size_t vl) {
> +  return __riscv_vmsgtu_vv_u8m2_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmsgtu_case_3(vuint8m1_t v1, size_t vl) {
> +  return __riscv_vmsgtu_vv_u8m1_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmsgtu_case_4(vuint8mf2_t v1, size_t vl) {
> +  return __riscv_vmsgtu_vv_u8mf2_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmsgtu_case_5(vuint8mf4_t v1, size_t vl) {
> +  return __riscv_vmsgtu_vv_u8mf4_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmsgtu_case_6(vuint8mf8_t v1, size_t vl) {
> +  return __riscv_vmsgtu_vv_u8mf8_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmsge_case_0(vint8m8_t v1, size_t vl) {
> +  return __riscv_vmsge_vv_i8m8_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmsge_case_1(vint8m4_t v1, size_t vl) {
> +  return __riscv_vmsge_vv_i8m4_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmsge_case_2(vint8m2_t v1, size_t vl) {
> +  return __riscv_vmsge_vv_i8m2_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmsge_case_3(vint8m1_t v1, size_t vl) {
> +  return __riscv_vmsge_vv_i8m1_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmsge_case_4(vint8mf2_t v1, size_t vl) {
> +  return __riscv_vmsge_vv_i8mf2_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmsge_case_5(vint8mf4_t v1, size_t vl) {
> +  return __riscv_vmsge_vv_i8mf4_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmsge_case_6(vint8mf8_t v1, size_t vl) {
> +  return __riscv_vmsge_vv_i8mf8_b64(v1, v1, vl);
> +}
> +
> +vbool1_t test_shortcut_for_riscv_vmsgeu_case_0(vuint8m8_t v1, size_t vl) {
> +  return __riscv_vmsgeu_vv_u8m8_b1(v1, v1, vl);
> +}
> +
> +vbool2_t test_shortcut_for_riscv_vmsgeu_case_1(vuint8m4_t v1, size_t vl) {
> +  return __riscv_vmsgeu_vv_u8m4_b2(v1, v1, vl);
> +}
> +
> +vbool4_t test_shortcut_for_riscv_vmsgeu_case_2(vuint8m2_t v1, size_t vl) {
> +  return __riscv_vmsgeu_vv_u8m2_b4(v1, v1, vl);
> +}
> +
> +vbool8_t test_shortcut_for_riscv_vmsgeu_case_3(vuint8m1_t v1, size_t vl) {
> +  return __riscv_vmsgeu_vv_u8m1_b8(v1, v1, vl);
> +}
> +
> +vbool16_t test_shortcut_for_riscv_vmsgeu_case_4(vuint8mf2_t v1, size_t vl) {
> +  return __riscv_vmsgeu_vv_u8mf2_b16(v1, v1, vl);
> +}
> +
> +vbool32_t test_shortcut_for_riscv_vmsgeu_case_5(vuint8mf4_t v1, size_t vl) {
> +  return __riscv_vmsgeu_vv_u8mf4_b32(v1, v1, vl);
> +}
> +
> +vbool64_t test_shortcut_for_riscv_vmsgeu_case_6(vuint8mf8_t v1, size_t vl) {
> +  return __riscv_vmsgeu_vv_u8mf8_b64(v1, v1, vl);
> +}
> +
> +/* { dg-final { scan-assembler-times {vmseq\.vv\sv[0-9],\s*v[0-9],\s*v[0-9]} 7 } } */
> +/* { dg-final { scan-assembler-times {vmsle\.vv\sv[0-9],\s*v[0-9],\s*v[0-9]} 7 } } */
> +/* { dg-final { scan-assembler-times {vmsleu\.vv\sv[0-9],\s*v[0-9],\s*v[0-9]} 7 } } */
> +/* { dg-final { scan-assembler-times {vmsge\.vv\sv[0-9],\s*v[0-9],\s*v[0-9]} 7 } } */
> +/* { dg-final { scan-assembler-times {vmsgeu\.vv\sv[0-9],\s*v[0-9],\s*v[0-9]} 7 } } */
> +/* { dg-final { scan-assembler-times {vmclr\.m\sv[0-9]} 35 } } */
> --
> 2.34.1
>