From: Kito Cheng
Date: Wed, 27 Dec 2023 10:47:33 +0800
Subject: Re: [PATCH] RISC-V: Add crypto machine descriptions
To: Feng Wang
Cc: gcc-patches, Jeff Law, "juzhe.zhong@rivai.ai"
In-Reply-To: <202312221004013858893@eswincomputing.com>
References: <20231222015936.8935-1-wangfeng@eswincomputing.com> <202312221004013858893@eswincomputing.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Thanks Feng, the patch LGTM from my side. I am happy to accept the vector
crypto stuff for GCC 14: it is mostly intrinsic stuff, and the few
non-intrinsic pieces (e.g. vrol, vctz) are low-risk enough.

On Fri, Dec 22, 2023 at 10:04 AM Feng Wang <wangfeng@eswincomputing.com> wrote:
>
> 2023-12-22 09:59 Feng Wang wrote:
>
> Sorry for forgetting to add the patch version number. It should be [PATCH v8 2/3]
>
> > >Patch v8: Remove unused iterator and add newline at the end.
> > >Patch v7: Remove mode of const_int_operand and typo. Add
> > >          newline at the end and comment at the beginning.
> > >Patch v6: Swap the operand order of vandn.vv
> > >Patch v5: Add vec_duplicate operator.
> > >Patch v4: Add process of SEW=64 in RV32 system.
> > >Patch v3: Modify constraints for crypto vector.
> > >Patch v2: Add crypto vector ins into RATIO attr and use vr as
> > >          destination register.
> > > > >
> > > > >This patch adds the crypto machine descriptions (vector-crypto.md) and
> > > > >some new iterators which are used by the crypto vector extensions.
> > > > >
> > > > >Co-Authored by: Songhe Zhu
> > > > >Co-Authored by: Ciyan Pan
> > > > >gcc/ChangeLog:
> > > > >
> > > > >        * config/riscv/iterators.md: Add rotate insn name.
> > > > >        * config/riscv/riscv.md: Add new insns name for crypto vector.
> > > > >        * config/riscv/vector-iterators.md: Add new iterators for crypto vector.
> > > > >        * config/riscv/vector.md: Add the corresponding attr for crypto vector.
> > > > >        * config/riscv/vector-crypto.md: New file. The machine descriptions for crypto vector.
> > > > >---
> > > > > gcc/config/riscv/iterators.md        |   4 +-
> > > > > gcc/config/riscv/riscv.md            |  33 +-
> > > > > gcc/config/riscv/vector-crypto.md    | 654 +++++++++++++++++++++++++++
> > > > > gcc/config/riscv/vector-iterators.md |  36 ++
> > > > > gcc/config/riscv/vector.md           |  55 ++-
> > > > > 5 files changed, 761 insertions(+), 21 deletions(-)
> > > > > create mode 100755 gcc/config/riscv/vector-crypto.md
> > > > >
> > > > >diff --git a/gcc/config/riscv/iterators.md b/gcc/config/riscv/iterators.md
> > > > >index ecf033f2fa7..f332fba7031 100644
> > > > >--- a/gcc/config/riscv/iterators.md
> > > > >+++ b/gcc/config/riscv/iterators.md
> > > > >@@ -304,7 +304,9 @@
> > > > >     (umax "maxu")
> > > > >     (clz "clz")
> > > > >     (ctz "ctz")
> > > > >-    (popcount "cpop")])
> > > > >+    (popcount "cpop")
> > > > >+    (rotate "rol")
> > > > >+    (rotatert "ror")])
> > > > >
> > > > > ;; -------------------------------------------------------------------
> > > > > ;; Int Iterators.
> > > > >diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
> > > > >index ee8b71c22aa..88019a46a53 100644
> > > > >--- a/gcc/config/riscv/riscv.md
> > > > >+++ b/gcc/config/riscv/riscv.md
> > > > >@@ -427,6 +427,34 @@
> > > > > ;; vcompress    vector compress instruction
> > > > > ;; vmov         whole vector register move
> > > > > ;; vector       unknown vector instruction
> > > > >+;; 17. Crypto Vector instructions
> > > > >+;; vandn        crypto vector bitwise and-not instructions
> > > > >+;; vbrev        crypto vector reverse bits in elements instructions
> > > > >+;; vbrev8       crypto vector reverse bits in bytes instructions
> > > > >+;; vrev8        crypto vector reverse bytes instructions
> > > > >+;; vclz         crypto vector count leading zeros instructions
> > > > >+;; vctz         crypto vector count trailing zeros instructions
> > > > >+;; vrol         crypto vector rotate left instructions
> > > > >+;; vror         crypto vector rotate right instructions
> > > > >+;; vwsll        crypto vector widening shift left logical instructions
> > > > >+;; vclmul       crypto vector carry-less multiply - return low half instructions
> > > > >+;; vclmulh      crypto vector carry-less multiply - return high half instructions
> > > > >+;; vghsh        crypto vector add-multiply over GHASH Galois-Field instructions
> > > > >+;; vgmul        crypto vector multiply over GHASH Galois-Field instructions
> > > > >+;; vaesef       crypto vector AES final-round encryption instructions
> > > > >+;; vaesem       crypto vector AES middle-round encryption instructions
> > > > >+;; vaesdf       crypto vector AES final-round decryption instructions
> > > > >+;; vaesdm       crypto vector AES middle-round decryption instructions
> > > > >+;; vaeskf1      crypto vector AES-128 Forward KeySchedule generation instructions
> > > > >+;; vaeskf2      crypto vector AES-256 Forward KeySchedule generation instructions
> > > > >+;; vaesz        crypto vector AES round zero encryption/decryption instructions
> > > > >+;; vsha2ms      crypto vector SHA-2 message schedule instructions
> > > > >+;; vsha2ch      crypto vector SHA-2 two rounds of compression instructions
> > > > >+;; vsha2cl      crypto vector SHA-2 two rounds of compression instructions
> > > > >+;; vsm4k        crypto vector SM4 KeyExpansion instructions
> > > > >+;; vsm4r        crypto vector SM4 Rounds instructions
> > > > >+;; vsm3me       crypto vector SM3 Message Expansion instructions
> > > > >+;; vsm3c        crypto vector SM3 Compression instructions
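[For reviewers unfamiliar with the Zvbb mnemonics in the comment list above, a small reference model may help. This is an illustrative Python sketch of the per-element semantics of a few of them, assuming SEW=64; the function names and the SEW choice are mine, not from the patch or the spec text.]

```python
# Illustrative per-element semantics of some Zvbb instructions listed
# above, modeled on 64-bit elements (SEW = 64).  A review aid only,
# not part of the patch.
SEW = 64
MASK = (1 << SEW) - 1

def vandn(vs2, vs1):
    # vandn.vv vd, vs2, vs1: AND of vs2 with the bitwise NOT of vs1.
    return vs2 & ~vs1 & MASK

def vrol(x, amt):
    # vrol: rotate left; only the low log2(SEW) bits of amt are used.
    amt %= SEW
    return ((x << amt) | (x >> (SEW - amt))) & MASK

def vror(x, amt):
    # vror: rotate right, expressed as the complementary left rotate.
    return vrol(x, (SEW - amt % SEW) % SEW)

def vbrev8(x):
    # vbrev8: reverse the bit order within each byte of the element.
    out = 0
    for i in range(SEW // 8):
        b = (x >> (8 * i)) & 0xFF
        out |= int(f"{b:08b}"[::-1], 2) << (8 * i)
    return out

def vrev8(x):
    # vrev8: reverse the byte order within the element.
    return int.from_bytes(x.to_bytes(SEW // 8, "little"), "big")
```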
> > > > > (define_attr "type"
> > > > >   "unknown,branch,jump,jalr,ret,call,load,fpload,store,fpstore,
> > > > >    mtc,mfc,const,arith,logical,shift,slt,imul,idiv,move,fmove,fadd,fmul,
> > > > >@@ -446,7 +474,9 @@
> > > > >    vired,viwred,vfredu,vfredo,vfwredu,vfwredo,
> > > > >    vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,
> > > > >    vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,
> > > > >-   vgather,vcompress,vmov,vector"
> > > > >+   vgather,vcompress,vmov,vector,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vcpop,vrol,vror,vwsll,
> > > > >+   vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaeskf1,vaeskf2,vaesz,
> > > > >+   vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c"
> > > > >   (cond [(eq_attr "got" "load") (const_string "load")
> > > > >
> > > > >          ;; If a doubleword move uses these expensive instructions,
> > > > >@@ -3777,6 +3807,7 @@
> > > > > (include "thead.md")
> > > > > (include "generic-ooo.md")
> > > > > (include "vector.md")
> > > > >+(include "vector-crypto.md")
> > > > > (include "zicond.md")
> > > > > (include "sfb.md")
> > > > > (include "zc.md")
> > > > >diff --git a/gcc/config/riscv/vector-crypto.md b/gcc/config/riscv/vector-crypto.md
> > > > >new file mode 100755
> > > > >index 00000000000..9235bdac548
> > > > >--- /dev/null
> > > > >+++ b/gcc/config/riscv/vector-crypto.md
> > > > >@@ -0,0 +1,654 @@
> > > > >+;; Machine description for the RISC-V Vector Crypto extensions.
> > > > >+;; Copyright (C) 2023 Free Software Foundation, Inc.
> > > > >+
> > > > >+;; This file is part of GCC.
> > > > >+
> > > > >+;; GCC is free software; you can redistribute it and/or modify
> > > > >+;; it under the terms of the GNU General Public License as published by
> > > > >+;; the Free Software Foundation; either version 3, or (at your option)
> > > > >+;; any later version.
> > > > >+
> > > > >+;; GCC is distributed in the hope that it will be useful,
> > > > >+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
> > > > >+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > > > >+;; GNU General Public License for more details.
> > > > >+
> > > > >+;; You should have received a copy of the GNU General Public License
> > > > >+;; along with GCC; see the file COPYING3.  If not see
> > > > >+;; <http://www.gnu.org/licenses/>.
> > > > >+
> > > > >+(define_c_enum "unspec" [
> > > > >+    ;; Zvbb unspecs
> > > > >+    UNSPEC_VBREV
> > > > >+    UNSPEC_VBREV8
> > > > >+    UNSPEC_VREV8
> > > > >+    UNSPEC_VCLMUL
> > > > >+    UNSPEC_VCLMULH
> > > > >+    UNSPEC_VGHSH
> > > > >+    UNSPEC_VGMUL
> > > > >+    UNSPEC_VAESEF
> > > > >+    UNSPEC_VAESEFVV
> > > > >+    UNSPEC_VAESEFVS
> > > > >+    UNSPEC_VAESEM
> > > > >+    UNSPEC_VAESEMVV
> > > > >+    UNSPEC_VAESEMVS
> > > > >+    UNSPEC_VAESDF
> > > > >+    UNSPEC_VAESDFVV
> > > > >+    UNSPEC_VAESDFVS
> > > > >+    UNSPEC_VAESDM
> > > > >+    UNSPEC_VAESDMVV
> > > > >+    UNSPEC_VAESDMVS
> > > > >+    UNSPEC_VAESZ
> > > > >+    UNSPEC_VAESZVVNULL
> > > > >+    UNSPEC_VAESZVS
> > > > >+    UNSPEC_VAESKF1
> > > > >+    UNSPEC_VAESKF2
> > > > >+    UNSPEC_VSHA2MS
> > > > >+    UNSPEC_VSHA2CH
> > > > >+    UNSPEC_VSHA2CL
> > > > >+    UNSPEC_VSM4K
> > > > >+    UNSPEC_VSM4R
> > > > >+    UNSPEC_VSM4RVV
> > > > >+    UNSPEC_VSM4RVS
> > > > >+    UNSPEC_VSM3ME
> > > > >+    UNSPEC_VSM3C
> > > > >+])
> > > > >+
> > > > >+(define_int_attr rev [(UNSPEC_VBREV "brev") (UNSPEC_VBREV8 "brev8") (UNSPEC_VREV8 "rev8")])
> > > > >+
> > > > >+(define_int_attr h [(UNSPEC_VCLMUL "") (UNSPEC_VCLMULH "h")])
> > > > >+
> > > > >+(define_int_attr vv_ins_name [(UNSPEC_VGMUL    "gmul" ) (UNSPEC_VAESEFVV "aesef")
> > > > >+                              (UNSPEC_VAESEMVV "aesem") (UNSPEC_VAESDFVV "aesdf")
> > > > >+                              (UNSPEC_VAESDMVV "aesdm") (UNSPEC_VAESEFVS "aesef")
> > > > >+                              (UNSPEC_VAESEMVS "aesem") (UNSPEC_VAESDFVS "aesdf")
> > > > >+                              (UNSPEC_VAESDMVS "aesdm") (UNSPEC_VAESZVS  "aesz" )
> > > > >+                              (UNSPEC_VSM4RVV  "sm4r" ) (UNSPEC_VSM4RVS  "sm4r" )])
> > > > >+
> > > > >+(define_int_attr vv_ins1_name [(UNSPEC_VGHSH "ghsh") (UNSPEC_VSHA2MS "sha2ms")
> > > > >+                               (UNSPEC_VSHA2CH "sha2ch") (UNSPEC_VSHA2CL "sha2cl")])
> > > > >+
> > > > >+(define_int_attr vi_ins_name [(UNSPEC_VAESKF1 "aeskf1") (UNSPEC_VSM4K "sm4k")])
> > > > >+
> > > > >+(define_int_attr vi_ins1_name [(UNSPEC_VAESKF2 "aeskf2") (UNSPEC_VSM3C "sm3c")])
> > > > >+
> > > > >+(define_int_attr ins_type [(UNSPEC_VGMUL    "vv") (UNSPEC_VAESEFVV "vv")
> > > > >+                           (UNSPEC_VAESEMVV "vv") (UNSPEC_VAESDFVV "vv")
> > > > >+                           (UNSPEC_VAESDMVV "vv") (UNSPEC_VAESEFVS "vs")
> > > > >+                           (UNSPEC_VAESEMVS "vs") (UNSPEC_VAESDFVS "vs")
> > > > >+                           (UNSPEC_VAESDMVS "vs") (UNSPEC_VAESZVS  "vs")
> > > > >+                           (UNSPEC_VSM4RVV  "vv") (UNSPEC_VSM4RVS  "vs")])
> > > > >+
> > > > >+(define_int_iterator UNSPEC_VRBB8 [UNSPEC_VBREV UNSPEC_VBREV8 UNSPEC_VREV8])
> > > > >+
> > > > >+(define_int_iterator UNSPEC_CLMUL [UNSPEC_VCLMUL UNSPEC_VCLMULH])
> > > > >+
> > > > >+(define_int_iterator UNSPEC_CRYPTO_VV [UNSPEC_VGMUL    UNSPEC_VAESEFVV UNSPEC_VAESEMVV
> > > > >+                                       UNSPEC_VAESDFVV UNSPEC_VAESDMVV UNSPEC_VAESEFVS
> > > > >+                                       UNSPEC_VAESEMVS UNSPEC_VAESDFVS UNSPEC_VAESDMVS
> > > > >+                                       UNSPEC_VAESZVS  UNSPEC_VSM4RVV  UNSPEC_VSM4RVS])
> > > > >+
> > > > >+(define_int_iterator UNSPEC_VGNHAB [UNSPEC_VGHSH UNSPEC_VSHA2MS UNSPEC_VSHA2CH UNSPEC_VSHA2CL])
> > > > >+
> > > > >+(define_int_iterator UNSPEC_CRYPTO_VI [UNSPEC_VAESKF1 UNSPEC_VSM4K])
> > > > >+
> > > > >+(define_int_iterator UNSPEC_CRYPTO_VI1 [UNSPEC_VAESKF2 UNSPEC_VSM3C])
> > > > >+
> > > > >+;; zvbb instructions patterns.
> > > > >+;; vandn.vv vandn.vx vrol.vv vrol.vx
> > > > >+;; vror.vv vror.vx vror.vi
> > > > >+;; vwsll.vv vwsll.vx vwsll.vi
> > > > >+(define_insn "@pred_vandn<mode>"
> > > > >+  [(set (match_operand:VI 0 "register_operand"        "=vd, vr, vd, vr")
> > > > >+     (if_then_else:VI
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1, vm,Wc1")
> > > > >+          (match_operand 5 "vector_length_operand"    " rK, rK, rK, rK")
> > > > >+          (match_operand 6 "const_int_operand"        "  i,  i,  i,  i")
> > > > >+          (match_operand 7 "const_int_operand"        "  i,  i,  i,  i")
> > > > >+          (match_operand 8 "const_int_operand"        "  i,  i,  i,  i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (and:VI
> > > > >+         (not:VI (match_operand:VI 4 "register_operand" "vr, vr, vr, vr"))
> > > > >+         (match_operand:VI 3 "register_operand"         "vr, vr, vr, vr"))
> > > > >+       (match_operand:VI 2 "vector_merge_operand"       "vu, vu,  0,  0")))]
> > > > >+  "TARGET_ZVBB || TARGET_ZVKB"
> > > > >+  "vandn.vv\t%0,%3,%4%p1"
> > > > >+  [(set_attr "type" "vandn")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+(define_insn "@pred_vandn<mode>_scalar"
> > > > >+  [(set (match_operand:VI_QHS 0 "register_operand"    "=vd, vr,vd, vr")
> > > > >+     (if_then_else:VI_QHS
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
> > > > >+          (match_operand 5 "vector_length_operand"    " rK, rK,rK, rK")
> > > > >+          (match_operand 6 "const_int_operand"        "  i,  i, i,  i")
> > > > >+          (match_operand 7 "const_int_operand"        "  i,  i, i,  i")
> > > > >+          (match_operand 8 "const_int_operand"        "  i,  i, i,  i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (and:VI_QHS
> > > > >+         (not:VI_QHS
> > > > >+           (vec_duplicate:VI_QHS
> > > > >+             (match_operand:<VEL> 4 "register_operand" "  r,  r, r,  r")))
> > > > >+         (match_operand:VI_QHS 3 "register_operand"    " vr, vr,vr, vr"))
> > > > >+       (match_operand:VI_QHS 2 "vector_merge_operand"  " vu, vu, 0,  0")))]
> > > > >+  "TARGET_ZVBB || TARGET_ZVKB"
> > > > >+  "vandn.vx\t%0,%3,%4%p1"
> > > > >+  [(set_attr "type" "vandn")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+;; Handle GET_MODE_INNER (mode) = DImode.  We need to split them since
> > > > >+;; we need to deal with SEW = 64 in RV32 system.
> > > > >+(define_expand "@pred_vandn<mode>_scalar"
> > > > >+  [(set (match_operand:VI_D 0 "register_operand")
> > > > >+     (if_then_else:VI_D
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand")
> > > > >+          (match_operand 5 "vector_length_operand")
> > > > >+          (match_operand 6 "const_int_operand")
> > > > >+          (match_operand 7 "const_int_operand")
> > > > >+          (match_operand 8 "const_int_operand")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (and:VI_D
> > > > >+         (not:VI_D
> > > > >+           (vec_duplicate:VI_D
> > > > >+             (match_operand:<VEL> 4 "reg_or_int_operand")))
> > > > >+         (match_operand:VI_D 3 "register_operand"))
> > > > >+       (match_operand:VI_D 2 "vector_merge_operand")))]
> > > > >+  "TARGET_ZVBB || TARGET_ZVKB"
> > > > >+{
> > > > >+  if (riscv_vector::sew64_scalar_helper (
> > > > >+	operands,
> > > > >+	/* scalar op */&operands[4],
> > > > >+	/* vl */operands[5],
> > > > >+	<MODE>mode,
> > > > >+	false,
> > > > >+	[] (rtx *operands, rtx boardcast_scalar) {
> > > > >+	  emit_insn (gen_pred_vandn<mode> (operands[0], operands[1],
> > > > >+	       operands[2], operands[3], boardcast_scalar, operands[5],
> > > > >+	       operands[6], operands[7], operands[8]));
> > > > >+        },
> > > > >+	(riscv_vector::avl_type) INTVAL (operands[8])))
> > > > >+    DONE;
> > > > >+})
> > > > >+
> > > > >+(define_insn "*pred_vandn<mode>_scalar"
> > > > >+  [(set (match_operand:VI_D 0 "register_operand"      "=vd, vr,vd, vr")
> > > > >+     (if_then_else:VI_D
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand" " vm,Wc1,vm,Wc1")
> > > > >+          (match_operand 5 "vector_length_operand"    " rK, rK,rK, rK")
> > > > >+          (match_operand 6 "const_int_operand"        "  i,  i, i,  i")
> > > > >+          (match_operand 7 "const_int_operand"        "  i,  i, i,  i")
> > > > >+          (match_operand 8 "const_int_operand"        "  i,  i, i,  i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (and:VI_D
> > > > >+         (not:VI_D
> > > > >+           (vec_duplicate:VI_D
> > > > >+             (match_operand:<VEL> 4 "reg_or_0_operand" " rJ, rJ,rJ, rJ")))
> > > > >+         (match_operand:VI_D 3 "register_operand"      " vr, vr,vr, vr"))
> > > > >+       (match_operand:VI_D 2 "vector_merge_operand"    " vu, vu, 0,  0")))]
> > > > >+  "TARGET_ZVBB || TARGET_ZVKB"
> > > > >+  "vandn.vx\t%0,%3,%z4%p1"
> > > > >+  [(set_attr "type" "vandn")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+(define_insn "*pred_vandn<mode>_extended_scalar"
> > > > >+  [(set (match_operand:VI_D 0 "register_operand"       "=vd, vr,vd, vr")
> > > > >+     (if_then_else:VI_D
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand"  " vm,Wc1,vm,Wc1")
> > > > >+          (match_operand 5 "vector_length_operand"     " rK, rK,rK, rK")
> > > > >+          (match_operand 6 "const_int_operand"         "  i,  i, i,  i")
> > > > >+          (match_operand 7 "const_int_operand"         "  i,  i, i,  i")
> > > > >+          (match_operand 8 "const_int_operand"         "  i,  i, i,  i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (and:VI_D
> > > > >+         (not:VI_D
> > > > >+           (vec_duplicate:VI_D
> > > > >+             (sign_extend:<VEL>
> > > > >+               (match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ,rJ, rJ"))))
> > > > >+         (match_operand:VI_D 3 "register_operand"          " vr, vr,vr, vr"))
> > > > >+       (match_operand:VI_D 2 "vector_merge_operand"        " vu, vu, 0,  0")))]
> > > > >+  "TARGET_ZVBB || TARGET_ZVKB"
> > > > >+  "vandn.vx\t%0,%3,%z4%p1"
> > > > >+  [(set_attr "type" "vandn")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+(define_insn "@pred_v<bitmanip_optab><mode>"
> > > > >+  [(set (match_operand:VI 0 "register_operand"        "=vd,vd, vr, vr")
> > > > >+     (if_then_else:VI
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1")
> > > > >+          (match_operand 5 "vector_length_operand"   " rK,rK, rK, rK")
> > > > >+          (match_operand 6 "const_int_operand"       "  i, i,  i,  i")
> > > > >+          (match_operand 7 "const_int_operand"       "  i, i,  i,  i")
> > > > >+          (match_operand 8 "const_int_operand"       "  i, i,  i,  i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (bitmanip_rotate:VI
> > > > >+         (match_operand:VI 3 "register_operand"      " vr,vr, vr, vr")
> > > > >+         (match_operand:VI 4 "register_operand"      " vr,vr, vr, vr"))
> > > > >+       (match_operand:VI 2 "vector_merge_operand"    " vu, 0, vu,  0")))]
> > > > >+  "TARGET_ZVBB || TARGET_ZVKB"
> > > > >+  "v<bitmanip_insn>.vv\t%0,%3,%4%p1"
> > > > >+  [(set_attr "type" "v<bitmanip_insn>")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+(define_insn "@pred_v<bitmanip_optab><mode>_scalar"
> > > > >+  [(set (match_operand:VI 0 "register_operand"        "=vd,vd, vr, vr")
> > > > >+     (if_then_else:VI
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1")
> > > > >+          (match_operand 5 "vector_length_operand"    " rK,rK, rK, rK")
> > > > >+          (match_operand 6 "const_int_operand"        "  i, i,  i,  i")
> > > > >+          (match_operand 7 "const_int_operand"        "  i, i,  i,  i")
> > > > >+          (match_operand 8 "const_int_operand"        "  i, i,  i,  i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (bitmanip_rotate:VI
> > > > >+         (match_operand:VI 3 "register_operand"       " vr,vr, vr, vr")
> > > > >+         (match_operand 4 "pmode_register_operand"    "  r, r,  r,  r"))
> > > > >+       (match_operand:VI 2 "vector_merge_operand"     " vu, 0, vu,  0")))]
> > > > >+  "TARGET_ZVBB || TARGET_ZVKB"
> > > > >+  "v<bitmanip_insn>.vx\t%0,%3,%4%p1"
> > > > >+  [(set_attr "type" "v<bitmanip_insn>")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+(define_insn "*pred_vror<mode>_scalar"
> > > > >+  [(set (match_operand:VI 0 "register_operand"        "=vd,vd, vr,vr")
> > > > >+     (if_then_else:VI
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand" " vm,vm,Wc1,Wc1")
> > > > >+          (match_operand 5 "vector_length_operand"    " rK,rK, rK, rK")
> > > > >+          (match_operand 6 "const_int_operand"        "  i, i,  i,  i")
> > > > >+          (match_operand 7 "const_int_operand"        "  i, i,  i,  i")
> > > > >+          (match_operand 8 "const_int_operand"        "  i, i,  i,  i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (rotatert:VI
> > > > >+         (match_operand:VI 3 "register_operand"       " vr,vr, vr, vr")
> > > > >+         (match_operand 4 "const_csr_operand"         "  K, K,  K,  K"))
> > > > >+       (match_operand:VI 2 "vector_merge_operand"     " vu, 0, vu,  0")))]
> > > > >+  "TARGET_ZVBB || TARGET_ZVKB"
> > > > >+  "vror.vi\t%0,%3,%4%p1"
> > > > >+  [(set_attr "type" "vror")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+(define_insn "@pred_vwsll<mode>"
> > > > >+  [(set (match_operand:VWEXTI 0 "register_operand"     "=&vr")
> > > > >+     (if_then_else:VWEXTI
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand"  "vmWc1")
> > > > >+          (match_operand 5 "vector_length_operand"     "   rK")
> > > > >+          (match_operand 6 "const_int_operand"         "    i")
> > > > >+          (match_operand 7 "const_int_operand"         "    i")
> > > > >+          (match_operand 8 "const_int_operand"         "    i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (ashift:VWEXTI
> > > > >+         (zero_extend:VWEXTI
> > > > >+           (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "vr"))
> > > > >+         (match_operand:<V_DOUBLE_TRUNC> 4 "register_operand"   "vr"))
> > > > >+       (match_operand:VWEXTI 2 "vector_merge_operand"          "0vu")))]
> > > > >+  "TARGET_ZVBB"
> > > > >+  "vwsll.vv\t%0,%3,%4%p1"
> > > > >+  [(set_attr "type" "vwsll")
> > > > >+   (set_attr "mode" "<V_DOUBLE_TRUNC>")])
> > > > >+
> > > > >+(define_insn "@pred_vwsll<mode>_scalar"
> > > > >+  [(set (match_operand:VWEXTI 0 "register_operand"     "=vd, vr, vd, vr, vd, vr, vd, vr, vd, vr, vd, vr, ?&vr, ?&vr")
> > > > >+     (if_then_else:VWEXTI
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand"  " vm,Wc1, vm,Wc1, vm,Wc1, vm,Wc1, vm,Wc1, vm,Wc1,vmWc1,vmWc1")
> > > > >+          (match_operand 5 "vector_length_operand"     " rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK,   rK,   rK")
> > > > >+          (match_operand 6 "const_int_operand"         "  i,  i,  i,  i,  i,  i,  i,  i,  i,  i,  i,  i,    i,    i")
> > > > >+          (match_operand 7 "const_int_operand"         "  i,  i,  i,  i,  i,  i,  i,  i,  i,  i,  i,  i,    i,    i")
> > > > >+          (match_operand 8 "const_int_operand"         "  i,  i,  i,  i,  i,  i,  i,  i,  i,  i,  i,  i,    i,    i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (ashift:VWEXTI
> > > > >+         (zero_extend:VWEXTI
> > > > >+           (match_operand:<V_DOUBLE_TRUNC> 3 "register_operand" "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84, vr, vr"))
> > > > >+         (match_operand 4 "pmode_reg_or_uimm5_operand"          " rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK, rK"))
> > > > >+       (match_operand:VWEXTI 2 "vector_merge_operand"           " vu, vu,  0,  0, vu, vu,  0,  0, vu, vu,  0,  0, vu,  0")))]
> > > > >+  "TARGET_ZVBB"
> > > > >+  "vwsll.v%o4\t%0,%3,%4%p1"
> > > > >+  [(set_attr "type" "vwsll")
> > > > >+   (set_attr "mode" "<V_DOUBLE_TRUNC>")
> > > > >+   (set_attr "group_overlap" "W21,W21,W21,W21,W42,W42,W42,W42,W84,W84,W84,W84,none,none")])
> > > > >+
> > > > >+;; vbrev.v vbrev8.v vrev8.v
> > > > >+(define_insn "@pred_v<rev><mode>"
> > > > >+  [(set (match_operand:VI 0 "register_operand"        "=vd,vr,vd,vr")
> > > > >+     (if_then_else:VI
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
> > > > >+          (match_operand 4 "vector_length_operand"    "rK, rK,rK, rK")
> > > > >+          (match_operand 5 "const_int_operand"        " i,  i, i,  i")
> > > > >+          (match_operand 6 "const_int_operand"        " i,  i, i,  i")
> > > > >+          (match_operand 7 "const_int_operand"        " i,  i, i,  i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (unspec:VI
> > > > >+         [(match_operand:VI 3 "register_operand"  "vr, vr,vr, vr")] UNSPEC_VRBB8)
> > > > >+       (match_operand:VI 2 "vector_merge_operand" "vu, vu, 0,  0")))]
> > > > >+  "TARGET_ZVBB || TARGET_ZVKB"
> > > > >+  "v<rev>.v\t%0,%3%p1"
> > > > >+  [(set_attr "type" "v<rev>")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+;; vclz.v vctz.v
> > > > >+(define_insn "@pred_v<bitmanip_optab><mode>"
> > > > >+  [(set (match_operand:VI 0 "register_operand"           "=vd, vr")
> > > > >+     (clz_ctz_pcnt:VI
> > > > >+       (parallel
> > > > >+         [(match_operand:VI 2 "register_operand"         " vr, vr")
> > > > >+          (unspec:<VM>
> > > > >+            [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1")
> > > > >+             (match_operand 3 "vector_length_operand"    "rK, rK")
> > > > >+             (match_operand 4 "const_int_operand"        " i,  i")
> > > > >+             (reg:SI VL_REGNUM)
> > > > >+             (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)])))]
> > > > >+  "TARGET_ZVBB"
> > > > >+  "v<bitmanip_insn>.v\t%0,%2%p1"
> > > > >+  [(set_attr "type" "v<bitmanip_insn>")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+;; zvbc instructions patterns.
> > > > >+;; vclmul.vv vclmul.vx
> > > > >+;; vclmulh.vv vclmulh.vx
> > > > >+(define_insn "@pred_vclmul<h><mode>"
> > > > >+  [(set (match_operand:VI_D 0 "register_operand"      "=vd,vr,vd, vr")
> > > > >+     (if_then_else:VI_D
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
> > > > >+          (match_operand 5 "vector_length_operand"    "rK, rK,rK, rK")
> > > > >+          (match_operand 6 "const_int_operand"        " i,  i, i,  i")
> > > > >+          (match_operand 7 "const_int_operand"        " i,  i, i,  i")
> > > > >+          (match_operand 8 "const_int_operand"        " i,  i, i,  i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (unspec:VI_D
> > > > >+         [(match_operand:VI_D 3 "register_operand"  "vr, vr,vr, vr")
> > > > >+          (match_operand:VI_D 4 "register_operand"  "vr, vr,vr, vr")] UNSPEC_CLMUL)
> > > > >+       (match_operand:VI_D 2 "vector_merge_operand" "vu, vu, 0,  0")))]
> > > > >+  "TARGET_ZVBC"
> > > > >+  "vclmul<h>.vv\t%0,%3,%4%p1"
> > > > >+  [(set_attr "type" "vclmul<h>")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+;; Deal with SEW = 64 in RV32 system.
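[As a cross-check on the vclmul/vclmulh patterns, the underlying Zvbc carry-less multiply semantics can be sketched in Python. This is an illustrative model at SEW=64 (the function names are mine, not from the patch): the two instructions return the low and high 64-bit halves of the 128-bit XOR-product.]

```python
# Sketch of the Zvbc carry-less multiply semantics on 64-bit elements:
# vclmul returns the low half and vclmulh the high half of the 128-bit
# polynomial (GF(2)) product.  A review aid only, not part of the patch.

def clmul128(a, b):
    # Carry-less product: XOR together shifted copies of a, one per set
    # bit of b, instead of adding them.
    acc = 0
    for i in range(64):
        if (b >> i) & 1:
            acc ^= a << i
    return acc

def vclmul(a, b):
    # Low 64 bits of the carry-less product.
    return clmul128(a, b) & ((1 << 64) - 1)

def vclmulh(a, b):
    # High 64 bits of the carry-less product.
    return clmul128(a, b) >> 64
```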
> > > > >+(define_expand "@pred_vclmul<h><mode>_scalar"
> > > > >+  [(set (match_operand:VI_D 0 "register_operand")
> > > > >+     (if_then_else:VI_D
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand")
> > > > >+          (match_operand 5 "vector_length_operand")
> > > > >+          (match_operand 6 "const_int_operand")
> > > > >+          (match_operand 7 "const_int_operand")
> > > > >+          (match_operand 8 "const_int_operand")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (unspec:VI_D
> > > > >+         [(vec_duplicate:VI_D
> > > > >+            (match_operand:<VEL> 4 "register_operand"))
> > > > >+          (match_operand:VI_D 3 "register_operand")] UNSPEC_CLMUL)
> > > > >+       (match_operand:VI_D 2 "vector_merge_operand")))]
> > > > >+  "TARGET_ZVBC"
> > > > >+{
> > > > >+  if (riscv_vector::sew64_scalar_helper (
> > > > >+	operands,
> > > > >+	/* scalar op */&operands[4],
> > > > >+	/* vl */operands[5],
> > > > >+	<MODE>mode,
> > > > >+	false,
> > > > >+	[] (rtx *operands, rtx boardcast_scalar) {
> > > > >+	  emit_insn (gen_pred_vclmul<h><mode> (operands[0], operands[1],
> > > > >+	       operands[2], operands[3], boardcast_scalar, operands[5],
> > > > >+	       operands[6], operands[7], operands[8]));
> > > > >+        },
> > > > >+	(riscv_vector::avl_type) INTVAL (operands[8])))
> > > > >+    DONE;
> > > > >+})
> > > > >+
> > > > >+(define_insn "*pred_vclmul<h><mode>_scalar"
> > > > >+  [(set (match_operand:VI_D 0 "register_operand"      "=vd,vr,vd, vr")
> > > > >+     (if_then_else:VI_D
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
> > > > >+          (match_operand 5 "vector_length_operand"    "rK, rK,rK, rK")
> > > > >+          (match_operand 6 "const_int_operand"        " i,  i, i,  i")
> > > > >+          (match_operand 7 "const_int_operand"        " i,  i, i,  i")
> > > > >+          (match_operand 8 "const_int_operand"        " i,  i, i,  i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (unspec:VI_D
> > > > >+         [(vec_duplicate:VI_D
> > > > >+            (match_operand:<VEL> 4 "reg_or_0_operand" "rJ, rJ,rJ, rJ"))
> > > > >+          (match_operand:VI_D 3 "register_operand"    "vr, vr,vr, vr")] UNSPEC_CLMUL)
> > > > >+       (match_operand:VI_D 2 "vector_merge_operand"   "vu, vu, 0,  0")))]
> > > > >+  "TARGET_ZVBC"
> > > > >+  "vclmul<h>.vx\t%0,%3,%4%p1"
> > > > >+  [(set_attr "type" "vclmul<h>")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+(define_insn "*pred_vclmul<h><mode>_extend_scalar"
> > > > >+  [(set (match_operand:VI_D 0 "register_operand"      "=vd,vr,vd, vr")
> > > > >+     (if_then_else:VI_D
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand:<VM> 1 "vector_mask_operand" "vm,Wc1,vm,Wc1")
> > > > >+          (match_operand 5 "vector_length_operand"    "rK, rK,rK, rK")
> > > > >+          (match_operand 6 "const_int_operand"        " i,  i, i,  i")
> > > > >+          (match_operand 7 "const_int_operand"        " i,  i, i,  i")
> > > > >+          (match_operand 8 "const_int_operand"        " i,  i, i,  i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (unspec:VI_D
> > > > >+         [(vec_duplicate:VI_D
> > > > >+            (sign_extend:<VEL>
> > > > >+              (match_operand:<VSUBEL> 4 "reg_or_0_operand" " rJ, rJ,rJ, rJ")))
> > > > >+          (match_operand:VI_D 3 "register_operand"         " vr, vr,vr, vr")] UNSPEC_CLMUL)
> > > > >+       (match_operand:VI_D 2 "vector_merge_operand"        " vu, vu, 0,  0")))]
> > > > >+  "TARGET_ZVBC"
> > > > >+  "vclmul<h>.vx\t%0,%3,%4%p1"
> > > > >+  [(set_attr "type" "vclmul<h>")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+;; zvknh[ab] and zvkg instructions patterns.
> > > > >+;; vsha2ms.vv vsha2ch.vv vsha2cl.vv vghsh.vv
> > > > >+(define_insn "@pred_v<vv_ins1_name><mode>"
> > > > >+  [(set (match_operand:VQEXTI 0 "register_operand"   "=vr")
> > > > >+     (if_then_else:VQEXTI
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand 4 "vector_length_operand"   "rK")
> > > > >+          (match_operand 5 "const_int_operand"       " i")
> > > > >+          (match_operand 6 "const_int_operand"       " i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (unspec:VQEXTI
> > > > >+          [(match_operand:VQEXTI 1 "register_operand" " 0")
> > > > >+           (match_operand:VQEXTI 2 "register_operand" "vr")
> > > > >+           (match_operand:VQEXTI 3 "register_operand" "vr")] UNSPEC_VGNHAB)
> > > > >+       (match_dup 1)))]
> > > > >+  "TARGET_ZVKNHA || TARGET_ZVKNHB || TARGET_ZVKG"
> > > > >+  "v<vv_ins1_name>.vv\t%0,%2,%3"
> > > > >+  [(set_attr "type" "v<vv_ins1_name>")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+;; zvkned and zvksed and zvkg instructions patterns.
> > > > >+;; vgmul.vv vaesz.vs
> > > > >+;; vaesef.[vv,vs] vaesem.[vv,vs] vaesdf.[vv,vs] vaesdm.[vv,vs]
> > > > >+;; vsm4r.[vv,vs]
> > > > >+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type><mode>"
> > > > >+  [(set (match_operand:VSI 0 "register_operand"    "=vr")
> > > > >+     (if_then_else:VSI
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand 3 "vector_length_operand" " rK")
> > > > >+          (match_operand 4 "const_int_operand"     "  i")
> > > > >+          (match_operand 5 "const_int_operand"     "  i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (unspec:VSI
> > > > >+          [(match_operand:VSI 1 "register_operand" "  0")
> > > > >+           (match_operand:VSI 2 "register_operand" " vr")] UNSPEC_CRYPTO_VV)
> > > > >+       (match_dup 1)))]
> > > > >+  "TARGET_ZVKNED || TARGET_ZVKSED || TARGET_ZVKG"
> > > > >+  "v<vv_ins_name>.<ins_type>\t%0,%2"
> > > > >+  [(set_attr "type" "v<vv_ins_name>")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x1<mode>_scalar"
> > > > >+  [(set (match_operand:VSI 0 "register_operand"    "=&vr")
> > > > >+     (if_then_else:VSI
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand 3 "vector_length_operand" "  rK")
> > > > >+          (match_operand 4 "const_int_operand"     "   i")
> > > > >+          (match_operand 5 "const_int_operand"     "   i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (unspec:VSI
> > > > >+          [(match_operand:VSI 1 "register_operand" "   0")
> > > > >+           (match_operand:VSI 2 "register_operand" "  vr")] UNSPEC_CRYPTO_VV)
> > > > >+       (match_dup 1)))]
> > > > >+  "TARGET_ZVKNED || TARGET_ZVKSED"
> > > > >+  "v<vv_ins_name>.<ins_type>\t%0,%2"
> > > > >+  [(set_attr "type" "v<vv_ins_name>")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >+
> > > > >+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x2<mode>_scalar"
> > > > >+  [(set (match_operand:<VSIX2> 0 "register_operand"    "=&vr")
> > > > >+     (if_then_else:<VSIX2>
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand 3 "vector_length_operand"     "rK")
> > > > >+          (match_operand 4 "const_int_operand"         " i")
> > > > >+          (match_operand 5 "const_int_operand"         " i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (unspec:<VSIX2>
> > > > >+          [(match_operand:<VSIX2> 1 "register_operand"    " 0")
> > > > >+           (match_operand:VLMULX2_SI 2 "register_operand" "vr")] UNSPEC_CRYPTO_VV)
> > > > >+       (match_dup 1)))]
> > > > >+  "TARGET_ZVKNED || TARGET_ZVKSED"
> > > > >+  "v<vv_ins_name>.<ins_type>\t%0,%2"
> > > > >+  [(set_attr "type" "v<vv_ins_name>")
> > > > >+   (set_attr "mode" "<VSIX2>")])
> > > > >+
> > > > >+(define_insn "@pred_crypto_vv<vv_ins_name><ins_type>x4<mode>_scalar"
> > > > >+  [(set (match_operand:<VSIX4> 0 "register_operand"    "=&vr")
> > > > >+     (if_then_else:<VSIX4>
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand 3 "vector_length_operand"     " rK")
> > > > >+          (match_operand 4 "const_int_operand"         "  i")
> > > > >+          (match_operand 5 "const_int_operand"         "  i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (unspec:<VSIX4>
> > > > >+          [(match_operand:<VSIX4> 1 "register_operand"    "  0")
> > > > >+           (match_operand:VLMULX4_SI 2 "register_operand" " vr")] UNSPEC_CRYPTO_VV)
> > > > >+       (match_dup 1)))]
> > > > >+  "TARGET_ZVKNED || TARGET_ZVKSED"
> > > > >+  "v<vv_ins_name>.<ins_type>\t%0,%2"
[(set_attr "type" "v") > > > > >+ (set_attr "mode" "")]) > > > > >+ > > > > >+(define_insn "@pred_crypto_vvx8_scalar" > > > > >+ [(set (match_operand: 0 "register_operand" "=3D&vr") > > > > >+ (if_then_else: > > > > >+ (unspec: > > > > >+ [(match_operand 3 "vector_length_operand" " rK") > > > > >+ (match_operand 4 "const_int_operand" " i") > > > > >+ (match_operand 5 "const_int_operand" " i") > > > > >+ (reg:SI VL_REGNUM) > > > > >+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE) > > > > >+ (unspec: > > > > >+ [(match_operand: 1 "register_operand" " 0") > > > > >+ (match_operand:VLMULX8_SI 2 "register_operand" " vr")] UNSPEC_= CRYPTO_VV) > > > > >+ (match_dup 1)))] > > > > >+ "TARGET_ZVKNED || TARGET_ZVKSED" > > > > >+ "v.\t%0,%2" > > > > >+ [(set_attr "type" "v") > > > > >+ (set_attr "mode" "")]) > > > > >+ > > > > >+(define_insn "@pred_crypto_vvx16_scalar" > > > > >+ [(set (match_operand: 0 "register_operand" "=3D&vr") > > > > >+ (if_then_else: > > > > >+ (unspec: > > > > >+ [(match_operand 3 "vector_length_operand" " rK") > > > > >+ (match_operand 4 "const_int_operand" " i") > > > > >+ (match_operand 5 "const_int_operand" " i") > > > > >+ (reg:SI VL_REGNUM) > > > > >+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE) > > > > >+ (unspec: > > > > >+ [(match_operand: 1 "register_operand" " 0") > > > > >+ (match_operand:VLMULX16_SI 2 "register_operand" " vr")] UNSPE= C_CRYPTO_VV) > > > > >+ (match_dup 1)))] > > > > >+ "TARGET_ZVKNED || TARGET_ZVKSED" > > > > >+ "v.\t%0,%2" > > > > >+ [(set_attr "type" "v") > > > > >+ (set_attr "mode" "")]) > > > > >+ > > > > >+;; vaeskf1.vi vsm4k.vi > > > > >+(define_insn "@pred_crypto_vi_scalar" > > > > >+ [(set (match_operand:VSI 0 "register_operand" "=3Dvr, vr") > > > > >+ (if_then_else:VSI > > > > >+ (unspec: > > > > >+ [(match_operand 4 "vector_length_operand" "rK, rK") > > > > >+ (match_operand 5 "const_int_operand" " i, i") > > > > >+ (match_operand 6 "const_int_operand" " i, i") > > > > >+ (reg:SI VL_REGNUM) > > > > >+ (reg:SI 
VTYPE_REGNUM)] UNSPEC_VPREDICATE) > > > > >+ (unspec:VSI > > > > >+ [(match_operand:VSI 2 "register_operand" "vr, vr") > > > > >+ (match_operand 3 "const_int_operand" " i, i")] UNSP= EC_CRYPTO_VI) > > > > >+ (match_operand:VSI 1 "vector_merge_operand" "vu, 0")))] > > > > >+ "TARGET_ZVKNED || TARGET_ZVKSED" > > > > >+ "v.vi\t%0,%2,%3" > > > > >+ [(set_attr "type" "v") > > > > >+ (set_attr "mode" "")]) > > > > >+ > > > > >+;; vaeskf2.vi vsm3c.vi > > > > >+(define_insn "@pred_vi_nomaskedoff_scalar" > > > > >+ [(set (match_operand:VSI 0 "register_operand" "=3Dvr") > > > > >+ (if_then_else:VSI > > > > >+ (unspec: > > > > >+ [(match_operand 4 "vector_length_operand" "rK") > > > > >+ (match_operand 5 "const_int_operand" " i") > > > > >+ (match_operand 6 "const_int_operand" " i") > > > > >+ (reg:SI VL_REGNUM) > > > > >+ (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE) > > > > >+ (unspec:VSI > > > > >+ [(match_operand:VSI 1 "register_operand" " 0") > > > > >+ (match_operand:VSI 2 "register_operand" "vr") > > > > >+ (match_operand 3 "const_int_operand" " i")] UNSPEC_CRYPTO_VI= 1) > > > > >+ (match_dup 1)))] > > > > >+ "TARGET_ZVKNED || TARGET_ZVKSH" > > > > >+ "v.vi\t%0,%2,%3" > > > > >+ [(set_attr "type" "v") > > > > >+ (set_attr "mode" "")]) > > > > >+ > > > > >+;; zvksh instructions patterns. 
> > > > >+;; vsm3me.vv
> > > > >+(define_insn "@pred_vsm3me"
> > > > >+  [(set (match_operand:VSI 0 "register_operand" "=vr, vr")
> > > > >+     (if_then_else:VSI
> > > > >+       (unspec:<VM>
> > > > >+         [(match_operand 4 "vector_length_operand" " rK, rK")
> > > > >+          (match_operand 5 "const_int_operand" " i, i")
> > > > >+          (match_operand 6 "const_int_operand" " i, i")
> > > > >+          (reg:SI VL_REGNUM)
> > > > >+          (reg:SI VTYPE_REGNUM)] UNSPEC_VPREDICATE)
> > > > >+       (unspec:VSI
> > > > >+         [(match_operand:VSI 2 "register_operand" " vr, vr")
> > > > >+          (match_operand:VSI 3 "register_operand" " vr, vr")] UNSPEC_VSM3ME)
> > > > >+       (match_operand:VSI 1 "vector_merge_operand" " vu, 0")))]
> > > > >+  "TARGET_ZVKSH"
> > > > >+  "vsm3me.vv\t%0,%2,%3"
> > > > >+  [(set_attr "type" "vsm3me")
> > > > >+   (set_attr "mode" "<MODE>")])
> > > > >diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md
> > > > >index 5f5f7b5b986..317dc9de253 100644
> > > > >--- a/gcc/config/riscv/vector-iterators.md
> > > > >+++ b/gcc/config/riscv/vector-iterators.md
> > > > >@@ -3916,3 +3916,39 @@
> > > > >   (V1024BI "riscv_vector::vls_mode_valid_p (V1024BImode) && TARGET_MIN_VLEN >= 1024")
> > > > >   (V2048BI "riscv_vector::vls_mode_valid_p (V2048BImode) && TARGET_MIN_VLEN >= 2048")
> > > > >   (V4096BI "riscv_vector::vls_mode_valid_p (V4096BImode) && TARGET_MIN_VLEN >= 4096")])
> > > > >+
> > > > >+(define_mode_iterator VSI [
> > > > >+  RVVM8SI RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
> > > > >+])
> > > > >+
> > > > >+(define_mode_iterator VLMULX2_SI [
> > > > >+  RVVM4SI RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
> > > > >+])
> > > > >+
> > > > >+(define_mode_iterator VLMULX4_SI [
> > > > >+  RVVM2SI RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
> > > > >+])
> > > > >+
> > > > >+(define_mode_iterator VLMULX8_SI [
> > > > >+  RVVM1SI (RVVMF2SI "TARGET_MIN_VLEN > 32")
> > > > >+])
> > > > >+
> > > > >+(define_mode_iterator VLMULX16_SI [
> > > > >+  (RVVMF2SI "TARGET_MIN_VLEN > 32")
> > > > >+])
> > > > >+
> > > > >+(define_mode_attr VSIX2 [
> > > > >+  (RVVM8SI "RVVM8SI") (RVVM4SI "RVVM8SI") (RVVM2SI "RVVM4SI") (RVVM1SI "RVVM2SI") (RVVMF2SI "RVVM1SI")
> > > > >+])
> > > > >+
> > > > >+(define_mode_attr VSIX4 [
> > > > >+  (RVVM2SI "RVVM8SI") (RVVM1SI "RVVM4SI") (RVVMF2SI "RVVM2SI")
> > > > >+])
> > > > >+
> > > > >+(define_mode_attr VSIX8 [
> > > > >+  (RVVM1SI "RVVM8SI") (RVVMF2SI "RVVM4SI")
> > > > >+])
> > > > >+
> > > > >+(define_mode_attr VSIX16 [
> > > > >+  (RVVMF2SI "RVVM8SI")
> > > > >+])
> > > > >diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
> > > > >index f607d768b26..caf1b88ba5e 100644
> > > > >--- a/gcc/config/riscv/vector.md
> > > > >+++ b/gcc/config/riscv/vector.md
> > > > >@@ -52,7 +52,9 @@
> > > > >                           vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovvx,vimovxv,vfmovvf,vfmovfv,\
> > > > >                           vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
> > > > >                           vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
> > > > >-                          vssegtux,vssegtox,vlsegdff")
> > > > >+                          vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
> > > > >+                          vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
> > > > >+                          vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
> > > > >          (const_string "true")]
> > > > >         (const_string "false")))
> > > > >
> > > > >@@ -74,7 +76,9 @@
> > > > >                           vmalu,vmpop,vmffs,vmsfs,vmiota,vmidx,vimovxv,vfmovfv,\
> > > > >                           vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
> > > > >                           vgather,vcompress,vlsegde,vssegte,vlsegds,vssegts,vlsegdux,vlsegdox,\
> > > > >-                          vssegtux,vssegtox,vlsegdff")
> > > > >+                          vssegtux,vssegtox,vlsegdff,vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,\
> > > > >+                          vror,vwsll,vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
> > > > >+                          vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
> > > > >          (const_string "true")]
> > > > >         (const_string "false")))
> > > > >
> > > > >@@ -426,7 +430,11 @@
> > > > >                          viwred,vfredu,vfredo,vfwredu,vfwredo,vimovvx,\
> > > > >                          vimovxv,vfmovvf,vfmovfv,vslideup,vslidedown,\
> > > > >                          vislide1up,vislide1down,vfslide1up,vfslide1down,\
> > > > >-                         vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox")
> > > > >+                         vgather,vcompress,vlsegdux,vlsegdox,vssegtux,vssegtox,\
> > > > >+                         vandn,vbrev,vbrev8,vrev8,vclz,vctz,vrol,vror,vwsll,\
> > > > >+                         vclmul,vclmulh,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
> > > > >+                         vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,\
> > > > >+                         vsm3me,vsm3c")
> > > > >          (const_int INVALID_ATTRIBUTE)
> > > > >         (eq_attr "mode" "RVVM8QI,RVVM1BI") (const_int 1)
> > > > >         (eq_attr "mode" "RVVM4QI,RVVMF2BI") (const_int 2)
> > > > >@@ -698,10 +706,12 @@
> > > > >                                 vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,\
> > > > >                                 vired,viwred,vfredu,vfredo,vfwredu,vfwredo,vimovxv,vfmovfv,\
> > > > >                                 vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
> > > > >-                                vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff")
> > > > >+                                vgather,vldff,viwmuladd,vfwmuladd,vlsegde,vlsegds,vlsegdux,vlsegdox,vlsegdff,\
> > > > >+                                vandn,vbrev,vbrev8,vrev8,vrol,vror,vwsll,vclmul,vclmulh")
> > > > >           (const_int 2)
> > > > >
> > > > >-         (eq_attr "type" "vimerge,vfmerge,vcompress")
> > > > >+         (eq_attr "type" "vimerge,vfmerge,vcompress,vghsh,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
> > > > >+                          vaeskf1,vaeskf2,vaesz,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm4r,vsm3me,vsm3c")
> > > > >           (const_int 1)
> > > > >
> > > > >           (eq_attr "type" "vimuladd,vfmuladd")
> > > > >@@ -740,7 +750,8 @@
> > > > >                           vstox,vext,vmsfs,vmiota,vfsqrt,vfrecp,vfcvtitof,vldff,\
> > > > >                           vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,\
> > > > >                           vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,vcompress,\
> > > > >-                          vlsegde,vssegts,vssegtux,vssegtox,vlsegdff")
> > > > >+                          vlsegde,vssegts,vssegtux,vssegtox,vlsegdff,vbrev,vbrev8,vrev8,\
> > > > >+                          vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
> > > > >           (const_int 4)
> > > > >
> > > > >         ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
> > > > >@@ -755,13 +766,15 @@
> > > > >                           vsshift,vnclip,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
> > > > >                           vfsgnj,vfmerge,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
> > > > >                           vslideup,vslidedown,vislide1up,vislide1down,vfslide1up,vfslide1down,\
> > > > >-                          vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
> > > > >+                          vgather,viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
> > > > >+                          vror,vwsll,vclmul,vclmulh")
> > > > >           (const_int 5)
> > > > >
> > > > >           (eq_attr "type" "vicmp,vimuladd,vfcmp,vfmuladd")
> > > > >           (const_int 6)
> > > > >
> > > > >-         (eq_attr "type" "vmpop,vmffs,vmidx,vssegte")
> > > > >+         (eq_attr "type" "vmpop,vmffs,vmidx,vssegte,vclz,vctz,vgmul,vaesef,vaesem,vaesdf,vaesdm,\
> > > > >+                          vaesz,vsm4r")
> > > > >           (const_int 3)]
> > > > >          (const_int INVALID_ATTRIBUTE)))
> > > > >
> > > > >@@ -770,7 +783,8 @@
> > > > >   (cond [(eq_attr "type" "vlde,vimov,vfmov,vext,vmiota,vfsqrt,vfrecp,\
> > > > >                           vfcvtitof,vfcvtftoi,vfwcvtitof,vfwcvtftoi,vfwcvtftof,\
> > > > >                           vfncvtitof,vfncvtftoi,vfncvtftof,vfclass,vimovxv,vfmovfv,\
> > > > >-                          vcompress,vldff,vlsegde,vlsegdff")
> > > > >+                          vcompress,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8,vghsh,\
> > > > >+                          vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,vsm4k,vsm3me,vsm3c")
> > > > >          (symbol_ref "riscv_vector::get_ta(operands[5])")
> > > > >
> > > > >         ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
> > > > >@@ -786,13 +800,13 @@
> > > > >                           vfwalu,vfwmul,vfsgnj,vfmerge,vired,viwred,vfredu,\
> > > > >                           vfredo,vfwredu,vfwredo,vslideup,vslidedown,vislide1up,\
> > > > >                           vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
> > > > >-                          vlsegds,vlsegdux,vlsegdox")
> > > > >+                          vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll,vclmul,vclmulh")
> > > > >          (symbol_ref "riscv_vector::get_ta(operands[6])")
> > > > >
> > > > >         (eq_attr "type" "vimuladd,vfmuladd")
> > > > >          (symbol_ref "riscv_vector::get_ta(operands[7])")
> > > > >
> > > > >-        (eq_attr "type" "vmidx")
> > > > >+        (eq_attr "type" "vmidx,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,vsm4r")
> > > > >          (symbol_ref "riscv_vector::get_ta(operands[4])")]
> > > > >        (const_int INVALID_ATTRIBUTE)))
> > > > >
> > > > >@@ -800,7 +814,7 @@
> > > > > (define_attr "ma" ""
> > > > >   (cond [(eq_attr "type" "vlde,vext,vmiota,vfsqrt,vfrecp,vfcvtitof,vfcvtftoi,\
> > > > >                           vfwcvtitof,vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,\
> > > > >-                          vfncvtftof,vfclass,vldff,vlsegde,vlsegdff")
> > > > >+                          vfncvtftof,vfclass,vldff,vlsegde,vlsegdff,vbrev,vbrev8,vrev8")
> > > > >          (symbol_ref "riscv_vector::get_ma(operands[6])")
> > > > >
> > > > >         ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
> > > > >@@ -815,7 +829,8 @@
> > > > >                           vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,\
> > > > >                           vfwalu,vfwmul,vfsgnj,vfcmp,vslideup,vslidedown,\
> > > > >                           vislide1up,vislide1down,vfslide1up,vfslide1down,vgather,\
> > > > >-                          viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox")
> > > > >+                          viwmuladd,vfwmuladd,vlsegds,vlsegdux,vlsegdox,vandn,vrol,\
> > > > >+                          vror,vwsll,vclmul,vclmulh")
> > > > >          (symbol_ref "riscv_vector::get_ma(operands[7])")
> > > > >
> > > > >         (eq_attr "type" "vimuladd,vfmuladd")
> > > > >@@ -831,9 +846,10 @@
> > > > >                           vfsqrt,vfrecp,vfmerge,vfcvtitof,vfcvtftoi,vfwcvtitof,\
> > > > >                           vfwcvtftoi,vfwcvtftof,vfncvtitof,vfncvtftoi,vfncvtftof,\
> > > > >                           vfclass,vired,viwred,vfredu,vfredo,vfwredu,vfwredo,\
> > > > >-                          vimovxv,vfmovfv,vlsegde,vlsegdff,vmiota")
> > > > >+                          vimovxv,vfmovfv,vlsegde,vlsegdff,vmiota,vbrev,vbrev8,vrev8")
> > > > >          (const_int 7)
> > > > >-        (eq_attr "type" "vldm,vstm,vmalu,vmalu")
> > > > >+        (eq_attr "type" "vldm,vstm,vmalu,vmalu,vgmul,vaesef,vaesem,vaesdf,vaesdm,vaesz,\
> > > > >+                         vsm4r")
> > > > >          (const_int 5)
> > > > >
> > > > >         ;; If operands[3] of "vlds" is not vector mode, it is pred_broadcast.
> > > > >@@ -848,18 +864,19 @@
> > > > >                           vnclip,vicmp,vfalu,vfmul,vfminmax,vfdiv,vfwalu,vfwmul,\
> > > > >                           vfsgnj,vfcmp,vslideup,vslidedown,vislide1up,\
> > > > >                           vislide1down,vfslide1up,vfslide1down,vgather,viwmuladd,vfwmuladd,\
> > > > >-                          vlsegds,vlsegdux,vlsegdox")
> > > > >+                          vlsegds,vlsegdux,vlsegdox,vandn,vrol,vror,vwsll")
> > > > >          (const_int 8)
> > > > >-        (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox")
> > > > >+        (eq_attr "type" "vstux,vstox,vssegts,vssegtux,vssegtox,vclmul,vclmulh")
> > > > >          (const_int 5)
> > > > >
> > > > >         (eq_attr "type" "vimuladd,vfmuladd")
> > > > >          (const_int 9)
> > > > >
> > > > >-        (eq_attr "type" "vmsfs,vmidx,vcompress")
> > > > >+        (eq_attr "type" "vmsfs,vmidx,vcompress,vghsh,vaeskf1,vaeskf2,vsha2ms,vsha2ch,vsha2cl,\
> > > > >+                         vsm4k,vsm3me,vsm3c")
> > > > >          (const_int 6)
> > > > >
> > > > >-        (eq_attr "type" "vmpop,vmffs,vssegte")
> > > > >+        (eq_attr "type" "vmpop,vmffs,vssegte,vclz,vctz")
> > > > >          (const_int 4)]
> > > > >        (const_int INVALID_ATTRIBUTE)))
> > > > >
> > > > >--
> > > > >2.17.1
> >