* [PATCH 0/5 V1] RISC-V:Implement Crypto extension's instruction patterns and its intrinsics
@ 2022-02-23 9:44 shihua
2022-02-23 9:44 ` [PATCH 1/5 V1] RISC-V:Implement instruction patterns for Crypto extension shihua
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: shihua @ 2022-02-23 9:44 UTC (permalink / raw)
To: gcc-patches
Cc: ben.marshall, kito.cheng, cmuellner, palmer, andrew, lazyparser,
jiawei, mjos, LiaoShihua
From: LiaoShihua <shihua@iscas.ac.cn>
This patch set implements the scalar Crypto extension, which comprises the Zbkb, Zbkc, Zbkx,
Zknd, Zknh, Zkne, Zksed and Zksh extensions.
It adds machine-description instruction patterns, intrinsic functions, testcases for the
intrinsic functions, and architecture extension test macros.
The definitions of the intrinsic functions come from https://github.com/rvkrypto/rvkrypto-fips .
This work was done by Liao Shihua and Wu Siyu.
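As background for reviewers unfamiliar with these extensions, the semantics that two of the
Zbkb intrinsics expose can be sketched portably. This is an illustrative reference only
(the `ref_` function names are ours, not the patch's); the real intrinsics expand to single
ror/brev8 instructions:

```c
#include <assert.h>
#include <stdint.h>

/* Portable reference for 32-bit rotate-right (the operation behind
   _rv32_ror).  Illustration only; the intrinsic emits one instruction.  */
static inline uint32_t ref_ror32 (uint32_t x, uint32_t n)
{
  n &= 31;
  return (x >> n) | (x << ((32 - n) & 31));
}

/* brev8: reverse the bit order within each byte of the word.  */
static inline uint32_t ref_brev8_32 (uint32_t x)
{
  x = ((x & 0x55555555u) << 1) | ((x >> 1) & 0x55555555u);
  x = ((x & 0x33333333u) << 2) | ((x >> 2) & 0x33333333u);
  x = ((x & 0x0F0F0F0Fu) << 4) | ((x >> 4) & 0x0F0F0F0Fu);
  return x;
}
```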
LiaoShihua (5):
RISC-V:Implement instruction patterns for Crypto extensions
RISC-V:Implement built-in instructions for Crypto extensions
RISC-V:Implement intrinsics for Crypto extensions
RISC-V:Implement testcases for Crypto extensions
RISC-V:Implement architecture extension test macros for Crypto extensions
gcc/config.gcc | 1 +
gcc/config/riscv/crypto.md | 383 +++++++++++++
gcc/config/riscv/predicates.md | 8 +
gcc/config/riscv/riscv-builtins-crypto.def | 93 ++++
gcc/config/riscv/riscv-builtins.cc | 35 ++
gcc/config/riscv/riscv-c.cc | 9 +
gcc/config/riscv/riscv-ftypes.def | 7 +
gcc/config/riscv/riscv.md | 1 +
gcc/config/riscv/riscv_crypto.h | 12 +
gcc/config/riscv/riscv_crypto_scalar.h | 247 +++++++++
gcc/config/riscv/rvk_asm_intrin.h | 187 +++++++
gcc/config/riscv/rvk_emu_intrin.h | 594 +++++++++++++++++++++
gcc/testsuite/gcc.target/riscv/predef-17.c | 59 ++
gcc/testsuite/gcc.target/riscv/zbkb32.c | 34 ++
gcc/testsuite/gcc.target/riscv/zbkb64.c | 21 +
gcc/testsuite/gcc.target/riscv/zbkc32.c | 16 +
gcc/testsuite/gcc.target/riscv/zbkc64.c | 16 +
gcc/testsuite/gcc.target/riscv/zbkx32.c | 16 +
gcc/testsuite/gcc.target/riscv/zbkx64.c | 16 +
gcc/testsuite/gcc.target/riscv/zknd32.c | 18 +
gcc/testsuite/gcc.target/riscv/zknd64.c | 35 ++
gcc/testsuite/gcc.target/riscv/zkne64.c | 29 +
gcc/testsuite/gcc.target/riscv/zknh.c | 28 +
gcc/testsuite/gcc.target/riscv/zknh32.c | 40 ++
gcc/testsuite/gcc.target/riscv/zknh64.c | 29 +
gcc/testsuite/gcc.target/riscv/zksed.c | 20 +
gcc/testsuite/gcc.target/riscv/zksh.c | 17 +
27 files changed, 1971 insertions(+)
create mode 100644 gcc/config/riscv/crypto.md
create mode 100644 gcc/config/riscv/riscv-builtins-crypto.def
create mode 100644 gcc/config/riscv/riscv_crypto.h
create mode 100644 gcc/config/riscv/riscv_crypto_scalar.h
create mode 100644 gcc/config/riscv/rvk_asm_intrin.h
create mode 100644 gcc/config/riscv/rvk_emu_intrin.h
create mode 100644 gcc/testsuite/gcc.target/riscv/predef-17.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zbkb32.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zbkb64.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zbkc32.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zbkc64.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zbkx32.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zbkx64.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zknd32.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zknd64.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zkne64.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zknh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zknh32.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zknh64.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zksed.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zksh.c
--
2.31.1.windows.1
^ permalink raw reply [flat|nested] 12+ messages in thread
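The Zksh instructions added later in the series implement the SM3 hash permutations. As a
hedged sketch of what sm3p0 computes, per the SM3 specification (function names here are
ours, for illustration):

```c
#include <assert.h>
#include <stdint.h>

static inline uint32_t ref_rol32 (uint32_t x, unsigned n)
{
  return (x << n) | (x >> ((32 - n) & 31));
}

/* SM3 P0 permutation: P0(X) = X ^ (X <<< 9) ^ (X <<< 17).
   The Zksh sm3p0 instruction computes this in one step.  */
static inline uint32_t ref_sm3p0 (uint32_t x)
{
  return x ^ ref_rol32 (x, 9) ^ ref_rol32 (x, 17);
}
```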
* [PATCH 1/5 V1] RISC-V:Implement instruction patterns for Crypto extension
2022-02-23 9:44 [PATCH 0/5 V1] RISC-V:Implement Crypto extension's instruction patterns and its intrinsics shihua
@ 2022-02-23 9:44 ` shihua
2022-02-28 16:04 ` Kito Cheng
2022-02-23 9:44 ` [PATCH 2/5 V1] RISC-V:Implement built-in instructions " shihua
` (3 subsequent siblings)
4 siblings, 1 reply; 12+ messages in thread
From: shihua @ 2022-02-23 9:44 UTC (permalink / raw)
To: gcc-patches
Cc: ben.marshall, kito.cheng, cmuellner, palmer, andrew, lazyparser,
jiawei, mjos, LiaoShihua, Wu
From: LiaoShihua <shihua@iscas.ac.cn>
gcc/ChangeLog:
* config/riscv/predicates.md (bs_operand): New predicate for the
2-bit byte-select immediate.
(rnum_operand): New predicate for the round-number immediate.
* config/riscv/riscv.md: Include crypto.md.
* config/riscv/crypto.md: New file.
Co-Authored-By: SiYu Wu <siyu@isrc.iscas.ac.cn>
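The bs_operand predicate guards the byte-select immediate used by the aes32 and sm4
patterns; conceptually it picks one byte of rs2 for the instruction to transform. A hedged
sketch of that extraction step only (not the patch's code; names are ours):

```c
#include <assert.h>
#include <stdint.h>

/* The bs immediate (0..3) selects which byte of rs2 an aes32 or sm4
   instruction transforms; this shows the byte-extraction step alone.  */
static inline uint8_t ref_select_byte (uint32_t rs2, unsigned bs)
{
  return (uint8_t) (rs2 >> (8 * (bs & 3)));
}
```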
---
gcc/config/riscv/crypto.md | 383 +++++++++++++++++++++++++++++++++
gcc/config/riscv/predicates.md | 8 +
gcc/config/riscv/riscv.md | 1 +
3 files changed, 392 insertions(+)
create mode 100644 gcc/config/riscv/crypto.md
diff --git a/gcc/config/riscv/crypto.md b/gcc/config/riscv/crypto.md
new file mode 100644
index 00000000000..591066fac3b
--- /dev/null
+++ b/gcc/config/riscv/crypto.md
@@ -0,0 +1,383 @@
+;; Machine description for the RISC-V K (scalar cryptography) extension.
+;; Copyright (C) 2022 Free Software Foundation, Inc.
+;; Contributed by SiYu Wu (siyu@isrc.iscas.ac.cn) and ShiHua Liao (shihua@iscas.ac.cn).
+
+;; This file is part of GCC.
+
+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.
+
+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+;; GNU General Public License for more details.
+
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3. If not see
+;; <http://www.gnu.org/licenses/>.
+
+(define_c_enum "unspec" [
+ ;; Zbkb unspecs
+ UNSPEC_ROR
+ UNSPEC_ROL
+ UNSPEC_BREV8
+ UNSPEC_BSWAP
+ UNSPEC_ZIP
+ UNSPEC_UNZIP
+
+ ;; Zbkc unspecs
+ UNSPEC_CLMUL
+ UNSPEC_CLMULH
+
+ ;; Zbkx unspecs
+ UNSPEC_XPERM8
+ UNSPEC_XPERM4
+
+ ;; Zknd unspecs
+ UNSPEC_AES_DSI
+ UNSPEC_AES_DSMI
+ UNSPEC_AES_DS
+ UNSPEC_AES_DSM
+ UNSPEC_AES_IM
+ UNSPEC_AES_KS1I
+ UNSPEC_AES_KS2
+
+ ;; Zkne unspecs
+ UNSPEC_AES_ES
+ UNSPEC_AES_ESM
+ UNSPEC_AES_ESI
+ UNSPEC_AES_ESMI
+
+ ;; Zknh unspecs
+ UNSPEC_SHA_256_SIG0
+ UNSPEC_SHA_256_SIG1
+ UNSPEC_SHA_256_SUM0
+ UNSPEC_SHA_256_SUM1
+ UNSPEC_SHA_512_SIG0
+ UNSPEC_SHA_512_SIG0H
+ UNSPEC_SHA_512_SIG0L
+ UNSPEC_SHA_512_SIG1
+ UNSPEC_SHA_512_SIG1H
+ UNSPEC_SHA_512_SIG1L
+ UNSPEC_SHA_512_SUM0
+ UNSPEC_SHA_512_SUM0R
+ UNSPEC_SHA_512_SUM1
+ UNSPEC_SHA_512_SUM1R
+
+ ;; Zksh
+ UNSPEC_SM3_P0
+ UNSPEC_SM3_P1
+
+ ;; Zksed unspecs
+ UNSPEC_SM4_ED
+ UNSPEC_SM4_KS
+])
+
+(define_insn "riscv_ror_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")
+ (match_operand:X 2 "register_operand" "r")]
+ UNSPEC_ROR))]
+ "TARGET_ZBKB"
+ "ror\t%0,%1,%2")
+
+(define_insn "riscv_rol_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")
+ (match_operand:X 2 "register_operand" "r")]
+ UNSPEC_ROL))]
+ "TARGET_ZBKB"
+ "rol\t%0,%1,%2")
+
+(define_insn "riscv_brev8_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")]
+ UNSPEC_BREV8))]
+ "TARGET_ZBKB"
+ "brev8\t%0,%1")
+
+(define_insn "riscv_bswap<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")]
+ UNSPEC_BSWAP))]
+ "TARGET_ZBKB"
+ "bswap\t%0,%1")
+
+(define_insn "riscv_zip"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+ UNSPEC_ZIP))]
+ "TARGET_ZBKB && !TARGET_64BIT"
+ "zip\t%0,%1")
+
+(define_insn "riscv_unzip"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+ UNSPEC_UNZIP))]
+ "TARGET_ZBKB && !TARGET_64BIT"
+ "unzip\t%0,%1")
+
+(define_insn "riscv_clmul_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")
+ (match_operand:X 2 "register_operand" "r")]
+ UNSPEC_CLMUL))]
+ "TARGET_ZBKC"
+ "clmul\t%0,%1,%2")
+
+(define_insn "riscv_clmulh_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")
+ (match_operand:X 2 "register_operand" "r")]
+ UNSPEC_CLMULH))]
+ "TARGET_ZBKC"
+ "clmulh\t%0,%1,%2")
+
+(define_insn "riscv_xperm8_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")
+ (match_operand:X 2 "register_operand" "r")]
+ UNSPEC_XPERM8))]
+ "TARGET_ZBKX"
+ "xperm8\t%0,%1,%2")
+
+(define_insn "riscv_xperm4_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")
+ (match_operand:X 2 "register_operand" "r")]
+ UNSPEC_XPERM4))]
+ "TARGET_ZBKX"
+ "xperm4\t%0,%1,%2")
+
+(define_insn "riscv_aes32dsi"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+ (match_operand:SI 2 "register_operand" "r")
+ (match_operand:SI 3 "bs_operand" "i")]
+ UNSPEC_AES_DSI))]
+ "TARGET_ZKND && !TARGET_64BIT"
+ "aes32dsi\t%0,%1,%2,%3")
+
+(define_insn "riscv_aes32dsmi"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+ (match_operand:SI 2 "register_operand" "r")
+ (match_operand:SI 3 "bs_operand" "i")]
+ UNSPEC_AES_DSMI))]
+ "TARGET_ZKND && !TARGET_64BIT"
+ "aes32dsmi\t%0,%1,%2,%3")
+
+(define_insn "riscv_aes64ds"
+ [(set (match_operand:DI 0 "register_operand" "=r")
+ (unspec:DI [(match_operand:DI 1 "register_operand" "r")
+ (match_operand:DI 2 "register_operand" "r")]
+ UNSPEC_AES_DS))]
+ "TARGET_ZKND && TARGET_64BIT"
+ "aes64ds\t%0,%1,%2")
+
+(define_insn "riscv_aes64dsm"
+ [(set (match_operand:DI 0 "register_operand" "=r")
+ (unspec:DI [(match_operand:DI 1 "register_operand" "r")
+ (match_operand:DI 2 "register_operand" "r")]
+ UNSPEC_AES_DSM))]
+ "TARGET_ZKND && TARGET_64BIT"
+ "aes64dsm\t%0,%1,%2")
+
+(define_insn "riscv_aes64im"
+ [(set (match_operand:DI 0 "register_operand" "=r")
+ (unspec:DI [(match_operand:DI 1 "register_operand" "r")]
+ UNSPEC_AES_IM))]
+ "TARGET_ZKND && TARGET_64BIT"
+ "aes64im\t%0,%1")
+
+(define_insn "riscv_aes64ks1i"
+ [(set (match_operand:DI 0 "register_operand" "=r")
+ (unspec:DI [(match_operand:DI 1 "register_operand" "r")
+ (match_operand:SI 2 "rnum_operand" "i")]
+ UNSPEC_AES_KS1I))]
+ "(TARGET_ZKND || TARGET_ZKNE) && TARGET_64BIT"
+ "aes64ks1i\t%0,%1,%2")
+
+(define_insn "riscv_aes64ks2"
+ [(set (match_operand:DI 0 "register_operand" "=r")
+ (unspec:DI [(match_operand:DI 1 "register_operand" "r")
+ (match_operand:DI 2 "register_operand" "r")]
+ UNSPEC_AES_KS2))]
+ "(TARGET_ZKND || TARGET_ZKNE) && TARGET_64BIT"
+ "aes64ks2\t%0,%1,%2")
+
+(define_insn "riscv_aes32esi"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+ (match_operand:SI 2 "register_operand" "r")
+ (match_operand:SI 3 "bs_operand" "i")]
+ UNSPEC_AES_ESI))]
+ "TARGET_ZKNE && !TARGET_64BIT"
+ "aes32esi\t%0,%1,%2,%3")
+
+(define_insn "riscv_aes32esmi"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+ (match_operand:SI 2 "register_operand" "r")
+ (match_operand:SI 3 "bs_operand" "i")]
+ UNSPEC_AES_ESMI))]
+ "TARGET_ZKNE && !TARGET_64BIT"
+ "aes32esmi\t%0,%1,%2,%3")
+
+(define_insn "riscv_aes64es"
+ [(set (match_operand:DI 0 "register_operand" "=r")
+ (unspec:DI [(match_operand:DI 1 "register_operand" "r")
+ (match_operand:DI 2 "register_operand" "r")]
+ UNSPEC_AES_ES))]
+ "TARGET_ZKNE && TARGET_64BIT"
+ "aes64es\t%0,%1,%2")
+
+(define_insn "riscv_aes64esm"
+ [(set (match_operand:DI 0 "register_operand" "=r")
+ (unspec:DI [(match_operand:DI 1 "register_operand" "r")
+ (match_operand:DI 2 "register_operand" "r")]
+ UNSPEC_AES_ESM))]
+ "TARGET_ZKNE && TARGET_64BIT"
+ "aes64esm\t%0,%1,%2")
+
+;; Zknh - SHA256
+
+(define_insn "riscv_sha256sig0_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")]
+ UNSPEC_SHA_256_SIG0))]
+ "TARGET_ZKNH"
+ "sha256sig0\t%0,%1")
+
+(define_insn "riscv_sha256sig1_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")]
+ UNSPEC_SHA_256_SIG1))]
+ "TARGET_ZKNH"
+ "sha256sig1\t%0,%1")
+
+(define_insn "riscv_sha256sum0_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")]
+ UNSPEC_SHA_256_SUM0))]
+ "TARGET_ZKNH"
+ "sha256sum0\t%0,%1")
+
+(define_insn "riscv_sha256sum1_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")]
+ UNSPEC_SHA_256_SUM1))]
+ "TARGET_ZKNH"
+ "sha256sum1\t%0,%1")
+
+(define_insn "riscv_sha512sig0h"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+ (match_operand:SI 2 "register_operand" "r")]
+ UNSPEC_SHA_512_SIG0H))]
+ "TARGET_ZKNH && !TARGET_64BIT"
+ "sha512sig0h\t%0,%1,%2")
+
+(define_insn "riscv_sha512sig0l"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+ (match_operand:SI 2 "register_operand" "r")]
+ UNSPEC_SHA_512_SIG0L))]
+ "TARGET_ZKNH && !TARGET_64BIT"
+ "sha512sig0l\t%0,%1,%2")
+
+(define_insn "riscv_sha512sig1h"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+ (match_operand:SI 2 "register_operand" "r")]
+ UNSPEC_SHA_512_SIG1H))]
+ "TARGET_ZKNH && !TARGET_64BIT"
+ "sha512sig1h\t%0,%1,%2")
+
+(define_insn "riscv_sha512sig1l"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+ (match_operand:SI 2 "register_operand" "r")]
+ UNSPEC_SHA_512_SIG1L))]
+ "TARGET_ZKNH && !TARGET_64BIT"
+ "sha512sig1l\t%0,%1,%2")
+
+(define_insn "riscv_sha512sum0r"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+ (match_operand:SI 2 "register_operand" "r")]
+ UNSPEC_SHA_512_SUM0R))]
+ "TARGET_ZKNH && !TARGET_64BIT"
+ "sha512sum0r\t%0,%1,%2")
+
+(define_insn "riscv_sha512sum1r"
+ [(set (match_operand:SI 0 "register_operand" "=r")
+ (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+ (match_operand:SI 2 "register_operand" "r")]
+ UNSPEC_SHA_512_SUM1R))]
+ "TARGET_ZKNH && !TARGET_64BIT"
+ "sha512sum1r\t%0,%1,%2")
+
+(define_insn "riscv_sha512sig0"
+ [(set (match_operand:DI 0 "register_operand" "=r")
+ (unspec:DI [(match_operand:DI 1 "register_operand" "r")]
+ UNSPEC_SHA_512_SIG0))]
+ "TARGET_ZKNH && TARGET_64BIT"
+ "sha512sig0\t%0,%1")
+
+(define_insn "riscv_sha512sig1"
+ [(set (match_operand:DI 0 "register_operand" "=r")
+ (unspec:DI [(match_operand:DI 1 "register_operand" "r")]
+ UNSPEC_SHA_512_SIG1))]
+ "TARGET_ZKNH && TARGET_64BIT"
+ "sha512sig1\t%0,%1")
+
+(define_insn "riscv_sha512sum0"
+ [(set (match_operand:DI 0 "register_operand" "=r")
+ (unspec:DI [(match_operand:DI 1 "register_operand" "r")]
+ UNSPEC_SHA_512_SUM0))]
+ "TARGET_ZKNH && TARGET_64BIT"
+ "sha512sum0\t%0,%1")
+
+(define_insn "riscv_sha512sum1"
+ [(set (match_operand:DI 0 "register_operand" "=r")
+ (unspec:DI [(match_operand:DI 1 "register_operand" "r")]
+ UNSPEC_SHA_512_SUM1))]
+ "TARGET_ZKNH && TARGET_64BIT"
+ "sha512sum1\t%0,%1")
+
+(define_insn "riscv_sm3p0_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")]
+ UNSPEC_SM3_P0))]
+ "TARGET_ZKSH"
+ "sm3p0\t%0,%1")
+
+(define_insn "riscv_sm3p1_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")]
+ UNSPEC_SM3_P1))]
+ "TARGET_ZKSH"
+ "sm3p1\t%0,%1")
+
+;; Zksed
+
+(define_insn "riscv_sm4ed_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")
+ (match_operand:X 2 "register_operand" "r")
+ (match_operand:SI 3 "bs_operand" "i")]
+ UNSPEC_SM4_ED))]
+ "TARGET_ZKSED"
+ "sm4ed\t%0,%1,%2,%3")
+
+(define_insn "riscv_sm4ks_<mode>"
+ [(set (match_operand:X 0 "register_operand" "=r")
+ (unspec:X [(match_operand:X 1 "register_operand" "r")
+ (match_operand:X 2 "register_operand" "r")
+ (match_operand:SI 3 "bs_operand" "i")]
+ UNSPEC_SM4_KS))]
+ "TARGET_ZKSED"
+ "sm4ks\t%0,%1,%2,%3")
\ No newline at end of file
diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
index 97cdbdf053b..7e0e86651c0 100644
--- a/gcc/config/riscv/predicates.md
+++ b/gcc/config/riscv/predicates.md
@@ -239,3 +239,11 @@
(define_predicate "const63_operand"
(and (match_code "const_int")
(match_test "INTVAL (op) == 63")))
+
+(define_predicate "bs_operand"
+ (and (match_code "const_int")
+ (match_test "INTVAL (op) < 4")))
+
+(define_predicate "rnum_operand"
+ (and (match_code "const_int")
+ (match_test "INTVAL (op) < 11")))
diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
index b3c5bce842a..59bfecb6341 100644
--- a/gcc/config/riscv/riscv.md
+++ b/gcc/config/riscv/riscv.md
@@ -2864,6 +2864,7 @@
[(set_attr "length" "12")])
(include "bitmanip.md")
+(include "crypto.md")
(include "sync.md")
(include "peephole.md")
(include "pic.md")
--
2.31.1.windows.1
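The Zknh SHA-256 patterns above (sha256sig0 and friends) compute the FIPS 180-4 sigma
functions in a single instruction. A portable sketch of those semantics (function names
are ours, for illustration only):

```c
#include <assert.h>
#include <stdint.h>

static inline uint32_t ref_ror32 (uint32_t x, unsigned n)
{
  return (x >> n) | (x << ((32 - n) & 31));
}

/* FIPS 180-4 sigma-0 and sigma-1, as computed by sha256sig0/sha256sig1.  */
static inline uint32_t ref_sha256sig0 (uint32_t x)
{
  return ref_ror32 (x, 7) ^ ref_ror32 (x, 18) ^ (x >> 3);
}

static inline uint32_t ref_sha256sig1 (uint32_t x)
{
  return ref_ror32 (x, 17) ^ ref_ror32 (x, 19) ^ (x >> 10);
}
```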
* [PATCH 2/5 V1] RISC-V:Implement built-in instructions for Crypto extension
2022-02-23 9:44 [PATCH 0/5 V1] RISC-V:Implement Crypto extension's instruction patterns and its intrinsics shihua
2022-02-23 9:44 ` [PATCH 1/5 V1] RISC-V:Implement instruction patterns for Crypto extension shihua
@ 2022-02-23 9:44 ` shihua
2022-02-23 9:44 ` [PATCH 3/5 V1] RISC-V:Implement intrinsics " shihua
` (2 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: shihua @ 2022-02-23 9:44 UTC (permalink / raw)
To: gcc-patches
Cc: ben.marshall, kito.cheng, cmuellner, palmer, andrew, lazyparser,
jiawei, mjos, LiaoShihua, Wu
From: LiaoShihua <shihua@iscas.ac.cn>
gcc/ChangeLog:
* config/riscv/riscv-builtins.cc (RISCV_FTYPE_NAME2): Define new
function prototype naming macro.
(RISCV_FTYPE_NAME3): Ditto.
(AVAIL): Define new availability predicates for the crypto
extensions.
(RISCV_ATYPE_SI): Define new argument type.
(RISCV_ATYPE_DI): Ditto.
(RISCV_FTYPE_ATYPES2): Define new RISCV_FTYPE_ATYPESN macro.
(RISCV_FTYPE_ATYPES3): Ditto.
* config/riscv/riscv-ftypes.def: Define new prototypes for the
RISC-V built-in functions.
* config/riscv/riscv-builtins-crypto.def: New file; define the
RISC-V built-in functions for the crypto extensions.
Co-Authored-By: SiYu Wu <siyu@isrc.iscas.ac.cn>
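The RISCV_FTYPE_NAME2 macro pastes together the enumerator name for a two-argument
prototype. Re-creating it in isolation shows the expansion (the STR stringizing helpers
are ours, added only to observe the result):

```c
#include <assert.h>
#include <string.h>

/* Re-creation of the prototype-naming macro from riscv-builtins.cc,
   plus stringizing helpers (ours) to inspect the pasted name.  */
#define RISCV_FTYPE_NAME2(A, B, C) RISCV_##A##_FTYPE_##B##_##C
#define STR_(x) #x
#define STR(x) STR_(x)
```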
---
gcc/config/riscv/riscv-builtins-crypto.def | 93 ++++++++++++++++++++++
gcc/config/riscv/riscv-builtins.cc | 35 ++++++++
gcc/config/riscv/riscv-ftypes.def | 7 ++
3 files changed, 135 insertions(+)
create mode 100644 gcc/config/riscv/riscv-builtins-crypto.def
diff --git a/gcc/config/riscv/riscv-builtins-crypto.def b/gcc/config/riscv/riscv-builtins-crypto.def
new file mode 100644
index 00000000000..91dcf457dd5
--- /dev/null
+++ b/gcc/config/riscv/riscv-builtins-crypto.def
@@ -0,0 +1,93 @@
+/* Builtin definitions for K extension
+ Copyright (C) 2022 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3. If not see
+<http://www.gnu.org/licenses/>. */
+
+// Zbkb
+RISCV_BUILTIN (ror_si, "ror_32", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, crypto_zbkb32),
+RISCV_BUILTIN (ror_di, "ror_64", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, crypto_zbkb64),
+RISCV_BUILTIN (rol_si, "rol_32", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, crypto_zbkb32),
+RISCV_BUILTIN (rol_di, "rol_64", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, crypto_zbkb64),
+RISCV_BUILTIN (bswapsi, "bswap32", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, crypto_zbkb32),
+RISCV_BUILTIN (bswapdi, "bswap64", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, crypto_zbkb64),
+RISCV_BUILTIN (zip, "zip_32", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zbkb32),
+RISCV_BUILTIN (unzip, "unzip_32", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zbkb32),
+RISCV_BUILTIN (brev8_si, "brev8_32", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zbkb32),
+RISCV_BUILTIN (brev8_di, "brev8_64", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zbkb64),
+
+// Zbkc
+RISCV_BUILTIN (clmul_si, "clmul_32", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, crypto_zbkc32),
+RISCV_BUILTIN (clmul_di, "clmul_64", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, crypto_zbkc64),
+RISCV_BUILTIN (clmulh_si, "clmulh_32", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, crypto_zbkc32),
+RISCV_BUILTIN (clmulh_di, "clmulh_64", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, crypto_zbkc64),
+
+// Zbkx
+RISCV_BUILTIN (xperm4_si, "xperm4_32", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, crypto_zbkx32),
+RISCV_BUILTIN (xperm4_di, "xperm4_64", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, crypto_zbkx64),
+RISCV_BUILTIN (xperm8_si, "xperm8_32", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, crypto_zbkx32),
+RISCV_BUILTIN (xperm8_di, "xperm8_64", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, crypto_zbkx64),
+
+// Zknd
+DIRECT_BUILTIN (aes32dsi, RISCV_SI_FTYPE_SI_SI_SI, crypto_zknd32),
+DIRECT_BUILTIN (aes32dsmi, RISCV_SI_FTYPE_SI_SI_SI, crypto_zknd32),
+DIRECT_BUILTIN (aes64ds, RISCV_DI_FTYPE_DI_DI, crypto_zknd64),
+DIRECT_BUILTIN (aes64dsm, RISCV_DI_FTYPE_DI_DI, crypto_zknd64),
+DIRECT_BUILTIN (aes64im, RISCV_DI_FTYPE_DI, crypto_zknd64),
+DIRECT_BUILTIN (aes64ks1i, RISCV_DI_FTYPE_DI_SI, crypto_zkne_or_zknd),
+DIRECT_BUILTIN (aes64ks2, RISCV_DI_FTYPE_DI_DI, crypto_zkne_or_zknd),
+
+// Zkne
+DIRECT_BUILTIN (aes32esi, RISCV_SI_FTYPE_SI_SI_SI, crypto_zkne32),
+DIRECT_BUILTIN (aes32esmi, RISCV_SI_FTYPE_SI_SI_SI, crypto_zkne32),
+DIRECT_BUILTIN (aes64es, RISCV_DI_FTYPE_DI_DI, crypto_zkne64),
+DIRECT_BUILTIN (aes64esm, RISCV_DI_FTYPE_DI_DI, crypto_zkne64),
+
+// Zknh - SHA256
+RISCV_BUILTIN (sha256sig0_si, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zknh32),
+RISCV_BUILTIN (sha256sig0_di, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zknh64),
+RISCV_BUILTIN (sha256sig1_si, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zknh32),
+RISCV_BUILTIN (sha256sig1_di, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zknh64),
+RISCV_BUILTIN (sha256sum0_si, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zknh32),
+RISCV_BUILTIN (sha256sum0_di, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zknh64),
+RISCV_BUILTIN (sha256sum1_si, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zknh32),
+RISCV_BUILTIN (sha256sum1_di, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zknh64),
+
+// Zknh - SHA512 (RV32)
+DIRECT_BUILTIN (sha512sig0h, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sig0l, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sig1h, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sig1l, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sum0r, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sum1r, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
+
+// Zknh - SHA512 (RV64)
+DIRECT_BUILTIN (sha512sig0, RISCV_DI_FTYPE_DI, crypto_zknh64),
+DIRECT_BUILTIN (sha512sig1, RISCV_DI_FTYPE_DI, crypto_zknh64),
+DIRECT_BUILTIN (sha512sum0, RISCV_DI_FTYPE_DI, crypto_zknh64),
+DIRECT_BUILTIN (sha512sum1, RISCV_DI_FTYPE_DI, crypto_zknh64),
+
+// Zksh
+RISCV_BUILTIN (sm3p0_si, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zksh32),
+RISCV_BUILTIN (sm3p0_di, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zksh64),
+RISCV_BUILTIN (sm3p1_si, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zksh32),
+RISCV_BUILTIN (sm3p1_di, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zksh64),
+
+// Zksed
+RISCV_BUILTIN (sm4ed_si, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI_SI, crypto_zksed32),
+RISCV_BUILTIN (sm4ed_di, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI_SI, crypto_zksed64),
+RISCV_BUILTIN (sm4ks_si, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI_SI, crypto_zksed32),
+RISCV_BUILTIN (sm4ks_di, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI_SI, crypto_zksed64),
diff --git a/gcc/config/riscv/riscv-builtins.cc b/gcc/config/riscv/riscv-builtins.cc
index 0658f8d3047..66419cb40d0 100644
--- a/gcc/config/riscv/riscv-builtins.cc
+++ b/gcc/config/riscv/riscv-builtins.cc
@@ -40,6 +40,8 @@ along with GCC; see the file COPYING3. If not see
/* Macros to create an enumeration identifier for a function prototype. */
#define RISCV_FTYPE_NAME0(A) RISCV_##A##_FTYPE
#define RISCV_FTYPE_NAME1(A, B) RISCV_##A##_FTYPE_##B
+#define RISCV_FTYPE_NAME2(A, B, C) RISCV_##A##_FTYPE_##B##_##C
+#define RISCV_FTYPE_NAME3(A, B, C, D) RISCV_##A##_FTYPE_##B##_##C##_##D
/* Classifies the prototype of a built-in function. */
enum riscv_function_type {
@@ -87,6 +89,31 @@ struct riscv_builtin_description {
AVAIL (hard_float, TARGET_HARD_FLOAT)
+AVAIL (crypto_zbkb32, TARGET_ZBKB && !TARGET_64BIT)
+AVAIL (crypto_zbkb64, TARGET_ZBKB && TARGET_64BIT)
+
+AVAIL (crypto_zbkc32, TARGET_ZBKC && !TARGET_64BIT)
+AVAIL (crypto_zbkc64, TARGET_ZBKC && TARGET_64BIT)
+
+AVAIL (crypto_zbkx32, TARGET_ZBKX && !TARGET_64BIT)
+AVAIL (crypto_zbkx64, TARGET_ZBKX && TARGET_64BIT)
+
+AVAIL (crypto_zknd32, TARGET_ZKND && !TARGET_64BIT)
+AVAIL (crypto_zknd64, TARGET_ZKND && TARGET_64BIT)
+
+AVAIL (crypto_zkne32, TARGET_ZKNE && !TARGET_64BIT)
+AVAIL (crypto_zkne64, TARGET_ZKNE && TARGET_64BIT)
+AVAIL (crypto_zkne_or_zknd, (TARGET_ZKNE || TARGET_ZKND) && TARGET_64BIT)
+
+AVAIL (crypto_zknh32, TARGET_ZKNH && !TARGET_64BIT)
+AVAIL (crypto_zknh64, TARGET_ZKNH && TARGET_64BIT)
+
+AVAIL (crypto_zksh32, TARGET_ZKSH && !TARGET_64BIT)
+AVAIL (crypto_zksh64, TARGET_ZKSH && TARGET_64BIT)
+
+AVAIL (crypto_zksed32, TARGET_ZKSED && !TARGET_64BIT)
+AVAIL (crypto_zksed64, TARGET_ZKSED && TARGET_64BIT)
+
/* Construct a riscv_builtin_description from the given arguments.
INSN is the name of the associated instruction pattern, without the
@@ -119,6 +146,8 @@ AVAIL (hard_float, TARGET_HARD_FLOAT)
/* Argument types. */
#define RISCV_ATYPE_VOID void_type_node
#define RISCV_ATYPE_USI unsigned_intSI_type_node
+#define RISCV_ATYPE_SI intSI_type_node
+#define RISCV_ATYPE_DI intDI_type_node
/* RISCV_FTYPE_ATYPESN takes N RISCV_FTYPES-like type codes and lists
their associated RISCV_ATYPEs. */
@@ -126,8 +155,14 @@ AVAIL (hard_float, TARGET_HARD_FLOAT)
RISCV_ATYPE_##A
#define RISCV_FTYPE_ATYPES1(A, B) \
RISCV_ATYPE_##A, RISCV_ATYPE_##B
+#define RISCV_FTYPE_ATYPES2(A, B, C) \
+ RISCV_ATYPE_##A, RISCV_ATYPE_##B, RISCV_ATYPE_##C
+#define RISCV_FTYPE_ATYPES3(A, B, C, D) \
+ RISCV_ATYPE_##A, RISCV_ATYPE_##B, RISCV_ATYPE_##C, RISCV_ATYPE_##D
static const struct riscv_builtin_description riscv_builtins[] = {
+ #include "riscv-builtins-crypto.def"
+
DIRECT_BUILTIN (frflags, RISCV_USI_FTYPE, hard_float),
DIRECT_NO_TARGET_BUILTIN (fsflags, RISCV_VOID_FTYPE_USI, hard_float)
};
diff --git a/gcc/config/riscv/riscv-ftypes.def b/gcc/config/riscv/riscv-ftypes.def
index 2214c496f9b..21a20f227b2 100644
--- a/gcc/config/riscv/riscv-ftypes.def
+++ b/gcc/config/riscv/riscv-ftypes.def
@@ -28,3 +28,10 @@ along with GCC; see the file COPYING3. If not see
DEF_RISCV_FTYPE (0, (USI))
DEF_RISCV_FTYPE (1, (VOID, USI))
+DEF_RISCV_FTYPE (1, (SI, SI))
+DEF_RISCV_FTYPE (1, (DI, DI))
+DEF_RISCV_FTYPE (2, (SI, SI, SI))
+DEF_RISCV_FTYPE (2, (DI, DI, DI))
+DEF_RISCV_FTYPE (2, (DI, DI, SI))
+DEF_RISCV_FTYPE (3, (SI, SI, SI, SI))
+DEF_RISCV_FTYPE (3, (DI, DI, DI, SI))
--
2.31.1.windows.1
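The Zbkc builtins defined above expose carry-less multiplication. Its semantics can be
stated portably as a slow reference loop (illustration only; this is not what the builtin
generates, and the name is ours):

```c
#include <assert.h>
#include <stdint.h>

/* Carry-less multiply over GF(2)[x], low 32 bits of the product: the
   operation behind the clmul builtin.  The hardware instruction does
   this in a single step.  */
static inline uint32_t ref_clmul32 (uint32_t a, uint32_t b)
{
  uint32_t r = 0;
  for (int i = 0; i < 32; i++)
    if ((b >> i) & 1)
      r ^= a << i;
  return r;
}
```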
* [PATCH 3/5 V1] RISC-V:Implement intrinsics for Crypto extension
2022-02-23 9:44 [PATCH 0/5 V1] RISC-V:Implement Crypto extension's instruction patterns and its intrinsics shihua
2022-02-23 9:44 ` [PATCH 1/5 V1] RISC-V:Implement instruction patterns for Crypto extension shihua
2022-02-23 9:44 ` [PATCH 2/5 V1] RISC-V:Implement built-in instructions " shihua
@ 2022-02-23 9:44 ` shihua
2022-02-28 15:34 ` Kito Cheng
2022-02-23 9:44 ` [PATCH 4/5 V1] RISC-V:Implement testcases " shihua
2022-02-23 9:44 ` [PATCH 5/5 V1] RISC-V:Implement architecture extension test macros " shihua
4 siblings, 1 reply; 12+ messages in thread
From: shihua @ 2022-02-23 9:44 UTC (permalink / raw)
To: gcc-patches
Cc: ben.marshall, kito.cheng, cmuellner, palmer, andrew, lazyparser,
jiawei, mjos, LiaoShihua
From: LiaoShihua <shihua@iscas.ac.cn>
These headers are taken from https://github.com/rvkrypto/rvkrypto-fips .
gcc/ChangeLog:
* config.gcc: Add extra_headers.
* config/riscv/riscv_crypto.h: New file.
* config/riscv/riscv_crypto_scalar.h: New file.
* config/riscv/rvk_asm_intrin.h: New file.
* config/riscv/rvk_emu_intrin.h: New file.
Co-Authored-By: mjosaarinen <mjos@iki.fi>
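riscv_crypto_scalar.h resolves each intrinsic to one of three implementations (emulation,
inline assembler, or compiler builtins) via a name-pasting macro. The selection logic,
reproduced standalone with RVKINTRIN_EMULATE forced on and stringizing helpers of ours:

```c
#include <assert.h>
#include <string.h>

/* Three-way dispatch as in riscv_crypto_scalar.h.  Here we force the
   (insecure) emulation path so the chosen prefix can be observed.  */
#define RVKINTRIN_EMULATE 1

#if defined(RVKINTRIN_EMULATE)
#define _RVK_INTRIN_IMPL(s) _rvk_emu_##s
#elif defined(RVKINTRIN_ASSEMBLER)
#define _RVK_INTRIN_IMPL(s) _rvk_asm_##s
#else
#define _RVK_INTRIN_IMPL(s) __builtin_riscv_##s
#endif

#define STR_(x) #x
#define STR(x) STR_(x)
```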
---
gcc/config.gcc | 1 +
gcc/config/riscv/riscv_crypto.h | 12 +
gcc/config/riscv/riscv_crypto_scalar.h | 247 ++++++++++
gcc/config/riscv/rvk_asm_intrin.h | 187 ++++++++
gcc/config/riscv/rvk_emu_intrin.h | 594 +++++++++++++++++++++++++
5 files changed, 1041 insertions(+)
create mode 100644 gcc/config/riscv/riscv_crypto.h
create mode 100644 gcc/config/riscv/riscv_crypto_scalar.h
create mode 100644 gcc/config/riscv/rvk_asm_intrin.h
create mode 100644 gcc/config/riscv/rvk_emu_intrin.h
diff --git a/gcc/config.gcc b/gcc/config.gcc
index 2cc5aeec9e4..caf673f1cb0 100644
--- a/gcc/config.gcc
+++ b/gcc/config.gcc
@@ -510,6 +510,7 @@ pru-*-*)
riscv*)
cpu_type=riscv
extra_objs="riscv-builtins.o riscv-c.o riscv-sr.o riscv-shorten-memrefs.o"
+ extra_headers="riscv_crypto.h riscv_crypto_scalar.h rvk_asm_intrin.h rvk_emu_intrin.h"
d_target_objs="riscv-d.o"
;;
rs6000*-*-*)
diff --git a/gcc/config/riscv/riscv_crypto.h b/gcc/config/riscv/riscv_crypto.h
new file mode 100644
index 00000000000..d06c777b7af
--- /dev/null
+++ b/gcc/config/riscv/riscv_crypto.h
@@ -0,0 +1,12 @@
+// riscv_crypto.h
+// 2022-02-12 Markku-Juhani O. Saarinen <mjos@pqshield.com>
+// Copyright (c) 2022, PQShield Ltd. All rights reserved.
+
+// === Master crypto intrinsics header. Currently just includes scalar crypto.
+
+#ifndef _RISCV_CRYPTO_H
+#define _RISCV_CRYPTO_H
+
+#include "riscv_crypto_scalar.h"
+
+#endif // _RISCV_CRYPTO_H
\ No newline at end of file
diff --git a/gcc/config/riscv/riscv_crypto_scalar.h b/gcc/config/riscv/riscv_crypto_scalar.h
new file mode 100644
index 00000000000..0ed627856fd
--- /dev/null
+++ b/gcc/config/riscv/riscv_crypto_scalar.h
@@ -0,0 +1,247 @@
+// riscv_crypto_scalar.h
+// 2021-11-08 Markku-Juhani O. Saarinen <mjos@pqshield.com>
+// Copyright (c) 2021, PQShield Ltd. All rights reserved.
+
+// === Scalar crypto: General mapping from intrinsics to compiler builtins,
+// inline assembler, or to an (insecure) porting / emulation layer.
+
+/*
+ * _rv_*(...)
+ * RV32/64 intrinsics that return the "long" data type
+ *
+ * _rv32_*(...)
+ * RV32/64 intrinsics that return the "int32_t" data type
+ *
+ * _rv64_*(...)
+ * RV64-only intrinsics that return the "int64_t" data type
+ *
+ */
+
+#ifndef _RISCV_CRYPTO_SCALAR_H
+#define _RISCV_CRYPTO_SCALAR_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#if !defined(__riscv_xlen) && !defined(RVKINTRIN_EMULATE)
+#warning "Target is not RISC-V. Enabling insecure emulation."
+#define RVKINTRIN_EMULATE 1
+#endif
+
+#if defined(RVKINTRIN_EMULATE)
+
+// intrinsics via emulation (insecure -- porting / debug option)
+#include "rvk_emu_intrin.h"
+#define _RVK_INTRIN_IMPL(s) _rvk_emu_##s
+
+#elif defined(RVKINTRIN_ASSEMBLER)
+
+// intrinsics via inline assembler (builtins not available)
+#include "rvk_asm_intrin.h"
+#define _RVK_INTRIN_IMPL(s) _rvk_asm_##s
+#else
+
+// intrinsics via compiler builtins
+#include <stdint.h>
+#define _RVK_INTRIN_IMPL(s) __builtin_riscv_##s
+
+#endif
+
+// set type if not already set
+#if !defined(RVKINTRIN_RV32) && !defined(RVKINTRIN_RV64)
+#if __riscv_xlen == 32
+#define RVKINTRIN_RV32
+#elif __riscv_xlen == 64
+#define RVKINTRIN_RV64
+#else
+#error "__riscv_xlen not valid."
+#endif
+#endif
+
+// Mappings to implementation
+
+// === (mapping) Zbkb: Bitmanipulation instructions for Cryptography
+
+static inline int32_t _rv32_ror(int32_t rs1, int32_t rs2)
+ { return _RVK_INTRIN_IMPL(ror_32)(rs1, rs2); } // ROR[W] ROR[W]I
+
+static inline int32_t _rv32_rol(int32_t rs1, int32_t rs2)
+ { return _RVK_INTRIN_IMPL(rol_32)(rs1, rs2); } // ROL[W] ROR[W]I
+
+#ifdef RVKINTRIN_RV64
+static inline int64_t _rv64_ror(int64_t rs1, int64_t rs2)
+ { return _RVK_INTRIN_IMPL(ror_64)(rs1, rs2); } // ROR or RORI
+
+static inline int64_t _rv64_rol(int64_t rs1, int64_t rs2)
+ { return _RVK_INTRIN_IMPL(rol_64)(rs1, rs2); } // ROL or RORI
+#endif
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rv32_brev8(int32_t rs1)
+ { return _RVK_INTRIN_IMPL(brev8_32)(rs1); } // BREV8 (GREVI)
+#endif
+
+#ifdef RVKINTRIN_RV64
+static inline int64_t _rv64_brev8(int64_t rs1)
+ { return _RVK_INTRIN_IMPL(brev8_64)(rs1); } // BREV8 (GREVI)
+#endif
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rv32_zip(int32_t rs1)
+ { return _RVK_INTRIN_IMPL(zip_32)(rs1); } // ZIP (SHFLI)
+
+static inline int32_t _rv32_unzip(int32_t rs1)
+ { return _RVK_INTRIN_IMPL(unzip_32)(rs1); } // UNZIP (UNSHFLI)
+#endif
+
+// === (mapping) Zbkc: Carry-less multiply instructions
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rv32_clmul(int32_t rs1, int32_t rs2)
+ { return _RVK_INTRIN_IMPL(clmul_32)(rs1, rs2); } // CLMUL
+
+static inline int32_t _rv32_clmulh(int32_t rs1, int32_t rs2)
+ { return _RVK_INTRIN_IMPL(clmulh_32)(rs1, rs2); } // CLMULH
+#endif
+
+#ifdef RVKINTRIN_RV64
+static inline int64_t _rv64_clmul(int64_t rs1, int64_t rs2)
+ { return _RVK_INTRIN_IMPL(clmul_64)(rs1, rs2); } // CLMUL
+
+static inline int64_t _rv64_clmulh(int64_t rs1, int64_t rs2)
+ { return _RVK_INTRIN_IMPL(clmulh_64)(rs1, rs2); } // CLMULH
+#endif
+
+// === (mapping) Zbkx: Crossbar permutation instructions
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rv32_xperm8(int32_t rs1, int32_t rs2)
+ { return _RVK_INTRIN_IMPL(xperm8_32)(rs1, rs2); } // XPERM8
+
+static inline int32_t _rv32_xperm4(int32_t rs1, int32_t rs2)
+ { return _RVK_INTRIN_IMPL(xperm4_32)(rs1, rs2); } // XPERM4
+#endif
+
+#ifdef RVKINTRIN_RV64
+static inline int64_t _rv64_xperm8(int64_t rs1, int64_t rs2)
+ { return _RVK_INTRIN_IMPL(xperm8_64)(rs1, rs2); } // XPERM8
+
+static inline int64_t _rv64_xperm4(int64_t rs1, int64_t rs2)
+ { return _RVK_INTRIN_IMPL(xperm4_64)(rs1, rs2); } // XPERM4
+#endif
+
+// === (mapping) Zknd: NIST Suite: AES Decryption
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rv32_aes32dsi(int32_t rs1, int32_t rs2, int bs)
+ { return _RVK_INTRIN_IMPL(aes32dsi)(rs1, rs2, bs); } // AES32DSI
+
+static inline int32_t _rv32_aes32dsmi(int32_t rs1, int32_t rs2, int bs)
+ { return _RVK_INTRIN_IMPL(aes32dsmi)(rs1, rs2, bs); } // AES32DSMI
+#endif
+
+#ifdef RVKINTRIN_RV64
+static inline int64_t _rv64_aes64ds(int64_t rs1, int64_t rs2)
+ { return _RVK_INTRIN_IMPL(aes64ds)(rs1, rs2); } // AES64DS
+
+static inline int64_t _rv64_aes64dsm(int64_t rs1, int64_t rs2)
+ { return _RVK_INTRIN_IMPL(aes64dsm)(rs1, rs2); } // AES64DSM
+
+static inline int64_t _rv64_aes64im(int64_t rs1)
+ { return _RVK_INTRIN_IMPL(aes64im)(rs1); } // AES64IM
+
+static inline int64_t _rv64_aes64ks1i(int64_t rs1, int rnum)
+ { return _RVK_INTRIN_IMPL(aes64ks1i)(rs1, rnum); } // AES64KS1I
+
+static inline int64_t _rv64_aes64ks2(int64_t rs1, int64_t rs2)
+ { return _RVK_INTRIN_IMPL(aes64ks2)(rs1, rs2); } // AES64KS2
+#endif
+
+// === (mapping) Zkne: NIST Suite: AES Encryption
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rv32_aes32esi(int32_t rs1, int32_t rs2, int bs)
+ { return _RVK_INTRIN_IMPL(aes32esi)(rs1, rs2, bs); } // AES32ESI
+
+static inline int32_t _rv32_aes32esmi(int32_t rs1, int32_t rs2, int bs)
+ { return _RVK_INTRIN_IMPL(aes32esmi)(rs1, rs2, bs); } // AES32ESMI
+#endif
+
+#ifdef RVKINTRIN_RV64
+static inline int64_t _rv64_aes64es(int64_t rs1, int64_t rs2)
+ { return _RVK_INTRIN_IMPL(aes64es)(rs1, rs2); } // AES64ES
+
+static inline int64_t _rv64_aes64esm(int64_t rs1, int64_t rs2)
+ { return _RVK_INTRIN_IMPL(aes64esm)(rs1, rs2); } // AES64ESM
+#endif
+
+// === (mapping) Zknh: NIST Suite: Hash Function Instructions
+
+static inline long _rv_sha256sig0(long rs1)
+ { return _RVK_INTRIN_IMPL(sha256sig0)(rs1); } // SHA256SIG0
+
+static inline long _rv_sha256sig1(long rs1)
+ { return _RVK_INTRIN_IMPL(sha256sig1)(rs1); } // SHA256SIG1
+
+static inline long _rv_sha256sum0(long rs1)
+ { return _RVK_INTRIN_IMPL(sha256sum0)(rs1); } // SHA256SUM0
+
+static inline long _rv_sha256sum1(long rs1)
+ { return _RVK_INTRIN_IMPL(sha256sum1)(rs1); } // SHA256SUM1
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rv32_sha512sig0h(int32_t rs1, int32_t rs2)
+ { return _RVK_INTRIN_IMPL(sha512sig0h)(rs1, rs2); } // SHA512SIG0H
+
+static inline int32_t _rv32_sha512sig0l(int32_t rs1, int32_t rs2)
+ { return _RVK_INTRIN_IMPL(sha512sig0l)(rs1, rs2); } // SHA512SIG0L
+
+static inline int32_t _rv32_sha512sig1h(int32_t rs1, int32_t rs2)
+ { return _RVK_INTRIN_IMPL(sha512sig1h)(rs1, rs2); } // SHA512SIG1H
+
+static inline int32_t _rv32_sha512sig1l(int32_t rs1, int32_t rs2)
+ { return _RVK_INTRIN_IMPL(sha512sig1l)(rs1, rs2); } // SHA512SIG1L
+
+static inline int32_t _rv32_sha512sum0r(int32_t rs1, int32_t rs2)
+ { return _RVK_INTRIN_IMPL(sha512sum0r)(rs1, rs2); } // SHA512SUM0R
+
+static inline int32_t _rv32_sha512sum1r(int32_t rs1, int32_t rs2)
+ { return _RVK_INTRIN_IMPL(sha512sum1r)(rs1, rs2); } // SHA512SUM1R
+#endif
+
+#ifdef RVKINTRIN_RV64
+static inline int64_t _rv64_sha512sig0(int64_t rs1)
+ { return _RVK_INTRIN_IMPL(sha512sig0)(rs1); } // SHA512SIG0
+
+static inline int64_t _rv64_sha512sig1(int64_t rs1)
+ { return _RVK_INTRIN_IMPL(sha512sig1)(rs1); } // SHA512SIG1
+
+static inline int64_t _rv64_sha512sum0(int64_t rs1)
+ { return _RVK_INTRIN_IMPL(sha512sum0)(rs1); } // SHA512SUM0
+
+static inline int64_t _rv64_sha512sum1(int64_t rs1)
+ { return _RVK_INTRIN_IMPL(sha512sum1)(rs1); } // SHA512SUM1
+#endif
+
+// === (mapping) Zksed: ShangMi Suite: SM4 Block Cipher Instructions
+
+static inline long _rv_sm4ks(int32_t rs1, int32_t rs2, int bs)
+ { return _RVK_INTRIN_IMPL(sm4ks)(rs1, rs2, bs); } // SM4KS
+
+static inline long _rv_sm4ed(int32_t rs1, int32_t rs2, int bs)
+ { return _RVK_INTRIN_IMPL(sm4ed)(rs1, rs2, bs); } // SM4ED
+
+// === (mapping) Zksh: ShangMi Suite: SM3 Hash Function Instructions
+
+static inline long _rv_sm3p0(long rs1)
+ { return _RVK_INTRIN_IMPL(sm3p0)(rs1); } // SM3P0
+
+static inline long _rv_sm3p1(long rs1)
+ { return _RVK_INTRIN_IMPL(sm3p1)(rs1); } // SM3P1
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif // _RISCV_CRYPTO_SCALAR_H
diff --git a/gcc/config/riscv/rvk_asm_intrin.h b/gcc/config/riscv/rvk_asm_intrin.h
new file mode 100644
index 00000000000..a9a088d1fd6
--- /dev/null
+++ b/gcc/config/riscv/rvk_asm_intrin.h
@@ -0,0 +1,187 @@
+// rvk_asm_intrin.h
+// 2021-11-08 Markku-Juhani O. Saarinen <mjos@pqshield.com>
+// Copyright (c) 2021, PQShield Ltd. All rights reserved.
+
+// === Inline assembler definitions for scalar cryptography intrinsics.
+
+#ifndef _RVK_ASM_INTRIN_H
+#define _RVK_ASM_INTRIN_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#if __riscv_xlen == 32
+#define RVKINTRIN_RV32
+#elif __riscv_xlen == 64
+#define RVKINTRIN_RV64
+#else
+#error "__riscv_xlen not valid."
+#endif
+
+// === (inline) Zbkb: Bitmanipulation instructions for Cryptography
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rvk_asm_ror_32(int32_t rs1, int32_t rs2)
+ { int32_t rd; if (__builtin_constant_p(rs2)) __asm__ ("rori %0, %1, %2" : "=r"(rd) : "r"(rs1), "i"(31 & rs2)); else __asm__ ("ror %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int32_t _rvk_asm_rol_32(int32_t rs1, int32_t rs2)
+ { int32_t rd; if (__builtin_constant_p(rs2)) __asm__ ("rori %0, %1, %2" : "=r"(rd) : "r"(rs1), "i"(31 & -rs2)); else __asm__ ("rol %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+#endif
+
+#ifdef RVKINTRIN_RV64
+static inline int32_t _rvk_asm_ror_32(int32_t rs1, int32_t rs2)
+ { int32_t rd; if (__builtin_constant_p(rs2)) __asm__ ("roriw %0, %1, %2" : "=r"(rd) : "r"(rs1), "i"(31 & rs2)); else __asm__ ("rorw %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int32_t _rvk_asm_rol_32(int32_t rs1, int32_t rs2)
+ { int32_t rd; if (__builtin_constant_p(rs2)) __asm__ ("roriw %0, %1, %2" : "=r"(rd) : "r"(rs1), "i"(31 & -rs2)); else __asm__ ("rolw %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int64_t _rvk_asm_ror_64(int64_t rs1, int64_t rs2)
+ { int64_t rd; if (__builtin_constant_p(rs2)) __asm__ ("rori %0, %1, %2" : "=r"(rd) : "r"(rs1), "i"(63 & rs2)); else __asm__ ("ror %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int64_t _rvk_asm_rol_64(int64_t rs1, int64_t rs2)
+ { int64_t rd; if (__builtin_constant_p(rs2)) __asm__ ("rori %0, %1, %2" : "=r"(rd) : "r"(rs1), "i"(63 & -rs2)); else __asm__ ("rol %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+#endif
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rvk_asm_brev8_32(int32_t rs1)
+ { int32_t rd; __asm__ ("grevi %0, %1, %2" : "=r"(rd) : "r"(rs1), "i"(7)); return rd; }
+#endif
+
+#ifdef RVKINTRIN_RV64
+static inline int64_t _rvk_asm_brev8_64(int64_t rs1)
+ { int64_t rd; __asm__ ("grevi %0, %1, %2" : "=r"(rd) : "r"(rs1), "i"(7)); return rd; }
+#endif
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rvk_asm_zip_32(int32_t rs1)
+ { int32_t rd; __asm__ ("shfli %0, %1, %2" : "=r"(rd) : "r"(rs1), "i"(15)); return rd; }
+static inline int32_t _rvk_asm_unzip_32(int32_t rs1)
+ { int32_t rd; __asm__ ("unshfli %0, %1, %2" : "=r"(rd) : "r"(rs1), "i"(15)); return rd; }
+#endif
+
+// === (inline) Zbkc: Carry-less multiply instructions
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rvk_asm_clmul_32(int32_t rs1, int32_t rs2)
+ { int32_t rd; __asm__ ("clmul %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int32_t _rvk_asm_clmulh_32(int32_t rs1, int32_t rs2)
+ { int32_t rd; __asm__ ("clmulh %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+#endif
+
+#ifdef RVKINTRIN_RV64
+static inline int64_t _rvk_asm_clmul_64(int64_t rs1, int64_t rs2)
+ { int64_t rd; __asm__ ("clmul %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int64_t _rvk_asm_clmulh_64(int64_t rs1, int64_t rs2)
+ { int64_t rd; __asm__ ("clmulh %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+#endif
+
+// === (inline) Zbkx: Crossbar permutation instructions
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rvk_asm_xperm8_32(int32_t rs1, int32_t rs2)
+ { int32_t rd; __asm__ ("xperm8 %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int32_t _rvk_asm_xperm4_32(int32_t rs1, int32_t rs2)
+ { int32_t rd; __asm__ ("xperm4 %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+#endif
+
+#ifdef RVKINTRIN_RV64
+static inline int64_t _rvk_asm_xperm8_64(int64_t rs1, int64_t rs2)
+ { int64_t rd; __asm__ ("xperm8 %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int64_t _rvk_asm_xperm4_64(int64_t rs1, int64_t rs2)
+ { int64_t rd; __asm__ ("xperm4 %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+#endif
+
+// === (inline) Zknd: NIST Suite: AES Decryption
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rvk_asm_aes32dsi(int32_t rs1, int32_t rs2, int bs)
+ { int32_t rd; __asm__("aes32dsi %0, %1, %2, %3" : "=r"(rd) : "r"(rs1), "r"(rs2), "i"(bs)); return rd; }
+static inline int32_t _rvk_asm_aes32dsmi(int32_t rs1, int32_t rs2, int bs)
+ { int32_t rd; __asm__("aes32dsmi %0, %1, %2, %3" : "=r"(rd) : "r"(rs1), "r"(rs2), "i"(bs)); return rd; }
+#endif
+
+#ifdef RVKINTRIN_RV64
+static inline int64_t _rvk_asm_aes64ds(int64_t rs1, int64_t rs2)
+ { int64_t rd; __asm__("aes64ds %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int64_t _rvk_asm_aes64dsm(int64_t rs1, int64_t rs2)
+ { int64_t rd; __asm__("aes64dsm %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int64_t _rvk_asm_aes64im(int64_t rs1)
+	{ int64_t rd; __asm__("aes64im %0, %1" : "=r"(rd) : "r"(rs1)); return rd; }
+static inline int64_t _rvk_asm_aes64ks1i(int64_t rs1, int rnum)
+ { int64_t rd; __asm__("aes64ks1i %0, %1, %2" : "=r"(rd) : "r"(rs1), "i"(rnum)); return rd; }
+static inline int64_t _rvk_asm_aes64ks2(int64_t rs1, int64_t rs2)
+ { int64_t rd; __asm__("aes64ks2 %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+#endif
+
+// === (inline) Zkne: NIST Suite: AES Encryption
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rvk_asm_aes32esi(int32_t rs1, int32_t rs2, int bs)
+ { int32_t rd; __asm__("aes32esi %0, %1, %2, %3" : "=r"(rd) : "r"(rs1), "r"(rs2), "i"(bs)); return rd; }
+static inline int32_t _rvk_asm_aes32esmi(int32_t rs1, int32_t rs2, int bs)
+ { int32_t rd; __asm__("aes32esmi %0, %1, %2, %3" : "=r"(rd) : "r"(rs1), "r"(rs2), "i"(bs)); return rd; }
+#endif
+
+#ifdef RVKINTRIN_RV64
+static inline int64_t _rvk_asm_aes64es(int64_t rs1, int64_t rs2)
+ { int64_t rd; __asm__("aes64es %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int64_t _rvk_asm_aes64esm(int64_t rs1, int64_t rs2)
+ { int64_t rd; __asm__("aes64esm %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+#endif
+
+// === (inline) Zknh: NIST Suite: Hash Function Instructions
+
+static inline long _rvk_asm_sha256sig0(long rs1)
+ { long rd; __asm__ ("sha256sig0 %0, %1" : "=r"(rd) : "r"(rs1)); return rd; }
+static inline long _rvk_asm_sha256sig1(long rs1)
+ { long rd; __asm__ ("sha256sig1 %0, %1" : "=r"(rd) : "r"(rs1)); return rd; }
+static inline long _rvk_asm_sha256sum0(long rs1)
+ { long rd; __asm__ ("sha256sum0 %0, %1" : "=r"(rd) : "r"(rs1)); return rd; }
+static inline long _rvk_asm_sha256sum1(long rs1)
+ { long rd; __asm__ ("sha256sum1 %0, %1" : "=r"(rd) : "r"(rs1)); return rd; }
+
+#ifdef RVKINTRIN_RV32
+static inline int32_t _rvk_asm_sha512sig0h(int32_t rs1, int32_t rs2)
+ { int32_t rd; __asm__ ("sha512sig0h %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int32_t _rvk_asm_sha512sig0l(int32_t rs1, int32_t rs2)
+ { int32_t rd; __asm__ ("sha512sig0l %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int32_t _rvk_asm_sha512sig1h(int32_t rs1, int32_t rs2)
+ { int32_t rd; __asm__ ("sha512sig1h %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int32_t _rvk_asm_sha512sig1l(int32_t rs1, int32_t rs2)
+ { int32_t rd; __asm__ ("sha512sig1l %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int32_t _rvk_asm_sha512sum0r(int32_t rs1, int32_t rs2)
+ { int32_t rd; __asm__ ("sha512sum0r %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+static inline int32_t _rvk_asm_sha512sum1r(int32_t rs1, int32_t rs2)
+ { int32_t rd; __asm__ ("sha512sum1r %0, %1, %2" : "=r"(rd) : "r"(rs1), "r"(rs2)); return rd; }
+#endif
+
+#ifdef RVKINTRIN_RV64
+static inline int64_t _rvk_asm_sha512sig0(int64_t rs1)
+ { int64_t rd; __asm__ ("sha512sig0 %0, %1" : "=r"(rd) : "r"(rs1)); return rd; }
+static inline int64_t _rvk_asm_sha512sig1(int64_t rs1)
+ { int64_t rd; __asm__ ("sha512sig1 %0, %1" : "=r"(rd) : "r"(rs1)); return rd; }
+static inline int64_t _rvk_asm_sha512sum0(int64_t rs1)
+ { int64_t rd; __asm__ ("sha512sum0 %0, %1" : "=r"(rd) : "r"(rs1)); return rd; }
+static inline int64_t _rvk_asm_sha512sum1(int64_t rs1)
+ { int64_t rd; __asm__ ("sha512sum1 %0, %1" : "=r"(rd) : "r"(rs1)); return rd; }
+#endif
+
+// === (inline) Zksed: ShangMi Suite: SM4 Block Cipher Instructions
+
+static inline long _rvk_asm_sm4ks(int32_t rs1, int32_t rs2, int bs)
+ { long rd; __asm__("sm4ks %0, %1, %2, %3" : "=r"(rd) : "r"(rs1), "r"(rs2), "i"(bs)); return rd; }
+static inline long _rvk_asm_sm4ed(int32_t rs1, int32_t rs2, int bs)
+ { long rd; __asm__("sm4ed %0, %1, %2, %3" : "=r"(rd) : "r"(rs1), "r"(rs2), "i"(bs)); return rd; }
+
+// === (inline) Zksh: ShangMi Suite: SM3 Hash Function Instructions
+
+static inline long _rvk_asm_sm3p0(long rs1)
+ { long rd; __asm__("sm3p0 %0, %1" : "=r"(rd) : "r"(rs1)); return rd; }
+static inline long _rvk_asm_sm3p1(long rs1)
+ { long rd; __asm__("sm3p1 %0, %1" : "=r"(rd) : "r"(rs1)); return rd; }
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif // _RVK_ASM_INTRIN_H
diff --git a/gcc/config/riscv/rvk_emu_intrin.h b/gcc/config/riscv/rvk_emu_intrin.h
new file mode 100644
index 00000000000..9b6e874696a
--- /dev/null
+++ b/gcc/config/riscv/rvk_emu_intrin.h
@@ -0,0 +1,594 @@
+// rvk_emu_intrin.h
+// 2021-02-13 Markku-Juhani O. Saarinen <mjos@pqshield.com>
+// Copyright (c) 2021, PQShield Ltd. All rights reserved.
+
+// === Platform-independent emulation for scalar cryptography intrinsics.
+// Requires tables in rvk_emu_intrin.c (prefix _rvk_emu)
+
+#ifndef _RVK_EMU_INTRIN_H
+#define _RVK_EMU_INTRIN_H
+
+#ifdef RVKINTRIN_EMULATE
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <limits.h>
+#include <stdint.h>
+
+// === RVKINTRIN_EMULATE ==============================================
+
+#if UINT_MAX != 0xffffffffU
+# error "<rvk_emu_intrin.h> supports systems with sizeof(int) = 4."
+#endif
+
+#if (ULLONG_MAX == 0xffffffffLLU) || (ULLONG_MAX != 0xffffffffffffffffLLU)
+# error "<rvk_emu_intrin.h> supports systems with sizeof(long long) = 8."
+#endif
+
+#if !defined(RVKINTRIN_RV32) && !defined(RVKINTRIN_RV64)
+#if UINT_MAX == ULONG_MAX
+# define RVKINTRIN_RV32
+#else
+# define RVKINTRIN_RV64
+#endif
+#endif
+
+// === (emulated) Zbkb: Bitmanipulation instructions for Cryptography
+
+// shift helpers (that mask/limit the amount of shift)
+
+static inline int32_t _rvk_emu_sll_32(int32_t rs1, int32_t rs2)
+ { return rs1 << (rs2 & 31); }
+static inline int32_t _rvk_emu_srl_32(int32_t rs1, int32_t rs2)
+ { return (uint32_t)rs1 >> (rs2 & 31); }
+static inline int64_t _rvk_emu_sll_64(int64_t rs1, int64_t rs2)
+ { return rs1 << (rs2 & 63); }
+static inline int64_t _rvk_emu_srl_64(int64_t rs1, int64_t rs2)
+ { return (uint64_t)rs1 >> (rs2 & 63); }
+
+// rotate (part of the extension); there is no separate intrinsic for rori
+
+static inline int32_t _rvk_emu_rol_32(int32_t rs1, int32_t rs2)
+ { return _rvk_emu_sll_32(rs1, rs2) | _rvk_emu_srl_32(rs1, -rs2); }
+static inline int32_t _rvk_emu_ror_32(int32_t rs1, int32_t rs2)
+ { return _rvk_emu_srl_32(rs1, rs2) | _rvk_emu_sll_32(rs1, -rs2); }
+
+static inline int64_t _rvk_emu_rol_64(int64_t rs1, int64_t rs2)
+ { return _rvk_emu_sll_64(rs1, rs2) | _rvk_emu_srl_64(rs1, -rs2); }
+static inline int64_t _rvk_emu_ror_64(int64_t rs1, int64_t rs2)
+ { return _rvk_emu_srl_64(rs1, rs2) | _rvk_emu_sll_64(rs1, -rs2); }
+
+// brev8, rev8
+
+static inline int32_t _rvk_emu_grev_32(int32_t rs1, int32_t rs2)
+{
+ uint32_t x = rs1;
+ int shamt = rs2 & 31;
+ if (shamt & 1) x = ((x & 0x55555555) << 1) | ((x & 0xAAAAAAAA) >> 1);
+ if (shamt & 2) x = ((x & 0x33333333) << 2) | ((x & 0xCCCCCCCC) >> 2);
+ if (shamt & 4) x = ((x & 0x0F0F0F0F) << 4) | ((x & 0xF0F0F0F0) >> 4);
+ if (shamt & 8) x = ((x & 0x00FF00FF) << 8) | ((x & 0xFF00FF00) >> 8);
+ if (shamt & 16) x = ((x & 0x0000FFFF) << 16) | ((x & 0xFFFF0000) >> 16);
+ return x;
+}
+
+static inline int64_t _rvk_emu_grev_64(int64_t rs1, int64_t rs2)
+{
+ uint64_t x = rs1;
+ int shamt = rs2 & 63;
+ if (shamt & 1)
+ x = ((x & 0x5555555555555555LL) << 1) |
+ ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
+ if (shamt & 2)
+ x = ((x & 0x3333333333333333LL) << 2) |
+ ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
+ if (shamt & 4)
+ x = ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
+ ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
+ if (shamt & 8)
+ x = ((x & 0x00FF00FF00FF00FFLL) << 8) |
+ ((x & 0xFF00FF00FF00FF00LL) >> 8);
+ if (shamt & 16)
+ x = ((x & 0x0000FFFF0000FFFFLL) << 16) |
+ ((x & 0xFFFF0000FFFF0000LL) >> 16);
+ if (shamt & 32)
+ x = ((x & 0x00000000FFFFFFFFLL) << 32) |
+ ((x & 0xFFFFFFFF00000000LL) >> 32);
+ return x;
+}
+
+static inline int32_t _rvk_emu_brev8_32(int32_t rs1)
+ { return _rvk_emu_grev_32(rs1, 7); }
+
+static inline int64_t _rvk_emu_brev8_64(int64_t rs1)
+ { return _rvk_emu_grev_64(rs1, 7); }
+
+// shuffle (zip and unzip, RV32 only)
+
+static inline uint32_t _rvk_emu_shuffle32_stage(uint32_t src, uint32_t maskL, uint32_t maskR, int N)
+{
+ uint32_t x = src & ~(maskL | maskR);
+ x |= ((src << N) & maskL) | ((src >> N) & maskR);
+ return x;
+}
+static inline int32_t _rvk_emu_shfl_32(int32_t rs1, int32_t rs2)
+{
+ uint32_t x = rs1;
+ int shamt = rs2 & 15;
+
+ if (shamt & 8) x = _rvk_emu_shuffle32_stage(x, 0x00ff0000, 0x0000ff00, 8);
+ if (shamt & 4) x = _rvk_emu_shuffle32_stage(x, 0x0f000f00, 0x00f000f0, 4);
+ if (shamt & 2) x = _rvk_emu_shuffle32_stage(x, 0x30303030, 0x0c0c0c0c, 2);
+ if (shamt & 1) x = _rvk_emu_shuffle32_stage(x, 0x44444444, 0x22222222, 1);
+
+ return x;
+}
+
+static inline int32_t _rvk_emu_unshfl_32(int32_t rs1, int32_t rs2)
+{
+ uint32_t x = rs1;
+ int shamt = rs2 & 15;
+
+ if (shamt & 1) x = _rvk_emu_shuffle32_stage(x, 0x44444444, 0x22222222, 1);
+ if (shamt & 2) x = _rvk_emu_shuffle32_stage(x, 0x30303030, 0x0c0c0c0c, 2);
+ if (shamt & 4) x = _rvk_emu_shuffle32_stage(x, 0x0f000f00, 0x00f000f0, 4);
+ if (shamt & 8) x = _rvk_emu_shuffle32_stage(x, 0x00ff0000, 0x0000ff00, 8);
+
+ return x;
+}
+
+static inline int32_t _rvk_emu_zip_32(int32_t rs1)
+ { return _rvk_emu_shfl_32(rs1, 15); }
+static inline int32_t _rvk_emu_unzip_32(int32_t rs1)
+ { return _rvk_emu_unshfl_32(rs1, 15); }
+
+// === (emulated) Zbkc: Carry-less multiply instructions
+
+static inline int32_t _rvk_emu_clmul_32(int32_t rs1, int32_t rs2)
+{
+ uint32_t a = rs1, b = rs2, x = 0;
+ for (int i = 0; i < 32; i++) {
+ if ((b >> i) & 1)
+ x ^= a << i;
+ }
+ return x;
+}
+
+static inline int32_t _rvk_emu_clmulh_32(int32_t rs1, int32_t rs2)
+{
+ uint32_t a = rs1, b = rs2, x = 0;
+ for (int i = 1; i < 32; i++) {
+ if ((b >> i) & 1)
+ x ^= a >> (32-i);
+ }
+ return x;
+}
+
+static inline int64_t _rvk_emu_clmul_64(int64_t rs1, int64_t rs2)
+{
+ uint64_t a = rs1, b = rs2, x = 0;
+
+ for (int i = 0; i < 64; i++) {
+ if ((b >> i) & 1)
+ x ^= a << i;
+ }
+ return x;
+}
+
+static inline int64_t _rvk_emu_clmulh_64(int64_t rs1, int64_t rs2)
+{
+ uint64_t a = rs1, b = rs2, x = 0;
+
+ for (int i = 1; i < 64; i++) {
+ if ((b >> i) & 1)
+ x ^= a >> (64-i);
+ }
+ return x;
+}
+
+// === (emulated) Zbkx: Crossbar permutation instructions
+
+static inline uint32_t _rvk_emu_xperm32(uint32_t rs1, uint32_t rs2, int sz_log2)
+{
+ uint32_t r = 0;
+ uint32_t sz = 1LL << sz_log2;
+ uint32_t mask = (1LL << sz) - 1;
+ for (int i = 0; i < 32; i += sz) {
+ uint32_t pos = ((rs2 >> i) & mask) << sz_log2;
+ if (pos < 32)
+ r |= ((rs1 >> pos) & mask) << i;
+ }
+ return r;
+}
+
+static inline int32_t _rvk_emu_xperm4_32(int32_t rs1, int32_t rs2)
+ { return _rvk_emu_xperm32(rs1, rs2, 2); }
+
+static inline int32_t _rvk_emu_xperm8_32(int32_t rs1, int32_t rs2)
+ { return _rvk_emu_xperm32(rs1, rs2, 3); }
+
+static inline uint64_t _rvk_emu_xperm64(uint64_t rs1, uint64_t rs2, int sz_log2)
+{
+ uint64_t r = 0;
+ uint64_t sz = 1LL << sz_log2;
+ uint64_t mask = (1LL << sz) - 1;
+ for (int i = 0; i < 64; i += sz) {
+ uint64_t pos = ((rs2 >> i) & mask) << sz_log2;
+ if (pos < 64)
+ r |= ((rs1 >> pos) & mask) << i;
+ }
+ return r;
+}
+
+static inline int64_t _rvk_emu_xperm4_64(int64_t rs1, int64_t rs2)
+ { return _rvk_emu_xperm64(rs1, rs2, 2); }
+
+static inline int64_t _rvk_emu_xperm8_64(int64_t rs1, int64_t rs2)
+ { return _rvk_emu_xperm64(rs1, rs2, 3); }
+
+/*
+ * _rvk_emu_*(...)
+ * Some INTERNAL tables (rvk_emu.c) and functions.
+ */
+
+extern const uint8_t _rvk_emu_aes_fwd_sbox[256]; // AES Forward S-Box
+extern const uint8_t _rvk_emu_aes_inv_sbox[256]; // AES Inverse S-Box
+extern const uint8_t _rvk_emu_sm4_sbox[256]; // SM4 S-Box
+
+// rvk_emu internal: multiply by 0x02 in AES's GF(256) - LFSR style.
+
+static inline uint8_t _rvk_emu_aes_xtime(uint8_t x)
+{
+ return (x << 1) ^ ((x & 0x80) ? 0x11B : 0x00);
+}
+
+// rvk_emu internal: AES forward MixColumns 8->32 bits
+
+static inline uint32_t _rvk_emu_aes_fwd_mc_8(uint32_t x)
+{
+ uint32_t x2;
+
+ x2 = _rvk_emu_aes_xtime(x); // double x
+ x = ((x ^ x2) << 24) | // 0x03 MixCol MDS Matrix
+ (x << 16) | // 0x01
+ (x << 8) | // 0x01
+ x2; // 0x02
+
+ return x;
+}
+
+// rvk_emu internal: AES forward MixColumns 32->32 bits
+
+static inline uint32_t _rvk_emu_aes_fwd_mc_32(uint32_t x)
+{
+ return _rvk_emu_aes_fwd_mc_8(x & 0xFF) ^
+ _rvk_emu_rol_32(_rvk_emu_aes_fwd_mc_8((x >> 8) & 0xFF), 8) ^
+ _rvk_emu_rol_32(_rvk_emu_aes_fwd_mc_8((x >> 16) & 0xFF), 16) ^
+ _rvk_emu_rol_32(_rvk_emu_aes_fwd_mc_8((x >> 24) & 0xFF), 24);
+}
+
+// rvk_emu internal: AES inverse MixColumns 8->32 bits
+
+static inline uint32_t _rvk_emu_aes_inv_mc_8(uint32_t x)
+{
+ uint32_t x2, x4, x8;
+
+ x2 = _rvk_emu_aes_xtime(x); // double x
+ x4 = _rvk_emu_aes_xtime(x2); // double to 4*x
+ x8 = _rvk_emu_aes_xtime(x4); // double to 8*x
+
+ x = ((x ^ x2 ^ x8) << 24) | // 0x0B Inv MixCol MDS Matrix
+ ((x ^ x4 ^ x8) << 16) | // 0x0D
+ ((x ^ x8) << 8) | // 0x09
+ (x2 ^ x4 ^ x8); // 0x0E
+
+ return x;
+}
+
+// rvk_emu internal: AES inverse MixColumns 32->32 bits
+
+static inline uint32_t _rvk_emu_aes_inv_mc_32(uint32_t x)
+{
+ return _rvk_emu_aes_inv_mc_8(x & 0xFF) ^
+ _rvk_emu_rol_32(_rvk_emu_aes_inv_mc_8((x >> 8) & 0xFF), 8) ^
+ _rvk_emu_rol_32(_rvk_emu_aes_inv_mc_8((x >> 16) & 0xFF), 16) ^
+ _rvk_emu_rol_32(_rvk_emu_aes_inv_mc_8((x >> 24) & 0xFF), 24);
+}
+
+// === (emulated) Zknd: NIST Suite: AES Decryption
+
+static inline int32_t _rvk_emu_aes32dsi(int32_t rs1, int32_t rs2, uint8_t bs)
+{
+ int32_t x;
+
+ bs = (bs & 3) << 3; // byte select
+ x = (rs2 >> bs) & 0xFF;
+ x = _rvk_emu_aes_inv_sbox[x]; // AES inverse s-box
+
+ return rs1 ^ _rvk_emu_rol_32(x, bs);
+}
+
+static inline int32_t _rvk_emu_aes32dsmi(int32_t rs1, int32_t rs2, uint8_t bs)
+{
+ int32_t x;
+
+ bs = (bs & 3) << 3; // byte select
+ x = (rs2 >> bs) & 0xFF;
+ x = _rvk_emu_aes_inv_sbox[x]; // AES inverse s-box
+ x = _rvk_emu_aes_inv_mc_8(x); // inverse MixColumns
+
+ return rs1 ^ _rvk_emu_rol_32(x, bs);
+}
+
+static inline int64_t _rvk_emu_aes64ds(int64_t rs1, int64_t rs2)
+{
+ // Half of inverse ShiftRows and SubBytes (last round)
+ return ((int64_t) _rvk_emu_aes_inv_sbox[rs1 & 0xFF]) |
+ (((int64_t) _rvk_emu_aes_inv_sbox[(rs2 >> 40) & 0xFF]) << 8) |
+ (((int64_t) _rvk_emu_aes_inv_sbox[(rs2 >> 16) & 0xFF]) << 16) |
+ (((int64_t) _rvk_emu_aes_inv_sbox[(rs1 >> 56) & 0xFF]) << 24) |
+ (((int64_t) _rvk_emu_aes_inv_sbox[(rs1 >> 32) & 0xFF]) << 32) |
+ (((int64_t) _rvk_emu_aes_inv_sbox[(rs1 >> 8) & 0xFF]) << 40) |
+ (((int64_t) _rvk_emu_aes_inv_sbox[(rs2 >> 48) & 0xFF]) << 48) |
+ (((int64_t) _rvk_emu_aes_inv_sbox[(rs2 >> 24) & 0xFF]) << 56);
+}
+
+static inline int64_t _rvk_emu_aes64im(int64_t rs1)
+{
+ return ((int64_t) _rvk_emu_aes_inv_mc_32(rs1)) |
+ (((int64_t) _rvk_emu_aes_inv_mc_32(rs1 >> 32)) << 32);
+}
+
+static inline int64_t _rvk_emu_aes64dsm(int64_t rs1, int64_t rs2)
+{
+ int64_t x;
+
+ x = _rvk_emu_aes64ds(rs1, rs2); // Inverse ShiftRows, SubBytes
+ x = _rvk_emu_aes64im(x); // Inverse MixColumns
+ return x;
+}
+
+static inline int64_t _rvk_emu_aes64ks1i(int64_t rs1, int rnum)
+{
+ // AES Round Constants
+ const uint8_t aes_rcon[] = {
+ 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36
+ };
+
+ uint32_t t, rc;
+
+ t = rs1 >> 32; // high word
+ rc = 0;
+
+	if (rnum < 10) {		// rnum == 10: SubWord only (no rotate, no rcon)
+ t = _rvk_emu_ror_32(t, 8);
+ rc = aes_rcon[rnum]; // round constant
+ }
+ // SubWord
+ t = ((uint32_t) _rvk_emu_aes_fwd_sbox[t & 0xFF]) |
+ (((uint32_t) _rvk_emu_aes_fwd_sbox[(t >> 8) & 0xFF]) << 8) |
+ (((uint32_t) _rvk_emu_aes_fwd_sbox[(t >> 16) & 0xFF]) << 16) |
+ (((uint32_t) _rvk_emu_aes_fwd_sbox[(t >> 24) & 0xFF]) << 24);
+
+ t ^= rc;
+
+ return ((int64_t) t) | (((int64_t) t) << 32);
+}
+
+static inline int64_t _rvk_emu_aes64ks2(int64_t rs1, int64_t rs2)
+{
+ uint32_t t;
+
+ t = (rs1 >> 32) ^ (rs2 & 0xFFFFFFFF); // wrap 32 bits
+
+ return ((int64_t) t) ^ // low 32 bits
+ (((int64_t) t) << 32) ^ (rs2 & 0xFFFFFFFF00000000ULL);
+}
+
+// === (emulated) Zkne: NIST Suite: AES Encryption
+
+static inline int32_t _rvk_emu_aes32esi(int32_t rs1, int32_t rs2, uint8_t bs)
+{
+ int32_t x;
+
+ bs = (bs & 3) << 3; // byte select
+ x = (rs2 >> bs) & 0xFF;
+ x = _rvk_emu_aes_fwd_sbox[x]; // AES forward s-box
+
+ return rs1 ^ _rvk_emu_rol_32(x, bs);
+}
+
+static inline int32_t _rvk_emu_aes32esmi(int32_t rs1, int32_t rs2, uint8_t bs)
+{
+ uint32_t x;
+
+ bs = (bs & 3) << 3; // byte select
+ x = (rs2 >> bs) & 0xFF;
+ x = _rvk_emu_aes_fwd_sbox[x]; // AES forward s-box
+ x = _rvk_emu_aes_fwd_mc_8(x); // forward MixColumns
+
+ return rs1 ^ _rvk_emu_rol_32(x, bs);
+}
+
+static inline int64_t _rvk_emu_aes64es(int64_t rs1, int64_t rs2)
+{
+ // Half of forward ShiftRows and SubBytes (last round)
+ return ((int64_t) _rvk_emu_aes_fwd_sbox[rs1 & 0xFF]) |
+ (((int64_t) _rvk_emu_aes_fwd_sbox[(rs1 >> 40) & 0xFF]) << 8) |
+ (((int64_t) _rvk_emu_aes_fwd_sbox[(rs2 >> 16) & 0xFF]) << 16) |
+ (((int64_t) _rvk_emu_aes_fwd_sbox[(rs2 >> 56) & 0xFF]) << 24) |
+ (((int64_t) _rvk_emu_aes_fwd_sbox[(rs1 >> 32) & 0xFF]) << 32) |
+ (((int64_t) _rvk_emu_aes_fwd_sbox[(rs2 >> 8) & 0xFF]) << 40) |
+ (((int64_t) _rvk_emu_aes_fwd_sbox[(rs2 >> 48) & 0xFF]) << 48) |
+ (((int64_t) _rvk_emu_aes_fwd_sbox[(rs1 >> 24) & 0xFF]) << 56);
+}
+
+static inline int64_t _rvk_emu_aes64esm(int64_t rs1, int64_t rs2)
+{
+ int64_t x;
+
+ x = _rvk_emu_aes64es(rs1, rs2); // ShiftRows and SubBytes
+ x = ((int64_t) _rvk_emu_aes_fwd_mc_32(x)) | // MixColumns
+ (((int64_t) _rvk_emu_aes_fwd_mc_32(x >> 32)) << 32);
+ return x;
+}
+
+// === (emulated) Zknh: NIST Suite: Hash Function Instructions
+
+static inline long _rvk_emu_sha256sig0(long rs1)
+{
+ int32_t x;
+
+ x = _rvk_emu_ror_32(rs1, 7) ^ _rvk_emu_ror_32(rs1, 18) ^
+ _rvk_emu_srl_32(rs1, 3);
+ return (long) x;
+}
+
+static inline long _rvk_emu_sha256sig1(long rs1)
+{
+ int32_t x;
+
+ x = _rvk_emu_ror_32(rs1, 17) ^ _rvk_emu_ror_32(rs1, 19) ^
+ _rvk_emu_srl_32(rs1, 10);
+ return (long) x;
+}
+
+static inline long _rvk_emu_sha256sum0(long rs1)
+{
+ int32_t x;
+
+ x = _rvk_emu_ror_32(rs1, 2) ^ _rvk_emu_ror_32(rs1, 13) ^
+ _rvk_emu_ror_32(rs1, 22);
+ return (long) x;
+}
+
+static inline long _rvk_emu_sha256sum1(long rs1)
+{
+ int32_t x;
+
+ x = _rvk_emu_ror_32(rs1, 6) ^ _rvk_emu_ror_32(rs1, 11) ^
+ _rvk_emu_ror_32(rs1, 25);
+ return (long) x;
+}
+
+static inline int32_t _rvk_emu_sha512sig0h(int32_t rs1, int32_t rs2)
+{
+ return _rvk_emu_srl_32(rs1, 1) ^ _rvk_emu_srl_32(rs1, 7) ^
+ _rvk_emu_srl_32(rs1, 8) ^ _rvk_emu_sll_32(rs2, 31) ^
+ _rvk_emu_sll_32(rs2, 24);
+}
+
+static inline int32_t _rvk_emu_sha512sig0l(int32_t rs1, int32_t rs2)
+{
+ return _rvk_emu_srl_32(rs1, 1) ^ _rvk_emu_srl_32(rs1, 7) ^
+ _rvk_emu_srl_32(rs1, 8) ^ _rvk_emu_sll_32(rs2, 31) ^
+ _rvk_emu_sll_32(rs2, 25) ^ _rvk_emu_sll_32(rs2, 24);
+}
+
+static inline int32_t _rvk_emu_sha512sig1h(int32_t rs1, int32_t rs2)
+{
+ return _rvk_emu_sll_32(rs1, 3) ^ _rvk_emu_srl_32(rs1, 6) ^
+ _rvk_emu_srl_32(rs1, 19) ^ _rvk_emu_srl_32(rs2, 29) ^
+ _rvk_emu_sll_32(rs2, 13);
+}
+
+static inline int32_t _rvk_emu_sha512sig1l(int32_t rs1, int32_t rs2)
+{
+ return _rvk_emu_sll_32(rs1, 3) ^ _rvk_emu_srl_32(rs1, 6) ^
+			_rvk_emu_srl_32(rs1, 19) ^ _rvk_emu_srl_32(rs2, 29) ^
+ _rvk_emu_sll_32(rs2, 26) ^ _rvk_emu_sll_32(rs2, 13);
+}
+
+static inline int32_t _rvk_emu_sha512sum0r(int32_t rs1, int32_t rs2)
+{
+ return _rvk_emu_sll_32(rs1, 25) ^ _rvk_emu_sll_32(rs1, 30) ^
+ _rvk_emu_srl_32(rs1, 28) ^ _rvk_emu_srl_32(rs2, 7) ^
+ _rvk_emu_srl_32(rs2, 2) ^ _rvk_emu_sll_32(rs2, 4);
+}
+
+static inline int32_t _rvk_emu_sha512sum1r(int32_t rs1, int32_t rs2)
+{
+ return _rvk_emu_sll_32(rs1, 23) ^ _rvk_emu_srl_32(rs1,14) ^
+ _rvk_emu_srl_32(rs1, 18) ^ _rvk_emu_srl_32(rs2, 9) ^
+ _rvk_emu_sll_32(rs2, 18) ^ _rvk_emu_sll_32(rs2, 14);
+}
+
+static inline int64_t _rvk_emu_sha512sig0(int64_t rs1)
+{
+ return _rvk_emu_ror_64(rs1, 1) ^ _rvk_emu_ror_64(rs1, 8) ^
+ _rvk_emu_srl_64(rs1,7);
+}
+
+static inline int64_t _rvk_emu_sha512sig1(int64_t rs1)
+{
+ return _rvk_emu_ror_64(rs1, 19) ^ _rvk_emu_ror_64(rs1, 61) ^
+ _rvk_emu_srl_64(rs1, 6);
+}
+
+static inline int64_t _rvk_emu_sha512sum0(int64_t rs1)
+{
+ return _rvk_emu_ror_64(rs1, 28) ^ _rvk_emu_ror_64(rs1, 34) ^
+ _rvk_emu_ror_64(rs1, 39);
+}
+
+static inline int64_t _rvk_emu_sha512sum1(int64_t rs1)
+{
+ return _rvk_emu_ror_64(rs1, 14) ^ _rvk_emu_ror_64(rs1, 18) ^
+ _rvk_emu_ror_64(rs1, 41);
+}
+
+// === (emulated) Zksed: ShangMi Suite: SM4 Block Cipher Instructions
+
+static inline long _rvk_emu_sm4ed(long rs1, long rs2, uint8_t bs)
+{
+ int32_t x;
+
+ bs = (bs & 3) << 3; // byte select
+ x = (rs2 >> bs) & 0xFF;
+ x = _rvk_emu_sm4_sbox[x]; // SM4 s-box
+
+ // SM4 linear transform L
+ x = x ^ (x << 8) ^ (x << 2) ^ (x << 18) ^
+ ((x & 0x3F) << 26) ^ ((x & 0xC0) << 10);
+ x = rs1 ^ _rvk_emu_rol_32(x, bs);
+ return (long) x;
+}
+
+static inline long _rvk_emu_sm4ks(long rs1, long rs2, uint8_t bs)
+{
+ int32_t x;
+
+ bs = (bs & 3) << 3; // byte select
+ x = (rs2 >> bs) & 0xFF;
+ x = _rvk_emu_sm4_sbox[x]; // SM4 s-box
+
+ // SM4 transform L' (key)
+ x = x ^ ((x & 0x07) << 29) ^ ((x & 0xFE) << 7) ^
+ ((x & 1) << 23) ^ ((x & 0xF8) << 13);
+ x = rs1 ^ _rvk_emu_rol_32(x, bs);
+ return (long) x;
+}
+
+// === (emulated) Zksh: ShangMi Suite: SM3 Hash Function Instructions
+
+static inline long _rvk_emu_sm3p0(long rs1)
+{
+ int32_t x;
+
+ x = rs1 ^ _rvk_emu_rol_32(rs1, 9) ^ _rvk_emu_rol_32(rs1, 17);
+ return (long) x;
+}
+
+static inline long _rvk_emu_sm3p1(long rs1)
+{
+ int32_t x;
+
+ x = rs1 ^ _rvk_emu_rol_32(rs1, 15) ^ _rvk_emu_rol_32(rs1, 23);
+ return (long) x;
+}
+
+
+#endif // RVKINTRIN_EMULATE
+#endif // _RVK_EMU_INTRIN_H
\ No newline at end of file
--
2.31.1.windows.1
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 4/5 V1] RISC-V:Implement testcases for Crypto extension
2022-02-23 9:44 [PATCH 0/5 V1] RISC-V:Implement Crypto extension's instruction patterns and it's intrinsics shihua
` (2 preceding siblings ...)
2022-02-23 9:44 ` [PATCH 3/5 V1] RISC-V:Implement intrinsics " shihua
@ 2022-02-23 9:44 ` shihua
2022-03-01 13:00 ` Kito Cheng
2022-02-23 9:44 ` [PATCH 5/5 V1] RISC-V:Implement architecture extension test macros " shihua
4 siblings, 1 reply; 12+ messages in thread
From: shihua @ 2022-02-23 9:44 UTC (permalink / raw)
To: gcc-patches
Cc: ben.marshall, kito.cheng, cmuellner, palmer, andrew, lazyparser,
jiawei, mjos, LiaoShihua
From: LiaoShihua <shihua@iscas.ac.cn>
These testcases exercise the intrinsic functions.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/zbkb32.c: New test.
* gcc.target/riscv/zbkb64.c: New test.
* gcc.target/riscv/zbkc32.c: New test.
* gcc.target/riscv/zbkc64.c: New test.
* gcc.target/riscv/zbkx32.c: New test.
* gcc.target/riscv/zbkx64.c: New test.
* gcc.target/riscv/zknd32.c: New test.
* gcc.target/riscv/zknd64.c: New test.
* gcc.target/riscv/zkne64.c: New test.
* gcc.target/riscv/zknh.c: New test.
* gcc.target/riscv/zknh32.c: New test.
* gcc.target/riscv/zknh64.c: New test.
* gcc.target/riscv/zksed.c: New test.
* gcc.target/riscv/zksh.c: New test.
---
gcc/testsuite/gcc.target/riscv/zbkb32.c | 34 +++++++++++++++++++++
gcc/testsuite/gcc.target/riscv/zbkb64.c | 21 +++++++++++++
gcc/testsuite/gcc.target/riscv/zbkc32.c | 16 ++++++++++
gcc/testsuite/gcc.target/riscv/zbkc64.c | 16 ++++++++++
gcc/testsuite/gcc.target/riscv/zbkx32.c | 16 ++++++++++
gcc/testsuite/gcc.target/riscv/zbkx64.c | 16 ++++++++++
gcc/testsuite/gcc.target/riscv/zknd32.c | 18 +++++++++++
gcc/testsuite/gcc.target/riscv/zknd64.c | 35 ++++++++++++++++++++++
gcc/testsuite/gcc.target/riscv/zkne64.c | 29 ++++++++++++++++++
gcc/testsuite/gcc.target/riscv/zknh.c | 28 +++++++++++++++++
gcc/testsuite/gcc.target/riscv/zknh32.c | 40 +++++++++++++++++++++++++
gcc/testsuite/gcc.target/riscv/zknh64.c | 29 ++++++++++++++++++
gcc/testsuite/gcc.target/riscv/zksed.c | 20 +++++++++++++
gcc/testsuite/gcc.target/riscv/zksh.c | 17 +++++++++++
14 files changed, 335 insertions(+)
create mode 100644 gcc/testsuite/gcc.target/riscv/zbkb32.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zbkb64.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zbkc32.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zbkc64.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zbkx32.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zbkx64.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zknd32.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zknd64.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zkne64.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zknh.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zknh32.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zknh64.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zksed.c
create mode 100644 gcc/testsuite/gcc.target/riscv/zksh.c
diff --git a/gcc/testsuite/gcc.target/riscv/zbkb32.c b/gcc/testsuite/gcc.target/riscv/zbkb32.c
new file mode 100644
index 00000000000..5bf588d58b4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zbkb32.c
@@ -0,0 +1,34 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv32gc_zbkb -mabi=ilp32" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+int32_t foo1(int32_t rs1, int32_t rs2)
+{
+ return _rv32_ror(rs1,rs2);
+}
+
+int32_t foo2(int32_t rs1, int32_t rs2)
+{
+ return _rv32_rol(rs1,rs2);
+}
+
+int32_t foo3(int32_t rs1)
+{
+ return _rv32_brev8(rs1);
+}
+
+int32_t foo4(int32_t rs1)
+{
+ return _rv32_zip(rs1);
+}
+
+int32_t foo5(int32_t rs1)
+{
+ return _rv32_unzip(rs1);
+}
+
+/* { dg-final { scan-assembler-times "ror" 1 } } */
+/* { dg-final { scan-assembler-times "rol" 1 } } */
+/* { dg-final { scan-assembler-times "brev8" 1 } } */
+/* { dg-final { scan-assembler-times "zip" 2 } } */
+/* { dg-final { scan-assembler-times "unzip" 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zbkb64.c b/gcc/testsuite/gcc.target/riscv/zbkb64.c
new file mode 100644
index 00000000000..2cd76a29750
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zbkb64.c
@@ -0,0 +1,21 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv64gc_zbkb -mabi=lp64" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+int64_t foo1(int64_t rs1, int64_t rs2)
+{
+ return _rv64_ror(rs1,rs2);
+}
+
+int64_t foo2(int64_t rs1, int64_t rs2)
+{
+ return _rv64_rol(rs1,rs2);
+}
+
+int64_t foo3(int64_t rs1)
+{
+ return _rv64_brev8(rs1);
+}
+/* { dg-final { scan-assembler-times "ror" 1 } } */
+/* { dg-final { scan-assembler-times "rol" 1 } } */
+/* { dg-final { scan-assembler-times "brev8" 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zbkc32.c b/gcc/testsuite/gcc.target/riscv/zbkc32.c
new file mode 100644
index 00000000000..237085bfc7d
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zbkc32.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv32gc_zbkc -mabi=ilp32" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+int32_t foo1(int32_t rs1, int32_t rs2)
+{
+ return _rv32_clmul(rs1,rs2);
+}
+
+int32_t foo2(int32_t rs1, int32_t rs2)
+{
+ return _rv32_clmulh(rs1,rs2);
+}
+
+/* { dg-final { scan-assembler-times "clmul" 2 } } */
+/* { dg-final { scan-assembler-times "clmulh" 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zbkc64.c b/gcc/testsuite/gcc.target/riscv/zbkc64.c
new file mode 100644
index 00000000000..50e39423519
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zbkc64.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv64gc_zbkc -mabi=lp64" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+int64_t foo1(int64_t rs1, int64_t rs2)
+{
+ return _rv64_clmul(rs1,rs2);
+}
+
+int64_t foo2(int64_t rs1, int64_t rs2)
+{
+ return _rv64_clmulh(rs1,rs2);
+}
+
+/* { dg-final { scan-assembler-times "clmul" 2 } } */
+/* { dg-final { scan-assembler-times "clmulh" 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zbkx32.c b/gcc/testsuite/gcc.target/riscv/zbkx32.c
new file mode 100644
index 00000000000..992ae64a11b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zbkx32.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv32gc_zbkx -mabi=ilp32" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+int32_t foo3(int32_t rs1, int32_t rs2)
+{
+ return _rv32_xperm8(rs1,rs2);
+}
+
+int32_t foo4(int32_t rs1, int32_t rs2)
+{
+ return _rv32_xperm4(rs1,rs2);
+}
+
+/* { dg-final { scan-assembler-times "xperm8" 1 } } */
+/* { dg-final { scan-assembler-times "xperm4" 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zbkx64.c b/gcc/testsuite/gcc.target/riscv/zbkx64.c
new file mode 100644
index 00000000000..b8ec89d3c61
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zbkx64.c
@@ -0,0 +1,16 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv64gc_zbkx -mabi=lp64" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+int64_t foo1(int64_t rs1, int64_t rs2)
+{
+ return _rv64_xperm8(rs1,rs2);
+}
+
+int64_t foo2(int64_t rs1, int64_t rs2)
+{
+ return _rv64_xperm4(rs1,rs2);
+}
+
+/* { dg-final { scan-assembler-times "xperm8" 1 } } */
+/* { dg-final { scan-assembler-times "xperm4" 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zknd32.c b/gcc/testsuite/gcc.target/riscv/zknd32.c
new file mode 100644
index 00000000000..c7a109f333b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zknd32.c
@@ -0,0 +1,18 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv32gc_zknd -mabi=ilp32" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+int32_t foo1(int32_t rs1, int32_t rs2, int bs)
+{
+ bs = 1;
+ return _rv32_aes32dsi(rs1,rs2,bs);
+}
+
+int32_t foo2(int32_t rs1, int32_t rs2, int bs)
+{
+ bs = 0;
+ return _rv32_aes32dsmi(rs1,rs2,bs);
+}
+
+/* { dg-final { scan-assembler-times "aes32dsi" 1 } } */
+/* { dg-final { scan-assembler-times "aes32dsmi" 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zknd64.c b/gcc/testsuite/gcc.target/riscv/zknd64.c
new file mode 100644
index 00000000000..a7f4b20dead
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zknd64.c
@@ -0,0 +1,35 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv64gc_zknd -mabi=lp64" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+int64_t foo1(int64_t rs1, int64_t rs2)
+{
+ return _rv64_aes64ds(rs1,rs2);
+}
+
+int64_t foo2(int64_t rs1, int64_t rs2)
+{
+ return _rv64_aes64dsm(rs1,rs2);
+}
+
+int64_t foo3(int64_t rs1, int rnum)
+{
+ rnum = 8;
+ return _rv64_aes64ks1i(rs1,rnum);
+}
+
+int64_t foo4(int64_t rs1, int64_t rs2)
+{
+ return _rv64_aes64ks2(rs1,rs2);
+}
+
+int64_t foo5(int64_t rs1)
+{
+ return _rv64_aes64im(rs1);
+}
+
+/* { dg-final { scan-assembler-times "aes64ds" 2 } } */
+/* { dg-final { scan-assembler-times "aes64dsm" 1 } } */
+/* { dg-final { scan-assembler-times "aes64ks1i" 1 } } */
+/* { dg-final { scan-assembler-times "aes64ks2" 1 } } */
+/* { dg-final { scan-assembler-times "aes64im" 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zkne64.c b/gcc/testsuite/gcc.target/riscv/zkne64.c
new file mode 100644
index 00000000000..69dbdcbb3ad
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zkne64.c
@@ -0,0 +1,29 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv64gc_zkne -mabi=lp64" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+int64_t foo1(int64_t rs1, int64_t rs2)
+{
+ return _rv64_aes64es(rs1,rs2);
+}
+
+int64_t foo2(int64_t rs1, int64_t rs2)
+{
+ return _rv64_aes64esm(rs1,rs2);
+}
+
+int64_t foo3(int64_t rs1, int rnum)
+{
+ rnum = 8;
+ return _rv64_aes64ks1i(rs1,rnum);
+}
+
+int64_t foo4(int64_t rs1, int64_t rs2)
+{
+ return _rv64_aes64ks2(rs1,rs2);
+}
+
+/* { dg-final { scan-assembler-times "aes64es" 2 } } */
+/* { dg-final { scan-assembler-times "aes64esm" 1 } } */
+/* { dg-final { scan-assembler-times "aes64ks1i" 1 } } */
+/* { dg-final { scan-assembler-times "aes64ks2" 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zknh.c b/gcc/testsuite/gcc.target/riscv/zknh.c
new file mode 100644
index 00000000000..a2ea0809e63
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zknh.c
@@ -0,0 +1,28 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv64gc_zknh -mabi=lp64" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+long foo1(long rs1)
+{
+ return _rv_sha256sig0(rs1);
+}
+
+long foo2(long rs1)
+{
+ return _rv_sha256sig1(rs1);
+}
+
+long foo3(long rs1)
+{
+ return _rv_sha256sum0(rs1);
+}
+
+long foo4(long rs1)
+{
+ return _rv_sha256sum1(rs1);
+}
+
+/* { dg-final { scan-assembler-times "sha256sig0" 1 } } */
+/* { dg-final { scan-assembler-times "sha256sig1" 1 } } */
+/* { dg-final { scan-assembler-times "sha256sum0" 1 } } */
+/* { dg-final { scan-assembler-times "sha256sum1" 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zknh32.c b/gcc/testsuite/gcc.target/riscv/zknh32.c
new file mode 100644
index 00000000000..2ef961c19a3
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zknh32.c
@@ -0,0 +1,40 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv32gc_zknh -mabi=ilp32" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+int32_t foo1(int32_t rs1, int32_t rs2)
+{
+ return _rv32_sha512sig0h(rs1,rs2);
+}
+
+int32_t foo2(int32_t rs1, int32_t rs2)
+{
+ return _rv32_sha512sig0l(rs1,rs2);
+}
+
+int32_t foo3(int32_t rs1, int32_t rs2)
+{
+ return _rv32_sha512sig1h(rs1,rs2);
+}
+
+int32_t foo4(int32_t rs1, int32_t rs2)
+{
+ return _rv32_sha512sig1l(rs1,rs2);
+}
+
+int32_t foo5(int32_t rs1, int32_t rs2)
+{
+ return _rv32_sha512sum0r(rs1,rs2);
+}
+
+int32_t foo6(int32_t rs1, int32_t rs2)
+{
+ return _rv32_sha512sum1r(rs1,rs2);
+}
+
+/* { dg-final { scan-assembler-times "sha512sig0h" 1 } } */
+/* { dg-final { scan-assembler-times "sha512sig0l" 1 } } */
+/* { dg-final { scan-assembler-times "sha512sig1h" 1 } } */
+/* { dg-final { scan-assembler-times "sha512sig1l" 1 } } */
+/* { dg-final { scan-assembler-times "sha512sum0r" 1 } } */
+/* { dg-final { scan-assembler-times "sha512sum1r" 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zknh64.c b/gcc/testsuite/gcc.target/riscv/zknh64.c
new file mode 100644
index 00000000000..7f1eb76fe5c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zknh64.c
@@ -0,0 +1,29 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv64gc_zknh -mabi=lp64" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+int64_t foo1(int64_t rs1)
+{
+ return _rv64_sha512sig0(rs1);
+}
+
+int64_t foo2(int64_t rs1)
+{
+ return _rv64_sha512sig1(rs1);
+}
+
+int64_t foo3(int64_t rs1)
+{
+ return _rv64_sha512sum0(rs1);
+}
+
+int64_t foo4(int64_t rs1)
+{
+ return _rv64_sha512sum1(rs1);
+}
+
+
+/* { dg-final { scan-assembler-times "sha512sig0" 1 } } */
+/* { dg-final { scan-assembler-times "sha512sig1" 1 } } */
+/* { dg-final { scan-assembler-times "sha512sum0" 1 } } */
+/* { dg-final { scan-assembler-times "sha512sum1" 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zksed.c b/gcc/testsuite/gcc.target/riscv/zksed.c
new file mode 100644
index 00000000000..02bf96d56b1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zksed.c
@@ -0,0 +1,20 @@
+
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv64gc_zksed -mabi=lp64" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+long foo1(int32_t rs1, int32_t rs2, int bs)
+{
+ bs = 1;
+ return _rv_sm4ks(rs1,rs2,bs);
+}
+
+long foo2(int32_t rs1, int32_t rs2, int bs)
+{
+ bs = 2;
+ return _rv_sm4ed(rs1,rs2,bs);
+}
+
+
+/* { dg-final { scan-assembler-times "sm4ks" 1 } } */
+/* { dg-final { scan-assembler-times "sm4ed" 1 } } */
\ No newline at end of file
diff --git a/gcc/testsuite/gcc.target/riscv/zksh.c b/gcc/testsuite/gcc.target/riscv/zksh.c
new file mode 100644
index 00000000000..ec47ed93221
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zksh.c
@@ -0,0 +1,17 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv64gc_zksh -mabi=lp64" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+#include"riscv_crypto.h"
+long foo1(long rs1)
+{
+ return _rv_sm3p0(rs1);
+}
+
+long foo2(long rs1)
+{
+ return _rv_sm3p1(rs1);
+}
+
+
+/* { dg-final { scan-assembler-times "sm3p0" 1 } } */
+/* { dg-final { scan-assembler-times "sm3p1" 1 } } */
\ No newline at end of file
--
2.31.1.windows.1
* [PATCH 5/5 V1] RISC-V:Implement architecture extension test macros for Crypto extension
2022-02-23 9:44 [PATCH 0/5 V1] RISC-V:Implement Crypto extension's instruction patterns and it's intrinsics shihua
` (3 preceding siblings ...)
2022-02-23 9:44 ` [PATCH 4/5 V1] RISC-V:Implement testcases " shihua
@ 2022-02-23 9:44 ` shihua
2022-02-24 9:55 ` Kito Cheng
4 siblings, 1 reply; 12+ messages in thread
From: shihua @ 2022-02-23 9:44 UTC (permalink / raw)
To: gcc-patches
Cc: ben.marshall, kito.cheng, cmuellner, palmer, andrew, lazyparser,
jiawei, mjos, LiaoShihua
From: LiaoShihua <shihua@iscas.ac.cn>
gcc/ChangeLog:
* config/riscv/riscv-c.cc (riscv_cpu_cpp_builtins): Add __riscv_zks, __riscv_zk, __riscv_zkn.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/predef-17.c: New test.
---
gcc/config/riscv/riscv-c.cc | 9 ++++
gcc/testsuite/gcc.target/riscv/predef-17.c | 59 ++++++++++++++++++++++
2 files changed, 68 insertions(+)
create mode 100644 gcc/testsuite/gcc.target/riscv/predef-17.c
diff --git a/gcc/config/riscv/riscv-c.cc b/gcc/config/riscv/riscv-c.cc
index 73c62f41274..d6c153e8d7c 100644
--- a/gcc/config/riscv/riscv-c.cc
+++ b/gcc/config/riscv/riscv-c.cc
@@ -63,6 +63,15 @@ riscv_cpu_cpp_builtins (cpp_reader *pfile)
builtin_define ("__riscv_fdiv");
builtin_define ("__riscv_fsqrt");
}
+
+ if (TARGET_ZBKB && TARGET_ZBKC && TARGET_ZBKX && TARGET_ZKNE && TARGET_ZKND && TARGET_ZKNH)
+ {
+ builtin_define ("__riscv_zk");
+ builtin_define ("__riscv_zkn");
+ }
+
+ if (TARGET_ZBKB && TARGET_ZBKC && TARGET_ZBKX && TARGET_ZKSED && TARGET_ZKSH)
+ builtin_define ("__riscv_zks");
switch (riscv_abi)
{
diff --git a/gcc/testsuite/gcc.target/riscv/predef-17.c b/gcc/testsuite/gcc.target/riscv/predef-17.c
new file mode 100644
index 00000000000..4366dee1016
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/predef-17.c
@@ -0,0 +1,59 @@
+/* { dg-do compile } */
+/* { dg-options "-march=rv64i_zbkb_zbkc_zbkx_zknd_zkne_zknh_zksed_zksh -mabi=lp64 -mcmodel=medlow -misa-spec=2.2" } */
+
+int main () {
+
+#ifndef __riscv_arch_test
+#error "__riscv_arch_test"
+#endif
+
+#if __riscv_xlen != 64
+#error "__riscv_xlen"
+#endif
+
+#if !defined(__riscv_i)
+#error "__riscv_i"
+#endif
+
+#if !defined(__riscv_zk)
+#error "__riscv_zk"
+#endif
+
+#if !defined(__riscv_zkn)
+#error "__riscv_zkn"
+#endif
+
+#if !defined(__riscv_zks)
+#error "__riscv_zks"
+#endif
+
+#if !defined(__riscv_zbkb)
+#error "__riscv_zbkb"
+#endif
+
+#if !defined(__riscv_zbkc)
+#error "__riscv_zbkc"
+#endif
+
+#if !defined(__riscv_zbkx)
+#error "__riscv_zbkx"
+#endif
+
+#if !defined(__riscv_zknd)
+#error "__riscv_zknd"
+#endif
+
+#if !defined(__riscv_zkne)
+#error "__riscv_zkne"
+#endif
+
+#if !defined(__riscv_zknh)
+#error "__riscv_zknh"
+#endif
+
+#if !defined(__riscv_zksh)
+#error "__riscv_zksh"
+#endif
+
+ return 0;
+}
\ No newline at end of file
--
2.31.1.windows.1
* Re: [PATCH 5/5 V1] RISC-V:Implement architecture extension test macros for Crypto extension
2022-02-23 9:44 ` [PATCH 5/5 V1] RISC-V:Implement architecture extension test macros " shihua
@ 2022-02-24 9:55 ` Kito Cheng
2022-02-28 15:56 ` Kito Cheng
0 siblings, 1 reply; 12+ messages in thread
From: Kito Cheng @ 2022-02-24 9:55 UTC (permalink / raw)
To: shihua
Cc: GCC Patches, ben.marshall, Christoph Muellner, Andrew Waterman,
jiawei, mjos, Kito Cheng
I would suggest implementing that in riscv_subset_list::parse so that
it also affects the ELF attribute emission.
On Wed, Feb 23, 2022 at 5:44 PM <shihua@iscas.ac.cn> wrote:
>
> From: LiaoShihua <shihua@iscas.ac.cn>
>
> gcc/ChangeLog:
>
> * config/riscv/riscv-c.cc (riscv_cpu_cpp_builtins):Add __riscv_zks, __riscv_zk, __riscv_zkn
>
> gcc/testsuite/ChangeLog:
>
> * gcc.target/riscv/predef-17.c: New test.
>
> ---
> gcc/config/riscv/riscv-c.cc | 9 ++++
> gcc/testsuite/gcc.target/riscv/predef-17.c | 59 ++++++++++++++++++++++
> 2 files changed, 68 insertions(+)
> create mode 100644 gcc/testsuite/gcc.target/riscv/predef-17.c
>
> diff --git a/gcc/config/riscv/riscv-c.cc b/gcc/config/riscv/riscv-c.cc
> index 73c62f41274..d6c153e8d7c 100644
> --- a/gcc/config/riscv/riscv-c.cc
> +++ b/gcc/config/riscv/riscv-c.cc
> @@ -63,6 +63,15 @@ riscv_cpu_cpp_builtins (cpp_reader *pfile)
> builtin_define ("__riscv_fdiv");
> builtin_define ("__riscv_fsqrt");
> }
> +
> + if (TARGET_ZBKB && TARGET_ZBKC && TARGET_ZBKX && TARGET_ZKNE && TARGET_ZKND && TARGET_ZKNH)
> + {
> + builtin_define ("__riscv_zk");
> + builtin_define ("__riscv_zkn");
> + }
> +
> + if (TARGET_ZBKB && TARGET_ZBKC && TARGET_ZBKX && TARGET_ZKSED && TARGET_ZKSH)
> + builtin_define ("__riscv_zks");
>
> switch (riscv_abi)
> {
> diff --git a/gcc/testsuite/gcc.target/riscv/predef-17.c b/gcc/testsuite/gcc.target/riscv/predef-17.c
> new file mode 100644
> index 00000000000..4366dee1016
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/predef-17.c
> @@ -0,0 +1,59 @@
> +/* { dg-do compile } */
> +/* { dg-options "-march=rv64i_zbkb_zbkc_zbkx_zknd_zkne_zknh_zksed_zksh -mabi=lp64 -mcmodel=medlow -misa-spec=2.2" } */
> +
> +int main () {
> +
> +#ifndef __riscv_arch_test
> +#error "__riscv_arch_test"
> +#endif
> +
> +#if __riscv_xlen != 64
> +#error "__riscv_xlen"
> +#endif
> +
> +#if !defined(__riscv_i)
> +#error "__riscv_i"
> +#endif
> +
> +#if !defined(__riscv_zk)
> +#error "__riscv_zk"
> +#endif
> +
> +#if !defined(__riscv_zkn)
> +#error "__riscv_zkn"
> +#endif
> +
> +#if !defined(__riscv_zks)
> +#error "__riscv_zks"
> +#endif
> +
> +#if !defined(__riscv_zbkb)
> +#error "__riscv_zbkb"
> +#endif
> +
> +#if !defined(__riscv_zbkc)
> +#error "__riscv_zbkc"
> +#endif
> +
> +#if !defined(__riscv_zbkx)
> +#error "__riscv_zbkx"
> +#endif
> +
> +#if !defined(__riscv_zknd)
> +#error "__riscv_zknd"
> +#endif
> +
> +#if !defined(__riscv_zkne)
> +#error "__riscv_zkne"
> +#endif
> +
> +#if !defined(__riscv_zknh)
> +#error "__riscv_zknh"
> +#endif
> +
> +#if !defined(__riscv_zksh)
> +#error "__riscv_zksh"
> +#endif
> +
> + return 0;
> +}
> \ No newline at end of file
> --
> 2.31.1.windows.1
>
* Re: [PATCH 3/5 V1] RISC-V:Implement intrinsics for Crypto extension
2022-02-23 9:44 ` [PATCH 3/5 V1] RISC-V:Implement intrinsics " shihua
@ 2022-02-28 15:34 ` Kito Cheng
0 siblings, 0 replies; 12+ messages in thread
From: Kito Cheng @ 2022-02-28 15:34 UTC (permalink / raw)
To: shihua
Cc: GCC Patches, ben.marshall, Christoph Muellner, Andrew Waterman,
jiawei, mjos, Kito Cheng
Those header files have license issues and should be relicensed to GPL.
Also, don't install rvk_asm_intrin.h and rvk_emu_intrin.h, since they
are not very meaningful once we have compiler support.
General comments:
- Use /* */ rather than //; that gives much better compatibility, since
// comments are illegal in C89.
- Add a newline at the end of each file; that prevents markers like "\
No newline at end of file" from appearing in the diff.
> --- /dev/null
> +++ b/gcc/config/riscv/riscv_crypto_scalar.h
> @@ -0,0 +1,247 @@
> +// riscv_crypto_scalar.h
> +// 2021-11-08 Markku-Juhani O. Saarinen <mjos@pqshield.com>
> +// Copyright (c) 2021, PQShield Ltd. All rights reserved.
> +
> +// === Scalar crypto: General mapping from intrinsics to compiler builtins,
> +// inline assembler, or to an (insecure) porting / emulation layer.
> +
> +/*
> + * _rv_*(...)
> + * RV32/64 intrinsics that return the "long" data type
> + *
> + * _rv32_*(...)
> + * RV32/64 intrinsics that return the "int32_t" data type
> + *
> + * _rv64_*(...)
> + * RV64-only intrinsics that return the "int64_t" data type
> + *
> + */
> +
> +#ifndef _RISCV_CRYPTO_SCALAR_H
> +#define _RISCV_CRYPTO_SCALAR_H
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#if !defined(__riscv_xlen) && !defined(RVKINTRIN_EMULATE)
> +#warning "Target is not RISC-V. Enabling insecure emulation."
> +#define RVKINTRIN_EMULATE 1
> +#endif
> +
> +#if defined(RVKINTRIN_EMULATE)
> +
> +// intrinsics via emulation (insecure -- porting / debug option)
> +#include "rvk_emu_intrin.h"
> +#define _RVK_INTRIN_IMPL(s) _rvk_emu_##s
> +
> +#elif defined(RVKINTRIN_ASSEMBLER)
> +
> +// intrinsics via inline assembler (builtins not available)
> +#include "rvk_asm_intrin.h"
> +#define _RVK_INTRIN_IMPL(s) _rvk_asm_##s
> +#else
> +
> +// intrinsics via compiler builtins
> +#include <stdint.h>
> +#define _RVK_INTRIN_IMPL(s) __builtin_riscv_##s
> +
> +#endif
Drop rvk_emu_intrin.h and rvk_asm_intrin.h here.
> +
> +// set type if not already set
> +#if !defined(RVKINTRIN_RV32) && !defined(RVKINTRIN_RV64)
...
> +static inline long _rv_sm3p0(long rs1)
> + { return _RVK_INTRIN_IMPL(sm3p0)(rs1); } // SM3P0
> +
> +static inline long _rv_sm3p1(long rs1)
> + { return _RVK_INTRIN_IMPL(sm3p1)(rs1); } // SM3P1
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
#undef _RVK_INTRIN_IMPL before the end of this header to prevent
introducing unexpected symbols.
> +#endif // _RISCV_CRYPTO_SCALAR_H
> \ No newline at end of file
* Re: [PATCH 5/5 V1] RISC-V:Implement architecture extension test macros for Crypto extension
2022-02-24 9:55 ` Kito Cheng
@ 2022-02-28 15:56 ` Kito Cheng
0 siblings, 0 replies; 12+ messages in thread
From: Kito Cheng @ 2022-02-28 15:56 UTC (permalink / raw)
To: shihua
Cc: GCC Patches, ben.marshall, Christoph Muellner, Andrew Waterman,
jiawei, mjos, Kito Cheng
Also, could you separate this patch from the series? I would like to
include it in GCC 12 and defer the other patches to GCC 13.
On Thu, Feb 24, 2022 at 5:55 PM Kito Cheng <kito.cheng@gmail.com> wrote:
>
> I would suggest implementing that in riscv_subset_list::parse so that
> it also affect the ELF attribute emission.
>
> On Wed, Feb 23, 2022 at 5:44 PM <shihua@iscas.ac.cn> wrote:
> >
> > From: LiaoShihua <shihua@iscas.ac.cn>
> >
> > gcc/ChangeLog:
> >
> > * config/riscv/riscv-c.cc (riscv_cpu_cpp_builtins):Add __riscv_zks, __riscv_zk, __riscv_zkn
> >
> > gcc/testsuite/ChangeLog:
> >
> > * gcc.target/riscv/predef-17.c: New test.
> >
> > ---
> > gcc/config/riscv/riscv-c.cc | 9 ++++
> > gcc/testsuite/gcc.target/riscv/predef-17.c | 59 ++++++++++++++++++++++
> > 2 files changed, 68 insertions(+)
> > create mode 100644 gcc/testsuite/gcc.target/riscv/predef-17.c
> >
> > diff --git a/gcc/config/riscv/riscv-c.cc b/gcc/config/riscv/riscv-c.cc
> > index 73c62f41274..d6c153e8d7c 100644
> > --- a/gcc/config/riscv/riscv-c.cc
> > +++ b/gcc/config/riscv/riscv-c.cc
> > @@ -63,6 +63,15 @@ riscv_cpu_cpp_builtins (cpp_reader *pfile)
> > builtin_define ("__riscv_fdiv");
> > builtin_define ("__riscv_fsqrt");
> > }
> > +
> > + if (TARGET_ZBKB && TARGET_ZBKC && TARGET_ZBKX && TARGET_ZKNE && TARGET_ZKND && TARGET_ZKNH)
> > + {
> > + builtin_define ("__riscv_zk");
> > + builtin_define ("__riscv_zkn");
> > + }
> > +
> > + if (TARGET_ZBKB && TARGET_ZBKC && TARGET_ZBKX && TARGET_ZKSED && TARGET_ZKSH)
> > + builtin_define ("__riscv_zks");
> >
> > switch (riscv_abi)
> > {
> > diff --git a/gcc/testsuite/gcc.target/riscv/predef-17.c b/gcc/testsuite/gcc.target/riscv/predef-17.c
> > new file mode 100644
> > index 00000000000..4366dee1016
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/riscv/predef-17.c
> > @@ -0,0 +1,59 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-march=rv64i_zbkb_zbkc_zbkx_zknd_zkne_zknh_zksed_zksh -mabi=lp64 -mcmodel=medlow -misa-spec=2.2" } */
> > +
> > +int main () {
> > +
> > +#ifndef __riscv_arch_test
> > +#error "__riscv_arch_test"
> > +#endif
> > +
> > +#if __riscv_xlen != 64
> > +#error "__riscv_xlen"
> > +#endif
> > +
> > +#if !defined(__riscv_i)
> > +#error "__riscv_i"
> > +#endif
> > +
> > +#if !defined(__riscv_zk)
> > +#error "__riscv_zk"
> > +#endif
> > +
> > +#if !defined(__riscv_zkn)
> > +#error "__riscv_zkn"
> > +#endif
> > +
> > +#if !defined(__riscv_zks)
> > +#error "__riscv_zks"
> > +#endif
> > +
> > +#if !defined(__riscv_zbkb)
> > +#error "__riscv_zbkb"
> > +#endif
> > +
> > +#if !defined(__riscv_zbkc)
> > +#error "__riscv_zbkc"
> > +#endif
> > +
> > +#if !defined(__riscv_zbkx)
> > +#error "__riscv_zbkx"
> > +#endif
> > +
> > +#if !defined(__riscv_zknd)
> > +#error "__riscv_zknd"
> > +#endif
> > +
> > +#if !defined(__riscv_zkne)
> > +#error "__riscv_zkne"
> > +#endif
> > +
> > +#if !defined(__riscv_zknh)
> > +#error "__riscv_zknh"
> > +#endif
> > +
> > +#if !defined(__riscv_zksh)
> > +#error "__riscv_zksh"
> > +#endif
> > +
> > + return 0;
> > +}
> > \ No newline at end of file
> > --
> > 2.31.1.windows.1
> >
* Re: [PATCH 1/5 V1] RISC-V:Implement instruction patterns for Crypto extension
2022-02-23 9:44 ` [PATCH 1/5 V1] RISC-V:Implement instruction patterns for Crypto extension shihua
@ 2022-02-28 16:04 ` Kito Cheng
0 siblings, 0 replies; 12+ messages in thread
From: Kito Cheng @ 2022-02-28 16:04 UTC (permalink / raw)
To: shihua
Cc: GCC Patches, ben.marshall, Christoph Muellner, Andrew Waterman,
jiawei, mjos, Kito Cheng
On Wed, Feb 23, 2022 at 5:46 PM <shihua@iscas.ac.cn> wrote:
>
> From: LiaoShihua <shihua@iscas.ac.cn>
>
>
> gcc/ChangeLog:
>
> * config/riscv/predicates.md (bs_operand): operand for bs
> (rnum_operand):
> * config/riscv/riscv.md: include crypto.md
> * config/riscv/crypto.md: New file.
>
> Co-Authored-By: Wu <siyu@isrc.iscas.ac.cn>
> ---
> gcc/config/riscv/crypto.md | 383 +++++++++++++++++++++++++++++++++
> gcc/config/riscv/predicates.md | 8 +
> gcc/config/riscv/riscv.md | 1 +
> 3 files changed, 392 insertions(+)
> create mode 100644 gcc/config/riscv/crypto.md
>
> diff --git a/gcc/config/riscv/crypto.md b/gcc/config/riscv/crypto.md
> new file mode 100644
> index 00000000000..591066fac3b
> --- /dev/null
> +++ b/gcc/config/riscv/crypto.md
> @@ -0,0 +1,383 @@
> +;; Machine description for K extension.
> +;; Copyright (C) 2022 Free Software Foundation, Inc.
> +;; Contributed by SiYu Wu (siyu@isrc.iscas.ac.cn) and ShiHua Liao (shihua@iscas.ac.cn).
> +
> +;; This file is part of GCC.
> +
> +;; GCC is free software; you can redistribute it and/or modify
> +;; it under the terms of the GNU General Public License as published by
> +;; the Free Software Foundation; either version 3, or (at your option)
> +;; any later version.
> +
> +;; GCC is distributed in the hope that it will be useful,
> +;; but WITHOUT ANY WARRANTY; without even the implied warranty of
> +;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> +;; GNU General Public License for more details.
> +
> +;; You should have received a copy of the GNU General Public License
> +;; along with GCC; see the file COPYING3. If not see
> +;; <http://www.gnu.org/licenses/>.
> +
> +(define_c_enum "unspec" [
> + ;;ZBKB unspecs
> + UNSPEC_ROR
> + UNSPEC_ROL
We have standard patterns for ROR and ROL (rotatert/rotate), so I think
we don't need unspecs for those two.
> +(define_insn "riscv_ror_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")
> + (match_operand:X 2 "register_operand" "r")]
> + UNSPEC_ROR))]
> + "TARGET_ZBKB"
> + "ror\t%0,%1,%2")
>
> +
> +(define_insn "riscv_rol_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")
> + (match_operand:X 2 "register_operand" "r")]
> + UNSPEC_ROL))]
> + "TARGET_ZBKB"
> + "rol\t%0,%1,%2")
riscv_ror_<mode> and riscv_rol_<mode> can be removed.
> +
> +(define_insn "riscv_brev8_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")]
> + UNSPEC_BREV8))]
> + "TARGET_ZBKB"
> + "brev8\t%0,%1")
> +
> +(define_insn "riscv_bswap<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")]
> + UNSPEC_BSWAP))]
> + "TARGET_ZBKB"
> + "bswap\t%0,%1")
> +
> +(define_insn "riscv_zip"
> + [(set (match_operand:SI 0 "register_operand" "=r")
> + (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
> + UNSPEC_ZIP))]
> + "TARGET_ZBKB && !TARGET_64BIT"
> + "zip\t%0,%1")
> +
> +(define_insn "riscv_unzip"
> + [(set (match_operand:SI 0 "register_operand" "=r")
> + (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
> + UNSPEC_UNZIP))]
> + "TARGET_ZBKB && !TARGET_64BIT"
> + "unzip\t%0,%1")
> +
> +(define_insn "riscv_clmul_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")
> + (match_operand:X 2 "register_operand" "r")]
> + UNSPEC_CLMUL))]
> + "TARGET_ZBKC"
> + "clmul\t%0,%1,%2")
> +
> +(define_insn "riscv_clmulh_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")
> + (match_operand:X 2 "register_operand" "r")]
> + UNSPEC_CLMULH))]
> + "TARGET_ZBKC"
> + "clmulh\t%0,%1,%2")
> +
> +(define_insn "riscv_xperm8_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")
> + (match_operand:X 2 "register_operand" "r")]
> + UNSPEC_XPERM8))]
> + "TARGET_ZBKX"
> + "xperm8\t%0,%1,%2")
> +
> +(define_insn "riscv_xperm4_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")
> + (match_operand:X 2 "register_operand" "r")]
> + UNSPEC_XPERM4))]
> + "TARGET_ZBKX"
> + "xperm4\t%0,%1,%2")
> +
> +(define_insn "riscv_aes32dsi"
> + [(set (match_operand:SI 0 "register_operand" "=r")
> + (unspec:SI [(match_operand:SI 1 "register_operand" "r")
> + (match_operand:SI 2 "register_operand" "r")
> + (match_operand:SI 3 "bs_operand" "i")]
> + UNSPEC_AES_DSI))]
> + "TARGET_ZKND && !TARGET_64BIT"
> + "aes32dsi\t%0,%1,%2,%3")
> +
> +(define_insn "riscv_aes32dsmi"
> + [(set (match_operand:SI 0 "register_operand" "=r")
> + (unspec:SI [(match_operand:SI 1 "register_operand" "r")
> + (match_operand:SI 2 "register_operand" "r")
> + (match_operand:SI 3 "bs_operand" "i")]
> + UNSPEC_AES_DSMI))]
> + "TARGET_ZKND && !TARGET_64BIT"
> + "aes32dsmi\t%0,%1,%2,%3")
> +
> +(define_insn "riscv_aes64ds"
> + [(set (match_operand:DI 0 "register_operand" "=r")
> + (unspec:DI [(match_operand:DI 1 "register_operand" "r")
> + (match_operand:DI 2 "register_operand" "r")]
> + UNSPEC_AES_DS))]
> + "TARGET_ZKND && TARGET_64BIT"
> + "aes64ds\t%0,%1,%2")
> +
> +(define_insn "riscv_aes64dsm"
> + [(set (match_operand:DI 0 "register_operand" "=r")
> + (unspec:DI [(match_operand:DI 1 "register_operand" "r")
> + (match_operand:DI 2 "register_operand" "r")]
> + UNSPEC_AES_DSM))]
> + "TARGET_ZKND && TARGET_64BIT"
> + "aes64dsm\t%0,%1,%2")
> +
> +(define_insn "riscv_aes64im"
> + [(set (match_operand:DI 0 "register_operand" "=r")
> + (unspec:DI [(match_operand:DI 1 "register_operand" "r")]
> + UNSPEC_AES_IM))]
> + "TARGET_ZKND && TARGET_64BIT"
> + "aes64im\t%0,%1")
> +
> +(define_insn "riscv_aes64ks1i"
> + [(set (match_operand:DI 0 "register_operand" "=r")
> + (unspec:DI [(match_operand:DI 1 "register_operand" "r")
> + (match_operand:SI 2 "rnum_operand" "i")]
> + UNSPEC_AES_KS1I))]
> + "(TARGET_ZKND || TARGET_ZKNE) && TARGET_64BIT"
> + "aes64ks1i\t%0,%1,%2")
> +
> +(define_insn "riscv_aes64ks2"
> + [(set (match_operand:DI 0 "register_operand" "=r")
> + (unspec:DI [(match_operand:DI 1 "register_operand" "r")
> + (match_operand:DI 2 "register_operand" "r")]
> + UNSPEC_AES_KS2))]
> + "(TARGET_ZKND || TARGET_ZKNE) && TARGET_64BIT"
> + "aes64ks2\t%0,%1,%2")
> +
> +(define_insn "riscv_aes32esi"
> + [(set (match_operand:SI 0 "register_operand" "=r")
> + (unspec:SI [(match_operand:SI 1 "register_operand" "r")
> + (match_operand:SI 2 "register_operand" "r")
> + (match_operand:SI 3 "bs_operand" "i")]
> + UNSPEC_AES_ESI))]
> + "TARGET_ZKNE && !TARGET_64BIT"
> + "aes32esi\t%0,%1,%2,%3")
> +
> +(define_insn "riscv_aes32esmi"
> + [(set (match_operand:SI 0 "register_operand" "=r")
> + (unspec:SI [(match_operand:SI 1 "register_operand" "r")
> + (match_operand:SI 2 "register_operand" "r")
> + (match_operand:SI 3 "bs_operand" "i")]
> + UNSPEC_AES_ESMI))]
> + "TARGET_ZKNE && !TARGET_64BIT"
> + "aes32esmi\t%0,%1,%2,%3")
> +
> +(define_insn "riscv_aes64es"
> + [(set (match_operand:DI 0 "register_operand" "=r")
> + (unspec:DI [(match_operand:DI 1 "register_operand" "r")
> + (match_operand:DI 2 "register_operand" "r")]
> + UNSPEC_AES_ES))]
> + "TARGET_ZKNE && TARGET_64BIT"
> + "aes64es\t%0,%1,%2")
> +
> +(define_insn "riscv_aes64esm"
> + [(set (match_operand:DI 0 "register_operand" "=r")
> + (unspec:DI [(match_operand:DI 1 "register_operand" "r")
> + (match_operand:DI 2 "register_operand" "r")]
> + UNSPEC_AES_ESM))]
> + "TARGET_ZKNE && TARGET_64BIT"
> + "aes64esm\t%0,%1,%2")
> +
> +;; Zknh - SHA256
> +
> +(define_insn "riscv_sha256sig0_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")]
> + UNSPEC_SHA_256_SIG0))]
> + "TARGET_ZKNH"
> + "sha256sig0\t%0,%1")
> +
> +(define_insn "riscv_sha256sig1_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")]
> + UNSPEC_SHA_256_SIG1))]
> + "TARGET_ZKNH"
> + "sha256sig1\t%0,%1")
> +
> +(define_insn "riscv_sha256sum0_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")]
> + UNSPEC_SHA_256_SUM0))]
> + "TARGET_ZKNH"
> + "sha256sum0\t%0,%1")
> +
> +(define_insn "riscv_sha256sum1_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")]
> + UNSPEC_SHA_256_SUM1))]
> + "TARGET_ZKNH"
> + "sha256sum1\t%0,%1")
> +
> +(define_insn "riscv_sha512sig0h"
> + [(set (match_operand:SI 0 "register_operand" "=r")
> + (unspec:SI [(match_operand:SI 1 "register_operand" "r")
> + (match_operand:SI 2 "register_operand" "r")]
> + UNSPEC_SHA_512_SIG0H))]
> + "TARGET_ZKNH && !TARGET_64BIT"
> + "sha512sig0h\t%0,%1,%2")
> +
> +(define_insn "riscv_sha512sig0l"
> + [(set (match_operand:SI 0 "register_operand" "=r")
> + (unspec:SI [(match_operand:SI 1 "register_operand" "r")
> + (match_operand:SI 2 "register_operand" "r")]
> + UNSPEC_SHA_512_SIG0L))]
> + "TARGET_ZKNH && !TARGET_64BIT"
> + "sha512sig0l\t%0,%1,%2")
> +
> +(define_insn "riscv_sha512sig1h"
> + [(set (match_operand:SI 0 "register_operand" "=r")
> + (unspec:SI [(match_operand:SI 1 "register_operand" "r")
> + (match_operand:SI 2 "register_operand" "r")]
> + UNSPEC_SHA_512_SIG1H))]
> + "TARGET_ZKNH && !TARGET_64BIT"
> + "sha512sig1h\t%0,%1,%2")
> +
> +(define_insn "riscv_sha512sig1l"
> + [(set (match_operand:SI 0 "register_operand" "=r")
> + (unspec:SI [(match_operand:SI 1 "register_operand" "r")
> + (match_operand:SI 2 "register_operand" "r")]
> + UNSPEC_SHA_512_SIG1L))]
> + "TARGET_ZKNH && !TARGET_64BIT"
> + "sha512sig1l\t%0,%1,%2")
> +
> +(define_insn "riscv_sha512sum0r"
> + [(set (match_operand:SI 0 "register_operand" "=r")
> + (unspec:SI [(match_operand:SI 1 "register_operand" "r")
> + (match_operand:SI 2 "register_operand" "r")]
> + UNSPEC_SHA_512_SUM0R))]
> + "TARGET_ZKNH && !TARGET_64BIT"
> + "sha512sum0r\t%0,%1,%2")
> +
> +(define_insn "riscv_sha512sum1r"
> + [(set (match_operand:SI 0 "register_operand" "=r")
> + (unspec:SI [(match_operand:SI 1 "register_operand" "r")
> + (match_operand:SI 2 "register_operand" "r")]
> + UNSPEC_SHA_512_SUM1R))]
> + "TARGET_ZKNH && !TARGET_64BIT"
> + "sha512sum1r\t%0,%1,%2")
> +
> +(define_insn "riscv_sha512sig0"
> + [(set (match_operand:DI 0 "register_operand" "=r")
> + (unspec:DI [(match_operand:DI 1 "register_operand" "r")]
> + UNSPEC_SHA_512_SIG0))]
> + "TARGET_ZKNH && TARGET_64BIT"
> + "sha512sig0\t%0,%1")
> +
> +(define_insn "riscv_sha512sig1"
> + [(set (match_operand:DI 0 "register_operand" "=r")
> + (unspec:DI [(match_operand:DI 1 "register_operand" "r")]
> + UNSPEC_SHA_512_SIG1))]
> + "TARGET_ZKNH && TARGET_64BIT"
> + "sha512sig1\t%0,%1")
> +
> +(define_insn "riscv_sha512sum0"
> + [(set (match_operand:DI 0 "register_operand" "=r")
> + (unspec:DI [(match_operand:DI 1 "register_operand" "r")]
> + UNSPEC_SHA_512_SUM0))]
> + "TARGET_ZKNH && TARGET_64BIT"
> + "sha512sum0\t%0,%1")
> +
> +(define_insn "riscv_sha512sum1"
> + [(set (match_operand:DI 0 "register_operand" "=r")
> + (unspec:DI [(match_operand:DI 1 "register_operand" "r")]
> + UNSPEC_SHA_512_SUM1))]
> + "TARGET_ZKNH && TARGET_64BIT"
> + "sha512sum1\t%0,%1")
> +
> +(define_insn "riscv_sm3p0_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")]
> + UNSPEC_SM3_P0))]
> + "TARGET_ZKSH"
> + "sm3p0\t%0,%1")
> +
> +(define_insn "riscv_sm3p1_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")]
> + UNSPEC_SM3_P1))]
> + "TARGET_ZKSH"
> + "sm3p1\t%0,%1")
> +
> +;; Zksed
> +
> +(define_insn "riscv_sm4ed_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")
> + (match_operand:X 2 "register_operand" "r")
> + (match_operand:SI 3 "bs_operand" "i")]
> + UNSPEC_SM4_ED))]
> + "TARGET_ZKSED"
> + "sm4ed\t%0,%1,%2,%3")
> +
> +(define_insn "riscv_sm4ks_<mode>"
> + [(set (match_operand:X 0 "register_operand" "=r")
> + (unspec:X [(match_operand:X 1 "register_operand" "r")
> + (match_operand:X 2 "register_operand" "r")
> + (match_operand:SI 3 "bs_operand" "i")]
> + UNSPEC_SM4_KS))]
> + "TARGET_ZKSED"
> + "sm4ks\t%0,%1,%2,%3")
> \ No newline at end of file
> diff --git a/gcc/config/riscv/predicates.md b/gcc/config/riscv/predicates.md
> index 97cdbdf053b..7e0e86651c0 100644
> --- a/gcc/config/riscv/predicates.md
> +++ b/gcc/config/riscv/predicates.md
> @@ -239,3 +239,11 @@
> (define_predicate "const63_operand"
> (and (match_code "const_int")
> (match_test "INTVAL (op) == 63")))
> +
> +(define_predicate "bs_operand"
> + (and (match_code "const_int")
> + (match_test "INTVAL (op) < 4")))
> +
> +(define_predicate "rnum_operand"
> + (and (match_code "const_int")
> + (match_test "INTVAL (op) < 11")))
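One further note on these predicates: `INTVAL (op) < 4` and `INTVAL (op) < 11` also accept negative constants. A tighter sketch, assuming bs is a 2-bit byte-select field and rnum is only defined for 0..10:

```lisp
;; Sketch: bound the constants from below as well.
(define_predicate "bs_operand"
  (and (match_code "const_int")
       (match_test "IN_RANGE (INTVAL (op), 0, 3)")))

(define_predicate "rnum_operand"
  (and (match_code "const_int")
       (match_test "IN_RANGE (INTVAL (op), 0, 10)")))
```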
> diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md
> index b3c5bce842a..59bfecb6341 100644
> --- a/gcc/config/riscv/riscv.md
> +++ b/gcc/config/riscv/riscv.md
> @@ -2864,6 +2864,7 @@
> [(set_attr "length" "12")])
>
> (include "bitmanip.md")
> +(include "crypto.md")
> (include "sync.md")
> (include "peephole.md")
> (include "pic.md")
> --
> 2.31.1.windows.1
>
* Re: [PATCH 4/5 V1] RISC-V:Implement testcases for Crypto extension
2022-02-23 9:44 ` [PATCH 4/5 V1] RISC-V:Implement testcases " shihua
@ 2022-03-01 13:00 ` Kito Cheng
2022-03-01 13:49 ` Kito Cheng
0 siblings, 1 reply; 12+ messages in thread
From: Kito Cheng @ 2022-03-01 13:00 UTC (permalink / raw)
To: 廖仕华
Cc: GCC Patches, ben.marshall, Christoph Muellner, Andrew Waterman,
jiawei, mjos, Kito Cheng
Just two general review comments for this patch:
- Add a newline at the end of each file, to prevent the
"\ No newline at end of file" marker from showing up in the git diff.
- I saw you've skipped -g and -flto; I guess that's because they can add
a few extra lines that break the scan-assembler-times counts. I would
suggest adding more "characteristic" context to your scan patterns so
that you can drop those skip-ifs, e.g. rewrite
+/* { dg-final { scan-assembler-times "sm3p0" 1 } } */
into
+/* { dg-final { scan-assembler-times "\tsm3p0\t" 1 } } */
Then it should be able to pass with -g and -flto.
On Wed, Feb 23, 2022 at 5:47 PM <shihua@iscas.ac.cn> wrote:
>
> From: LiaoShihua <shihua@iscas.ac.cn>
>
> These testcases use the intrinsic functions.
>
> gcc/testsuite/ChangeLog:
>
> * gcc.target/riscv/zbkb32.c: New test.
> * gcc.target/riscv/zbkb64.c: New test.
> * gcc.target/riscv/zbkc32.c: New test.
> * gcc.target/riscv/zbkc64.c: New test.
> * gcc.target/riscv/zbkx32.c: New test.
> * gcc.target/riscv/zbkx64.c: New test.
> * gcc.target/riscv/zknd32.c: New test.
> * gcc.target/riscv/zknd64.c: New test.
> * gcc.target/riscv/zkne64.c: New test.
> * gcc.target/riscv/zknh.c: New test.
> * gcc.target/riscv/zknh32.c: New test.
> * gcc.target/riscv/zknh64.c: New test.
> * gcc.target/riscv/zksed.c: New test.
> * gcc.target/riscv/zksh.c: New test.
>
> ---
> gcc/testsuite/gcc.target/riscv/zbkb32.c | 34 +++++++++++++++++++++
> gcc/testsuite/gcc.target/riscv/zbkb64.c | 21 +++++++++++++
> gcc/testsuite/gcc.target/riscv/zbkc32.c | 16 ++++++++++
> gcc/testsuite/gcc.target/riscv/zbkc64.c | 16 ++++++++++
> gcc/testsuite/gcc.target/riscv/zbkx32.c | 16 ++++++++++
> gcc/testsuite/gcc.target/riscv/zbkx64.c | 16 ++++++++++
> gcc/testsuite/gcc.target/riscv/zknd32.c | 18 +++++++++++
> gcc/testsuite/gcc.target/riscv/zknd64.c | 35 ++++++++++++++++++++++
> gcc/testsuite/gcc.target/riscv/zkne64.c | 29 ++++++++++++++++++
> gcc/testsuite/gcc.target/riscv/zknh.c | 28 +++++++++++++++++
> gcc/testsuite/gcc.target/riscv/zknh32.c | 40 +++++++++++++++++++++++++
> gcc/testsuite/gcc.target/riscv/zknh64.c | 29 ++++++++++++++++++
> gcc/testsuite/gcc.target/riscv/zksed.c | 20 +++++++++++++
> gcc/testsuite/gcc.target/riscv/zksh.c | 17 +++++++++++
> 14 files changed, 335 insertions(+)
> create mode 100644 gcc/testsuite/gcc.target/riscv/zbkb32.c
> create mode 100644 gcc/testsuite/gcc.target/riscv/zbkb64.c
> create mode 100644 gcc/testsuite/gcc.target/riscv/zbkc32.c
> create mode 100644 gcc/testsuite/gcc.target/riscv/zbkc64.c
> create mode 100644 gcc/testsuite/gcc.target/riscv/zbkx32.c
> create mode 100644 gcc/testsuite/gcc.target/riscv/zbkx64.c
> create mode 100644 gcc/testsuite/gcc.target/riscv/zknd32.c
> create mode 100644 gcc/testsuite/gcc.target/riscv/zknd64.c
> create mode 100644 gcc/testsuite/gcc.target/riscv/zkne64.c
> create mode 100644 gcc/testsuite/gcc.target/riscv/zknh.c
> create mode 100644 gcc/testsuite/gcc.target/riscv/zknh32.c
> create mode 100644 gcc/testsuite/gcc.target/riscv/zknh64.c
> create mode 100644 gcc/testsuite/gcc.target/riscv/zksed.c
> create mode 100644 gcc/testsuite/gcc.target/riscv/zksh.c
>
> diff --git a/gcc/testsuite/gcc.target/riscv/zbkb32.c b/gcc/testsuite/gcc.target/riscv/zbkb32.c
> new file mode 100644
> index 00000000000..5bf588d58b4
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zbkb32.c
> @@ -0,0 +1,34 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv32gc_zbkb -mabi=ilp32" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +int32_t foo1(int32_t rs1, int32_t rs2)
> +{
> + return _rv32_ror(rs1,rs2);
> +}
> +
> +int32_t foo2(int32_t rs1, int32_t rs2)
> +{
> + return _rv32_rol(rs1,rs2);
> +}
> +
> +int32_t foo3(int32_t rs1)
> +{
> + return _rv32_brev8(rs1);
> +}
> +
> +int32_t foo4(int32_t rs1)
> +{
> + return _rv32_zip(rs1);
> +}
> +
> +int32_t foo5(int32_t rs1)
> +{
> + return _rv32_unzip(rs1);
> +}
> +
> +/* { dg-final { scan-assembler-times "ror" 1 } } */
> +/* { dg-final { scan-assembler-times "rol" 1 } } */
> +/* { dg-final { scan-assembler-times "brev8" 1 } } */
> +/* { dg-final { scan-assembler-times "zip" 2 } } */
> +/* { dg-final { scan-assembler-times "unzip" 1 } } */
> \ No newline at end of file
> diff --git a/gcc/testsuite/gcc.target/riscv/zbkb64.c b/gcc/testsuite/gcc.target/riscv/zbkb64.c
> new file mode 100644
> index 00000000000..2cd76a29750
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zbkb64.c
> @@ -0,0 +1,21 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv64gc_zbkb -mabi=lp64" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +int64_t foo1(int64_t rs1, int64_t rs2)
> +{
> + return _rv64_ror(rs1,rs2);
> +}
> +
> +int64_t foo2(int64_t rs1, int64_t rs2)
> +{
> + return _rv64_rol(rs1,rs2);
> +}
> +
> +int64_t foo3(int64_t rs1, int64_t rs2)
> +{
> + return _rv64_brev8(rs1);
> +}
> +/* { dg-final { scan-assembler-times "ror" 1 } } */
> +/* { dg-final { scan-assembler-times "rol" 1 } } */
> +/* { dg-final { scan-assembler-times "brev8" 1 } } */
> \ No newline at end of file
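As a semantic cross-check for the rotates exercised above, a portable emulation can be written in plain C (a sketch; these helper names are illustrative and not part of riscv_crypto.h — compare rvk_emu_intrin.h in this patch set):

```c
#include <assert.h>
#include <stdint.h>

/* ror/rol use only the low log2(XLEN) bits of the shift amount,
   so mask with 63 for the 64-bit forms.  */
static uint64_t emu_ror64(uint64_t rs1, unsigned rs2)
{
    unsigned s = rs2 & 63;
    return s ? (rs1 >> s) | (rs1 << (64 - s)) : rs1;
}

static uint64_t emu_rol64(uint64_t rs1, unsigned rs2)
{
    unsigned s = rs2 & 63;
    return s ? (rs1 << s) | (rs1 >> (64 - s)) : rs1;
}
```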
> diff --git a/gcc/testsuite/gcc.target/riscv/zbkc32.c b/gcc/testsuite/gcc.target/riscv/zbkc32.c
> new file mode 100644
> index 00000000000..237085bfc7d
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zbkc32.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv32gc_zbkc -mabi=ilp32" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +int32_t foo1(int32_t rs1, int32_t rs2)
> +{
> + return _rv32_clmul(rs1,rs2);
> +}
> +
> +int32_t foo2(int32_t rs1, int32_t rs2)
> +{
> + return _rv32_clmulh(rs1,rs2);
> +}
> +
> +/* { dg-final { scan-assembler-times "clmul" 2 } } */
> +/* { dg-final { scan-assembler-times "clmulh" 1 } } */
> \ No newline at end of file
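For reference, the carry-less multiply semantics these tests exercise can be emulated portably (a sketch; the names are illustrative, not the riscv_crypto.h API):

```c
#include <assert.h>
#include <stdint.h>

/* Carry-less (GF(2) polynomial) multiply of two 32-bit values.
   clmul is the low half of the 64-bit product, clmulh the high half.  */
static uint64_t emu_clmul_wide32(uint32_t a, uint32_t b)
{
    uint64_t r = 0;
    for (int i = 0; i < 32; i++)
        if ((b >> i) & 1)
            r ^= (uint64_t)a << i;
    return r;
}

static uint32_t emu_clmul32(uint32_t a, uint32_t b)
{
    return (uint32_t)emu_clmul_wide32(a, b);
}

static uint32_t emu_clmulh32(uint32_t a, uint32_t b)
{
    return (uint32_t)(emu_clmul_wide32(a, b) >> 32);
}
```

Incidentally, the pattern "clmul" in the scan above also matches inside "clmulh" — hence the count of 2 — which is another place where tab-anchored patterns like "\tclmul\t" would make the counts exact.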
> diff --git a/gcc/testsuite/gcc.target/riscv/zbkc64.c b/gcc/testsuite/gcc.target/riscv/zbkc64.c
> new file mode 100644
> index 00000000000..50e39423519
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zbkc64.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv64gc_zbkc -mabi=lp64" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +int64_t foo1(int64_t rs1, int64_t rs2)
> +{
> + return _rv64_clmul(rs1,rs2);
> +}
> +
> +int64_t foo2(int64_t rs1, int64_t rs2)
> +{
> + return _rv64_clmulh(rs1,rs2);
> +}
> +
> +/* { dg-final { scan-assembler-times "clmul" 2 } } */
> +/* { dg-final { scan-assembler-times "clmulh" 1 } } */
> \ No newline at end of file
> diff --git a/gcc/testsuite/gcc.target/riscv/zbkx32.c b/gcc/testsuite/gcc.target/riscv/zbkx32.c
> new file mode 100644
> index 00000000000..992ae64a11b
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zbkx32.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv32gc_zbkx -mabi=ilp32" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +int32_t foo3(int32_t rs1, int32_t rs2)
> +{
> + return _rv32_xperm8(rs1,rs2);
> +}
> +
> +int32_t foo4(int32_t rs1, int32_t rs2)
> +{
> + return _rv32_xperm4(rs1,rs2);
> +}
> +
> +/* { dg-final { scan-assembler-times "xperm8" 1 } } */
> +/* { dg-final { scan-assembler-times "xperm4" 1 } } */
> \ No newline at end of file
> diff --git a/gcc/testsuite/gcc.target/riscv/zbkx64.c b/gcc/testsuite/gcc.target/riscv/zbkx64.c
> new file mode 100644
> index 00000000000..b8ec89d3c61
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zbkx64.c
> @@ -0,0 +1,16 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv64gc_zbkx -mabi=lp64" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +int64_t foo1(int64_t rs1, int64_t rs2)
> +{
> + return _rv64_xperm8(rs1,rs2);
> +}
> +
> +int64_t foo2(int64_t rs1, int64_t rs2)
> +{
> + return _rv64_xperm4(rs1,rs2);
> +}
> +
> +/* { dg-final { scan-assembler-times "xperm8" 1 } } */
> +/* { dg-final { scan-assembler-times "xperm4" 1 } } */
> \ No newline at end of file
> diff --git a/gcc/testsuite/gcc.target/riscv/zknd32.c b/gcc/testsuite/gcc.target/riscv/zknd32.c
> new file mode 100644
> index 00000000000..c7a109f333b
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zknd32.c
> @@ -0,0 +1,18 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv32gc_zknd -mabi=ilp32" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +int32_t foo1(int32_t rs1, int32_t rs2, int bs)
> +{
> + bs = 1;
> + return _rv32_aes32dsi(rs1,rs2,bs);
> +}
> +
> +int32_t foo2(int32_t rs1, int32_t rs2, int bs)
> +{
> + bs = 0;
> + return _rv32_aes32dsmi(rs1,rs2,bs);
> +}
> +
> +/* { dg-final { scan-assembler-times "aes32dsi" 1 } } */
> +/* { dg-final { scan-assembler-times "aes32dsmi" 1 } } */
> \ No newline at end of file
> diff --git a/gcc/testsuite/gcc.target/riscv/zknd64.c b/gcc/testsuite/gcc.target/riscv/zknd64.c
> new file mode 100644
> index 00000000000..a7f4b20dead
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zknd64.c
> @@ -0,0 +1,35 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv64gc_zknd -mabi=lp64" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +int64_t foo1(int64_t rs1, int64_t rs2)
> +{
> + return _rv64_aes64ds(rs1,rs2);
> +}
> +
> +int64_t foo2(int64_t rs1, int64_t rs2)
> +{
> + return _rv64_aes64dsm(rs1,rs2);
> +}
> +
> +int64_t foo3(int64_t rs1, int rnum)
> +{
> + rnum = 8;
> + return _rv64_aes64ks1i(rs1,rnum);
> +}
> +
> +int64_t foo4(int64_t rs1, int64_t rs2)
> +{
> + return _rv64_aes64ks2(rs1,rs2);
> +}
> +
> +int64_t foo5(int64_t rs1)
> +{
> + return _rv64_aes64im(rs1);
> +}
> +
> +/* { dg-final { scan-assembler-times "aes64ds" 2 } } */
> +/* { dg-final { scan-assembler-times "aes64dsm" 1 } } */
> +/* { dg-final { scan-assembler-times "aes64ks1i" 1 } } */
> +/* { dg-final { scan-assembler-times "aes64ks2" 1 } } */
> +/* { dg-final { scan-assembler-times "aes64im" 1 } } */
> \ No newline at end of file
> diff --git a/gcc/testsuite/gcc.target/riscv/zkne64.c b/gcc/testsuite/gcc.target/riscv/zkne64.c
> new file mode 100644
> index 00000000000..69dbdcbb3ad
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zkne64.c
> @@ -0,0 +1,29 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv64gc_zkne -mabi=lp64" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +int64_t foo1(int64_t rs1, int64_t rs2)
> +{
> + return _rv64_aes64es(rs1,rs2);
> +}
> +
> +int64_t foo2(int64_t rs1, int64_t rs2)
> +{
> + return _rv64_aes64esm(rs1,rs2);
> +}
> +
> +int64_t foo3(int64_t rs1, int rnum)
> +{
> + rnum = 8;
> + return _rv64_aes64ks1i(rs1,rnum);
> +}
> +
> +int64_t foo4(int64_t rs1, int64_t rs2)
> +{
> + return _rv64_aes64ks2(rs1,rs2);
> +}
> +
> +/* { dg-final { scan-assembler-times "aes64es" 2 } } */
> +/* { dg-final { scan-assembler-times "aes64esm" 1 } } */
> +/* { dg-final { scan-assembler-times "aes64ks1i" 1 } } */
> +/* { dg-final { scan-assembler-times "aes64ks2" 1 } } */
> \ No newline at end of file
> diff --git a/gcc/testsuite/gcc.target/riscv/zknh.c b/gcc/testsuite/gcc.target/riscv/zknh.c
> new file mode 100644
> index 00000000000..a2ea0809e63
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zknh.c
> @@ -0,0 +1,28 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv64gc_zknh -mabi=lp64" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +long foo1(long rs1)
> +{
> + return _rv_sha256sig0(rs1);
> +}
> +
> +long foo2(long rs1)
> +{
> + return _rv_sha256sig1(rs1);
> +}
> +
> +long foo3(long rs1)
> +{
> + return _rv_sha256sum0(rs1);
> +}
> +
> +long foo4(long rs1)
> +{
> + return _rv_sha256sum1(rs1);
> +}
> +
> +/* { dg-final { scan-assembler-times "sha256sig0" 1 } } */
> +/* { dg-final { scan-assembler-times "sha256sig1" 1 } } */
> +/* { dg-final { scan-assembler-times "sha256sum0" 1 } } */
> +/* { dg-final { scan-assembler-times "sha256sum1" 1 } } */
> \ No newline at end of file
> diff --git a/gcc/testsuite/gcc.target/riscv/zknh32.c b/gcc/testsuite/gcc.target/riscv/zknh32.c
> new file mode 100644
> index 00000000000..2ef961c19a3
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zknh32.c
> @@ -0,0 +1,40 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv32gc_zknh -mabi=ilp32" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +int32_t foo1(int32_t rs1, int32_t rs2)
> +{
> + return _rv32_sha512sig0h(rs1,rs2);
> +}
> +
> +int32_t foo2(int32_t rs1, int32_t rs2)
> +{
> + return _rv32_sha512sig0l(rs1,rs2);
> +}
> +
> +int32_t foo3(int32_t rs1, int32_t rs2)
> +{
> + return _rv32_sha512sig1h(rs1,rs2);
> +}
> +
> +int32_t foo4(int32_t rs1, int32_t rs2)
> +{
> + return _rv32_sha512sig1l(rs1,rs2);
> +}
> +
> +int32_t foo5(int32_t rs1, int32_t rs2)
> +{
> + return _rv32_sha512sum0r(rs1,rs2);
> +}
> +
> +int32_t foo6(int32_t rs1, int32_t rs2)
> +{
> + return _rv32_sha512sum1r(rs1,rs2);
> +}
> +
> +/* { dg-final { scan-assembler-times "sha512sig0h" 1 } } */
> +/* { dg-final { scan-assembler-times "sha512sig0l" 1 } } */
> +/* { dg-final { scan-assembler-times "sha512sig1h" 1 } } */
> +/* { dg-final { scan-assembler-times "sha512sig1l" 1 } } */
> +/* { dg-final { scan-assembler-times "sha512sum0r" 1 } } */
> +/* { dg-final { scan-assembler-times "sha512sum1r" 1 } } */
> \ No newline at end of file
> diff --git a/gcc/testsuite/gcc.target/riscv/zknh64.c b/gcc/testsuite/gcc.target/riscv/zknh64.c
> new file mode 100644
> index 00000000000..7f1eb76fe5c
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zknh64.c
> @@ -0,0 +1,29 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv64gc_zknh -mabi=lp64" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +int64_t foo1(int64_t rs1)
> +{
> + return _rv64_sha512sig0(rs1);
> +}
> +
> +int64_t foo2(int64_t rs1)
> +{
> + return _rv64_sha512sig1(rs1);
> +}
> +
> +int64_t foo3(int64_t rs1)
> +{
> + return _rv64_sha512sum0(rs1);
> +}
> +
> +int64_t foo4(int64_t rs1)
> +{
> + return _rv64_sha512sum1(rs1);
> +}
> +
> +
> +/* { dg-final { scan-assembler-times "sha512sig0" 1 } } */
> +/* { dg-final { scan-assembler-times "sha512sig1" 1 } } */
> +/* { dg-final { scan-assembler-times "sha512sum0" 1 } } */
> +/* { dg-final { scan-assembler-times "sha512sum1" 1 } } */
> \ No newline at end of file
> diff --git a/gcc/testsuite/gcc.target/riscv/zksed.c b/gcc/testsuite/gcc.target/riscv/zksed.c
> new file mode 100644
> index 00000000000..02bf96d56b1
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zksed.c
> @@ -0,0 +1,20 @@
> +
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv64gc_zksed -mabi=lp64" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +long foo1(int32_t rs1, int32_t rs2, int bs)
> +{
> + bs = 1;
> + return _rv_sm4ks(rs1,rs2,bs);
> +}
> +
> +long foo2(int32_t rs1, int32_t rs2, int bs)
> +{
> + bs = 2;
> + return _rv_sm4ed(rs1,rs2,bs);
> +}
> +
> +
> +/* { dg-final { scan-assembler-times "sm4ks" 1 } } */
> +/* { dg-final { scan-assembler-times "sm4ed" 1 } } */
> \ No newline at end of file
> diff --git a/gcc/testsuite/gcc.target/riscv/zksh.c b/gcc/testsuite/gcc.target/riscv/zksh.c
> new file mode 100644
> index 00000000000..ec47ed93221
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/riscv/zksh.c
> @@ -0,0 +1,17 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O2 -march=rv64gc_zksh -mabi=lp64" } */
> +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> +#include"riscv_crypto.h"
> +long foo1(long rs1)
> +{
> + return _rv_sm3p0(rs1);
> +}
> +
> +long foo2(long rs1)
> +{
> + return _rv_sm3p1(rs1);
> +}
> +
> +
> +/* { dg-final { scan-assembler-times "sm3p0" 1 } } */
> +/* { dg-final { scan-assembler-times "sm3p1" 1 } } */
> \ No newline at end of file
> --
> 2.31.1.windows.1
>
* Re: [PATCH 4/5 V1] RISC-V:Implement testcases for Crypto extension
2022-03-01 13:00 ` Kito Cheng
@ 2022-03-01 13:49 ` Kito Cheng
0 siblings, 0 replies; 12+ messages in thread
From: Kito Cheng @ 2022-03-01 13:49 UTC (permalink / raw)
To: 廖仕华
Cc: GCC Patches, ben.marshall, Christoph Muellner, Andrew Waterman,
jiawei, mjos, Kito Cheng
> > diff --git a/gcc/testsuite/gcc.target/riscv/zbkb64.c b/gcc/testsuite/gcc.target/riscv/zbkb64.c
> > new file mode 100644
> > index 00000000000..2cd76a29750
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/riscv/zbkb64.c
> > @@ -0,0 +1,21 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-O2 -march=rv64gc_zbkb -mabi=lp64" } */
> > +/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
> > +#include"riscv_crypto.h"
> > +int64_t foo1(int64_t rs1, int64_t rs2)
> > +{
> > + return _rv64_ror(rs1,rs2);
> > +}
> > +
> > +int64_t foo2(int64_t rs1, int64_t rs2)
> > +{
> > + return _rv64_rol(rs1,rs2);
> > +}
> > +
> > +int64_t foo3(int64_t rs1, int64_t rs2)
> > +{
> > + return _rv64_brev8(rs1);
> > +}
> > +/* { dg-final { scan-assembler-times "ror" 1 } } */
> > +/* { dg-final { scan-assembler-times "rol" 1 } } */
> > +/* { dg-final { scan-assembler-times "brev8" 1 } } */
> > \ No newline at end of file
Could you also add _rv32_ variant tests to zbkb64.c?
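A sketch of what that addition could look like (assuming the _rv32_* rotate intrinsics are usable on RV64 and lower to the word forms rorw/rolw — worth confirming against riscv_crypto.h):

```c
/* Possible additions to zbkb64.c (illustrative only).  */
int32_t foo4(int32_t rs1, int32_t rs2)
{
  return _rv32_ror(rs1, rs2);
}

int32_t foo5(int32_t rs1, int32_t rs2)
{
  return _rv32_rol(rs1, rs2);
}

/* { dg-final { scan-assembler-times "\trorw\t" 1 } } */
/* { dg-final { scan-assembler-times "\trolw\t" 1 } } */
```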
Thread overview: 12+ messages
2022-02-23 9:44 [PATCH 0/5 V1] RISC-V:Implement Crypto extension's instruction patterns and it's intrinsics shihua
2022-02-23 9:44 ` [PATCH 1/5 V1] RISC-V:Implement instruction patterns for Crypto extension shihua
2022-02-28 16:04 ` Kito Cheng
2022-02-23 9:44 ` [PATCH 2/5 V1] RISC-V:Implement built-in instructions " shihua
2022-02-23 9:44 ` [PATCH 3/5 V1] RISC-V:Implement intrinsics " shihua
2022-02-28 15:34 ` Kito Cheng
2022-02-23 9:44 ` [PATCH 4/5 V1] RISC-V:Implement testcases " shihua
2022-03-01 13:00 ` Kito Cheng
2022-03-01 13:49 ` Kito Cheng
2022-02-23 9:44 ` [PATCH 5/5 V1] RISC-V:Implement architecture extension test macros " shihua
2022-02-24 9:55 ` Kito Cheng
2022-02-28 15:56 ` Kito Cheng