public inbox for gcc-patches@gcc.gnu.org
* [RFC PATCH 0/2] RISC-V: Change RISC-V bit manipulation / scalar crypto builtin types
@ 2023-09-07  2:17 Tsukasa OI
  2023-09-07  2:17 ` [RFC PATCH 1/2] RISC-V: Make bit manipulation value / round number and shift amount types for builtins unsigned Tsukasa OI
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Tsukasa OI @ 2023-09-07  2:17 UTC (permalink / raw)
  To: Tsukasa OI, Kito Cheng, Palmer Dabbelt, Andrew Waterman,
	Jim Wilson, Jeff Law
  Cc: gcc-patches

Hello,

Many RISC-V builtins currently operate on signed integer types, but in
many cases they would be better expressed as unsigned.

There are a few reasons for this change:

1.  Being more natural
    For bit manipulation operations, the direct input and the result
    should have an unsigned type.
    e.g. __builtin_bswap16
        Both the input and the output should be (and are) unsigned.
    e.g. __builtin_popcount
        The input should be (and is) unsigned.
        The output is a bit count, so it is an int (signed integer).
2.  Parity with LLVM 17
    LLVM made similar changes in commit 599421ae36c3
    ("[RISCV] Use unsigned instead of signed types for Zk* and Zb*
    builtins.") by Craig Topper.
    Note that the shift amount / round number argument types were also
    changed to unsigned in that LLVM commit; I did the same here.
3.  Minimum compatibility breakage
    This change rarely causes warnings, even when both -Wall and -Wextra
    are specified.  In fact, applying only PATCH 1/2 does not cause any
    additional test failures.  PATCH 2/2 contains only testsuite changes
    (using the same types as the builtins).
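To illustrate point 1, here is a minimal plain-C sketch (not using the
RISC-V builtins themselves) of why unsigned types are the natural fit
for bit manipulation: a signed right shift replicates the sign bit on
two's-complement targets such as RISC-V, while an unsigned shift fills
with zeros.

```c
#include <stdint.h>

/* Arithmetic vs. logical right shift: the signed version replicates the
   sign bit (on two's-complement targets such as RISC-V), while the
   unsigned version shifts in zeros -- the behavior bit manipulation
   code almost always wants.  */
static int32_t shr_signed(int32_t x, unsigned n)
{
    return x >> n;
}

static uint32_t shr_unsigned(uint32_t x, unsigned n)
{
    return x >> n;
}
```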

That said, the change is not completely compatible.  For instance, the
difference is observable through C++'s "auto" / "decltype" or during
overload resolution.
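A small sketch of how the change is observable, using hypothetical
stand-ins that only mimic the old and new signatures (the XOR body is a
placeholder, not the real carry-less multiply).  C11 _Generic sees the
result type the same way C++ auto / decltype or overload resolution
would:

```c
/* Hypothetical stand-ins mirroring the builtin signature before and
   after this patch set; the XOR body is a placeholder operation.  */
static long old_clmul(long a, long b) { return a ^ b; }
static unsigned long new_clmul(unsigned long a, unsigned long b) { return a ^ b; }

/* Evaluates to 1 if the expression's type is unsigned long, else 0.  */
#define IS_UNSIGNED_LONG(e) _Generic((e), unsigned long: 1, default: 0)
```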

So, I would like to hear your thoughts first.

Note that the only reason I split this RFC patch set into two patches is
to demonstrate that no additional warnings occur even if only PATCH 1/2
is applied.  If approved (or once this leaves the RFC stage), both
patches will be merged together.


p.s.

LLVM also differs in another set of builtin types (with the same names):
the scalar cryptography builtins that operate on 32-bit integers rather
than XLEN-sized ones (this applies to SHA-256, SM3 and SM4).  For those,
GCC prefers an XLEN-sized integer type while LLVM 17 always prefers
uint32_t.

This is a result of LLVM commit 599421ae36c3 ("[RISCV] Re-define sha256,
Zksed, and Zksh intrinsics to use i32 types.").

Because just changing the width causes errors on GCC, even if I change
those to uint32_t, that will be in a separate patch set.
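A sketch of the width difference described above, using identity
placeholders (not the real sha256sig0 permutation): on RV64, an
XLEN-sized signature passes the full 64-bit value through, while a
uint32_t signature truncates on entry.

```c
#include <stdint.h>

/* Hypothetical stand-ins for the two signature styles (identity bodies,
   not the real SHA-256 sigma0 permutation).  On RV64, GCC's builtin
   uses an XLEN-sized (64-bit) type; LLVM 17's uses uint32_t.  */
static uint64_t sig0_xlen(uint64_t x) { return x; }
static uint32_t sig0_u32(uint32_t x) { return x; }
```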


Sincerely,
Tsukasa




Tsukasa OI (2):
  RISC-V: Make bit manipulation value / round number and shift amount
    types for builtins unsigned
  RISC-V: Update testsuite for type-changed builtins

 gcc/config/riscv/riscv-builtins.cc            |   7 +-
 gcc/config/riscv/riscv-cmo.def                |  16 +--
 gcc/config/riscv/riscv-ftypes.def             |  24 ++--
 gcc/config/riscv/riscv-scalar-crypto.def      | 104 +++++++++---------
 gcc/testsuite/gcc.target/riscv/zbc32.c        |   6 +-
 gcc/testsuite/gcc.target/riscv/zbc64.c        |   6 +-
 gcc/testsuite/gcc.target/riscv/zbkb32.c       |  10 +-
 gcc/testsuite/gcc.target/riscv/zbkb64.c       |   8 +-
 gcc/testsuite/gcc.target/riscv/zbkc32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zbkc64.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zbkx32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zbkx64.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zknd32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zknd64.c       |  10 +-
 gcc/testsuite/gcc.target/riscv/zkne32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zkne64.c       |   8 +-
 gcc/testsuite/gcc.target/riscv/zknh-sha256.c  |   8 +-
 .../gcc.target/riscv/zknh-sha512-32.c         |  12 +-
 .../gcc.target/riscv/zknh-sha512-64.c         |   8 +-
 gcc/testsuite/gcc.target/riscv/zksed32.c      |   4 +-
 gcc/testsuite/gcc.target/riscv/zksed64.c      |   4 +-
 gcc/testsuite/gcc.target/riscv/zksh32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zksh64.c       |   4 +-
 23 files changed, 133 insertions(+), 134 deletions(-)


base-commit: af88776caa20342482b11ccb580742a46c621250
-- 
2.42.0



* [RFC PATCH 1/2] RISC-V: Make bit manipulation value / round number and shift amount types for builtins unsigned
  2023-09-07  2:17 [RFC PATCH 0/2] RISC-V: Change RISC-V bit manipulation / scalar crypto builtin types Tsukasa OI
@ 2023-09-07  2:17 ` Tsukasa OI
  2023-09-07  2:17 ` [RFC PATCH 2/2] RISC-V: Update testsuite for type-changed builtins Tsukasa OI
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 12+ messages in thread
From: Tsukasa OI @ 2023-09-07  2:17 UTC (permalink / raw)
  To: Tsukasa OI, Kito Cheng, Palmer Dabbelt, Andrew Waterman,
	Jim Wilson, Jeff Law
  Cc: gcc-patches

From: Tsukasa OI <research_trasio@irq.a4lg.com>

For bit manipulation operations, the inputs and the manipulated output
are better expressed as unsigned, matching other target-independent
builtins such as __builtin_bswap32 and __builtin_popcount.

Although this is not completely compatible with the previous behavior
(as the types change), most code will compile and run as before, without
warnings even with -Wall -Wextra.
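As a sketch of why existing call sites keep compiling quietly (a
stand-in with the new unsigned signature, not the real builtin): -Wall
and -Wextra do not enable -Wsign-conversion, so passing the old signed
variables through the implicit conversion produces no new warnings.

```c
/* Stand-in with the new unsigned signature (hypothetical; returns its
   first operand instead of computing a real carry-less multiply).  */
static unsigned clmul_stub(unsigned a, unsigned b)
{
    (void)b;
    return a;
}

/* A pre-existing signed call site: the int arguments convert implicitly
   and, for non-negative values, the result is unchanged.  */
static unsigned old_call_site(void)
{
    int rs1 = 42, rs2 = 7;
    return clmul_stub(rs1, rs2);
}
```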

Round number and shift amount arguments of the scalar crypto builtins
are changed to unsigned for parity with LLVM 17 commit 599421ae36c3
("[RISCV] Use unsigned instead of signed types for Zk* and Zb* builtins.").

gcc/ChangeLog:

	* config/riscv/riscv-builtins.cc (RISCV_ATYPE_UQI): New for
	uint8_t.
	(RISCV_ATYPE_UHI): New for uint16_t.
	(RISCV_ATYPE_QI, RISCV_ATYPE_HI, RISCV_ATYPE_SI, RISCV_ATYPE_DI):
	Removed as no longer used.
	(RISCV_ATYPE_UDI): New for uint64_t.
	* config/riscv/riscv-cmo.def: Make the argument/return types
	unsigned for the (non-working) "zicbop_cbo_prefetchi" builtin and
	the working clmul bit manipulation builtins.
	* config/riscv/riscv-ftypes.def: Make bit manipulation, round
	number and shift amount types unsigned.
	* config/riscv/riscv-scalar-crypto.def: Ditto.
---
 gcc/config/riscv/riscv-builtins.cc       |   7 +-
 gcc/config/riscv/riscv-cmo.def           |  16 ++--
 gcc/config/riscv/riscv-ftypes.def        |  24 +++---
 gcc/config/riscv/riscv-scalar-crypto.def | 104 +++++++++++------------
 4 files changed, 75 insertions(+), 76 deletions(-)

diff --git a/gcc/config/riscv/riscv-builtins.cc b/gcc/config/riscv/riscv-builtins.cc
index 8afe7b7e97d3..f6b06b3c16ac 100644
--- a/gcc/config/riscv/riscv-builtins.cc
+++ b/gcc/config/riscv/riscv-builtins.cc
@@ -155,11 +155,10 @@ AVAIL (hint_pause, (!0))
 
 /* Argument types.  */
 #define RISCV_ATYPE_VOID void_type_node
+#define RISCV_ATYPE_UQI unsigned_intQI_type_node
+#define RISCV_ATYPE_UHI unsigned_intHI_type_node
 #define RISCV_ATYPE_USI unsigned_intSI_type_node
-#define RISCV_ATYPE_QI intQI_type_node
-#define RISCV_ATYPE_HI intHI_type_node
-#define RISCV_ATYPE_SI intSI_type_node
-#define RISCV_ATYPE_DI intDI_type_node
+#define RISCV_ATYPE_UDI unsigned_intDI_type_node
 #define RISCV_ATYPE_VOID_PTR ptr_type_node
 
 /* RISCV_FTYPE_ATYPESN takes N RISCV_FTYPES-like type codes and lists
diff --git a/gcc/config/riscv/riscv-cmo.def b/gcc/config/riscv/riscv-cmo.def
index b92044dc6ff9..ff713b78e19e 100644
--- a/gcc/config/riscv/riscv-cmo.def
+++ b/gcc/config/riscv/riscv-cmo.def
@@ -13,15 +13,15 @@ RISCV_BUILTIN (zero_si, "zicboz_cbo_zero", RISCV_BUILTIN_DIRECT_NO_TARGET, RISCV
 RISCV_BUILTIN (zero_di, "zicboz_cbo_zero", RISCV_BUILTIN_DIRECT_NO_TARGET, RISCV_VOID_FTYPE_VOID_PTR, zero64),
 
 // zicbop
-RISCV_BUILTIN (prefetchi_si, "zicbop_cbo_prefetchi", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, prefetchi32),
-RISCV_BUILTIN (prefetchi_di, "zicbop_cbo_prefetchi", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, prefetchi64),
+RISCV_BUILTIN (prefetchi_si, "zicbop_cbo_prefetchi", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, prefetchi32),
+RISCV_BUILTIN (prefetchi_di, "zicbop_cbo_prefetchi", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, prefetchi64),
 
 // zbkc or zbc
-RISCV_BUILTIN (clmul_si, "clmul", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, clmul_zbkc32_or_zbc32),
-RISCV_BUILTIN (clmul_di, "clmul", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, clmul_zbkc64_or_zbc64),
-RISCV_BUILTIN (clmulh_si, "clmulh", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, clmul_zbkc32_or_zbc32),
-RISCV_BUILTIN (clmulh_di, "clmulh", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, clmul_zbkc64_or_zbc64),
+RISCV_BUILTIN (clmul_si, "clmul", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI, clmul_zbkc32_or_zbc32),
+RISCV_BUILTIN (clmul_di, "clmul", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI, clmul_zbkc64_or_zbc64),
+RISCV_BUILTIN (clmulh_si, "clmulh", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI, clmul_zbkc32_or_zbc32),
+RISCV_BUILTIN (clmulh_di, "clmulh", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI, clmul_zbkc64_or_zbc64),
 
 // zbc
-RISCV_BUILTIN (clmulr_si, "clmulr", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, clmulr_zbc32),
-RISCV_BUILTIN (clmulr_di, "clmulr", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, clmulr_zbc64),
+RISCV_BUILTIN (clmulr_si, "clmulr", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI, clmulr_zbc32),
+RISCV_BUILTIN (clmulr_di, "clmulr", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI, clmulr_zbc64),
diff --git a/gcc/config/riscv/riscv-ftypes.def b/gcc/config/riscv/riscv-ftypes.def
index 3b518195a29c..366861ce640e 100644
--- a/gcc/config/riscv/riscv-ftypes.def
+++ b/gcc/config/riscv/riscv-ftypes.def
@@ -30,15 +30,15 @@ DEF_RISCV_FTYPE (0, (USI))
 DEF_RISCV_FTYPE (0, (VOID))
 DEF_RISCV_FTYPE (1, (VOID, USI))
 DEF_RISCV_FTYPE (1, (VOID, VOID_PTR))
-DEF_RISCV_FTYPE (1, (SI, SI))
-DEF_RISCV_FTYPE (1, (DI, DI))
-DEF_RISCV_FTYPE (2, (SI, QI, QI))
-DEF_RISCV_FTYPE (2, (SI, HI, HI))
-DEF_RISCV_FTYPE (2, (SI, SI, SI))
-DEF_RISCV_FTYPE (2, (DI, QI, QI))
-DEF_RISCV_FTYPE (2, (DI, HI, HI))
-DEF_RISCV_FTYPE (2, (DI, SI, SI))
-DEF_RISCV_FTYPE (2, (DI, DI, SI))
-DEF_RISCV_FTYPE (2, (DI, DI, DI))
-DEF_RISCV_FTYPE (3, (SI, SI, SI, SI))
-DEF_RISCV_FTYPE (3, (DI, DI, DI, SI))
+DEF_RISCV_FTYPE (1, (USI, USI))
+DEF_RISCV_FTYPE (1, (UDI, UDI))
+DEF_RISCV_FTYPE (2, (USI, UQI, UQI))
+DEF_RISCV_FTYPE (2, (USI, UHI, UHI))
+DEF_RISCV_FTYPE (2, (USI, USI, USI))
+DEF_RISCV_FTYPE (2, (UDI, UQI, UQI))
+DEF_RISCV_FTYPE (2, (UDI, UHI, UHI))
+DEF_RISCV_FTYPE (2, (UDI, USI, USI))
+DEF_RISCV_FTYPE (2, (UDI, UDI, USI))
+DEF_RISCV_FTYPE (2, (UDI, UDI, UDI))
+DEF_RISCV_FTYPE (3, (USI, USI, USI, USI))
+DEF_RISCV_FTYPE (3, (UDI, UDI, UDI, USI))
diff --git a/gcc/config/riscv/riscv-scalar-crypto.def b/gcc/config/riscv/riscv-scalar-crypto.def
index c2caed5151db..db86ec9fd78a 100644
--- a/gcc/config/riscv/riscv-scalar-crypto.def
+++ b/gcc/config/riscv/riscv-scalar-crypto.def
@@ -18,71 +18,71 @@ along with GCC; see the file COPYING3.  If not see
 <http://www.gnu.org/licenses/>.  */
 
 // ZBKB
-RISCV_BUILTIN (pack_sihi, "pack", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_HI_HI, crypto_zbkb32),
-RISCV_BUILTIN (pack_disi, "pack", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_SI_SI, crypto_zbkb64),
+RISCV_BUILTIN (pack_sihi, "pack", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_UHI_UHI, crypto_zbkb32),
+RISCV_BUILTIN (pack_disi, "pack", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_USI_USI, crypto_zbkb64),
 
-RISCV_BUILTIN (packh_si, "packh", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_QI_QI, crypto_zbkb32),
-RISCV_BUILTIN (packh_di, "packh", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_QI_QI, crypto_zbkb64),
+RISCV_BUILTIN (packh_si, "packh", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_UQI_UQI, crypto_zbkb32),
+RISCV_BUILTIN (packh_di, "packh", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UQI_UQI, crypto_zbkb64),
 
-RISCV_BUILTIN (packw, "packw", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_HI_HI, crypto_zbkb64),
+RISCV_BUILTIN (packw, "packw", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UHI_UHI, crypto_zbkb64),
 
-RISCV_BUILTIN (zip, "zip", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zbkb32),
-RISCV_BUILTIN (unzip, "unzip", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zbkb32),
+RISCV_BUILTIN (zip, "zip", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zbkb32),
+RISCV_BUILTIN (unzip, "unzip", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zbkb32),
 
-RISCV_BUILTIN (brev8_si, "brev8", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zbkb32),
-RISCV_BUILTIN (brev8_di, "brev8", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zbkb64),
+RISCV_BUILTIN (brev8_si, "brev8", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zbkb32),
+RISCV_BUILTIN (brev8_di, "brev8", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zbkb64),
 
 // ZBKX
-RISCV_BUILTIN (xperm4_si, "xperm4", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, crypto_zbkx32),
-RISCV_BUILTIN (xperm4_di, "xperm4", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, crypto_zbkx64),
-RISCV_BUILTIN (xperm8_si, "xperm8", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, crypto_zbkx32),
-RISCV_BUILTIN (xperm8_di, "xperm8", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, crypto_zbkx64),
+RISCV_BUILTIN (xperm4_si, "xperm4", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI, crypto_zbkx32),
+RISCV_BUILTIN (xperm4_di, "xperm4", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI, crypto_zbkx64),
+RISCV_BUILTIN (xperm8_si, "xperm8", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI, crypto_zbkx32),
+RISCV_BUILTIN (xperm8_di, "xperm8", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI, crypto_zbkx64),
 
 // ZKND
-DIRECT_BUILTIN (aes32dsi, RISCV_SI_FTYPE_SI_SI_SI, crypto_zknd32),
-DIRECT_BUILTIN (aes32dsmi, RISCV_SI_FTYPE_SI_SI_SI, crypto_zknd32),
-DIRECT_BUILTIN (aes64ds, RISCV_DI_FTYPE_DI_DI, crypto_zknd64),
-DIRECT_BUILTIN (aes64dsm, RISCV_DI_FTYPE_DI_DI, crypto_zknd64),
-DIRECT_BUILTIN (aes64im, RISCV_DI_FTYPE_DI, crypto_zknd64),
-DIRECT_BUILTIN (aes64ks1i, RISCV_DI_FTYPE_DI_SI, crypto_zkne_or_zknd),
-DIRECT_BUILTIN (aes64ks2, RISCV_DI_FTYPE_DI_DI, crypto_zkne_or_zknd),
+DIRECT_BUILTIN (aes32dsi, RISCV_USI_FTYPE_USI_USI_USI, crypto_zknd32),
+DIRECT_BUILTIN (aes32dsmi, RISCV_USI_FTYPE_USI_USI_USI, crypto_zknd32),
+DIRECT_BUILTIN (aes64ds, RISCV_UDI_FTYPE_UDI_UDI, crypto_zknd64),
+DIRECT_BUILTIN (aes64dsm, RISCV_UDI_FTYPE_UDI_UDI, crypto_zknd64),
+DIRECT_BUILTIN (aes64im, RISCV_UDI_FTYPE_UDI, crypto_zknd64),
+DIRECT_BUILTIN (aes64ks1i, RISCV_UDI_FTYPE_UDI_USI, crypto_zkne_or_zknd),
+DIRECT_BUILTIN (aes64ks2, RISCV_UDI_FTYPE_UDI_UDI, crypto_zkne_or_zknd),
 
 // ZKNE
-DIRECT_BUILTIN (aes32esi, RISCV_SI_FTYPE_SI_SI_SI, crypto_zkne32),
-DIRECT_BUILTIN (aes32esmi, RISCV_SI_FTYPE_SI_SI_SI, crypto_zkne32),
-DIRECT_BUILTIN (aes64es, RISCV_DI_FTYPE_DI_DI, crypto_zkne64),
-DIRECT_BUILTIN (aes64esm, RISCV_DI_FTYPE_DI_DI, crypto_zkne64),
+DIRECT_BUILTIN (aes32esi, RISCV_USI_FTYPE_USI_USI_USI, crypto_zkne32),
+DIRECT_BUILTIN (aes32esmi, RISCV_USI_FTYPE_USI_USI_USI, crypto_zkne32),
+DIRECT_BUILTIN (aes64es, RISCV_UDI_FTYPE_UDI_UDI, crypto_zkne64),
+DIRECT_BUILTIN (aes64esm, RISCV_UDI_FTYPE_UDI_UDI, crypto_zkne64),
 
 // ZKNH
-RISCV_BUILTIN (sha256sig0_si, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zknh32),
-RISCV_BUILTIN (sha256sig0_di, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zknh64),
-RISCV_BUILTIN (sha256sig1_si, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zknh32),
-RISCV_BUILTIN (sha256sig1_di, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zknh64),
-RISCV_BUILTIN (sha256sum0_si, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zknh32),
-RISCV_BUILTIN (sha256sum0_di, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zknh64),
-RISCV_BUILTIN (sha256sum1_si, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zknh32),
-RISCV_BUILTIN (sha256sum1_di, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zknh64),
-
-DIRECT_BUILTIN (sha512sig0h, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
-DIRECT_BUILTIN (sha512sig0l, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
-DIRECT_BUILTIN (sha512sig1h, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
-DIRECT_BUILTIN (sha512sig1l, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
-DIRECT_BUILTIN (sha512sum0r, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
-DIRECT_BUILTIN (sha512sum1r, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
-
-DIRECT_BUILTIN (sha512sig0, RISCV_DI_FTYPE_DI, crypto_zknh64),
-DIRECT_BUILTIN (sha512sig1, RISCV_DI_FTYPE_DI, crypto_zknh64),
-DIRECT_BUILTIN (sha512sum0, RISCV_DI_FTYPE_DI, crypto_zknh64),
-DIRECT_BUILTIN (sha512sum1, RISCV_DI_FTYPE_DI, crypto_zknh64),
+RISCV_BUILTIN (sha256sig0_si, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
+RISCV_BUILTIN (sha256sig0_di, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+RISCV_BUILTIN (sha256sig1_si, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
+RISCV_BUILTIN (sha256sig1_di, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+RISCV_BUILTIN (sha256sum0_si, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
+RISCV_BUILTIN (sha256sum0_di, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+RISCV_BUILTIN (sha256sum1_si, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
+RISCV_BUILTIN (sha256sum1_di, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+
+DIRECT_BUILTIN (sha512sig0h, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sig0l, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sig1h, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sig1l, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sum0r, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sum1r, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
+
+DIRECT_BUILTIN (sha512sig0, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+DIRECT_BUILTIN (sha512sig1, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+DIRECT_BUILTIN (sha512sum0, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+DIRECT_BUILTIN (sha512sum1, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
 
 // ZKSH
-RISCV_BUILTIN (sm3p0_si, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zksh32),
-RISCV_BUILTIN (sm3p0_di, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zksh64),
-RISCV_BUILTIN (sm3p1_si, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zksh32),
-RISCV_BUILTIN (sm3p1_di, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zksh64),
+RISCV_BUILTIN (sm3p0_si, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zksh32),
+RISCV_BUILTIN (sm3p0_di, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zksh64),
+RISCV_BUILTIN (sm3p1_si, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zksh32),
+RISCV_BUILTIN (sm3p1_di, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zksh64),
 
 // ZKSED
-RISCV_BUILTIN (sm4ed_si, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI_SI, crypto_zksed32),
-RISCV_BUILTIN (sm4ed_di, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI_SI, crypto_zksed64),
-RISCV_BUILTIN (sm4ks_si, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI_SI, crypto_zksed32),
-RISCV_BUILTIN (sm4ks_di, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI_SI, crypto_zksed64),
+RISCV_BUILTIN (sm4ed_si, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI_USI, crypto_zksed32),
+RISCV_BUILTIN (sm4ed_di, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI_USI, crypto_zksed64),
+RISCV_BUILTIN (sm4ks_si, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI_USI, crypto_zksed32),
+RISCV_BUILTIN (sm4ks_di, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI_USI, crypto_zksed64),
-- 
2.42.0



* [RFC PATCH 2/2] RISC-V: Update testsuite for type-changed builtins
  2023-09-07  2:17 [RFC PATCH 0/2] RISC-V: Change RISC-V bit manipulation / scalar crypto builtin types Tsukasa OI
  2023-09-07  2:17 ` [RFC PATCH 1/2] RISC-V: Make bit manipulation value / round number and shift amount types for builtins unsigned Tsukasa OI
@ 2023-09-07  2:17 ` Tsukasa OI
  2023-09-17 15:58   ` Jeff Law
  2023-09-08  1:03 ` [RFC PATCH 0/1] RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t Tsukasa OI
  2023-09-12  1:28 ` [PATCH 0/2] RISC-V: Change RISC-V bit manipulation / scalar crypto builtin types Tsukasa OI
  3 siblings, 1 reply; 12+ messages in thread
From: Tsukasa OI @ 2023-09-07  2:17 UTC (permalink / raw)
  To: Tsukasa OI, Kito Cheng, Palmer Dabbelt, Andrew Waterman,
	Jim Wilson, Jeff Law
  Cc: gcc-patches

From: Tsukasa OI <research_trasio@irq.a4lg.com>

This commit replaces the argument/return types of the builtins used in
the testsuite.

Even without this commit, there are no test failures, but the types are
changed to match the builtins and avoid confusion.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/zbc32.c: Change signed types to unsigned.
	* gcc.target/riscv/zbc64.c: Ditto.
	* gcc.target/riscv/zbkb32.c: Ditto.
	* gcc.target/riscv/zbkb64.c: Ditto.
	* gcc.target/riscv/zbkc32.c: Ditto.
	* gcc.target/riscv/zbkc64.c: Ditto.
	* gcc.target/riscv/zbkx32.c: Ditto.
	* gcc.target/riscv/zbkx64.c: Ditto.
	* gcc.target/riscv/zknd32.c: Ditto.
	* gcc.target/riscv/zknd64.c: Ditto.
	* gcc.target/riscv/zkne32.c: Ditto.
	* gcc.target/riscv/zkne64.c: Ditto.
	* gcc.target/riscv/zknh-sha256.c: Ditto.
	* gcc.target/riscv/zknh-sha512-32.c: Ditto.
	* gcc.target/riscv/zknh-sha512-64.c: Ditto.
	* gcc.target/riscv/zksed32.c: Ditto.
	* gcc.target/riscv/zksed64.c: Ditto.
	* gcc.target/riscv/zksh32.c: Ditto.
	* gcc.target/riscv/zksh64.c: Ditto.
---
 gcc/testsuite/gcc.target/riscv/zbc32.c          |  6 +++---
 gcc/testsuite/gcc.target/riscv/zbc64.c          |  6 +++---
 gcc/testsuite/gcc.target/riscv/zbkb32.c         | 10 +++++-----
 gcc/testsuite/gcc.target/riscv/zbkb64.c         |  8 ++++----
 gcc/testsuite/gcc.target/riscv/zbkc32.c         |  4 ++--
 gcc/testsuite/gcc.target/riscv/zbkc64.c         |  4 ++--
 gcc/testsuite/gcc.target/riscv/zbkx32.c         |  4 ++--
 gcc/testsuite/gcc.target/riscv/zbkx64.c         |  4 ++--
 gcc/testsuite/gcc.target/riscv/zknd32.c         |  4 ++--
 gcc/testsuite/gcc.target/riscv/zknd64.c         | 10 +++++-----
 gcc/testsuite/gcc.target/riscv/zkne32.c         |  4 ++--
 gcc/testsuite/gcc.target/riscv/zkne64.c         |  8 ++++----
 gcc/testsuite/gcc.target/riscv/zknh-sha256.c    |  8 ++++----
 gcc/testsuite/gcc.target/riscv/zknh-sha512-32.c | 12 ++++++------
 gcc/testsuite/gcc.target/riscv/zknh-sha512-64.c |  8 ++++----
 gcc/testsuite/gcc.target/riscv/zksed32.c        |  4 ++--
 gcc/testsuite/gcc.target/riscv/zksed64.c        |  4 ++--
 gcc/testsuite/gcc.target/riscv/zksh32.c         |  4 ++--
 gcc/testsuite/gcc.target/riscv/zksh64.c         |  4 ++--
 19 files changed, 58 insertions(+), 58 deletions(-)

diff --git a/gcc/testsuite/gcc.target/riscv/zbc32.c b/gcc/testsuite/gcc.target/riscv/zbc32.c
index 08705c4a687e..f3fb2238f7f4 100644
--- a/gcc/testsuite/gcc.target/riscv/zbc32.c
+++ b/gcc/testsuite/gcc.target/riscv/zbc32.c
@@ -3,17 +3,17 @@
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1, int32_t rs2)
+uint32_t foo1(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_clmul(rs1, rs2);
 }
 
-int32_t foo2(int32_t rs1, int32_t rs2)
+uint32_t foo2(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_clmulh(rs1, rs2);
 }
 
-int32_t foo3(int32_t rs1, int32_t rs2)
+uint32_t foo3(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_clmulr(rs1, rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbc64.c b/gcc/testsuite/gcc.target/riscv/zbc64.c
index a19f42b2883f..841a0aa7847d 100644
--- a/gcc/testsuite/gcc.target/riscv/zbc64.c
+++ b/gcc/testsuite/gcc.target/riscv/zbc64.c
@@ -3,17 +3,17 @@
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1, int64_t rs2)
+uint64_t foo1(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_clmul(rs1, rs2);
 }
 
-int64_t foo2(int64_t rs1, int64_t rs2)
+uint64_t foo2(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_clmulh(rs1, rs2);
 }
 
-int64_t foo3(int64_t rs1, int64_t rs2)
+uint64_t foo3(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_clmulr(rs1, rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbkb32.c b/gcc/testsuite/gcc.target/riscv/zbkb32.c
index dd45b8b9dc72..b2e442dc49d8 100644
--- a/gcc/testsuite/gcc.target/riscv/zbkb32.c
+++ b/gcc/testsuite/gcc.target/riscv/zbkb32.c
@@ -4,27 +4,27 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo1(int16_t rs1, int16_t rs2)
+uint32_t foo1(uint16_t rs1, uint16_t rs2)
 {
     return __builtin_riscv_pack(rs1, rs2);
 }
 
-int32_t foo2(int8_t rs1, int8_t rs2)
+uint32_t foo2(uint8_t rs1, uint8_t rs2)
 {
     return __builtin_riscv_packh(rs1, rs2);
 }
 
-int32_t foo3(int32_t rs1)
+uint32_t foo3(uint32_t rs1)
 {
     return __builtin_riscv_brev8(rs1);
 }
 
-int32_t foo4(int32_t rs1)
+uint32_t foo4(uint32_t rs1)
 {
     return __builtin_riscv_zip(rs1);
 }
 
-int32_t foo5(int32_t rs1)
+uint32_t foo5(uint32_t rs1)
 {
     return __builtin_riscv_unzip(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbkb64.c b/gcc/testsuite/gcc.target/riscv/zbkb64.c
index 960a2ae30ed6..08ac9c2a9f00 100644
--- a/gcc/testsuite/gcc.target/riscv/zbkb64.c
+++ b/gcc/testsuite/gcc.target/riscv/zbkb64.c
@@ -3,22 +3,22 @@
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 #include <stdint-gcc.h>
 
-int64_t foo1(int32_t rs1, int32_t rs2)
+uint64_t foo1(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_pack(rs1, rs2);
 }
 
-int64_t foo2(int8_t rs1, int8_t rs2)
+uint64_t foo2(uint8_t rs1, uint8_t rs2)
 {
     return __builtin_riscv_packh(rs1, rs2);
 }
 
-int64_t foo3(int16_t rs1, int16_t rs2)
+uint64_t foo3(uint16_t rs1, uint16_t rs2)
 {
     return __builtin_riscv_packw(rs1, rs2);
 }
 
-int64_t foo4(int64_t rs1, int64_t rs2)
+uint64_t foo4(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_brev8(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbkc32.c b/gcc/testsuite/gcc.target/riscv/zbkc32.c
index a8e29200250b..29f0d624a7d7 100644
--- a/gcc/testsuite/gcc.target/riscv/zbkc32.c
+++ b/gcc/testsuite/gcc.target/riscv/zbkc32.c
@@ -3,12 +3,12 @@
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1, int32_t rs2)
+uint32_t foo1(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_clmul(rs1, rs2);
 }
 
-int32_t foo2(int32_t rs1, int32_t rs2)
+uint32_t foo2(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_clmulh(rs1, rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbkc64.c b/gcc/testsuite/gcc.target/riscv/zbkc64.c
index 728f8baf099d..53e6ac215ed3 100644
--- a/gcc/testsuite/gcc.target/riscv/zbkc64.c
+++ b/gcc/testsuite/gcc.target/riscv/zbkc64.c
@@ -3,12 +3,12 @@
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1, int64_t rs2)
+uint64_t foo1(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_clmul(rs1, rs2);
 }
 
-int64_t foo2(int64_t rs1, int64_t rs2)
+uint64_t foo2(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_clmulh(rs1, rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbkx32.c b/gcc/testsuite/gcc.target/riscv/zbkx32.c
index bd95524f548b..b8b822a7c499 100644
--- a/gcc/testsuite/gcc.target/riscv/zbkx32.c
+++ b/gcc/testsuite/gcc.target/riscv/zbkx32.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo3(int32_t rs1, int32_t rs2)
+uint32_t foo3(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_xperm8(rs1, rs2);
 }
 
-int32_t foo4(int32_t rs1, int32_t rs2)
+uint32_t foo4(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_xperm4(rs1, rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbkx64.c b/gcc/testsuite/gcc.target/riscv/zbkx64.c
index 2a04a94b86c4..732436701b33 100644
--- a/gcc/testsuite/gcc.target/riscv/zbkx64.c
+++ b/gcc/testsuite/gcc.target/riscv/zbkx64.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1, int64_t rs2)
+uint64_t foo1(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_xperm8(rs1, rs2);
 }
 
-int64_t foo2(int64_t rs1, int64_t rs2)
+uint64_t foo2(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_xperm4(rs1, rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zknd32.c b/gcc/testsuite/gcc.target/riscv/zknd32.c
index 5fcc66da9015..e60c027e0911 100644
--- a/gcc/testsuite/gcc.target/riscv/zknd32.c
+++ b/gcc/testsuite/gcc.target/riscv/zknd32.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1, int32_t rs2, int bs)
+uint32_t foo1(uint32_t rs1, uint32_t rs2, int bs)
 {
     return __builtin_riscv_aes32dsi(rs1,rs2,bs);
 }
 
-int32_t foo2(int32_t rs1, int32_t rs2, int bs)
+uint32_t foo2(uint32_t rs1, uint32_t rs2, int bs)
 {
     return __builtin_riscv_aes32dsmi(rs1,rs2,bs);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zknd64.c b/gcc/testsuite/gcc.target/riscv/zknd64.c
index b1dff98f7e21..910b91c6ed88 100644
--- a/gcc/testsuite/gcc.target/riscv/zknd64.c
+++ b/gcc/testsuite/gcc.target/riscv/zknd64.c
@@ -4,27 +4,27 @@
 
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1, int64_t rs2)
+uint64_t foo1(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_aes64ds(rs1,rs2);
 }
 
-int64_t foo2(int64_t rs1, int64_t rs2)
+uint64_t foo2(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_aes64dsm(rs1,rs2);
 }
 
-int64_t foo3(int64_t rs1, int rnum)
+uint64_t foo3(uint64_t rs1, unsigned rnum)
 {
     return __builtin_riscv_aes64ks1i(rs1,rnum);
 }
 
-int64_t foo4(int64_t rs1, int64_t rs2)
+uint64_t foo4(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_aes64ks2(rs1,rs2);
 }
 
-int64_t foo5(int64_t rs1)
+uint64_t foo5(uint64_t rs1)
 {
     return __builtin_riscv_aes64im(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zkne32.c b/gcc/testsuite/gcc.target/riscv/zkne32.c
index c131c9a6bbb1..252e9ffa43b3 100644
--- a/gcc/testsuite/gcc.target/riscv/zkne32.c
+++ b/gcc/testsuite/gcc.target/riscv/zkne32.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1, int32_t rs2, int bs)
+uint32_t foo1(uint32_t rs1, uint32_t rs2, unsigned bs)
 {
     return __builtin_riscv_aes32esi(rs1, rs2, bs);
 }
 
-int32_t foo2(int32_t rs1, int32_t rs2, int bs)
+uint32_t foo2(uint32_t rs1, uint32_t rs2, unsigned bs)
 {
     return __builtin_riscv_aes32esmi(rs1, rs2, bs);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zkne64.c b/gcc/testsuite/gcc.target/riscv/zkne64.c
index 7d82b5a5d411..b25f6b5c29ac 100644
--- a/gcc/testsuite/gcc.target/riscv/zkne64.c
+++ b/gcc/testsuite/gcc.target/riscv/zkne64.c
@@ -4,22 +4,22 @@
 
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1, int64_t rs2)
+uint64_t foo1(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_aes64es(rs1,rs2);
 }
 
-int64_t foo2(int64_t rs1, int64_t rs2)
+uint64_t foo2(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_aes64esm(rs1,rs2);
 }
 
-int64_t foo3(int64_t rs1, int rnum)
+uint64_t foo3(uint64_t rs1, unsigned rnum)
 {
     return __builtin_riscv_aes64ks1i(rs1,rnum);
 }
 
-int64_t foo4(int64_t rs1, int64_t rs2)
+uint64_t foo4(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_aes64ks2(rs1,rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zknh-sha256.c b/gcc/testsuite/gcc.target/riscv/zknh-sha256.c
index 54329aa6af2e..952d611cd0b9 100644
--- a/gcc/testsuite/gcc.target/riscv/zknh-sha256.c
+++ b/gcc/testsuite/gcc.target/riscv/zknh-sha256.c
@@ -2,22 +2,22 @@
 /* { dg-options "-O2 -march=rv64gc_zknh -mabi=lp64" } */
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 
-long foo1(long rs1)
+unsigned long foo1(unsigned long rs1)
 {
     return __builtin_riscv_sha256sig0(rs1);
 }
 
-long foo2(long rs1)
+unsigned long foo2(unsigned long rs1)
 {
     return __builtin_riscv_sha256sig1(rs1);
 }
 
-long foo3(long rs1)
+unsigned long foo3(unsigned long rs1)
 {
     return __builtin_riscv_sha256sum0(rs1);
 }
 
-long foo4(long rs1)
+unsigned long foo4(unsigned long rs1)
 {
     return __builtin_riscv_sha256sum1(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zknh-sha512-32.c b/gcc/testsuite/gcc.target/riscv/zknh-sha512-32.c
index 4ebc470f8ab7..f2bcae36a1f2 100644
--- a/gcc/testsuite/gcc.target/riscv/zknh-sha512-32.c
+++ b/gcc/testsuite/gcc.target/riscv/zknh-sha512-32.c
@@ -4,32 +4,32 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1, int32_t rs2)
+uint32_t foo1(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_sha512sig0h(rs1,rs2);
 }
 
-int32_t foo2(int32_t rs1, int32_t rs2)
+uint32_t foo2(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_sha512sig0l(rs1,rs2);
 }
 
-int32_t foo3(int32_t rs1, int32_t rs2)
+uint32_t foo3(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_sha512sig1h(rs1,rs2);
 }
 
-int32_t foo4(int32_t rs1, int32_t rs2)
+uint32_t foo4(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_sha512sig1l(rs1,rs2);
 }
 
-int32_t foo5(int32_t rs1, int32_t rs2)
+uint32_t foo5(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_sha512sum0r(rs1,rs2);
 }
 
-int32_t foo6(int32_t rs1, int32_t rs2)
+uint32_t foo6(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_sha512sum1r(rs1,rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zknh-sha512-64.c b/gcc/testsuite/gcc.target/riscv/zknh-sha512-64.c
index 0fb5c75b9ce6..4f248575e66e 100644
--- a/gcc/testsuite/gcc.target/riscv/zknh-sha512-64.c
+++ b/gcc/testsuite/gcc.target/riscv/zknh-sha512-64.c
@@ -4,22 +4,22 @@
 
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1)
+uint64_t foo1(uint64_t rs1)
 {
     return __builtin_riscv_sha512sig0(rs1);
 }
 
-int64_t foo2(int64_t rs1)
+uint64_t foo2(uint64_t rs1)
 {
     return __builtin_riscv_sha512sig1(rs1);
 }
 
-int64_t foo3(int64_t rs1)
+uint64_t foo3(uint64_t rs1)
 {
     return __builtin_riscv_sha512sum0(rs1);
 }
 
-int64_t foo4(int64_t rs1)
+uint64_t foo4(uint64_t rs1)
 {
     return __builtin_riscv_sha512sum1(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zksed32.c b/gcc/testsuite/gcc.target/riscv/zksed32.c
index 9548d007cb22..7df04147e05c 100644
--- a/gcc/testsuite/gcc.target/riscv/zksed32.c
+++ b/gcc/testsuite/gcc.target/riscv/zksed32.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1, int32_t rs2, int bs)
+uint32_t foo1(uint32_t rs1, uint32_t rs2, unsigned bs)
 {
     return __builtin_riscv_sm4ks(rs1,rs2,bs);
 }
 
-int32_t foo2(int32_t rs1, int32_t rs2, int bs)
+uint32_t foo2(uint32_t rs1, uint32_t rs2, unsigned bs)
 {
     return __builtin_riscv_sm4ed(rs1,rs2,bs);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zksed64.c b/gcc/testsuite/gcc.target/riscv/zksed64.c
index 190a654151db..3485adf9cd88 100644
--- a/gcc/testsuite/gcc.target/riscv/zksed64.c
+++ b/gcc/testsuite/gcc.target/riscv/zksed64.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1, int64_t rs2, int bs)
+uint64_t foo1(uint64_t rs1, uint64_t rs2, unsigned bs)
 {
     return __builtin_riscv_sm4ks(rs1,rs2,bs);
 }
 
-int64_t foo2(int64_t rs1, int64_t rs2, int bs)
+uint64_t foo2(uint64_t rs1, uint64_t rs2, unsigned bs)
 {
     return __builtin_riscv_sm4ed(rs1,rs2,bs);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zksh32.c b/gcc/testsuite/gcc.target/riscv/zksh32.c
index 50370b58b7a9..20513f986f88 100644
--- a/gcc/testsuite/gcc.target/riscv/zksh32.c
+++ b/gcc/testsuite/gcc.target/riscv/zksh32.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1)
+uint32_t foo1(uint32_t rs1)
 {
     return __builtin_riscv_sm3p0(rs1);
 }
 
-int32_t foo2(int32_t rs1)
+uint32_t foo2(uint32_t rs1)
 {
     return __builtin_riscv_sm3p1(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zksh64.c b/gcc/testsuite/gcc.target/riscv/zksh64.c
index 69847f3df359..bdd137872785 100644
--- a/gcc/testsuite/gcc.target/riscv/zksh64.c
+++ b/gcc/testsuite/gcc.target/riscv/zksh64.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1)
+uint64_t foo1(uint64_t rs1)
 {
     return __builtin_riscv_sm3p0(rs1);
 }
 
-int64_t foo2(int64_t rs1)
+uint64_t foo2(uint64_t rs1)
 {
     return __builtin_riscv_sm3p1(rs1);
 }
-- 
2.42.0


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [RFC PATCH 0/1] RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t
  2023-09-07  2:17 [RFC PATCH 0/2] RISC-V: Change RISC-V bit manipulation / scalar crypto builtin types Tsukasa OI
  2023-09-07  2:17 ` [RFC PATCH 1/2] RISC-V: Make bit manipulation value / round number and shift amount types for builtins unsigned Tsukasa OI
  2023-09-07  2:17 ` [RFC PATCH 2/2] RISC-V: Update testsuite for type-changed builtins Tsukasa OI
@ 2023-09-08  1:03 ` Tsukasa OI
  2023-09-08  1:03   ` [RFC PATCH 1/1] " Tsukasa OI
  2023-09-12  1:28 ` [PATCH 0/2] RISC-V: Change RISC-V bit manipulation / scalar crypto builtin types Tsukasa OI
  3 siblings, 1 reply; 12+ messages in thread
From: Tsukasa OI @ 2023-09-08  1:03 UTC (permalink / raw)
  To: Tsukasa OI, Kito Cheng, Palmer Dabbelt, Andrew Waterman,
	Jim Wilson, Jeff Law
  Cc: gcc-patches

Hi,

This is built on top of another RFC patch, "RISC-V: Change RISC-V bit
manipulation / scalar crypto builtin types", and changes the SHA-256, SM3
and SM4 intrinsics to operate on uint32_t, not on XLEN-bit wide integers.

This is in parity with the LLVM commit a64b3e92c7cb ("[RISCV] Re-define
sha256, Zksed, and Zksh intrinsics to use i32 types.") by Craig Topper.

Because we had to refine the base instruction definitions, this was much
harder than the corresponding change in LLVM.  Thankfully, we have a
similar precedent to follow: 32-bit integer instructions on RV64 such as
ADDW.

Before:
   riscv_<op>_si: For RV32, fully operate on uint32_t
   riscv_<op>_di: For RV64, fully operate on uint64_t
After:
  *riscv_<op>_si: For RV32, fully operate on uint32_t
   riscv_<op>_di_extended:
                  For RV64, input is uint32_t and output is int64_t,
                  sign-extended from the int32_t result
                  (represents a part of <op> behavior).
   riscv_<op>_si: Common (fully operate on uint32_t).
                  On RV32, expands to *riscv_<op>_si.
                  On RV64, initially expands to riscv_<op>_di_extended *and*
                  extracts the lower 32 bits from the int64_t result.
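
To see why extracting the low 32 bits is enough, here is a minimal,
self-contained C sketch of the reasoning.  Note that op32 is a
hypothetical stand-in (it computes the SHA-256 sigma0 function that
sha256sig0 implements); this models the expansion, it is not the GCC
expander itself:

```c
#include <stdint.h>

/* Hypothetical 32-bit operation standing in for sha256sig0 and friends.
   This is the SHA-256 sigma0 function the real instruction computes.  */
static uint32_t op32 (uint32_t x)
{
  return ((x >> 7) | (x << 25)) ^ ((x >> 18) | (x << 14)) ^ (x >> 3);
}

/* Models riscv_<op>_di_extended on RV64: the hardware writes the 32-bit
   result sign-extended into a 64-bit register.  */
static int64_t op_di_extended (uint32_t x)
{
  return (int64_t) (int32_t) op32 (x);
}

/* Models the new riscv_<op>_si expansion on RV64: taking the low 32 bits
   of the sign-extended 64-bit result recovers the 32-bit result.  */
static uint32_t op_si_rv64 (uint32_t x)
{
  return (uint32_t) op_di_extended (x);
}
```

Since op_si_rv64 (x) == op32 (x) for every input, marking the subreg as
sign-promoted is sufficient and no explicit truncation needs to be
emitted.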

Sincerely,
Tsukasa




Tsukasa OI (1):
  RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t

 gcc/config/riscv/crypto.md                    | 161 ++++++++++++------
 gcc/config/riscv/riscv-builtins.cc            |   7 +-
 gcc/config/riscv/riscv-ftypes.def             |   1 -
 gcc/config/riscv/riscv-scalar-crypto.def      |  24 +--
 .../gcc.target/riscv/zknh-sha256-32.c         |  10 ++
 .../riscv/{zknh-sha256.c => zknh-sha256-64.c} |   8 +-
 gcc/testsuite/gcc.target/riscv/zksed64.c      |   4 +-
 gcc/testsuite/gcc.target/riscv/zksh64.c       |   4 +-
 8 files changed, 139 insertions(+), 80 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/zknh-sha256-32.c
 rename gcc/testsuite/gcc.target/riscv/{zknh-sha256.c => zknh-sha256-64.c} (78%)


base-commit: daaed758517c81fc8f8bc6502be648aca51ab278
prerequisite-patch-id: 4f4a84ebc0c33ea159db4dcd70fa8894f27c638a
prerequisite-patch-id: d2b85f777b042d349c5e232979ee219c8147c154
-- 
2.42.0



* [RFC PATCH 1/1] RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t
  2023-09-08  1:03 ` [RFC PATCH 0/1] RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t Tsukasa OI
@ 2023-09-08  1:03   ` Tsukasa OI
  0 siblings, 0 replies; 12+ messages in thread
From: Tsukasa OI @ 2023-09-08  1:03 UTC (permalink / raw)
  To: Tsukasa OI, Kito Cheng, Palmer Dabbelt, Andrew Waterman,
	Jim Wilson, Jeff Law
  Cc: gcc-patches

From: Tsukasa OI <research_trasio@irq.a4lg.com>

This is in parity with the LLVM commit a64b3e92c7cb ("[RISCV] Re-define
sha256, Zksed, and Zksh intrinsics to use i32 types.").

SHA-256, SM3 and SM4 instructions operate on 32-bit integers, and the
upper 32 bits have no effect on RV64 (the output is sign-extended from
the original 32-bit value).  In that sense, making those intrinsics
operate only on uint32_t is much more natural than using XLEN-bit wide
integers.

This commit reworks the instructions and expansions based on how 32-bit
instructions are handled on RV64 (such as ADDW).

Before:
   riscv_<op>_si: For RV32, operate on uint32_t
   riscv_<op>_di: For RV64, operate on uint64_t
After:
  *riscv_<op>_si: For RV32, operate on uint32_t
   riscv_<op>_di_extended:
                  For RV64, input is uint32_t and...
                  output is sign-extended int64_t.
   riscv_<op>_si: Common; expands to either of the above.
                  On RV64, extracts the lower 32 bits from the int64_t
                  result.

It also refines the definitions of the SHA-256, SM3 and SM4 intrinsics.

gcc/ChangeLog:

	* config/riscv/crypto.md (riscv_sha256sig0_<mode>,
	riscv_sha256sig1_<mode>, riscv_sha256sum0_<mode>,
	riscv_sha256sum1_<mode>, riscv_sm3p0_<mode>, riscv_sm3p1_<mode>,
	riscv_sm4ed_<mode>, riscv_sm4ks_<mode>): Remove and replace with
	new insn/expansions.
	(SHA256_OP, SM3_OP, SM4_OP): New iterators.
	(sha256_op, sm3_op, sm4_op): New attributes for iteration.
	(*riscv_<sha256_op>_si): New raw instruction for RV32.
	(*riscv_<sm3_op>_si): Ditto.
	(*riscv_<sm4_op>_si): Ditto.
	(riscv_<sha256_op>_di_extended): New base instruction for RV64.
	(riscv_<sm3_op>_di_extended): Ditto.
	(riscv_<sm4_op>_di_extended): Ditto.
	(riscv_<sha256_op>_si): New common instruction expansion.
	(riscv_<sm3_op>_si): Ditto.
	(riscv_<sm4_op>_si): Ditto.
	* config/riscv/riscv-builtins.cc: Add availability "crypto_zknh",
	"crypto_zksh" and "crypto_zksed".  Remove availability
	"crypto_zksh{32,64}" and "crypto_zksed{32,64}".
	* config/riscv/riscv-ftypes.def: Remove unused function type.
	* config/riscv/riscv-scalar-crypto.def: Make SHA-256, SM3 and SM4
	intrinsics operate on uint32_t.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/zknh-sha256.c: Moved to...
	* gcc.target/riscv/zknh-sha256-64.c: ...here.  Test RV64.
	* gcc.target/riscv/zknh-sha256-32.c: New test for RV32.
	* gcc.target/riscv/zksh64.c: Change the type.
	* gcc.target/riscv/zksed64.c: Ditto.
---
 gcc/config/riscv/crypto.md                    | 161 ++++++++++++------
 gcc/config/riscv/riscv-builtins.cc            |   7 +-
 gcc/config/riscv/riscv-ftypes.def             |   1 -
 gcc/config/riscv/riscv-scalar-crypto.def      |  24 +--
 .../gcc.target/riscv/zknh-sha256-32.c         |  10 ++
 .../riscv/{zknh-sha256.c => zknh-sha256-64.c} |   8 +-
 gcc/testsuite/gcc.target/riscv/zksed64.c      |   4 +-
 gcc/testsuite/gcc.target/riscv/zksh64.c       |   4 +-
 8 files changed, 139 insertions(+), 80 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/zknh-sha256-32.c
 rename gcc/testsuite/gcc.target/riscv/{zknh-sha256.c => zknh-sha256-64.c} (78%)

diff --git a/gcc/config/riscv/crypto.md b/gcc/config/riscv/crypto.md
index e4b7f0190dfe..03a1d03397d9 100644
--- a/gcc/config/riscv/crypto.md
+++ b/gcc/config/riscv/crypto.md
@@ -250,36 +250,47 @@
 
 ;; ZKNH - SHA256
 
-(define_insn "riscv_sha256sig0_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")]
-                  UNSPEC_SHA_256_SIG0))]
-  "TARGET_ZKNH"
-  "sha256sig0\t%0,%1"
-  [(set_attr "type" "crypto")])
-
-(define_insn "riscv_sha256sig1_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")]
-                  UNSPEC_SHA_256_SIG1))]
-  "TARGET_ZKNH"
-  "sha256sig1\t%0,%1"
+(define_int_iterator SHA256_OP [
+  UNSPEC_SHA_256_SIG0 UNSPEC_SHA_256_SIG1
+  UNSPEC_SHA_256_SUM0 UNSPEC_SHA_256_SUM1])
+(define_int_attr sha256_op [
+  (UNSPEC_SHA_256_SIG0 "sha256sig0") (UNSPEC_SHA_256_SIG1 "sha256sig1")
+  (UNSPEC_SHA_256_SUM0 "sha256sum0") (UNSPEC_SHA_256_SUM1 "sha256sum1")])
+
+(define_insn "*riscv_<sha256_op>_si"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+        (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+                   SHA256_OP))]
+  "TARGET_ZKNH && !TARGET_64BIT"
+  "<sha256_op>\t%0,%1"
   [(set_attr "type" "crypto")])
 
-(define_insn "riscv_sha256sum0_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")]
-                  UNSPEC_SHA_256_SUM0))]
-  "TARGET_ZKNH"
-  "sha256sum0\t%0,%1"
+(define_insn "riscv_<sha256_op>_di_extended"
+  [(set (match_operand:DI 0 "register_operand" "=r")
+        (sign_extend:DI
+             (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+                        SHA256_OP)))]
+  "TARGET_ZKNH && TARGET_64BIT"
+  "<sha256_op>\t%0,%1"
   [(set_attr "type" "crypto")])
 
-(define_insn "riscv_sha256sum1_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")]
-                  UNSPEC_SHA_256_SUM1))]
+(define_expand "riscv_<sha256_op>_si"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+        (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+                   SHA256_OP))]
   "TARGET_ZKNH"
-  "sha256sum1\t%0,%1"
+  {
+    if (TARGET_64BIT)
+      {
+        rtx t = gen_reg_rtx (DImode);
+        emit_insn (gen_riscv_<sha256_op>_di_extended (t, operands[1]));
+        t = gen_lowpart (SImode, t);
+        SUBREG_PROMOTED_VAR_P (t) = 1;
+        SUBREG_PROMOTED_SET (t, SRP_SIGNED);
+        emit_move_insn (operands[0], t);
+        DONE;
+      }
+  }
   [(set_attr "type" "crypto")])
 
 ;; ZKNH - SHA512
@@ -372,40 +383,88 @@
 
  ;; ZKSH
 
-(define_insn "riscv_sm3p0_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")]
-                  UNSPEC_SM3_P0))]
-  "TARGET_ZKSH"
-  "sm3p0\t%0,%1"
+(define_int_iterator SM3_OP [UNSPEC_SM3_P0 UNSPEC_SM3_P1])
+(define_int_attr sm3_op [(UNSPEC_SM3_P0 "sm3p0") (UNSPEC_SM3_P1 "sm3p1")])
+
+(define_insn "*riscv_<sm3_op>_si"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+        (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+                   SM3_OP))]
+  "TARGET_ZKSH && !TARGET_64BIT"
+  "<sm3_op>\t%0,%1"
   [(set_attr "type" "crypto")])
 
-(define_insn "riscv_sm3p1_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")]
-                  UNSPEC_SM3_P1))]
+(define_insn "riscv_<sm3_op>_di_extended"
+  [(set (match_operand:DI 0 "register_operand" "=r")
+        (sign_extend:DI
+             (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+                        SM3_OP)))]
+  "TARGET_ZKSH && TARGET_64BIT"
+  "<sm3_op>\t%0,%1"
+  [(set_attr "type" "crypto")])
+
+(define_expand "riscv_<sm3_op>_si"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+        (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+                   SM3_OP))]
   "TARGET_ZKSH"
-  "sm3p1\t%0,%1"
+  {
+    if (TARGET_64BIT)
+      {
+        rtx t = gen_reg_rtx (DImode);
+        emit_insn (gen_riscv_<sm3_op>_di_extended (t, operands[1]));
+        t = gen_lowpart (SImode, t);
+        SUBREG_PROMOTED_VAR_P (t) = 1;
+        SUBREG_PROMOTED_SET (t, SRP_SIGNED);
+        emit_move_insn (operands[0], t);
+        DONE;
+      }
+  }
   [(set_attr "type" "crypto")])
 
 ;; ZKSED
 
-(define_insn "riscv_sm4ed_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")
-                  (match_operand:X 2 "register_operand" "r")
-                  (match_operand:SI 3 "register_operand" "D03")]
-                  UNSPEC_SM4_ED))]
-  "TARGET_ZKSED"
-  "sm4ed\t%0,%1,%2,%3"
+(define_int_iterator SM4_OP [UNSPEC_SM4_ED UNSPEC_SM4_KS])
+(define_int_attr sm4_op [(UNSPEC_SM4_ED "sm4ed") (UNSPEC_SM4_KS "sm4ks")])
+
+(define_insn "*riscv_<sm4_op>_si"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+        (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+                   (match_operand:SI 2 "register_operand" "r")
+                   (match_operand:SI 3 "register_operand" "D03")]
+                   SM4_OP))]
+  "TARGET_ZKSED && !TARGET_64BIT"
+  "<sm4_op>\t%0,%1,%2,%3"
   [(set_attr "type" "crypto")])
 
-(define_insn "riscv_sm4ks_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")
-                  (match_operand:X 2 "register_operand" "r")
-                  (match_operand:SI 3 "register_operand" "D03")]
-                  UNSPEC_SM4_KS))]
+(define_insn "riscv_<sm4_op>_di_extended"
+  [(set (match_operand:DI 0 "register_operand" "=r")
+        (sign_extend:DI
+             (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+                        (match_operand:SI 2 "register_operand" "r")
+                        (match_operand:SI 3 "register_operand" "D03")]
+                        SM4_OP)))]
+  "TARGET_ZKSED && TARGET_64BIT"
+  "<sm4_op>\t%0,%1,%2,%3"
+  [(set_attr "type" "crypto")])
+
+(define_expand "riscv_<sm4_op>_si"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+        (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+                   (match_operand:SI 2 "register_operand" "r")
+                   (match_operand:SI 3 "register_operand" "D03")]
+                   SM4_OP))]
   "TARGET_ZKSED"
-  "sm4ks\t%0,%1,%2,%3"
+  {
+    if (TARGET_64BIT)
+      {
+        rtx t = gen_reg_rtx (DImode);
+        emit_insn (gen_riscv_<sm4_op>_di_extended (t, operands[1], operands[2], operands[3]));
+        t = gen_lowpart (SImode, t);
+        SUBREG_PROMOTED_VAR_P (t) = 1;
+        SUBREG_PROMOTED_SET (t, SRP_SIGNED);
+        emit_move_insn (operands[0], t);
+        DONE;
+      }
+  }
   [(set_attr "type" "crypto")])
diff --git a/gcc/config/riscv/riscv-builtins.cc b/gcc/config/riscv/riscv-builtins.cc
index f6b06b3c16ac..3fe3a89dcc25 100644
--- a/gcc/config/riscv/riscv-builtins.cc
+++ b/gcc/config/riscv/riscv-builtins.cc
@@ -112,12 +112,11 @@ AVAIL (crypto_zknd64, TARGET_ZKND && TARGET_64BIT)
 AVAIL (crypto_zkne32, TARGET_ZKNE && !TARGET_64BIT)
 AVAIL (crypto_zkne64, TARGET_ZKNE && TARGET_64BIT)
 AVAIL (crypto_zkne_or_zknd, (TARGET_ZKNE || TARGET_ZKND) && TARGET_64BIT)
+AVAIL (crypto_zknh, TARGET_ZKNH)
 AVAIL (crypto_zknh32, TARGET_ZKNH && !TARGET_64BIT)
 AVAIL (crypto_zknh64, TARGET_ZKNH && TARGET_64BIT)
-AVAIL (crypto_zksh32, TARGET_ZKSH && !TARGET_64BIT)
-AVAIL (crypto_zksh64, TARGET_ZKSH && TARGET_64BIT)
-AVAIL (crypto_zksed32, TARGET_ZKSED && !TARGET_64BIT)
-AVAIL (crypto_zksed64, TARGET_ZKSED && TARGET_64BIT)
+AVAIL (crypto_zksh, TARGET_ZKSH)
+AVAIL (crypto_zksed, TARGET_ZKSED)
 AVAIL (clmul_zbkc32_or_zbc32, (TARGET_ZBKC || TARGET_ZBC) && !TARGET_64BIT)
 AVAIL (clmul_zbkc64_or_zbc64, (TARGET_ZBKC || TARGET_ZBC) && TARGET_64BIT)
 AVAIL (clmulr_zbc32, TARGET_ZBC && !TARGET_64BIT)
diff --git a/gcc/config/riscv/riscv-ftypes.def b/gcc/config/riscv/riscv-ftypes.def
index 366861ce640e..33620c57ca06 100644
--- a/gcc/config/riscv/riscv-ftypes.def
+++ b/gcc/config/riscv/riscv-ftypes.def
@@ -41,4 +41,3 @@ DEF_RISCV_FTYPE (2, (UDI, USI, USI))
 DEF_RISCV_FTYPE (2, (UDI, UDI, USI))
 DEF_RISCV_FTYPE (2, (UDI, UDI, UDI))
 DEF_RISCV_FTYPE (3, (USI, USI, USI, USI))
-DEF_RISCV_FTYPE (3, (UDI, UDI, UDI, USI))
diff --git a/gcc/config/riscv/riscv-scalar-crypto.def b/gcc/config/riscv/riscv-scalar-crypto.def
index db86ec9fd78a..3db9ed4a03e5 100644
--- a/gcc/config/riscv/riscv-scalar-crypto.def
+++ b/gcc/config/riscv/riscv-scalar-crypto.def
@@ -54,14 +54,10 @@ DIRECT_BUILTIN (aes64es, RISCV_UDI_FTYPE_UDI_UDI, crypto_zkne64),
 DIRECT_BUILTIN (aes64esm, RISCV_UDI_FTYPE_UDI_UDI, crypto_zkne64),
 
 // ZKNH
-RISCV_BUILTIN (sha256sig0_si, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
-RISCV_BUILTIN (sha256sig0_di, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
-RISCV_BUILTIN (sha256sig1_si, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
-RISCV_BUILTIN (sha256sig1_di, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
-RISCV_BUILTIN (sha256sum0_si, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
-RISCV_BUILTIN (sha256sum0_di, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
-RISCV_BUILTIN (sha256sum1_si, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
-RISCV_BUILTIN (sha256sum1_di, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+RISCV_BUILTIN (sha256sig0_si, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh),
+RISCV_BUILTIN (sha256sig1_si, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh),
+RISCV_BUILTIN (sha256sum0_si, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh),
+RISCV_BUILTIN (sha256sum1_si, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh),
 
 DIRECT_BUILTIN (sha512sig0h, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
 DIRECT_BUILTIN (sha512sig0l, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
@@ -76,13 +72,9 @@ DIRECT_BUILTIN (sha512sum0, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
 DIRECT_BUILTIN (sha512sum1, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
 
 // ZKSH
-RISCV_BUILTIN (sm3p0_si, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zksh32),
-RISCV_BUILTIN (sm3p0_di, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zksh64),
-RISCV_BUILTIN (sm3p1_si, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zksh32),
-RISCV_BUILTIN (sm3p1_di, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zksh64),
+RISCV_BUILTIN (sm3p0_si, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zksh),
+RISCV_BUILTIN (sm3p1_si, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zksh),
 
 // ZKSED
-RISCV_BUILTIN (sm4ed_si, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI_USI, crypto_zksed32),
-RISCV_BUILTIN (sm4ed_di, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI_USI, crypto_zksed64),
-RISCV_BUILTIN (sm4ks_si, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI_USI, crypto_zksed32),
-RISCV_BUILTIN (sm4ks_di, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI_USI, crypto_zksed64),
+RISCV_BUILTIN (sm4ed_si, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI_USI, crypto_zksed),
+RISCV_BUILTIN (sm4ks_si, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI_USI, crypto_zksed),
diff --git a/gcc/testsuite/gcc.target/riscv/zknh-sha256-32.c b/gcc/testsuite/gcc.target/riscv/zknh-sha256-32.c
new file mode 100644
index 000000000000..c51b143a8a5c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zknh-sha256-32.c
@@ -0,0 +1,10 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv32gc_zknh -mabi=ilp32d" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+
+#include "zknh-sha256-64.c"
+
+/* { dg-final { scan-assembler-times "sha256sig0" 1 } } */
+/* { dg-final { scan-assembler-times "sha256sig1" 1 } } */
+/* { dg-final { scan-assembler-times "sha256sum0" 1 } } */
+/* { dg-final { scan-assembler-times "sha256sum1" 1 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zknh-sha256.c b/gcc/testsuite/gcc.target/riscv/zknh-sha256-64.c
similarity index 78%
rename from gcc/testsuite/gcc.target/riscv/zknh-sha256.c
rename to gcc/testsuite/gcc.target/riscv/zknh-sha256-64.c
index 952d611cd0b9..2ef37601e6fb 100644
--- a/gcc/testsuite/gcc.target/riscv/zknh-sha256.c
+++ b/gcc/testsuite/gcc.target/riscv/zknh-sha256-64.c
@@ -2,22 +2,22 @@
 /* { dg-options "-O2 -march=rv64gc_zknh -mabi=lp64" } */
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 
-unsigned long foo1(unsigned long rs1)
+unsigned int foo1(unsigned int rs1)
 {
     return __builtin_riscv_sha256sig0(rs1);
 }
 
-unsigned long foo2(unsigned long rs1)
+unsigned int foo2(unsigned int rs1)
 {
     return __builtin_riscv_sha256sig1(rs1);
 }
 
-unsigned long foo3(unsigned long rs1)
+unsigned int foo3(unsigned int rs1)
 {
     return __builtin_riscv_sha256sum0(rs1);
 }
 
-unsigned long foo4(unsigned long rs1)
+unsigned int foo4(unsigned int rs1)
 {
     return __builtin_riscv_sha256sum1(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zksed64.c b/gcc/testsuite/gcc.target/riscv/zksed64.c
index 3485adf9cd88..913e7be4e4d9 100644
--- a/gcc/testsuite/gcc.target/riscv/zksed64.c
+++ b/gcc/testsuite/gcc.target/riscv/zksed64.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-uint64_t foo1(uint64_t rs1, uint64_t rs2, unsigned bs)
+uint32_t foo1(uint32_t rs1, uint32_t rs2, unsigned bs)
 {
     return __builtin_riscv_sm4ks(rs1,rs2,bs);
 }
 
-uint64_t foo2(uint64_t rs1, uint64_t rs2, unsigned bs)
+uint32_t foo2(uint32_t rs1, uint32_t rs2, unsigned bs)
 {
     return __builtin_riscv_sm4ed(rs1,rs2,bs);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zksh64.c b/gcc/testsuite/gcc.target/riscv/zksh64.c
index bdd137872785..30bb1bdeeeb7 100644
--- a/gcc/testsuite/gcc.target/riscv/zksh64.c
+++ b/gcc/testsuite/gcc.target/riscv/zksh64.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-uint64_t foo1(uint64_t rs1)
+uint32_t foo1(uint32_t rs1)
 {
     return __builtin_riscv_sm3p0(rs1);
 }
 
-uint64_t foo2(uint64_t rs1)
+uint32_t foo2(uint32_t rs1)
 {
     return __builtin_riscv_sm3p1(rs1);
 }
-- 
2.42.0



* [PATCH 0/2] RISC-V: Change RISC-V bit manipulation / scalar crypto builtin types
  2023-09-07  2:17 [RFC PATCH 0/2] RISC-V: Change RISC-V bit manipulation / scalar crypto builtin types Tsukasa OI
                   ` (2 preceding siblings ...)
  2023-09-08  1:03 ` [RFC PATCH 0/1] RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t Tsukasa OI
@ 2023-09-12  1:28 ` Tsukasa OI
  2023-09-12  1:28   ` [PATCH 1/2] RISC-V: Make bit manipulation value / round number and shift amount types for builtins unsigned Tsukasa OI
  2023-09-12  1:28   ` [PATCH 2/2] RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t Tsukasa OI
  3 siblings, 2 replies; 12+ messages in thread
From: Tsukasa OI @ 2023-09-12  1:28 UTC (permalink / raw)
  To: Tsukasa OI, Kito Cheng, Palmer Dabbelt, Andrew Waterman,
	Jim Wilson, Jeff Law
  Cc: gcc-patches

Hello,

My research suggests that my previous RFC patches will not break any
real-world programs (as far as I could find), so I am submitting this
patch set (previously two RFC patch sets) for upstream contribution.

RFC PATCH 1 (combined to this):
<https://gcc.gnu.org/pipermail/gcc-patches/2023-September/629527.html>
RFC PATCH 2 (combined to this):
<https://gcc.gnu.org/pipermail/gcc-patches/2023-September/629644.html>

This patch set consists of two commits:

1.  Changing signed types to unsigned types on bit manipulation and scalar
    crypto intrinsics.
2.  Changing from uint64_t (changed from int64_t in PATCH 1/2) to
    uint32_t on the SHA-256, SM3 and SM4 intrinsics (all operate on
    32-bit integers and the upper 32 bits are not affected).

This is in sync with the following commits in LLVM (which will be a part
of LLVM 17):

1. 599421ae36c3
  ("[RISCV] Use unsigned instead of signed types for Zk* and Zb* builtins.")
2. a64b3e92c7cb
  ("[RISCV] Re-define sha256, Zksed, and Zksh intrinsics to use i32 types.")

Many RISC-V builtins operate on signed integer types but, in many cases,
they would be better as unsigned.

What follows is a slightly revised cover letter from the previous RFC
patch sets.

Sincerely,
Tsukasa



Many RISC-V builtins operate on signed integer types but, in many cases,
they would be better as unsigned.

There are a few reasons to do that in PATCH 1/2:

1.  Being more natural
    For bit manipulation operations, the direct input and the manipulated
    result should have an unsigned type.
    e.g. __builtin_bswap16
        Both input and output should be (and are) unsigned.
    e.g. __builtin_popcount
        The input should be (and is) unsigned.
        The output is a bit count and is in int (signed integer).
2.  In parity with LLVM (likely in version 17)
    LLVM made similar changes to this patch set in the commit 599421ae36c3
    ("[RISCV] Use unsigned instead of signed types for Zk* and Zb*
    builtins.") by Craig Topper.
    Note that the shift amount / round number argument types were also
    changed to unsigned in this LLVM commit; I did the same.
3.  Minimum compatibility breakage
    This change rarely causes warnings even if both -Wall and -Wextra are
    specified.

This is not completely compatible.  For instance, the type change is
observable with C++'s "auto" / "decltype" or through overload resolution.
But I consider this change in PATCH 1/2 acceptable.
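
As a rough illustration of the kind of incompatibility involved, here is
a plain-C sketch.  The functions old_builtin/new_builtin are hypothetical
stand-ins for a builtin whose type changed from long to unsigned long;
they are not real GCC builtins:

```c
#include <stdint.h>

/* Hypothetical stand-ins for a builtin before and after the type change;
   both return the same bit pattern, only the signedness differs.  */
static long old_builtin (long x) { return x; }
static unsigned long new_builtin (unsigned long x) { return x; }

/* Signed division truncates toward zero, so the signedness of the
   returned value is observable through arithmetic on it.  */
static long halve_old (long x) { return old_builtin (x) / 2; }
static unsigned long halve_new (unsigned long x) { return new_builtin (x) / 2; }
```

halve_old (-3) yields -1, while halve_new ((unsigned long) -3) yields a
huge positive value.  However, code that merely passes the result around
or masks and shifts it as a bit pattern (the common case for bit
manipulation and crypto builtin results) sees no difference, which is why
the breakage is rare.  In C++, the changed type is additionally visible
through auto, decltype, and overload resolution.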



There are also operations that only operate on 32-bit integers but
currently require XLEN-bit wide integers.  This applies to the intrinsics
for SHA-256, SM3 and SM4.

PATCH 2/2 changes this in parity with the LLVM commit a64b3e92c7cb ("[RISCV]
Re-define sha256, Zksed, and Zksh intrinsics to use i32 types.") by Craig
Topper and makes those intrinsics accept/return uint32_t even on RV64.

These instruction/expansion changes are based on the handling of 32-bit
instructions on RV64 (such as ADDW).

Before:
   riscv_<op>_si: For RV32, fully operate on uint32_t
   riscv_<op>_di: For RV64, fully operate on uint64_t
After:
  *riscv_<op>_si: For RV32, fully operate on uint32_t
   riscv_<op>_di_extended:
                  For RV64. Input is uint32_t and output is int64_t,
                  sign-extended from the int32_t result
                  (represents a part of <op> behavior).
   riscv_<op>_si: Common (fully operate on uint32_t).
                  On RV32, "expands" to *riscv_<op>_si.
                  On RV64, initially expands to riscv_<op>_di_extended and
                  extracts the lower 32 bits from the int64_t result.

I think this (not completely compatible) change is still acceptable.




Tsukasa OI (2):
  RISC-V: Make bit manipulation value / round number and shift amount
    types for builtins unsigned
  RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t

 gcc/config/riscv/crypto.md                    | 161 ++++++++++++------
 gcc/config/riscv/riscv-builtins.cc            |  14 +-
 gcc/config/riscv/riscv-cmo.def                |  16 +-
 gcc/config/riscv/riscv-ftypes.def             |  23 ++-
 gcc/config/riscv/riscv-scalar-crypto.def      |  96 +++++------
 gcc/testsuite/gcc.target/riscv/zbc32.c        |   6 +-
 gcc/testsuite/gcc.target/riscv/zbc64.c        |   6 +-
 gcc/testsuite/gcc.target/riscv/zbkb32.c       |  10 +-
 gcc/testsuite/gcc.target/riscv/zbkb64.c       |   8 +-
 gcc/testsuite/gcc.target/riscv/zbkc32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zbkc64.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zbkx32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zbkx64.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zknd32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zknd64.c       |  10 +-
 gcc/testsuite/gcc.target/riscv/zkne32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zkne64.c       |   8 +-
 .../gcc.target/riscv/zknh-sha256-32.c         |  10 ++
 .../riscv/{zknh-sha256.c => zknh-sha256-64.c} |   8 +-
 .../gcc.target/riscv/zknh-sha512-32.c         |  12 +-
 .../gcc.target/riscv/zknh-sha512-64.c         |   8 +-
 gcc/testsuite/gcc.target/riscv/zksed32.c      |   4 +-
 gcc/testsuite/gcc.target/riscv/zksed64.c      |   4 +-
 gcc/testsuite/gcc.target/riscv/zksh32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zksh64.c       |   4 +-
 25 files changed, 247 insertions(+), 189 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/zknh-sha256-32.c
 rename gcc/testsuite/gcc.target/riscv/{zknh-sha256.c => zknh-sha256-64.c} (79%)


base-commit: fb4b53d964b797e5f3380726175c95110c4ff9ff
-- 
2.42.0



* [PATCH 1/2] RISC-V: Make bit manipulation value / round number and shift amount types for builtins unsigned
  2023-09-12  1:28 ` [PATCH 0/2] RISC-V: Change RISC-V bit manipulation / scalar crypto builtin types Tsukasa OI
@ 2023-09-12  1:28   ` Tsukasa OI
  2023-09-17 15:58     ` Jeff Law
  2023-09-12  1:28   ` [PATCH 2/2] RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t Tsukasa OI
  1 sibling, 1 reply; 12+ messages in thread
From: Tsukasa OI @ 2023-09-12  1:28 UTC (permalink / raw)
  To: Tsukasa OI, Kito Cheng, Palmer Dabbelt, Andrew Waterman,
	Jim Wilson, Jeff Law
  Cc: gcc-patches

From: Tsukasa OI <research_trasio@irq.a4lg.com>

For bit manipulation operations, the input(s) and the manipulated output
should be unsigned, like other target-independent builtins such as
__builtin_bswap32 and __builtin_popcount.

Although this is not completely compatible with the previous definitions
(since the types change), most code will compile and run as before, without
even a warning under -Wall -Wextra.

For consistency with LLVM commit 599421ae36c3 ("[RISCV] Use unsigned
instead of signed types for Zk* and Zb* builtins."), the round number and
shift amount arguments of the scalar crypto builtins are also changed
to unsigned.

gcc/ChangeLog:

	* config/riscv/riscv-builtins.cc (RISCV_ATYPE_UQI): New for
	uint8_t.
	(RISCV_ATYPE_UHI): New for uint16_t.
	(RISCV_ATYPE_QI, RISCV_ATYPE_HI, RISCV_ATYPE_SI, RISCV_ATYPE_DI):
	Removed as no longer used.
	(RISCV_ATYPE_UDI): New for uint64_t.
	* config/riscv/riscv-cmo.def: Make the argument/return types
	unsigned for the (currently non-working) "zicbop_cbo_prefetchi"
	builtin and the working clmul bit manipulation builtins.
	* config/riscv/riscv-ftypes.def: Make bit manipulation, round
	number and shift amount types unsigned.
	* config/riscv/riscv-scalar-crypto.def: Ditto.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/zbc32.c: Change signed types to unsigned.
	* gcc.target/riscv/zbc64.c: Ditto.
	* gcc.target/riscv/zbkb32.c: Ditto.
	* gcc.target/riscv/zbkb64.c: Ditto.
	* gcc.target/riscv/zbkc32.c: Ditto.
	* gcc.target/riscv/zbkc64.c: Ditto.
	* gcc.target/riscv/zbkx32.c: Ditto.
	* gcc.target/riscv/zbkx64.c: Ditto.
	* gcc.target/riscv/zknd32.c: Ditto.
	* gcc.target/riscv/zknd64.c: Ditto.
	* gcc.target/riscv/zkne32.c: Ditto.
	* gcc.target/riscv/zkne64.c: Ditto.
	* gcc.target/riscv/zknh-sha256.c: Ditto.
	* gcc.target/riscv/zknh-sha512-32.c: Ditto.
	* gcc.target/riscv/zknh-sha512-64.c: Ditto.
	* gcc.target/riscv/zksed32.c: Ditto.
	* gcc.target/riscv/zksed64.c: Ditto.
	* gcc.target/riscv/zksh32.c: Ditto.
	* gcc.target/riscv/zksh64.c: Ditto.
---
 gcc/config/riscv/riscv-builtins.cc            |   7 +-
 gcc/config/riscv/riscv-cmo.def                |  16 +--
 gcc/config/riscv/riscv-ftypes.def             |  24 ++--
 gcc/config/riscv/riscv-scalar-crypto.def      | 104 +++++++++---------
 gcc/testsuite/gcc.target/riscv/zbc32.c        |   6 +-
 gcc/testsuite/gcc.target/riscv/zbc64.c        |   6 +-
 gcc/testsuite/gcc.target/riscv/zbkb32.c       |  10 +-
 gcc/testsuite/gcc.target/riscv/zbkb64.c       |   8 +-
 gcc/testsuite/gcc.target/riscv/zbkc32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zbkc64.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zbkx32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zbkx64.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zknd32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zknd64.c       |  10 +-
 gcc/testsuite/gcc.target/riscv/zkne32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zkne64.c       |   8 +-
 gcc/testsuite/gcc.target/riscv/zknh-sha256.c  |   8 +-
 .../gcc.target/riscv/zknh-sha512-32.c         |  12 +-
 .../gcc.target/riscv/zknh-sha512-64.c         |   8 +-
 gcc/testsuite/gcc.target/riscv/zksed32.c      |   4 +-
 gcc/testsuite/gcc.target/riscv/zksed64.c      |   4 +-
 gcc/testsuite/gcc.target/riscv/zksh32.c       |   4 +-
 gcc/testsuite/gcc.target/riscv/zksh64.c       |   4 +-
 23 files changed, 133 insertions(+), 134 deletions(-)

diff --git a/gcc/config/riscv/riscv-builtins.cc b/gcc/config/riscv/riscv-builtins.cc
index 8afe7b7e97d3..f6b06b3c16ac 100644
--- a/gcc/config/riscv/riscv-builtins.cc
+++ b/gcc/config/riscv/riscv-builtins.cc
@@ -155,11 +155,10 @@ AVAIL (hint_pause, (!0))
 
 /* Argument types.  */
 #define RISCV_ATYPE_VOID void_type_node
+#define RISCV_ATYPE_UQI unsigned_intQI_type_node
+#define RISCV_ATYPE_UHI unsigned_intHI_type_node
 #define RISCV_ATYPE_USI unsigned_intSI_type_node
-#define RISCV_ATYPE_QI intQI_type_node
-#define RISCV_ATYPE_HI intHI_type_node
-#define RISCV_ATYPE_SI intSI_type_node
-#define RISCV_ATYPE_DI intDI_type_node
+#define RISCV_ATYPE_UDI unsigned_intDI_type_node
 #define RISCV_ATYPE_VOID_PTR ptr_type_node
 
 /* RISCV_FTYPE_ATYPESN takes N RISCV_FTYPES-like type codes and lists
diff --git a/gcc/config/riscv/riscv-cmo.def b/gcc/config/riscv/riscv-cmo.def
index b92044dc6ff9..ff713b78e19e 100644
--- a/gcc/config/riscv/riscv-cmo.def
+++ b/gcc/config/riscv/riscv-cmo.def
@@ -13,15 +13,15 @@ RISCV_BUILTIN (zero_si, "zicboz_cbo_zero", RISCV_BUILTIN_DIRECT_NO_TARGET, RISCV
 RISCV_BUILTIN (zero_di, "zicboz_cbo_zero", RISCV_BUILTIN_DIRECT_NO_TARGET, RISCV_VOID_FTYPE_VOID_PTR, zero64),
 
 // zicbop
-RISCV_BUILTIN (prefetchi_si, "zicbop_cbo_prefetchi", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, prefetchi32),
-RISCV_BUILTIN (prefetchi_di, "zicbop_cbo_prefetchi", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, prefetchi64),
+RISCV_BUILTIN (prefetchi_si, "zicbop_cbo_prefetchi", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, prefetchi32),
+RISCV_BUILTIN (prefetchi_di, "zicbop_cbo_prefetchi", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, prefetchi64),
 
 // zbkc or zbc
-RISCV_BUILTIN (clmul_si, "clmul", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, clmul_zbkc32_or_zbc32),
-RISCV_BUILTIN (clmul_di, "clmul", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, clmul_zbkc64_or_zbc64),
-RISCV_BUILTIN (clmulh_si, "clmulh", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, clmul_zbkc32_or_zbc32),
-RISCV_BUILTIN (clmulh_di, "clmulh", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, clmul_zbkc64_or_zbc64),
+RISCV_BUILTIN (clmul_si, "clmul", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI, clmul_zbkc32_or_zbc32),
+RISCV_BUILTIN (clmul_di, "clmul", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI, clmul_zbkc64_or_zbc64),
+RISCV_BUILTIN (clmulh_si, "clmulh", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI, clmul_zbkc32_or_zbc32),
+RISCV_BUILTIN (clmulh_di, "clmulh", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI, clmul_zbkc64_or_zbc64),
 
 // zbc
-RISCV_BUILTIN (clmulr_si, "clmulr", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, clmulr_zbc32),
-RISCV_BUILTIN (clmulr_di, "clmulr", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, clmulr_zbc64),
+RISCV_BUILTIN (clmulr_si, "clmulr", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI, clmulr_zbc32),
+RISCV_BUILTIN (clmulr_di, "clmulr", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI, clmulr_zbc64),
diff --git a/gcc/config/riscv/riscv-ftypes.def b/gcc/config/riscv/riscv-ftypes.def
index 3b518195a29c..366861ce640e 100644
--- a/gcc/config/riscv/riscv-ftypes.def
+++ b/gcc/config/riscv/riscv-ftypes.def
@@ -30,15 +30,15 @@ DEF_RISCV_FTYPE (0, (USI))
 DEF_RISCV_FTYPE (0, (VOID))
 DEF_RISCV_FTYPE (1, (VOID, USI))
 DEF_RISCV_FTYPE (1, (VOID, VOID_PTR))
-DEF_RISCV_FTYPE (1, (SI, SI))
-DEF_RISCV_FTYPE (1, (DI, DI))
-DEF_RISCV_FTYPE (2, (SI, QI, QI))
-DEF_RISCV_FTYPE (2, (SI, HI, HI))
-DEF_RISCV_FTYPE (2, (SI, SI, SI))
-DEF_RISCV_FTYPE (2, (DI, QI, QI))
-DEF_RISCV_FTYPE (2, (DI, HI, HI))
-DEF_RISCV_FTYPE (2, (DI, SI, SI))
-DEF_RISCV_FTYPE (2, (DI, DI, SI))
-DEF_RISCV_FTYPE (2, (DI, DI, DI))
-DEF_RISCV_FTYPE (3, (SI, SI, SI, SI))
-DEF_RISCV_FTYPE (3, (DI, DI, DI, SI))
+DEF_RISCV_FTYPE (1, (USI, USI))
+DEF_RISCV_FTYPE (1, (UDI, UDI))
+DEF_RISCV_FTYPE (2, (USI, UQI, UQI))
+DEF_RISCV_FTYPE (2, (USI, UHI, UHI))
+DEF_RISCV_FTYPE (2, (USI, USI, USI))
+DEF_RISCV_FTYPE (2, (UDI, UQI, UQI))
+DEF_RISCV_FTYPE (2, (UDI, UHI, UHI))
+DEF_RISCV_FTYPE (2, (UDI, USI, USI))
+DEF_RISCV_FTYPE (2, (UDI, UDI, USI))
+DEF_RISCV_FTYPE (2, (UDI, UDI, UDI))
+DEF_RISCV_FTYPE (3, (USI, USI, USI, USI))
+DEF_RISCV_FTYPE (3, (UDI, UDI, UDI, USI))
diff --git a/gcc/config/riscv/riscv-scalar-crypto.def b/gcc/config/riscv/riscv-scalar-crypto.def
index c2caed5151db..db86ec9fd78a 100644
--- a/gcc/config/riscv/riscv-scalar-crypto.def
+++ b/gcc/config/riscv/riscv-scalar-crypto.def
@@ -18,71 +18,71 @@ along with GCC; see the file COPYING3.  If not see
 <http://www.gnu.org/licenses/>.  */
 
 // ZBKB
-RISCV_BUILTIN (pack_sihi, "pack", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_HI_HI, crypto_zbkb32),
-RISCV_BUILTIN (pack_disi, "pack", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_SI_SI, crypto_zbkb64),
+RISCV_BUILTIN (pack_sihi, "pack", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_UHI_UHI, crypto_zbkb32),
+RISCV_BUILTIN (pack_disi, "pack", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_USI_USI, crypto_zbkb64),
 
-RISCV_BUILTIN (packh_si, "packh", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_QI_QI, crypto_zbkb32),
-RISCV_BUILTIN (packh_di, "packh", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_QI_QI, crypto_zbkb64),
+RISCV_BUILTIN (packh_si, "packh", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_UQI_UQI, crypto_zbkb32),
+RISCV_BUILTIN (packh_di, "packh", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UQI_UQI, crypto_zbkb64),
 
-RISCV_BUILTIN (packw, "packw", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_HI_HI, crypto_zbkb64),
+RISCV_BUILTIN (packw, "packw", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UHI_UHI, crypto_zbkb64),
 
-RISCV_BUILTIN (zip, "zip", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zbkb32),
-RISCV_BUILTIN (unzip, "unzip", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zbkb32),
+RISCV_BUILTIN (zip, "zip", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zbkb32),
+RISCV_BUILTIN (unzip, "unzip", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zbkb32),
 
-RISCV_BUILTIN (brev8_si, "brev8", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zbkb32),
-RISCV_BUILTIN (brev8_di, "brev8", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zbkb64),
+RISCV_BUILTIN (brev8_si, "brev8", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zbkb32),
+RISCV_BUILTIN (brev8_di, "brev8", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zbkb64),
 
 // ZBKX
-RISCV_BUILTIN (xperm4_si, "xperm4", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, crypto_zbkx32),
-RISCV_BUILTIN (xperm4_di, "xperm4", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, crypto_zbkx64),
-RISCV_BUILTIN (xperm8_si, "xperm8", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI, crypto_zbkx32),
-RISCV_BUILTIN (xperm8_di, "xperm8", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI, crypto_zbkx64),
+RISCV_BUILTIN (xperm4_si, "xperm4", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI, crypto_zbkx32),
+RISCV_BUILTIN (xperm4_di, "xperm4", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI, crypto_zbkx64),
+RISCV_BUILTIN (xperm8_si, "xperm8", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI, crypto_zbkx32),
+RISCV_BUILTIN (xperm8_di, "xperm8", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI, crypto_zbkx64),
 
 // ZKND
-DIRECT_BUILTIN (aes32dsi, RISCV_SI_FTYPE_SI_SI_SI, crypto_zknd32),
-DIRECT_BUILTIN (aes32dsmi, RISCV_SI_FTYPE_SI_SI_SI, crypto_zknd32),
-DIRECT_BUILTIN (aes64ds, RISCV_DI_FTYPE_DI_DI, crypto_zknd64),
-DIRECT_BUILTIN (aes64dsm, RISCV_DI_FTYPE_DI_DI, crypto_zknd64),
-DIRECT_BUILTIN (aes64im, RISCV_DI_FTYPE_DI, crypto_zknd64),
-DIRECT_BUILTIN (aes64ks1i, RISCV_DI_FTYPE_DI_SI, crypto_zkne_or_zknd),
-DIRECT_BUILTIN (aes64ks2, RISCV_DI_FTYPE_DI_DI, crypto_zkne_or_zknd),
+DIRECT_BUILTIN (aes32dsi, RISCV_USI_FTYPE_USI_USI_USI, crypto_zknd32),
+DIRECT_BUILTIN (aes32dsmi, RISCV_USI_FTYPE_USI_USI_USI, crypto_zknd32),
+DIRECT_BUILTIN (aes64ds, RISCV_UDI_FTYPE_UDI_UDI, crypto_zknd64),
+DIRECT_BUILTIN (aes64dsm, RISCV_UDI_FTYPE_UDI_UDI, crypto_zknd64),
+DIRECT_BUILTIN (aes64im, RISCV_UDI_FTYPE_UDI, crypto_zknd64),
+DIRECT_BUILTIN (aes64ks1i, RISCV_UDI_FTYPE_UDI_USI, crypto_zkne_or_zknd),
+DIRECT_BUILTIN (aes64ks2, RISCV_UDI_FTYPE_UDI_UDI, crypto_zkne_or_zknd),
 
 // ZKNE
-DIRECT_BUILTIN (aes32esi, RISCV_SI_FTYPE_SI_SI_SI, crypto_zkne32),
-DIRECT_BUILTIN (aes32esmi, RISCV_SI_FTYPE_SI_SI_SI, crypto_zkne32),
-DIRECT_BUILTIN (aes64es, RISCV_DI_FTYPE_DI_DI, crypto_zkne64),
-DIRECT_BUILTIN (aes64esm, RISCV_DI_FTYPE_DI_DI, crypto_zkne64),
+DIRECT_BUILTIN (aes32esi, RISCV_USI_FTYPE_USI_USI_USI, crypto_zkne32),
+DIRECT_BUILTIN (aes32esmi, RISCV_USI_FTYPE_USI_USI_USI, crypto_zkne32),
+DIRECT_BUILTIN (aes64es, RISCV_UDI_FTYPE_UDI_UDI, crypto_zkne64),
+DIRECT_BUILTIN (aes64esm, RISCV_UDI_FTYPE_UDI_UDI, crypto_zkne64),
 
 // ZKNH
-RISCV_BUILTIN (sha256sig0_si, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zknh32),
-RISCV_BUILTIN (sha256sig0_di, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zknh64),
-RISCV_BUILTIN (sha256sig1_si, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zknh32),
-RISCV_BUILTIN (sha256sig1_di, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zknh64),
-RISCV_BUILTIN (sha256sum0_si, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zknh32),
-RISCV_BUILTIN (sha256sum0_di, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zknh64),
-RISCV_BUILTIN (sha256sum1_si, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zknh32),
-RISCV_BUILTIN (sha256sum1_di, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zknh64),
-
-DIRECT_BUILTIN (sha512sig0h, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
-DIRECT_BUILTIN (sha512sig0l, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
-DIRECT_BUILTIN (sha512sig1h, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
-DIRECT_BUILTIN (sha512sig1l, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
-DIRECT_BUILTIN (sha512sum0r, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
-DIRECT_BUILTIN (sha512sum1r, RISCV_SI_FTYPE_SI_SI, crypto_zknh32),
-
-DIRECT_BUILTIN (sha512sig0, RISCV_DI_FTYPE_DI, crypto_zknh64),
-DIRECT_BUILTIN (sha512sig1, RISCV_DI_FTYPE_DI, crypto_zknh64),
-DIRECT_BUILTIN (sha512sum0, RISCV_DI_FTYPE_DI, crypto_zknh64),
-DIRECT_BUILTIN (sha512sum1, RISCV_DI_FTYPE_DI, crypto_zknh64),
+RISCV_BUILTIN (sha256sig0_si, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
+RISCV_BUILTIN (sha256sig0_di, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+RISCV_BUILTIN (sha256sig1_si, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
+RISCV_BUILTIN (sha256sig1_di, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+RISCV_BUILTIN (sha256sum0_si, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
+RISCV_BUILTIN (sha256sum0_di, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+RISCV_BUILTIN (sha256sum1_si, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
+RISCV_BUILTIN (sha256sum1_di, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+
+DIRECT_BUILTIN (sha512sig0h, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sig0l, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sig1h, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sig1l, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sum0r, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
+DIRECT_BUILTIN (sha512sum1r, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
+
+DIRECT_BUILTIN (sha512sig0, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+DIRECT_BUILTIN (sha512sig1, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+DIRECT_BUILTIN (sha512sum0, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+DIRECT_BUILTIN (sha512sum1, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
 
 // ZKSH
-RISCV_BUILTIN (sm3p0_si, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zksh32),
-RISCV_BUILTIN (sm3p0_di, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zksh64),
-RISCV_BUILTIN (sm3p1_si, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI, crypto_zksh32),
-RISCV_BUILTIN (sm3p1_di, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI, crypto_zksh64),
+RISCV_BUILTIN (sm3p0_si, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zksh32),
+RISCV_BUILTIN (sm3p0_di, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zksh64),
+RISCV_BUILTIN (sm3p1_si, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zksh32),
+RISCV_BUILTIN (sm3p1_di, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zksh64),
 
 // ZKSED
-RISCV_BUILTIN (sm4ed_si, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI_SI, crypto_zksed32),
-RISCV_BUILTIN (sm4ed_di, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI_SI, crypto_zksed64),
-RISCV_BUILTIN (sm4ks_si, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_SI_FTYPE_SI_SI_SI, crypto_zksed32),
-RISCV_BUILTIN (sm4ks_di, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_DI_FTYPE_DI_DI_SI, crypto_zksed64),
+RISCV_BUILTIN (sm4ed_si, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI_USI, crypto_zksed32),
+RISCV_BUILTIN (sm4ed_di, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI_USI, crypto_zksed64),
+RISCV_BUILTIN (sm4ks_si, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI_USI, crypto_zksed32),
+RISCV_BUILTIN (sm4ks_di, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI_USI, crypto_zksed64),
diff --git a/gcc/testsuite/gcc.target/riscv/zbc32.c b/gcc/testsuite/gcc.target/riscv/zbc32.c
index 08705c4a687e..f3fb2238f7f4 100644
--- a/gcc/testsuite/gcc.target/riscv/zbc32.c
+++ b/gcc/testsuite/gcc.target/riscv/zbc32.c
@@ -3,17 +3,17 @@
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1, int32_t rs2)
+uint32_t foo1(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_clmul(rs1, rs2);
 }
 
-int32_t foo2(int32_t rs1, int32_t rs2)
+uint32_t foo2(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_clmulh(rs1, rs2);
 }
 
-int32_t foo3(int32_t rs1, int32_t rs2)
+uint32_t foo3(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_clmulr(rs1, rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbc64.c b/gcc/testsuite/gcc.target/riscv/zbc64.c
index a19f42b2883f..841a0aa7847d 100644
--- a/gcc/testsuite/gcc.target/riscv/zbc64.c
+++ b/gcc/testsuite/gcc.target/riscv/zbc64.c
@@ -3,17 +3,17 @@
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1, int64_t rs2)
+uint64_t foo1(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_clmul(rs1, rs2);
 }
 
-int64_t foo2(int64_t rs1, int64_t rs2)
+uint64_t foo2(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_clmulh(rs1, rs2);
 }
 
-int64_t foo3(int64_t rs1, int64_t rs2)
+uint64_t foo3(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_clmulr(rs1, rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbkb32.c b/gcc/testsuite/gcc.target/riscv/zbkb32.c
index dd45b8b9dc72..b2e442dc49d8 100644
--- a/gcc/testsuite/gcc.target/riscv/zbkb32.c
+++ b/gcc/testsuite/gcc.target/riscv/zbkb32.c
@@ -4,27 +4,27 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo1(int16_t rs1, int16_t rs2)
+uint32_t foo1(uint16_t rs1, uint16_t rs2)
 {
     return __builtin_riscv_pack(rs1, rs2);
 }
 
-int32_t foo2(int8_t rs1, int8_t rs2)
+uint32_t foo2(uint8_t rs1, uint8_t rs2)
 {
     return __builtin_riscv_packh(rs1, rs2);
 }
 
-int32_t foo3(int32_t rs1)
+uint32_t foo3(uint32_t rs1)
 {
     return __builtin_riscv_brev8(rs1);
 }
 
-int32_t foo4(int32_t rs1)
+uint32_t foo4(uint32_t rs1)
 {
     return __builtin_riscv_zip(rs1);
 }
 
-int32_t foo5(int32_t rs1)
+uint32_t foo5(uint32_t rs1)
 {
     return __builtin_riscv_unzip(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbkb64.c b/gcc/testsuite/gcc.target/riscv/zbkb64.c
index 960a2ae30ed6..08ac9c2a9f00 100644
--- a/gcc/testsuite/gcc.target/riscv/zbkb64.c
+++ b/gcc/testsuite/gcc.target/riscv/zbkb64.c
@@ -3,22 +3,22 @@
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 #include <stdint-gcc.h>
 
-int64_t foo1(int32_t rs1, int32_t rs2)
+uint64_t foo1(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_pack(rs1, rs2);
 }
 
-int64_t foo2(int8_t rs1, int8_t rs2)
+uint64_t foo2(uint8_t rs1, uint8_t rs2)
 {
     return __builtin_riscv_packh(rs1, rs2);
 }
 
-int64_t foo3(int16_t rs1, int16_t rs2)
+uint64_t foo3(uint16_t rs1, uint16_t rs2)
 {
     return __builtin_riscv_packw(rs1, rs2);
 }
 
-int64_t foo4(int64_t rs1, int64_t rs2)
+uint64_t foo4(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_brev8(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbkc32.c b/gcc/testsuite/gcc.target/riscv/zbkc32.c
index a8e29200250b..29f0d624a7d7 100644
--- a/gcc/testsuite/gcc.target/riscv/zbkc32.c
+++ b/gcc/testsuite/gcc.target/riscv/zbkc32.c
@@ -3,12 +3,12 @@
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1, int32_t rs2)
+uint32_t foo1(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_clmul(rs1, rs2);
 }
 
-int32_t foo2(int32_t rs1, int32_t rs2)
+uint32_t foo2(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_clmulh(rs1, rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbkc64.c b/gcc/testsuite/gcc.target/riscv/zbkc64.c
index 728f8baf099d..53e6ac215ed3 100644
--- a/gcc/testsuite/gcc.target/riscv/zbkc64.c
+++ b/gcc/testsuite/gcc.target/riscv/zbkc64.c
@@ -3,12 +3,12 @@
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1, int64_t rs2)
+uint64_t foo1(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_clmul(rs1, rs2);
 }
 
-int64_t foo2(int64_t rs1, int64_t rs2)
+uint64_t foo2(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_clmulh(rs1, rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbkx32.c b/gcc/testsuite/gcc.target/riscv/zbkx32.c
index bd95524f548b..b8b822a7c499 100644
--- a/gcc/testsuite/gcc.target/riscv/zbkx32.c
+++ b/gcc/testsuite/gcc.target/riscv/zbkx32.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo3(int32_t rs1, int32_t rs2)
+uint32_t foo3(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_xperm8(rs1, rs2);
 }
 
-int32_t foo4(int32_t rs1, int32_t rs2)
+uint32_t foo4(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_xperm4(rs1, rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zbkx64.c b/gcc/testsuite/gcc.target/riscv/zbkx64.c
index 2a04a94b86c4..732436701b33 100644
--- a/gcc/testsuite/gcc.target/riscv/zbkx64.c
+++ b/gcc/testsuite/gcc.target/riscv/zbkx64.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1, int64_t rs2)
+uint64_t foo1(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_xperm8(rs1, rs2);
 }
 
-int64_t foo2(int64_t rs1, int64_t rs2)
+uint64_t foo2(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_xperm4(rs1, rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zknd32.c b/gcc/testsuite/gcc.target/riscv/zknd32.c
index 5fcc66da9015..e60c027e0911 100644
--- a/gcc/testsuite/gcc.target/riscv/zknd32.c
+++ b/gcc/testsuite/gcc.target/riscv/zknd32.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1, int32_t rs2, int bs)
+uint32_t foo1(uint32_t rs1, uint32_t rs2, int bs)
 {
     return __builtin_riscv_aes32dsi(rs1,rs2,bs);
 }
 
-int32_t foo2(int32_t rs1, int32_t rs2, int bs)
+uint32_t foo2(uint32_t rs1, uint32_t rs2, int bs)
 {
     return __builtin_riscv_aes32dsmi(rs1,rs2,bs);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zknd64.c b/gcc/testsuite/gcc.target/riscv/zknd64.c
index b1dff98f7e21..910b91c6ed88 100644
--- a/gcc/testsuite/gcc.target/riscv/zknd64.c
+++ b/gcc/testsuite/gcc.target/riscv/zknd64.c
@@ -4,27 +4,27 @@
 
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1, int64_t rs2)
+uint64_t foo1(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_aes64ds(rs1,rs2);
 }
 
-int64_t foo2(int64_t rs1, int64_t rs2)
+uint64_t foo2(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_aes64dsm(rs1,rs2);
 }
 
-int64_t foo3(int64_t rs1, int rnum)
+uint64_t foo3(uint64_t rs1, unsigned rnum)
 {
     return __builtin_riscv_aes64ks1i(rs1,rnum);
 }
 
-int64_t foo4(int64_t rs1, int64_t rs2)
+uint64_t foo4(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_aes64ks2(rs1,rs2);
 }
 
-int64_t foo5(int64_t rs1)
+uint64_t foo5(uint64_t rs1)
 {
     return __builtin_riscv_aes64im(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zkne32.c b/gcc/testsuite/gcc.target/riscv/zkne32.c
index c131c9a6bbb1..252e9ffa43b3 100644
--- a/gcc/testsuite/gcc.target/riscv/zkne32.c
+++ b/gcc/testsuite/gcc.target/riscv/zkne32.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1, int32_t rs2, int bs)
+uint32_t foo1(uint32_t rs1, uint32_t rs2, unsigned bs)
 {
     return __builtin_riscv_aes32esi(rs1, rs2, bs);
 }
 
-int32_t foo2(int32_t rs1, int32_t rs2, int bs)
+uint32_t foo2(uint32_t rs1, uint32_t rs2, unsigned bs)
 {
     return __builtin_riscv_aes32esmi(rs1, rs2, bs);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zkne64.c b/gcc/testsuite/gcc.target/riscv/zkne64.c
index 7d82b5a5d411..b25f6b5c29ac 100644
--- a/gcc/testsuite/gcc.target/riscv/zkne64.c
+++ b/gcc/testsuite/gcc.target/riscv/zkne64.c
@@ -4,22 +4,22 @@
 
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1, int64_t rs2)
+uint64_t foo1(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_aes64es(rs1,rs2);
 }
 
-int64_t foo2(int64_t rs1, int64_t rs2)
+uint64_t foo2(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_aes64esm(rs1,rs2);
 }
 
-int64_t foo3(int64_t rs1, int rnum)
+uint64_t foo3(uint64_t rs1, unsigned rnum)
 {
     return __builtin_riscv_aes64ks1i(rs1,rnum);
 }
 
-int64_t foo4(int64_t rs1, int64_t rs2)
+uint64_t foo4(uint64_t rs1, uint64_t rs2)
 {
     return __builtin_riscv_aes64ks2(rs1,rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zknh-sha256.c b/gcc/testsuite/gcc.target/riscv/zknh-sha256.c
index 54329aa6af2e..952d611cd0b9 100644
--- a/gcc/testsuite/gcc.target/riscv/zknh-sha256.c
+++ b/gcc/testsuite/gcc.target/riscv/zknh-sha256.c
@@ -2,22 +2,22 @@
 /* { dg-options "-O2 -march=rv64gc_zknh -mabi=lp64" } */
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 
-long foo1(long rs1)
+unsigned long foo1(unsigned long rs1)
 {
     return __builtin_riscv_sha256sig0(rs1);
 }
 
-long foo2(long rs1)
+unsigned long foo2(unsigned long rs1)
 {
     return __builtin_riscv_sha256sig1(rs1);
 }
 
-long foo3(long rs1)
+unsigned long foo3(unsigned long rs1)
 {
     return __builtin_riscv_sha256sum0(rs1);
 }
 
-long foo4(long rs1)
+unsigned long foo4(unsigned long rs1)
 {
     return __builtin_riscv_sha256sum1(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zknh-sha512-32.c b/gcc/testsuite/gcc.target/riscv/zknh-sha512-32.c
index 4ebc470f8ab7..f2bcae36a1f2 100644
--- a/gcc/testsuite/gcc.target/riscv/zknh-sha512-32.c
+++ b/gcc/testsuite/gcc.target/riscv/zknh-sha512-32.c
@@ -4,32 +4,32 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1, int32_t rs2)
+uint32_t foo1(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_sha512sig0h(rs1,rs2);
 }
 
-int32_t foo2(int32_t rs1, int32_t rs2)
+uint32_t foo2(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_sha512sig0l(rs1,rs2);
 }
 
-int32_t foo3(int32_t rs1, int32_t rs2)
+uint32_t foo3(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_sha512sig1h(rs1,rs2);
 }
 
-int32_t foo4(int32_t rs1, int32_t rs2)
+uint32_t foo4(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_sha512sig1l(rs1,rs2);
 }
 
-int32_t foo5(int32_t rs1, int32_t rs2)
+uint32_t foo5(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_sha512sum0r(rs1,rs2);
 }
 
-int32_t foo6(int32_t rs1, int32_t rs2)
+uint32_t foo6(uint32_t rs1, uint32_t rs2)
 {
     return __builtin_riscv_sha512sum1r(rs1,rs2);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zknh-sha512-64.c b/gcc/testsuite/gcc.target/riscv/zknh-sha512-64.c
index 0fb5c75b9ce6..4f248575e66e 100644
--- a/gcc/testsuite/gcc.target/riscv/zknh-sha512-64.c
+++ b/gcc/testsuite/gcc.target/riscv/zknh-sha512-64.c
@@ -4,22 +4,22 @@
 
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1)
+uint64_t foo1(uint64_t rs1)
 {
     return __builtin_riscv_sha512sig0(rs1);
 }
 
-int64_t foo2(int64_t rs1)
+uint64_t foo2(uint64_t rs1)
 {
     return __builtin_riscv_sha512sig1(rs1);
 }
 
-int64_t foo3(int64_t rs1)
+uint64_t foo3(uint64_t rs1)
 {
     return __builtin_riscv_sha512sum0(rs1);
 }
 
-int64_t foo4(int64_t rs1)
+uint64_t foo4(uint64_t rs1)
 {
     return __builtin_riscv_sha512sum1(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zksed32.c b/gcc/testsuite/gcc.target/riscv/zksed32.c
index 9548d007cb22..7df04147e05c 100644
--- a/gcc/testsuite/gcc.target/riscv/zksed32.c
+++ b/gcc/testsuite/gcc.target/riscv/zksed32.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1, int32_t rs2, int bs)
+uint32_t foo1(uint32_t rs1, uint32_t rs2, unsigned bs)
 {
     return __builtin_riscv_sm4ks(rs1,rs2,bs);
 }
 
-int32_t foo2(int32_t rs1, int32_t rs2, int bs)
+uint32_t foo2(uint32_t rs1, uint32_t rs2, unsigned bs)
 {
     return __builtin_riscv_sm4ed(rs1,rs2,bs);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zksed64.c b/gcc/testsuite/gcc.target/riscv/zksed64.c
index 190a654151db..3485adf9cd88 100644
--- a/gcc/testsuite/gcc.target/riscv/zksed64.c
+++ b/gcc/testsuite/gcc.target/riscv/zksed64.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1, int64_t rs2, int bs)
+uint64_t foo1(uint64_t rs1, uint64_t rs2, unsigned bs)
 {
     return __builtin_riscv_sm4ks(rs1,rs2,bs);
 }
 
-int64_t foo2(int64_t rs1, int64_t rs2, int bs)
+uint64_t foo2(uint64_t rs1, uint64_t rs2, unsigned bs)
 {
     return __builtin_riscv_sm4ed(rs1,rs2,bs);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zksh32.c b/gcc/testsuite/gcc.target/riscv/zksh32.c
index 50370b58b7a9..20513f986f88 100644
--- a/gcc/testsuite/gcc.target/riscv/zksh32.c
+++ b/gcc/testsuite/gcc.target/riscv/zksh32.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int32_t foo1(int32_t rs1)
+uint32_t foo1(uint32_t rs1)
 {
     return __builtin_riscv_sm3p0(rs1);
 }
 
-int32_t foo2(int32_t rs1)
+uint32_t foo2(uint32_t rs1)
 {
     return __builtin_riscv_sm3p1(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zksh64.c b/gcc/testsuite/gcc.target/riscv/zksh64.c
index 69847f3df359..bdd137872785 100644
--- a/gcc/testsuite/gcc.target/riscv/zksh64.c
+++ b/gcc/testsuite/gcc.target/riscv/zksh64.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-int64_t foo1(int64_t rs1)
+uint64_t foo1(uint64_t rs1)
 {
     return __builtin_riscv_sm3p0(rs1);
 }
 
-int64_t foo2(int64_t rs1)
+uint64_t foo2(uint64_t rs1)
 {
     return __builtin_riscv_sm3p1(rs1);
 }
-- 
2.42.0


^ permalink raw reply	[flat|nested] 12+ messages in thread

* [PATCH 2/2] RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t
  2023-09-12  1:28 ` [PATCH 0/2] RISC-V: Change RISC-V bit manipulation / scalar crypto builtin types Tsukasa OI
  2023-09-12  1:28   ` [PATCH 1/2] RISC-V: Make bit manipulation value / round number and shift amount types for builtins unsigned Tsukasa OI
@ 2023-09-12  1:28   ` Tsukasa OI
  2023-09-12  2:44     ` Kito Cheng
  1 sibling, 1 reply; 12+ messages in thread
From: Tsukasa OI @ 2023-09-12  1:28 UTC (permalink / raw)
  To: Tsukasa OI, Kito Cheng, Palmer Dabbelt, Andrew Waterman,
	Jim Wilson, Jeff Law
  Cc: gcc-patches

From: Tsukasa OI <research_trasio@irq.a4lg.com>

This is in parity with the LLVM commit a64b3e92c7cb ("[RISCV] Re-define
sha256, Zksed, and Zksh intrinsics to use i32 types.").

SHA-256, SM3 and SM4 instructions operate on 32-bit integers, and the
upper 32 bits have no effect on RV64 (the output is sign-extended from
the original 32-bit value).  In that sense, making those intrinsics
operate only on uint32_t is much more natural than making them operate
on XLEN-bit-wide integers.

This commit reworks the instructions and expansions, modeling them on
the handling of 32-bit instructions on RV64 (such as ADDW).

Before:
   riscv_<op>_si: For RV32, fully operate on uint32_t
   riscv_<op>_di: For RV64, fully operate on uint64_t
After:
  *riscv_<op>_si: For RV32, fully operate on uint32_t
   riscv_<op>_di_extended:
                  For RV64.  Input is uint32_t and output is int64_t,
                  sign-extended from the int32_t result
                  (represents a part of <op> behavior).
   riscv_<op>_si: Common (fully operate on uint32_t).
                  On RV32, "expands" to *riscv_<op>_si.
                  On RV64, initially expands to riscv_<op>_di_extended *and*
                  extracts the lower 32 bits from the int64_t result.

This commit also refines the definitions of the SHA-256, SM3 and SM4
intrinsics.

gcc/ChangeLog:

	* config/riscv/crypto.md (riscv_sha256sig0_<mode>,
	riscv_sha256sig1_<mode>, riscv_sha256sum0_<mode>,
	riscv_sha256sum1_<mode>, riscv_sm3p0_<mode>, riscv_sm3p1_<mode>,
	riscv_sm4ed_<mode>, riscv_sm4ks_<mode>): Remove and replace with
	new insn/expansions.
	(SHA256_OP, SM3_OP, SM4_OP): New iterators.
	(sha256_op, sm3_op, sm4_op): New attributes for iteration.
	(*riscv_<sha256_op>_si): New raw instruction for RV32.
	(*riscv_<sm3_op>_si): Ditto.
	(*riscv_<sm4_op>_si): Ditto.
	(riscv_<sha256_op>_di_extended): New base instruction for RV64.
	(riscv_<sm3_op>_di_extended): Ditto.
	(riscv_<sm4_op>_di_extended): Ditto.
	(riscv_<sha256_op>_si): New common instruction expansion.
	(riscv_<sm3_op>_si): Ditto.
	(riscv_<sm4_op>_si): Ditto.
	* config/riscv/riscv-builtins.cc: Add availability "crypto_zknh",
	"crypto_zksh" and "crypto_zksed".  Remove availability
	"crypto_zksh{32,64}" and "crypto_zksed{32,64}".
	* config/riscv/riscv-ftypes.def: Remove unused function type.
	* config/riscv/riscv-scalar-crypto.def: Make SHA-256, SM3 and SM4
	intrinsics operate on uint32_t.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/zknh-sha256.c: Moved to...
	* gcc.target/riscv/zknh-sha256-64.c: ...here.  Test RV64.
	* gcc.target/riscv/zknh-sha256-32.c: New test for RV32.
	* gcc.target/riscv/zksh64.c: Change the type.
	* gcc.target/riscv/zksed64.c: Ditto.
---
 gcc/config/riscv/crypto.md                    | 161 ++++++++++++------
 gcc/config/riscv/riscv-builtins.cc            |   7 +-
 gcc/config/riscv/riscv-ftypes.def             |   1 -
 gcc/config/riscv/riscv-scalar-crypto.def      |  24 +--
 .../gcc.target/riscv/zknh-sha256-32.c         |  10 ++
 .../riscv/{zknh-sha256.c => zknh-sha256-64.c} |   8 +-
 gcc/testsuite/gcc.target/riscv/zksed64.c      |   4 +-
 gcc/testsuite/gcc.target/riscv/zksh64.c       |   4 +-
 8 files changed, 139 insertions(+), 80 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/zknh-sha256-32.c
 rename gcc/testsuite/gcc.target/riscv/{zknh-sha256.c => zknh-sha256-64.c} (78%)

diff --git a/gcc/config/riscv/crypto.md b/gcc/config/riscv/crypto.md
index e4b7f0190dfe..03a1d03397d9 100644
--- a/gcc/config/riscv/crypto.md
+++ b/gcc/config/riscv/crypto.md
@@ -250,36 +250,47 @@
 
 ;; ZKNH - SHA256
 
-(define_insn "riscv_sha256sig0_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")]
-                  UNSPEC_SHA_256_SIG0))]
-  "TARGET_ZKNH"
-  "sha256sig0\t%0,%1"
-  [(set_attr "type" "crypto")])
-
-(define_insn "riscv_sha256sig1_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")]
-                  UNSPEC_SHA_256_SIG1))]
-  "TARGET_ZKNH"
-  "sha256sig1\t%0,%1"
+(define_int_iterator SHA256_OP [
+  UNSPEC_SHA_256_SIG0 UNSPEC_SHA_256_SIG1
+  UNSPEC_SHA_256_SUM0 UNSPEC_SHA_256_SUM1])
+(define_int_attr sha256_op [
+  (UNSPEC_SHA_256_SIG0 "sha256sig0") (UNSPEC_SHA_256_SIG1 "sha256sig1")
+  (UNSPEC_SHA_256_SUM0 "sha256sum0") (UNSPEC_SHA_256_SUM1 "sha256sum1")])
+
+(define_insn "*riscv_<sha256_op>_si"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+        (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+                   SHA256_OP))]
+  "TARGET_ZKNH && !TARGET_64BIT"
+  "<sha256_op>\t%0,%1"
   [(set_attr "type" "crypto")])
 
-(define_insn "riscv_sha256sum0_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")]
-                  UNSPEC_SHA_256_SUM0))]
-  "TARGET_ZKNH"
-  "sha256sum0\t%0,%1"
+(define_insn "riscv_<sha256_op>_di_extended"
+  [(set (match_operand:DI 0 "register_operand" "=r")
+        (sign_extend:DI
+             (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+                        SHA256_OP)))]
+  "TARGET_ZKNH && TARGET_64BIT"
+  "<sha256_op>\t%0,%1"
   [(set_attr "type" "crypto")])
 
-(define_insn "riscv_sha256sum1_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")]
-                  UNSPEC_SHA_256_SUM1))]
+(define_expand "riscv_<sha256_op>_si"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+        (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+                   SHA256_OP))]
   "TARGET_ZKNH"
-  "sha256sum1\t%0,%1"
+  {
+    if (TARGET_64BIT)
+      {
+        rtx t = gen_reg_rtx (DImode);
+        emit_insn (gen_riscv_<sha256_op>_di_extended (t, operands[1]));
+        t = gen_lowpart (SImode, t);
+        SUBREG_PROMOTED_VAR_P (t) = 1;
+        SUBREG_PROMOTED_SET (t, SRP_SIGNED);
+        emit_move_insn (operands[0], t);
+        DONE;
+      }
+  }
   [(set_attr "type" "crypto")])
 
 ;; ZKNH - SHA512
@@ -372,40 +383,88 @@
 
  ;; ZKSH
 
-(define_insn "riscv_sm3p0_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")]
-                  UNSPEC_SM3_P0))]
-  "TARGET_ZKSH"
-  "sm3p0\t%0,%1"
+(define_int_iterator SM3_OP [UNSPEC_SM3_P0 UNSPEC_SM3_P1])
+(define_int_attr sm3_op [(UNSPEC_SM3_P0 "sm3p0") (UNSPEC_SM3_P1 "sm3p1")])
+
+(define_insn "*riscv_<sm3_op>_si"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+        (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+                   SM3_OP))]
+  "TARGET_ZKSH && !TARGET_64BIT"
+  "<sm3_op>\t%0,%1"
   [(set_attr "type" "crypto")])
 
-(define_insn "riscv_sm3p1_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")]
-                  UNSPEC_SM3_P1))]
+(define_insn "riscv_<sm3_op>_di_extended"
+  [(set (match_operand:DI 0 "register_operand" "=r")
+        (sign_extend:DI
+             (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+                        SM3_OP)))]
+  "TARGET_ZKSH && TARGET_64BIT"
+  "<sm3_op>\t%0,%1"
+  [(set_attr "type" "crypto")])
+
+(define_expand "riscv_<sm3_op>_si"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+        (unspec:SI [(match_operand:SI 1 "register_operand" "r")]
+                   SM3_OP))]
   "TARGET_ZKSH"
-  "sm3p1\t%0,%1"
+  {
+    if (TARGET_64BIT)
+      {
+        rtx t = gen_reg_rtx (DImode);
+        emit_insn (gen_riscv_<sm3_op>_di_extended (t, operands[1]));
+        t = gen_lowpart (SImode, t);
+        SUBREG_PROMOTED_VAR_P (t) = 1;
+        SUBREG_PROMOTED_SET (t, SRP_SIGNED);
+        emit_move_insn (operands[0], t);
+        DONE;
+      }
+  }
   [(set_attr "type" "crypto")])
 
 ;; ZKSED
 
-(define_insn "riscv_sm4ed_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")
-                  (match_operand:X 2 "register_operand" "r")
-                  (match_operand:SI 3 "register_operand" "D03")]
-                  UNSPEC_SM4_ED))]
-  "TARGET_ZKSED"
-  "sm4ed\t%0,%1,%2,%3"
+(define_int_iterator SM4_OP [UNSPEC_SM4_ED UNSPEC_SM4_KS])
+(define_int_attr sm4_op [(UNSPEC_SM4_ED "sm4ed") (UNSPEC_SM4_KS "sm4ks")])
+
+(define_insn "*riscv_<sm4_op>_si"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+        (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+                   (match_operand:SI 2 "register_operand" "r")
+                   (match_operand:SI 3 "register_operand" "D03")]
+                   SM4_OP))]
+  "TARGET_ZKSED && !TARGET_64BIT"
+  "<sm4_op>\t%0,%1,%2,%3"
   [(set_attr "type" "crypto")])
 
-(define_insn "riscv_sm4ks_<mode>"
-  [(set (match_operand:X 0 "register_operand" "=r")
-        (unspec:X [(match_operand:X 1 "register_operand" "r")
-                  (match_operand:X 2 "register_operand" "r")
-                  (match_operand:SI 3 "register_operand" "D03")]
-                  UNSPEC_SM4_KS))]
+(define_insn "riscv_<sm4_op>_di_extended"
+  [(set (match_operand:DI 0 "register_operand" "=r")
+        (sign_extend:DI
+             (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+                        (match_operand:SI 2 "register_operand" "r")
+                        (match_operand:SI 3 "register_operand" "D03")]
+                        SM4_OP)))]
+  "TARGET_ZKSED && TARGET_64BIT"
+  "<sm4_op>\t%0,%1,%2,%3"
+  [(set_attr "type" "crypto")])
+
+(define_expand "riscv_<sm4_op>_si"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+        (unspec:SI [(match_operand:SI 1 "register_operand" "r")
+                   (match_operand:SI 2 "register_operand" "r")
+                   (match_operand:SI 3 "register_operand" "D03")]
+                   SM4_OP))]
   "TARGET_ZKSED"
-  "sm4ks\t%0,%1,%2,%3"
+  {
+    if (TARGET_64BIT)
+      {
+        rtx t = gen_reg_rtx (DImode);
+        emit_insn (gen_riscv_<sm4_op>_di_extended (t, operands[1], operands[2], operands[3]));
+        t = gen_lowpart (SImode, t);
+        SUBREG_PROMOTED_VAR_P (t) = 1;
+        SUBREG_PROMOTED_SET (t, SRP_SIGNED);
+        emit_move_insn (operands[0], t);
+        DONE;
+      }
+  }
   [(set_attr "type" "crypto")])
diff --git a/gcc/config/riscv/riscv-builtins.cc b/gcc/config/riscv/riscv-builtins.cc
index f6b06b3c16ac..3fe3a89dcc25 100644
--- a/gcc/config/riscv/riscv-builtins.cc
+++ b/gcc/config/riscv/riscv-builtins.cc
@@ -112,12 +112,11 @@ AVAIL (crypto_zknd64, TARGET_ZKND && TARGET_64BIT)
 AVAIL (crypto_zkne32, TARGET_ZKNE && !TARGET_64BIT)
 AVAIL (crypto_zkne64, TARGET_ZKNE && TARGET_64BIT)
 AVAIL (crypto_zkne_or_zknd, (TARGET_ZKNE || TARGET_ZKND) && TARGET_64BIT)
+AVAIL (crypto_zknh, TARGET_ZKNH)
 AVAIL (crypto_zknh32, TARGET_ZKNH && !TARGET_64BIT)
 AVAIL (crypto_zknh64, TARGET_ZKNH && TARGET_64BIT)
-AVAIL (crypto_zksh32, TARGET_ZKSH && !TARGET_64BIT)
-AVAIL (crypto_zksh64, TARGET_ZKSH && TARGET_64BIT)
-AVAIL (crypto_zksed32, TARGET_ZKSED && !TARGET_64BIT)
-AVAIL (crypto_zksed64, TARGET_ZKSED && TARGET_64BIT)
+AVAIL (crypto_zksh, TARGET_ZKSH)
+AVAIL (crypto_zksed, TARGET_ZKSED)
 AVAIL (clmul_zbkc32_or_zbc32, (TARGET_ZBKC || TARGET_ZBC) && !TARGET_64BIT)
 AVAIL (clmul_zbkc64_or_zbc64, (TARGET_ZBKC || TARGET_ZBC) && TARGET_64BIT)
 AVAIL (clmulr_zbc32, TARGET_ZBC && !TARGET_64BIT)
diff --git a/gcc/config/riscv/riscv-ftypes.def b/gcc/config/riscv/riscv-ftypes.def
index 366861ce640e..33620c57ca06 100644
--- a/gcc/config/riscv/riscv-ftypes.def
+++ b/gcc/config/riscv/riscv-ftypes.def
@@ -41,4 +41,3 @@ DEF_RISCV_FTYPE (2, (UDI, USI, USI))
 DEF_RISCV_FTYPE (2, (UDI, UDI, USI))
 DEF_RISCV_FTYPE (2, (UDI, UDI, UDI))
 DEF_RISCV_FTYPE (3, (USI, USI, USI, USI))
-DEF_RISCV_FTYPE (3, (UDI, UDI, UDI, USI))
diff --git a/gcc/config/riscv/riscv-scalar-crypto.def b/gcc/config/riscv/riscv-scalar-crypto.def
index db86ec9fd78a..3db9ed4a03e5 100644
--- a/gcc/config/riscv/riscv-scalar-crypto.def
+++ b/gcc/config/riscv/riscv-scalar-crypto.def
@@ -54,14 +54,10 @@ DIRECT_BUILTIN (aes64es, RISCV_UDI_FTYPE_UDI_UDI, crypto_zkne64),
 DIRECT_BUILTIN (aes64esm, RISCV_UDI_FTYPE_UDI_UDI, crypto_zkne64),
 
 // ZKNH
-RISCV_BUILTIN (sha256sig0_si, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
-RISCV_BUILTIN (sha256sig0_di, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
-RISCV_BUILTIN (sha256sig1_si, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
-RISCV_BUILTIN (sha256sig1_di, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
-RISCV_BUILTIN (sha256sum0_si, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
-RISCV_BUILTIN (sha256sum0_di, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
-RISCV_BUILTIN (sha256sum1_si, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh32),
-RISCV_BUILTIN (sha256sum1_di, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
+RISCV_BUILTIN (sha256sig0_si, "sha256sig0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh),
+RISCV_BUILTIN (sha256sig1_si, "sha256sig1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh),
+RISCV_BUILTIN (sha256sum0_si, "sha256sum0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh),
+RISCV_BUILTIN (sha256sum1_si, "sha256sum1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zknh),
 
 DIRECT_BUILTIN (sha512sig0h, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
 DIRECT_BUILTIN (sha512sig0l, RISCV_USI_FTYPE_USI_USI, crypto_zknh32),
@@ -76,13 +72,9 @@ DIRECT_BUILTIN (sha512sum0, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
 DIRECT_BUILTIN (sha512sum1, RISCV_UDI_FTYPE_UDI, crypto_zknh64),
 
 // ZKSH
-RISCV_BUILTIN (sm3p0_si, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zksh32),
-RISCV_BUILTIN (sm3p0_di, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zksh64),
-RISCV_BUILTIN (sm3p1_si, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zksh32),
-RISCV_BUILTIN (sm3p1_di, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI, crypto_zksh64),
+RISCV_BUILTIN (sm3p0_si, "sm3p0", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zksh),
+RISCV_BUILTIN (sm3p1_si, "sm3p1", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI, crypto_zksh),
 
 // ZKSED
-RISCV_BUILTIN (sm4ed_si, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI_USI, crypto_zksed32),
-RISCV_BUILTIN (sm4ed_di, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI_USI, crypto_zksed64),
-RISCV_BUILTIN (sm4ks_si, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI_USI, crypto_zksed32),
-RISCV_BUILTIN (sm4ks_di, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_UDI_FTYPE_UDI_UDI_USI, crypto_zksed64),
+RISCV_BUILTIN (sm4ed_si, "sm4ed", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI_USI, crypto_zksed),
+RISCV_BUILTIN (sm4ks_si, "sm4ks", RISCV_BUILTIN_DIRECT, RISCV_USI_FTYPE_USI_USI_USI, crypto_zksed),
diff --git a/gcc/testsuite/gcc.target/riscv/zknh-sha256-32.c b/gcc/testsuite/gcc.target/riscv/zknh-sha256-32.c
new file mode 100644
index 000000000000..c51b143a8a5c
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/zknh-sha256-32.c
@@ -0,0 +1,10 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -march=rv32gc_zknh -mabi=ilp32d" } */
+/* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
+
+#include "zknh-sha256-64.c"
+
+/* { dg-final { scan-assembler-times "sha256sig0" 1 } } */
+/* { dg-final { scan-assembler-times "sha256sig1" 1 } } */
+/* { dg-final { scan-assembler-times "sha256sum0" 1 } } */
+/* { dg-final { scan-assembler-times "sha256sum1" 1 } } */
diff --git a/gcc/testsuite/gcc.target/riscv/zknh-sha256.c b/gcc/testsuite/gcc.target/riscv/zknh-sha256-64.c
similarity index 78%
rename from gcc/testsuite/gcc.target/riscv/zknh-sha256.c
rename to gcc/testsuite/gcc.target/riscv/zknh-sha256-64.c
index 952d611cd0b9..2ef37601e6fb 100644
--- a/gcc/testsuite/gcc.target/riscv/zknh-sha256.c
+++ b/gcc/testsuite/gcc.target/riscv/zknh-sha256-64.c
@@ -2,22 +2,22 @@
 /* { dg-options "-O2 -march=rv64gc_zknh -mabi=lp64" } */
 /* { dg-skip-if "" { *-*-* } { "-g" "-flto"} } */
 
-unsigned long foo1(unsigned long rs1)
+unsigned int foo1(unsigned int rs1)
 {
     return __builtin_riscv_sha256sig0(rs1);
 }
 
-unsigned long foo2(unsigned long rs1)
+unsigned int foo2(unsigned int rs1)
 {
     return __builtin_riscv_sha256sig1(rs1);
 }
 
-unsigned long foo3(unsigned long rs1)
+unsigned int foo3(unsigned int rs1)
 {
     return __builtin_riscv_sha256sum0(rs1);
 }
 
-unsigned long foo4(unsigned long rs1)
+unsigned int foo4(unsigned int rs1)
 {
     return __builtin_riscv_sha256sum1(rs1);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zksed64.c b/gcc/testsuite/gcc.target/riscv/zksed64.c
index 3485adf9cd88..913e7be4e4d9 100644
--- a/gcc/testsuite/gcc.target/riscv/zksed64.c
+++ b/gcc/testsuite/gcc.target/riscv/zksed64.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-uint64_t foo1(uint64_t rs1, uint64_t rs2, unsigned bs)
+uint32_t foo1(uint32_t rs1, uint32_t rs2, unsigned bs)
 {
     return __builtin_riscv_sm4ks(rs1,rs2,bs);
 }
 
-uint64_t foo2(uint64_t rs1, uint64_t rs2, unsigned bs)
+uint32_t foo2(uint32_t rs1, uint32_t rs2, unsigned bs)
 {
     return __builtin_riscv_sm4ed(rs1,rs2,bs);
 }
diff --git a/gcc/testsuite/gcc.target/riscv/zksh64.c b/gcc/testsuite/gcc.target/riscv/zksh64.c
index bdd137872785..30bb1bdeeeb7 100644
--- a/gcc/testsuite/gcc.target/riscv/zksh64.c
+++ b/gcc/testsuite/gcc.target/riscv/zksh64.c
@@ -4,12 +4,12 @@
 
 #include <stdint-gcc.h>
 
-uint64_t foo1(uint64_t rs1)
+uint32_t foo1(uint32_t rs1)
 {
     return __builtin_riscv_sm3p0(rs1);
 }
 
-uint64_t foo2(uint64_t rs1)
+uint32_t foo2(uint32_t rs1)
 {
     return __builtin_riscv_sm3p1(rs1);
 }
-- 
2.42.0


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/2] RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t
  2023-09-12  1:28   ` [PATCH 2/2] RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t Tsukasa OI
@ 2023-09-12  2:44     ` Kito Cheng
  2023-09-12  3:20       ` Tsukasa OI
  0 siblings, 1 reply; 12+ messages in thread
From: Kito Cheng @ 2023-09-12  2:44 UTC (permalink / raw)
  To: Tsukasa OI
  Cc: Palmer Dabbelt, Andrew Waterman, Jim Wilson, Jeff Law, gcc-patches

LGTM.  That LLVM and GCC are inconsistent for those intrinsic APIs is
really unfortunate, so I really appreciate making those APIs align :)
Do you have a plan to add riscv_crypto.h after updating/fixing all the builtins?

>  {
>      return __builtin_riscv_sha256sum1(rs1);
>  }
> diff --git a/gcc/testsuite/gcc.target/riscv/zksed64.c b/gcc/testsuite/gcc.target/riscv/zksed64.c
> index 3485adf9cd88..913e7be4e4d9 100644
> --- a/gcc/testsuite/gcc.target/riscv/zksed64.c
> +++ b/gcc/testsuite/gcc.target/riscv/zksed64.c
> @@ -4,12 +4,12 @@
>
>  #include <stdint-gcc.h>
>
> -uint64_t foo1(uint64_t rs1, uint64_t rs2, unsigned bs)
> +uint32_t foo1(uint32_t rs1, uint32_t rs2, unsigned bs)
>  {
>      return __builtin_riscv_sm4ks(rs1,rs2,bs);
>  }
>
> -uint64_t foo2(uint64_t rs1, uint64_t rs2, unsigned bs)
> +uint32_t foo2(uint32_t rs1, uint32_t rs2, unsigned bs)
>  {
>      return __builtin_riscv_sm4ed(rs1,rs2,bs);
>  }
> diff --git a/gcc/testsuite/gcc.target/riscv/zksh64.c b/gcc/testsuite/gcc.target/riscv/zksh64.c
> index bdd137872785..30bb1bdeeeb7 100644
> --- a/gcc/testsuite/gcc.target/riscv/zksh64.c
> +++ b/gcc/testsuite/gcc.target/riscv/zksh64.c
> @@ -4,12 +4,12 @@
>
>  #include <stdint-gcc.h>
>
> -uint64_t foo1(uint64_t rs1)
> +uint32_t foo1(uint32_t rs1)
>  {
>      return __builtin_riscv_sm3p0(rs1);
>  }
>
> -uint64_t foo2(uint64_t rs1)
> +uint32_t foo2(uint32_t rs1)
>  {
>      return __builtin_riscv_sm3p1(rs1);
>  }
> --
> 2.42.0
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH 2/2] RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t
  2023-09-12  2:44     ` Kito Cheng
@ 2023-09-12  3:20       ` Tsukasa OI
  0 siblings, 0 replies; 12+ messages in thread
From: Tsukasa OI @ 2023-09-12  3:20 UTC (permalink / raw)
  To: Kito Cheng; +Cc: GCC Patches

On 2023/09/12 11:44, Kito Cheng wrote:
> LGTM.  That LLVM and GCC are inconsistent for those intrinsics' APIs
> is really unfortunate... so I really appreciate making those APIs align :)

I take it you mean LGTM for this patch set as a whole (given the context).
If so, I will commit the patch set later.

> And did you have plan to add riscv_crypto.h after updating/fixing all builtin?

Like riscv_vector.h?  Not yet; I think we first need to discuss what to
include in it, for instance in the Toolchain SIG.

Additionally, we need to discuss what to do with the XLEN-specific
builtins that exist only in LLVM (my idea is to add XLEN-specific
builtins like LLVM's, but not to remove the XLEN-independent ones).

Thanks,
Tsukasa

> Excerpt from the latest clang/include/clang/Basic/BuiltinsRISCV.def:
> 
> // Zbb extension
> TARGET_BUILTIN(__builtin_riscv_orc_b_32, "UiUi", "nc", "zbb")
> TARGET_BUILTIN(__builtin_riscv_orc_b_64, "UWiUWi", "nc", "zbb,64bit")
> TARGET_BUILTIN(__builtin_riscv_clz_32, "UiUi", "nc", "zbb|xtheadbb")
> TARGET_BUILTIN(__builtin_riscv_clz_64, "UiUWi", "nc", "zbb|xtheadbb,64bit")
> TARGET_BUILTIN(__builtin_riscv_ctz_32, "UiUi", "nc", "zbb")
> TARGET_BUILTIN(__builtin_riscv_ctz_64, "UiUWi", "nc", "zbb,64bit")
> 
> // Zbc or Zbkc extension
> TARGET_BUILTIN(__builtin_riscv_clmul_32, "UiUiUi", "nc", "zbc|zbkc")
> TARGET_BUILTIN(__builtin_riscv_clmul_64, "UWiUWiUWi", "nc", "zbc|zbkc,64bit")
> TARGET_BUILTIN(__builtin_riscv_clmulh_32, "UiUiUi", "nc", "zbc|zbkc,32bit")
> TARGET_BUILTIN(__builtin_riscv_clmulh_64, "UWiUWiUWi", "nc", "zbc|zbkc,64bit")
> TARGET_BUILTIN(__builtin_riscv_clmulr_32, "UiUiUi", "nc", "zbc,32bit")
> TARGET_BUILTIN(__builtin_riscv_clmulr_64, "UWiUWiUWi", "nc", "zbc,64bit")
> 
> // Zbkx
> TARGET_BUILTIN(__builtin_riscv_xperm4_32, "UiUiUi", "nc", "zbkx,32bit")
> TARGET_BUILTIN(__builtin_riscv_xperm4_64, "UWiUWiUWi", "nc", "zbkx,64bit")
> TARGET_BUILTIN(__builtin_riscv_xperm8_32, "UiUiUi", "nc", "zbkx,32bit")
> TARGET_BUILTIN(__builtin_riscv_xperm8_64, "UWiUWiUWi", "nc", "zbkx,64bit")
> 
> // Zbkb extension
> TARGET_BUILTIN(__builtin_riscv_brev8_32, "UiUi", "nc", "zbkb")
> TARGET_BUILTIN(__builtin_riscv_brev8_64, "UWiUWi", "nc", "zbkb,64bit")
> TARGET_BUILTIN(__builtin_riscv_zip_32, "UiUi", "nc", "zbkb,32bit")
> TARGET_BUILTIN(__builtin_riscv_unzip_32, "UiUi", "nc", "zbkb,32bit")


* Re: [PATCH 1/2] RISC-V: Make bit manipulation value / round number and shift amount types for builtins unsigned
  2023-09-12  1:28   ` [PATCH 1/2] RISC-V: Make bit manipulation value / round number and shift amount types for builtins unsigned Tsukasa OI
@ 2023-09-17 15:58     ` Jeff Law
  0 siblings, 0 replies; 12+ messages in thread
From: Jeff Law @ 2023-09-17 15:58 UTC (permalink / raw)
  To: Tsukasa OI, Kito Cheng, Palmer Dabbelt, Andrew Waterman, Jim Wilson
  Cc: gcc-patches



On 9/11/23 19:28, Tsukasa OI wrote:
> From: Tsukasa OI <research_trasio@irq.a4lg.com>
> 
> For bit manipulation operations, the input(s) and the manipulated output
> are better represented as unsigned, like other target-independent
> builtins such as __builtin_bswap32 and __builtin_popcount.
> 
> Although this is not completely compatible with before (as the types
> change), most code will continue to work normally, without new warnings
> (even with -Wall -Wextra).
> 
> To be consistent with the LLVM commit 599421ae36c3 ("[RISCV] Use unsigned
> instead of signed types for Zk* and Zb* builtins."), the round number and
> shift amount types on the scalar crypto instructions are also changed
> to unsigned.
> 
> gcc/ChangeLog:
> 
> 	* config/riscv/riscv-builtins.cc (RISCV_ATYPE_UQI): New for uint8_t.
> 	(RISCV_ATYPE_UHI): New for uint16_t.
> 	(RISCV_ATYPE_QI, RISCV_ATYPE_HI, RISCV_ATYPE_SI, RISCV_ATYPE_DI):
> 	Removed as no longer used.
> 	(RISCV_ATYPE_UDI): New for uint64_t.
> 	* config/riscv/riscv-cmo.def: Make the argument/return types of the
> 	non-working "zicbop_cbo_prefetchi" and the working bit manipulation
> 	clmul builtins unsigned.
> 	* config/riscv/riscv-ftypes.def: Make bit manipulation, round
> 	number and shift amount types unsigned.
> 	* config/riscv/riscv-scalar-crypto.def: Ditto.
> 
> gcc/testsuite/ChangeLog:
> 
> 	* gcc.target/riscv/zbc32.c: Change signed types to unsigned.
> 	* gcc.target/riscv/zbc64.c: Ditto.
> 	* gcc.target/riscv/zbkb32.c: Ditto.
> 	* gcc.target/riscv/zbkb64.c: Ditto.
> 	* gcc.target/riscv/zbkc32.c: Ditto.
> 	* gcc.target/riscv/zbkc64.c: Ditto.
> 	* gcc.target/riscv/zbkx32.c: Ditto.
> 	* gcc.target/riscv/zbkx64.c: Ditto.
> 	* gcc.target/riscv/zknd32.c: Ditto.
> 	* gcc.target/riscv/zknd64.c: Ditto.
> 	* gcc.target/riscv/zkne32.c: Ditto.
> 	* gcc.target/riscv/zkne64.c: Ditto.
> 	* gcc.target/riscv/zknh-sha256.c: Ditto.
> 	* gcc.target/riscv/zknh-sha512-32.c: Ditto.
> 	* gcc.target/riscv/zknh-sha512-64.c: Ditto.
> 	* gcc.target/riscv/zksed32.c: Ditto.
> 	* gcc.target/riscv/zksed64.c: Ditto.
> 	* gcc.target/riscv/zksh32.c: Ditto.
> 	* gcc.target/riscv/zksh64.c: Ditto.
OK
Jeff
> ---


* Re: [RFC PATCH 2/2] RISC-V: Update testsuite for type-changed builtins
  2023-09-07  2:17 ` [RFC PATCH 2/2] RISC-V: Update testsuite for type-changed builtins Tsukasa OI
@ 2023-09-17 15:58   ` Jeff Law
  0 siblings, 0 replies; 12+ messages in thread
From: Jeff Law @ 2023-09-17 15:58 UTC (permalink / raw)
  To: Tsukasa OI, Kito Cheng, Palmer Dabbelt, Andrew Waterman, Jim Wilson
  Cc: gcc-patches



On 9/6/23 20:17, Tsukasa OI wrote:
> From: Tsukasa OI <research_trasio@irq.a4lg.com>
> 
> This commit updates the types used with these builtins in the testsuite.
> 
> Even without this commit there are no test failures, but the types are
> changed to match the builtins' new prototypes and avoid confusion.
> 
> gcc/testsuite/ChangeLog:
> 
> 	* gcc.target/riscv/zbc32.c: Change signed types to unsigned.
> 	* gcc.target/riscv/zbc64.c: Ditto.
> 	* gcc.target/riscv/zbkb32.c: Ditto.
> 	* gcc.target/riscv/zbkb64.c: Ditto.
> 	* gcc.target/riscv/zbkc32.c: Ditto.
> 	* gcc.target/riscv/zbkc64.c: Ditto.
> 	* gcc.target/riscv/zbkx32.c: Ditto.
> 	* gcc.target/riscv/zbkx64.c: Ditto.
> 	* gcc.target/riscv/zknd32.c: Ditto.
> 	* gcc.target/riscv/zknd64.c: Ditto.
> 	* gcc.target/riscv/zkne32.c: Ditto.
> 	* gcc.target/riscv/zkne64.c: Ditto.
> 	* gcc.target/riscv/zknh-sha256.c: Ditto.
> 	* gcc.target/riscv/zknh-sha512-32.c: Ditto.
> 	* gcc.target/riscv/zknh-sha512-64.c: Ditto.
> 	* gcc.target/riscv/zksed32.c: Ditto.
> 	* gcc.target/riscv/zksed64.c: Ditto.
> 	* gcc.target/riscv/zksh32.c: Ditto.
> 	* gcc.target/riscv/zksh64.c: Ditto.
OK
jeff


end of thread, other threads:[~2023-09-17 15:59 UTC | newest]

Thread overview: 12+ messages
2023-09-07  2:17 [RFC PATCH 0/2] RISC-V: Change RISC-V bit manipulation / scalar crypto builtin types Tsukasa OI
2023-09-07  2:17 ` [RFC PATCH 1/2] RISC-V: Make bit manipulation value / round number and shift amount types for builtins unsigned Tsukasa OI
2023-09-07  2:17 ` [RFC PATCH 2/2] RISC-V: Update testsuite for type-changed builtins Tsukasa OI
2023-09-17 15:58   ` Jeff Law
2023-09-08  1:03 ` [RFC PATCH 0/1] RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t Tsukasa OI
2023-09-08  1:03   ` [RFC PATCH 1/1] " Tsukasa OI
2023-09-12  1:28 ` [PATCH 0/2] RISC-V: Change RISC-V bit manipulation / scalar crypto builtin types Tsukasa OI
2023-09-12  1:28   ` [PATCH 1/2] RISC-V: Make bit manipulation value / round number and shift amount types for builtins unsigned Tsukasa OI
2023-09-17 15:58     ` Jeff Law
2023-09-12  1:28   ` [PATCH 2/2] RISC-V: Make SHA-256, SM3 and SM4 builtins operate on uint32_t Tsukasa OI
2023-09-12  2:44     ` Kito Cheng
2023-09-12  3:20       ` Tsukasa OI
