From: Adhemerval Zanella <adhemerval.zanella@linaro.org>
To: libc-alpha@sourceware.org
Subject: [PATCH v2 6/8] x86: Add AVX2 optimized chacha20
Date: Mon, 18 Apr 2022 09:02:01 -0300
Message-Id: <20220418120203.3185943-7-adhemerval.zanella@linaro.org>
In-Reply-To: <20220418120203.3185943-1-adhemerval.zanella@linaro.org>
References: <20220418120203.3185943-1-adhemerval.zanella@linaro.org>

It adds a vectorized ChaCha20 implementation based on libgcrypt's
cipher/chacha20-amd64-avx2.S.  It is used only if AVX2 is supported
and enabled by the architecture.  As with the generic implementation,
the final step that XORs the keystream with the input is omitted.
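For reference, each iteration of the AVX2 kernel produces eight 64-byte
keystream blocks at once.  A minimal portable C sketch of the
single-block function it parallelizes is below; the names
(chacha20_block, QR, rotl32) are illustrative only and not part of this
patch, and little-endian serialization is assumed as on x86_64:

  #include <stdint.h>
  #include <string.h>

  static inline uint32_t rotl32 (uint32_t v, int c)
  {
    return (v << c) | (v >> (32 - c));
  }

  /* One quarter round; the QUARTERROUND2 macro in the AVX2 code runs
     this on two register sets, eight blocks per 32-bit lane.  */
  #define QR(a, b, c, d)                              \
    do                                                \
      {                                               \
        a += b; d ^= a; d = rotl32 (d, 16);           \
        c += d; b ^= c; b = rotl32 (b, 12);           \
        a += b; d ^= a; d = rotl32 (d, 8);            \
        c += d; b ^= c; b = rotl32 (b, 7);            \
      }                                               \
    while (0)

  /* Produce one 64-byte keystream block from the 16-word STATE and
     advance the 64-bit block counter held in words 12/13.  As in the
     patch, the keystream is written directly to DST and the final XOR
     with the caller's input is omitted.  */
  static void
  chacha20_block (uint32_t state[16], unsigned char dst[64])
  {
    uint32_t x[16];
    memcpy (x, state, sizeof x);
    for (int i = 0; i < 20; i += 2)
      {
        /* Column round.  */
        QR (x[0], x[4], x[8],  x[12]);
        QR (x[1], x[5], x[9],  x[13]);
        QR (x[2], x[6], x[10], x[14]);
        QR (x[3], x[7], x[11], x[15]);
        /* Diagonal round.  */
        QR (x[0], x[5], x[10], x[15]);
        QR (x[1], x[6], x[11], x[12]);
        QR (x[2], x[7], x[8],  x[13]);
        QR (x[3], x[4], x[9],  x[14]);
      }
    for (int i = 0; i < 16; i++)
      {
        uint32_t v = x[i] + state[i];
        memcpy (dst + 4 * i, &v, 4);   /* Little-endian on x86_64.  */
      }
    if (++state[12] == 0)
      ++state[13];
  }

The assembly evaluates this function for eight blocks in parallel,
keeping each state word in one ymm register and transposing the
results into consecutive 64-byte output blocks.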
On a Ryzen 9 5900X it shows the following improvements (using
formatted bench-arc4random data):

SSE2:
Function                                   MB/s
--------------------------------------------------
arc4random [single-thread]                 601.86
arc4random_buf(16) [single-thread]         880.53
arc4random_buf(32) [single-thread]         1182.72
arc4random_buf(48) [single-thread]         1307.22
arc4random_buf(64) [single-thread]         1381.01
arc4random_buf(80) [single-thread]         1399.29
arc4random_buf(96) [single-thread]         1445.00
arc4random_buf(112) [single-thread]        1465.05
arc4random_buf(128) [single-thread]        1497.05
--------------------------------------------------

AVX2:
Function                                   MB/s
--------------------------------------------------
arc4random [single-thread]                 744.84
arc4random_buf(16) [single-thread]         1298.39
arc4random_buf(32) [single-thread]         1969.08
arc4random_buf(48) [single-thread]         2327.11
arc4random_buf(64) [single-thread]         2549.97
arc4random_buf(80) [single-thread]         2631.39
arc4random_buf(96) [single-thread]         2802.66
arc4random_buf(112) [single-thread]        2897.42
arc4random_buf(128) [single-thread]        2976.55
--------------------------------------------------

Checked on x86_64-linux-gnu.
---
 LICENSES                       |   5 +-
 sysdeps/x86_64/Makefile        |   1 +
 sysdeps/x86_64/chacha20-avx2.S | 313 +++++++++++++++++++++++++++++++++
 sysdeps/x86_64/chacha20_arch.h |  18 +-
 4 files changed, 331 insertions(+), 6 deletions(-)
 create mode 100644 sysdeps/x86_64/chacha20-avx2.S

diff --git a/LICENSES b/LICENSES
index 415991e208..05a5c07fcf 100644
--- a/LICENSES
+++ b/LICENSES
@@ -390,8 +390,9 @@ Copyright 2001 by Stephen L. Moshier
    License along with this library; if not, see
    <https://www.gnu.org/licenses/>.  */
 
-sysdeps/aarch64/chacha20.S and sysdeps/x86_64/chacha20-sse2.S
-import code from libgcrypt, with the following notices:
+sysdeps/aarch64/chacha20.S, sysdeps/x86_64/chacha20-sse2.S, and
+sysdeps/x86_64/chacha20-avx2.S import code from libgcrypt, with the
+following notices:
 
 Copyright (C) 2017-2019 Jussi Kivilinna
 
diff --git a/sysdeps/x86_64/Makefile b/sysdeps/x86_64/Makefile
index c8fbc30857..0fa8897404 100644
--- a/sysdeps/x86_64/Makefile
+++ b/sysdeps/x86_64/Makefile
@@ -8,6 +8,7 @@ endif
 ifeq ($(subdir),stdlib)
 sysdep_routines += \
   chacha20-sse2 \
+  chacha20-avx2 \
   # sysdep_routines
 endif
 
diff --git a/sysdeps/x86_64/chacha20-avx2.S b/sysdeps/x86_64/chacha20-avx2.S
new file mode 100644
index 0000000000..fb76865890
--- /dev/null
+++ b/sysdeps/x86_64/chacha20-avx2.S
@@ -0,0 +1,313 @@
+/* Optimized AVX2 implementation of ChaCha20 cipher.
+   Copyright (C) 2022 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+
+/* Based on D. J. Bernstein reference implementation at
+   http://cr.yp.to/chacha.html:
+
+   chacha-regs.c version 20080118
+   D. J. Bernstein
+   Public domain.
+*/
+
+#ifdef PIC
+# define rRIP (%rip)
+#else
+# define rRIP
+#endif
+
+/* register macros */
+#define INPUT %rdi
+#define DST %rsi
+#define SRC %rdx
+#define NBLKS %rcx
+#define ROUND %eax
+
+/* stack structure */
+#define STACK_VEC_X12 (32)
+#define STACK_VEC_X13 (32 + STACK_VEC_X12)
+#define STACK_TMP (32 + STACK_VEC_X13)
+#define STACK_TMP1 (32 + STACK_TMP)
+
+#define STACK_MAX (32 + STACK_TMP1)
+
+/* vector registers */
+#define X0 %ymm0
+#define X1 %ymm1
+#define X2 %ymm2
+#define X3 %ymm3
+#define X4 %ymm4
+#define X5 %ymm5
+#define X6 %ymm6
+#define X7 %ymm7
+#define X8 %ymm8
+#define X9 %ymm9
+#define X10 %ymm10
+#define X11 %ymm11
+#define X12 %ymm12
+#define X13 %ymm13
+#define X14 %ymm14
+#define X15 %ymm15
+
+#define X0h %xmm0
+#define X1h %xmm1
+#define X2h %xmm2
+#define X3h %xmm3
+#define X4h %xmm4
+#define X5h %xmm5
+#define X6h %xmm6
+#define X7h %xmm7
+#define X8h %xmm8
+#define X9h %xmm9
+#define X10h %xmm10
+#define X11h %xmm11
+#define X12h %xmm12
+#define X13h %xmm13
+#define X14h %xmm14
+#define X15h %xmm15
+
+/**********************************************************************
+  helper macros
+ **********************************************************************/
+
+/* 4x4 32-bit integer matrix transpose */
+#define transpose_4x4(x0,x1,x2,x3,t1,t2) \
+        vpunpckhdq x1, x0, t2; \
+        vpunpckldq x1, x0, x0; \
+        \
+        vpunpckldq x3, x2, t1; \
+        vpunpckhdq x3, x2, x2; \
+        \
+        vpunpckhqdq t1, x0, x1; \
+        vpunpcklqdq t1, x0, x0; \
+        \
+        vpunpckhqdq x2, t2, x3; \
+        vpunpcklqdq x2, t2, x2;
+
+/* 2x2 128-bit matrix transpose */
+#define transpose_16byte_2x2(x0,x1,t1) \
+        vmovdqa x0, t1; \
+        vperm2i128 $0x20, x1, x0, x0; \
+        vperm2i128 $0x31, x1, t1, x1;
+
+/**********************************************************************
+  8-way chacha20
+ **********************************************************************/
+
+#define ROTATE2(v1,v2,c,tmp) \
+        vpsrld $(32 - (c)), v1, tmp; \
+        vpslld $(c), v1, v1; \
+        vpaddb tmp, v1, v1; \
+        vpsrld $(32 - (c)), v2, tmp; \
+        vpslld $(c), v2, v2; \
+        vpaddb tmp, v2, v2;
+
+#define ROTATE_SHUF_2(v1,v2,shuf) \
+        vpshufb shuf, v1, v1; \
+        vpshufb shuf, v2, v2;
+
+#define XOR(ds,s) \
+        vpxor s, ds, ds;
+
+#define PLUS(ds,s) \
+        vpaddd s, ds, ds;
+
+#define QUARTERROUND2(a1,b1,c1,d1,a2,b2,c2,d2,ign,tmp1,\
+                      interleave_op1,interleave_op2,\
+                      interleave_op3,interleave_op4) \
+        vbroadcasti128 .Lshuf_rol16 rRIP, tmp1; \
+        interleave_op1; \
+        PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
+        ROTATE_SHUF_2(d1, d2, tmp1); \
+        interleave_op2; \
+        PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
+        ROTATE2(b1, b2, 12, tmp1); \
+        vbroadcasti128 .Lshuf_rol8 rRIP, tmp1; \
+        interleave_op3; \
+        PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
+        ROTATE_SHUF_2(d1, d2, tmp1); \
+        interleave_op4; \
+        PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
+        ROTATE2(b1, b2, 7, tmp1);
+
+        .section .text.avx2, "ax", @progbits
+        .align 32
+chacha20_data:
+L(shuf_rol16):
+        .byte 2,3,0,1,6,7,4,5,10,11,8,9,14,15,12,13
+L(shuf_rol8):
+        .byte 3,0,1,2,7,4,5,6,11,8,9,10,15,12,13,14
+L(inc_counter):
+        .byte 0,1,2,3,4,5,6,7
+L(unsigned_cmp):
+        .long 0x80000000
+
+        .hidden __chacha20_avx2_blocks8
+ENTRY (__chacha20_avx2_blocks8)
+        /* input:
+         *      %rdi: input
+         *      %rsi: dst
+         *      %rdx: src
+         *      %rcx: nblks (multiple of 8)
+         */
+        vzeroupper;
+
+        pushq %rbp;
+        cfi_adjust_cfa_offset(8);
+        cfi_rel_offset(rbp, 0)
+        movq %rsp, %rbp;
+        cfi_def_cfa_register(rbp);
+
+        subq $STACK_MAX, %rsp;
+        andq $~31, %rsp;
+
+L(loop8):
+        mov $20, ROUND;
+
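+        /* Eight blocks are processed per iteration: ymm register X<i>
+           holds state word i for eight independent blocks, one block
+           per 32-bit lane; only the block counter (state words 12/13)
+           differs between lanes.  */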
+        /* Construct counter vectors X12 and X13 */
+        vpmovzxbd L(inc_counter) rRIP, X0;
+        vpbroadcastd L(unsigned_cmp) rRIP, X2;
+        vpbroadcastd (12 * 4)(INPUT), X12;
+        vpbroadcastd (13 * 4)(INPUT), X13;
+        vpaddd X0, X12, X12;
+        vpxor X2, X0, X0;
+        vpxor X2, X12, X1;
+        vpcmpgtd X1, X0, X0;
+        vpsubd X0, X13, X13;
+        vmovdqa X12, (STACK_VEC_X12)(%rsp);
+        vmovdqa X13, (STACK_VEC_X13)(%rsp);
+
+        /* Load vectors */
+        vpbroadcastd (0 * 4)(INPUT), X0;
+        vpbroadcastd (1 * 4)(INPUT), X1;
+        vpbroadcastd (2 * 4)(INPUT), X2;
+        vpbroadcastd (3 * 4)(INPUT), X3;
+        vpbroadcastd (4 * 4)(INPUT), X4;
+        vpbroadcastd (5 * 4)(INPUT), X5;
+        vpbroadcastd (6 * 4)(INPUT), X6;
+        vpbroadcastd (7 * 4)(INPUT), X7;
+        vpbroadcastd (8 * 4)(INPUT), X8;
+        vpbroadcastd (9 * 4)(INPUT), X9;
+        vpbroadcastd (10 * 4)(INPUT), X10;
+        vpbroadcastd (11 * 4)(INPUT), X11;
+        vpbroadcastd (14 * 4)(INPUT), X14;
+        vpbroadcastd (15 * 4)(INPUT), X15;
+        vmovdqa X15, (STACK_TMP)(%rsp);
+
+L(round2):
+        QUARTERROUND2(X0, X4, X8, X12, X1, X5, X9, X13, tmp:=,X15,,,,)
+        vmovdqa (STACK_TMP)(%rsp), X15;
+        vmovdqa X8, (STACK_TMP)(%rsp);
+        QUARTERROUND2(X2, X6, X10, X14, X3, X7, X11, X15, tmp:=,X8,,,,)
+        QUARTERROUND2(X0, X5, X10, X15, X1, X6, X11, X12, tmp:=,X8,,,,)
+        vmovdqa (STACK_TMP)(%rsp), X8;
+        vmovdqa X15, (STACK_TMP)(%rsp);
+        QUARTERROUND2(X2, X7, X8, X13, X3, X4, X9, X14, tmp:=,X15,,,,)
+        sub $2, ROUND;
+        jnz L(round2);
+
+        vmovdqa X8, (STACK_TMP1)(%rsp);
+
+        /* tmp := X15 */
+        vpbroadcastd (0 * 4)(INPUT), X15;
+        PLUS(X0, X15);
+        vpbroadcastd (1 * 4)(INPUT), X15;
+        PLUS(X1, X15);
+        vpbroadcastd (2 * 4)(INPUT), X15;
+        PLUS(X2, X15);
+        vpbroadcastd (3 * 4)(INPUT), X15;
+        PLUS(X3, X15);
+        vpbroadcastd (4 * 4)(INPUT), X15;
+        PLUS(X4, X15);
+        vpbroadcastd (5 * 4)(INPUT), X15;
+        PLUS(X5, X15);
+        vpbroadcastd (6 * 4)(INPUT), X15;
+        PLUS(X6, X15);
+        vpbroadcastd (7 * 4)(INPUT), X15;
+        PLUS(X7, X15);
+        transpose_4x4(X0, X1, X2, X3, X8, X15);
+        transpose_4x4(X4, X5, X6, X7, X8, X15);
+        vmovdqa (STACK_TMP1)(%rsp), X8;
+        transpose_16byte_2x2(X0, X4, X15);
+        transpose_16byte_2x2(X1, X5, X15);
+        transpose_16byte_2x2(X2, X6, X15);
+        transpose_16byte_2x2(X3, X7, X15);
+        vmovdqa (STACK_TMP)(%rsp), X15;
+        vmovdqu X0, (64 * 0 + 16 * 0)(DST)
+        vmovdqu X1, (64 * 1 + 16 * 0)(DST)
+        vpbroadcastd (8 * 4)(INPUT), X0;
+        PLUS(X8, X0);
+        vpbroadcastd (9 * 4)(INPUT), X0;
+        PLUS(X9, X0);
+        vpbroadcastd (10 * 4)(INPUT), X0;
+        PLUS(X10, X0);
+        vpbroadcastd (11 * 4)(INPUT), X0;
+        PLUS(X11, X0);
+        vmovdqa (STACK_VEC_X12)(%rsp), X0;
+        PLUS(X12, X0);
+        vmovdqa (STACK_VEC_X13)(%rsp), X0;
+        PLUS(X13, X0);
+        vpbroadcastd (14 * 4)(INPUT), X0;
+        PLUS(X14, X0);
+        vpbroadcastd (15 * 4)(INPUT), X0;
+        PLUS(X15, X0);
+        vmovdqu X2, (64 * 2 + 16 * 0)(DST)
+        vmovdqu X3, (64 * 3 + 16 * 0)(DST)
+
+        /* Update counter */
+        addq $8, (12 * 4)(INPUT);
+
+        transpose_4x4(X8, X9, X10, X11, X0, X1);
+        transpose_4x4(X12, X13, X14, X15, X0, X1);
+        vmovdqu X4, (64 * 4 + 16 * 0)(DST)
+        vmovdqu X5, (64 * 5 + 16 * 0)(DST)
+        transpose_16byte_2x2(X8, X12, X0);
+        transpose_16byte_2x2(X9, X13, X0);
+        transpose_16byte_2x2(X10, X14, X0);
+        transpose_16byte_2x2(X11, X15, X0);
+        vmovdqu X6, (64 * 6 + 16 * 0)(DST)
+        vmovdqu X7, (64 * 7 + 16 * 0)(DST)
+        vmovdqu X8, (64 * 0 + 16 * 2)(DST)
+        vmovdqu X9, (64 * 1 + 16 * 2)(DST)
+        vmovdqu X10, (64 * 2 + 16 * 2)(DST)
+        vmovdqu X11, (64 * 3 + 16 * 2)(DST)
+        vmovdqu X12, (64 * 4 + 16 * 2)(DST)
+        vmovdqu X13, (64 * 5 + 16 * 2)(DST)
+        vmovdqu X14, (64 * 6 + 16 * 2)(DST)
+        vmovdqu X15, (64 * 7 + 16 * 2)(DST)
+
+        sub $8, NBLKS;
+        lea (8 * 64)(DST), DST;
+        lea (8 * 64)(SRC), SRC;
+        jnz L(loop8);
+
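+        /* The stack copies and the ymm registers hold key-derived
+           cipher state, so wipe them before returning.  */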
+        /* clear the used vector registers and stack */
+        vpxor X0, X0, X0;
+        vmovdqa X0, (STACK_VEC_X12)(%rsp);
+        vmovdqa X0, (STACK_VEC_X13)(%rsp);
+        vmovdqa X0, (STACK_TMP)(%rsp);
+        vmovdqa X0, (STACK_TMP1)(%rsp);
+        vzeroall;
+
+        /* eax zeroed by round loop. */
+        leave;
+        cfi_adjust_cfa_offset(-8)
+        cfi_def_cfa_register(%rsp);
+        ret;
+        int3;
+END(__chacha20_avx2_blocks8)
diff --git a/sysdeps/x86_64/chacha20_arch.h b/sysdeps/x86_64/chacha20_arch.h
index 6fe5f77889..5b3ec7bbc4 100644
--- a/sysdeps/x86_64/chacha20_arch.h
+++ b/sysdeps/x86_64/chacha20_arch.h
@@ -23,16 +23,26 @@
 unsigned int __chacha20_sse2_blocks4 (uint32_t *state, uint8_t *dst,
                                       const uint8_t *src, size_t nblks)
     attribute_hidden;
+unsigned int __chacha20_avx2_blocks8 (uint32_t *state, uint8_t *dst,
+                                      const uint8_t *src, size_t nblks)
+    attribute_hidden;
 
 static inline void
 chacha20_crypt (struct chacha20_state *state, uint8_t *dst,
                 const uint8_t *src, size_t bytes)
 {
-  _Static_assert (CHACHA20_BUFSIZE % 4 == 0,
-                  "CHACHA20_BUFSIZE not multiple of 4");
+  _Static_assert (CHACHA20_BUFSIZE % 4 == 0 && CHACHA20_BUFSIZE % 8 == 0,
+                  "CHACHA20_BUFSIZE not multiple of 4 or 8");
   _Static_assert (CHACHA20_BUFSIZE > CHACHA20_BLOCK_SIZE * 8,
                   "CHACHA20_BUFSIZE <= CHACHA20_BLOCK_SIZE * 8");
+  const struct cpu_features* cpu_features = __get_cpu_features ();
 
-  __chacha20_sse2_blocks4 (state->ctx, dst, src,
-                           CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
+  /* AVX2 version uses vzeroupper, so disable it if RTM is enabled.  */
+  if (CPU_FEATURE_USABLE_P (cpu_features, AVX2)
+      && !CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER))
+    __chacha20_avx2_blocks8 (state->ctx, dst, src,
+                             CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
+  else
+    __chacha20_sse2_blocks4 (state->ctx, dst, src,
+                             CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
 }
-- 
2.32.0
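One note on the counter setup at the top of L(loop8) in
__chacha20_avx2_blocks8: AVX2 has no unsigned 32-bit compare, so the
code biases both values by 0x80000000 (L(unsigned_cmp)) and uses the
signed vpcmpgtd to detect which lanes wrapped around when the per-lane
increments 0..7 were added to state word 12; subtracting the resulting
all-ones mask from state word 13 then propagates the carry.  A
per-lane C model of that computation (illustrative only, not part of
the patch):

  #include <stdint.h>

  static void
  counter_lanes (uint32_t lo, uint32_t hi, uint32_t x12[8], uint32_t x13[8])
  {
    for (int lane = 0; lane < 8; lane++)
      {
        uint32_t inc = lane;                /* L(inc_counter)  */
        uint32_t new_lo = lo + inc;         /* vpaddd          */
        /* Signed compare of values biased by 0x80000000 is equivalent
           to an unsigned compare; the mask is all-ones exactly for the
           lanes whose low counter word wrapped around.  */
        uint32_t mask = ((int32_t) (inc ^ 0x80000000u)
                         > (int32_t) (new_lo ^ 0x80000000u)) ? 0xffffffffu : 0;
        x12[lane] = new_lo;                 /* per-lane counter, low word   */
        x13[lane] = hi - mask;              /* vpsubd: hi + 1 where wrapped */
      }
  }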