From: Noah Goldstein
Date: Mon, 25 Apr 2022 13:52:17 -0500
Subject: Re: [PATCH v4 6/9] x86: Add AVX2 optimized chacha20
To: Adhemerval Zanella
Cc: GNU C Library
In-Reply-To: <20220425130156.1062525-7-adhemerval.zanella@linaro.org>

On Mon, Apr 25, 2022 at 8:06 AM Adhemerval Zanella via Libc-alpha wrote:
>
> It adds a vectorized ChaCha20 implementation based on the libgcrypt
> cipher/chacha20-amd64-avx2.S.  It is used only if AVX2 is supported
> and enabled by the architecture.
>
> As with the generic implementation, the last step that XORs the
> keystream with the input is omitted.
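
To make the "XOR with the input is omitted" point concrete: a regular
ChaCha20 cipher computes dst = src ^ keystream, while the arc4random
code only needs the raw keystream, so the block functions can write it
to dst directly.  A rough scalar sketch of the difference (illustrative
only, not code from glibc; chacha20_block() is a hypothetical helper
standing in for producing one 64-byte keystream block and bumping the
counter):

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical helper: expand STATE into one 64-byte keystream
       block and advance the block counter.  */
    extern void chacha20_block (uint32_t state[16], uint8_t block[64]);

    /* Normal stream-cipher shape: keystream XORed into the input.  */
    static void
    chacha20_xor (uint32_t state[16], uint8_t *dst, const uint8_t *src,
                  size_t nblks)
    {
      uint8_t ks[64];
      for (size_t i = 0; i < nblks; i++, dst += 64, src += 64)
        {
          chacha20_block (state, ks);
          for (size_t j = 0; j < 64; j++)
            dst[j] = src[j] ^ ks[j];
        }
    }

    /* What the arc4random path wants: just the keystream, so the final
       XOR is dropped and each block is written straight to DST.  */
    static void
    chacha20_keystream (uint32_t state[16], uint8_t *dst, size_t nblks)
    {
      for (size_t i = 0; i < nblks; i++, dst += 64)
        chacha20_block (state, dst);
    }
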
>
> On a Ryzen 9 5900X it shows the following improvements (using
> formatted bench-arc4random data):
>
> SSE2:
> Function                                MB/s
> --------------------------------------------------
> arc4random [single-thread]              637.06
> arc4random_buf(16) [single-thread]      856.62
> arc4random_buf(32) [single-thread]     1129.41
> arc4random_buf(48) [single-thread]     1260.61
> arc4random_buf(64) [single-thread]     1330.56
> arc4random_buf(80) [single-thread]     1353.84
> arc4random_buf(96) [single-thread]     1376.53
> arc4random_buf(112) [single-thread]    1405.74
> arc4random_buf(128) [single-thread]    1422.59
> --------------------------------------------------
>
> AVX2:
> Function                                MB/s
> --------------------------------------------------
> arc4random [single-thread]              809.53
> arc4random_buf(16) [single-thread]     1242.56
> arc4random_buf(32) [single-thread]     1915.90
> arc4random_buf(48) [single-thread]     2230.03
> arc4random_buf(64) [single-thread]     2429.68
> arc4random_buf(80) [single-thread]     2489.70
> arc4random_buf(96) [single-thread]     2598.88
> arc4random_buf(112) [single-thread]    2699.93
> arc4random_buf(128) [single-thread]    2747.31
> --------------------------------------------------
>
> Checked on x86_64-linux-gnu.
> ---
>  LICENSES                       |   5 +-
>  sysdeps/x86_64/Makefile        |   1 +
>  sysdeps/x86_64/chacha20-avx2.S | 313 +++++++++++++++++++++++++++++++++
>  sysdeps/x86_64/chacha20_arch.h |  22 ++-
>  4 files changed, 333 insertions(+), 8 deletions(-)
>  create mode 100644 sysdeps/x86_64/chacha20-avx2.S
>
> diff --git a/LICENSES b/LICENSES
> index 415991e208..05a5c07fcf 100644
> --- a/LICENSES
> +++ b/LICENSES
> @@ -390,8 +390,9 @@ Copyright 2001 by Stephen L. Moshier
>     License along with this library; if not, see
>     <https://www.gnu.org/licenses/>.  */
>
> -sysdeps/aarch64/chacha20.S and sysdeps/x86_64/chacha20-sse2.S
> -import code from libgcrypt, with the following notices:
> +sysdeps/aarch64/chacha20.S, sysdeps/x86_64/chacha20-sse2.S, and
> +sysdeps/x86_64/chacha20-avx2.S import code from libgcrypt, with the
> +following notices:
>
>  Copyright (C) 2017-2019 Jussi Kivilinna
>
> diff --git a/sysdeps/x86_64/Makefile b/sysdeps/x86_64/Makefile
> index c8fbc30857..0fa8897404 100644
> --- a/sysdeps/x86_64/Makefile
> +++ b/sysdeps/x86_64/Makefile
> @@ -8,6 +8,7 @@ endif
>  ifeq ($(subdir),stdlib)
>  sysdep_routines += \
>    chacha20-sse2 \
> +  chacha20-avx2 \
>  # sysdep_routines
>  endif
>
> diff --git a/sysdeps/x86_64/chacha20-avx2.S b/sysdeps/x86_64/chacha20-avx2.S
> new file mode 100644
> index 0000000000..fb76865890
> --- /dev/null
> +++ b/sysdeps/x86_64/chacha20-avx2.S
> @@ -0,0 +1,313 @@
> +/* Optimized AVX2 implementation of ChaCha20 cipher.
> +   Copyright (C) 2022 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <https://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +
> +/* Based on D. J. Bernstein reference implementation at
> +   http://cr.yp.to/chacha.html:
> +
> +   chacha-regs.c version 20080118
> +   D. J. Bernstein
> +   Public domain.  */
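
Since the rest of the file refers to the state purely by word offsets
((0 * 4)(INPUT) through (15 * 4)(INPUT)), a quick reminder of the
layout those offsets index into.  This is the usual 16-word ChaCha20
state; the 64-bit counter in words 12-13 matches the
"addq $8, (12 * 4)(INPUT)" update later in the file.  The helper below
is only an illustration of the layout (it is not part of the patch) and
assumes little-endian loads, as on x86_64:

    #include <stdint.h>
    #include <string.h>

    /* Illustrative only: how the 16 32-bit words that the assembly
       broadcasts from INPUT are laid out.  */
    static void
    chacha20_state_init (uint32_t state[16], const uint8_t key[32],
                         const uint8_t nonce[8], uint64_t counter)
    {
      /* Words 0-3: the "expand 32-byte k" constants.  */
      static const uint32_t sigma[4] =
        { 0x61707865, 0x3320646e, 0x79622d32, 0x6b206574 };
      memcpy (state, sigma, sizeof sigma);
      memcpy (state + 4, key, 32);             /* Words 4-11: key.  */
      state[12] = (uint32_t) counter;          /* (12 * 4)(INPUT).  */
      state[13] = (uint32_t) (counter >> 32);  /* (13 * 4)(INPUT).  */
      memcpy (state + 14, nonce, 8);           /* Words 14-15: nonce.  */
    }
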
> +
> +#ifdef PIC
> +# define rRIP (%rip)
> +#else
> +# define rRIP
> +#endif
> +
> +/* register macros */
> +#define INPUT %rdi
> +#define DST   %rsi
> +#define SRC   %rdx
> +#define NBLKS %rcx
> +#define ROUND %eax
> +
> +/* stack structure */
> +#define STACK_VEC_X12 (32)
> +#define STACK_VEC_X13 (32 + STACK_VEC_X12)
> +#define STACK_TMP     (32 + STACK_VEC_X13)
> +#define STACK_TMP1    (32 + STACK_TMP)
> +
> +#define STACK_MAX     (32 + STACK_TMP1)
> +
> +/* vector registers */
> +#define X0 %ymm0
> +#define X1 %ymm1
> +#define X2 %ymm2
> +#define X3 %ymm3
> +#define X4 %ymm4
> +#define X5 %ymm5
> +#define X6 %ymm6
> +#define X7 %ymm7
> +#define X8 %ymm8
> +#define X9 %ymm9
> +#define X10 %ymm10
> +#define X11 %ymm11
> +#define X12 %ymm12
> +#define X13 %ymm13
> +#define X14 %ymm14
> +#define X15 %ymm15
> +
> +#define X0h %xmm0
> +#define X1h %xmm1
> +#define X2h %xmm2
> +#define X3h %xmm3
> +#define X4h %xmm4
> +#define X5h %xmm5
> +#define X6h %xmm6
> +#define X7h %xmm7
> +#define X8h %xmm8
> +#define X9h %xmm9
> +#define X10h %xmm10
> +#define X11h %xmm11
> +#define X12h %xmm12
> +#define X13h %xmm13
> +#define X14h %xmm14
> +#define X15h %xmm15
> +
> +/**********************************************************************
> +  helper macros
> + **********************************************************************/
> +
> +/* 4x4 32-bit integer matrix transpose */
> +#define transpose_4x4(x0,x1,x2,x3,t1,t2) \
> +        vpunpckhdq x1, x0, t2; \
> +        vpunpckldq x1, x0, x0; \
> +        \
> +        vpunpckldq x3, x2, t1; \
> +        vpunpckhdq x3, x2, x2; \
> +        \
> +        vpunpckhqdq t1, x0, x1; \
> +        vpunpcklqdq t1, x0, x0; \
> +        \
> +        vpunpckhqdq x2, t2, x3; \
> +        vpunpcklqdq x2, t2, x2;
> +
> +/* 2x2 128-bit matrix transpose */
> +#define transpose_16byte_2x2(x0,x1,t1) \
> +        vmovdqa x0, t1; \
> +        vperm2i128 $0x20, x1, x0, x0; \
> +        vperm2i128 $0x31, x1, t1, x1;
> +
> +/**********************************************************************
> +  8-way chacha20
> + **********************************************************************/
> +
> +#define ROTATE2(v1,v2,c,tmp) \
> +        vpsrld $(32 - (c)), v1, tmp; \
> +        vpslld $(c), v1, v1; \
> +        vpaddb tmp, v1, v1; \
> +        vpsrld $(32 - (c)), v2, tmp; \
> +        vpslld $(c), v2, v2; \
> +        vpaddb tmp, v2, v2;
> +
> +#define ROTATE_SHUF_2(v1,v2,shuf) \
> +        vpshufb shuf, v1, v1; \
> +        vpshufb shuf, v2, v2;
> +
> +#define XOR(ds,s) \
> +        vpxor s, ds, ds;
> +
> +#define PLUS(ds,s) \
> +        vpaddd s, ds, ds;
> +
> +#define QUARTERROUND2(a1,b1,c1,d1,a2,b2,c2,d2,ign,tmp1,\
> +                      interleave_op1,interleave_op2,\
> +                      interleave_op3,interleave_op4) \
> +        vbroadcasti128 .Lshuf_rol16 rRIP, tmp1; \
> +        interleave_op1; \
> +        PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
> +            ROTATE_SHUF_2(d1, d2, tmp1); \
> +        interleave_op2; \
> +        PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
> +            ROTATE2(b1, b2, 12, tmp1); \
> +        vbroadcasti128 .Lshuf_rol8 rRIP, tmp1; \
> +        interleave_op3; \
> +        PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
> +            ROTATE_SHUF_2(d1, d2, tmp1); \
> +        interleave_op4; \
> +        PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
> +            ROTATE2(b1, b2, 7, tmp1);
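
For readers less used to the interleaved form: QUARTERROUND2 is just
two standard ChaCha20 quarter rounds run in lockstep, each operating on
eight blocks at once (one block per 32-bit lane), with the 16- and
8-bit rotations done as byte shuffles (.Lshuf_rol16/.Lshuf_rol8) and
the 12- and 7-bit ones via the shift pair in ROTATE2.  A scalar
reference of the quarter round and double round, for comparison only
(this is textbook ChaCha20, not code from the patch):

    #include <stdint.h>

    #define ROTL32(v, c) (((v) << (c)) | ((v) >> (32 - (c))))

    /* One standard ChaCha20 quarter round on four state words.  */
    static inline void
    quarterround (uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
    {
      *a += *b; *d ^= *a; *d = ROTL32 (*d, 16);
      *c += *d; *b ^= *c; *b = ROTL32 (*b, 12);
      *a += *b; *d ^= *a; *d = ROTL32 (*d, 8);
      *c += *d; *b ^= *c; *b = ROTL32 (*b, 7);
    }

    /* One double round (column round + diagonal round) over x[16]; the
       assembly's round loop does the same, two rows at a time, across
       eight blocks per iteration ("sub $2, ROUND").  */
    static void
    doubleround (uint32_t x[16])
    {
      quarterround (&x[0], &x[4], &x[8],  &x[12]);
      quarterround (&x[1], &x[5], &x[9],  &x[13]);
      quarterround (&x[2], &x[6], &x[10], &x[14]);
      quarterround (&x[3], &x[7], &x[11], &x[15]);
      quarterround (&x[0], &x[5], &x[10], &x[15]);
      quarterround (&x[1], &x[6], &x[11], &x[12]);
      quarterround (&x[2], &x[7], &x[8],  &x[13]);
      quarterround (&x[3], &x[4], &x[9],  &x[14]);
    }
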
> +
> +        .section .text.avx2, "ax", @progbits
> +        .align 32
> +chacha20_data:
> +L(shuf_rol16):
> +        .byte 2,3,0,1,6,7,4,5,10,11,8,9,14,15,12,13
> +L(shuf_rol8):
> +        .byte 3,0,1,2,7,4,5,6,11,8,9,10,15,12,13,14
> +L(inc_counter):
> +        .byte 0,1,2,3,4,5,6,7
> +L(unsigned_cmp):
> +        .long 0x80000000
> +
> +        .hidden __chacha20_avx2_blocks8
> +ENTRY (__chacha20_avx2_blocks8)
> +        /* input:
> +         *      %rdi: input
> +         *      %rsi: dst
> +         *      %rdx: src
> +         *      %rcx: nblks (multiple of 8)
> +         */
> +        vzeroupper;
> +
> +        pushq %rbp;
> +        cfi_adjust_cfa_offset(8);
> +        cfi_rel_offset(rbp, 0)
> +        movq %rsp, %rbp;
> +        cfi_def_cfa_register(rbp);
> +
> +        subq $STACK_MAX, %rsp;
> +        andq $~31, %rsp;
> +
> +L(loop8):
> +        mov $20, ROUND;
> +
> +        /* Construct counter vectors X12 and X13 */
> +        vpmovzxbd L(inc_counter) rRIP, X0;
> +        vpbroadcastd L(unsigned_cmp) rRIP, X2;
> +        vpbroadcastd (12 * 4)(INPUT), X12;
> +        vpbroadcastd (13 * 4)(INPUT), X13;
> +        vpaddd X0, X12, X12;
> +        vpxor X2, X0, X0;
> +        vpxor X2, X12, X1;
> +        vpcmpgtd X1, X0, X0;
> +        vpsubd X0, X13, X13;
> +        vmovdqa X12, (STACK_VEC_X12)(%rsp);
> +        vmovdqa X13, (STACK_VEC_X13)(%rsp);
> +
> +        /* Load vectors */
> +        vpbroadcastd (0 * 4)(INPUT), X0;
> +        vpbroadcastd (1 * 4)(INPUT), X1;
> +        vpbroadcastd (2 * 4)(INPUT), X2;
> +        vpbroadcastd (3 * 4)(INPUT), X3;
> +        vpbroadcastd (4 * 4)(INPUT), X4;
> +        vpbroadcastd (5 * 4)(INPUT), X5;
> +        vpbroadcastd (6 * 4)(INPUT), X6;
> +        vpbroadcastd (7 * 4)(INPUT), X7;
> +        vpbroadcastd (8 * 4)(INPUT), X8;
> +        vpbroadcastd (9 * 4)(INPUT), X9;
> +        vpbroadcastd (10 * 4)(INPUT), X10;
> +        vpbroadcastd (11 * 4)(INPUT), X11;
> +        vpbroadcastd (14 * 4)(INPUT), X14;
> +        vpbroadcastd (15 * 4)(INPUT), X15;
> +        vmovdqa X15, (STACK_TMP)(%rsp);
> +
> +L(round2):
> +        QUARTERROUND2(X0, X4,  X8, X12,   X1, X5,  X9, X13, tmp:=,X15,,,,)
> +        vmovdqa (STACK_TMP)(%rsp), X15;
> +        vmovdqa X8, (STACK_TMP)(%rsp);
> +        QUARTERROUND2(X2, X6, X10, X14,   X3, X7, X11, X15, tmp:=,X8,,,,)
> +        QUARTERROUND2(X0, X5, X10, X15,   X1, X6, X11, X12, tmp:=,X8,,,,)
> +        vmovdqa (STACK_TMP)(%rsp), X8;
> +        vmovdqa X15, (STACK_TMP)(%rsp);
> +        QUARTERROUND2(X2, X7,  X8, X13,   X3, X4,  X9, X14, tmp:=,X15,,,,)
> +        sub $2, ROUND;
> +        jnz L(round2);
> +
> +        vmovdqa X8, (STACK_TMP1)(%rsp);
> +
> +        /* tmp := X15 */
> +        vpbroadcastd (0 * 4)(INPUT), X15;
> +        PLUS(X0, X15);
> +        vpbroadcastd (1 * 4)(INPUT), X15;
> +        PLUS(X1, X15);
> +        vpbroadcastd (2 * 4)(INPUT), X15;
> +        PLUS(X2, X15);
> +        vpbroadcastd (3 * 4)(INPUT), X15;
> +        PLUS(X3, X15);
> +        vpbroadcastd (4 * 4)(INPUT), X15;
> +        PLUS(X4, X15);
> +        vpbroadcastd (5 * 4)(INPUT), X15;
> +        PLUS(X5, X15);
> +        vpbroadcastd (6 * 4)(INPUT), X15;
> +        PLUS(X6, X15);
> +        vpbroadcastd (7 * 4)(INPUT), X15;
> +        PLUS(X7, X15);
> +        transpose_4x4(X0, X1, X2, X3, X8, X15);
> +        transpose_4x4(X4, X5, X6, X7, X8, X15);
> +        vmovdqa (STACK_TMP1)(%rsp), X8;
> +        transpose_16byte_2x2(X0, X4, X15);
> +        transpose_16byte_2x2(X1, X5, X15);
> +        transpose_16byte_2x2(X2, X6, X15);
> +        transpose_16byte_2x2(X3, X7, X15);
> +        vmovdqa (STACK_TMP)(%rsp), X15;
> +        vmovdqu X0, (64 * 0 + 16 * 0)(DST)
> +        vmovdqu X1, (64 * 1 + 16 * 0)(DST)
> +        vpbroadcastd (8 * 4)(INPUT), X0;
> +        PLUS(X8, X0);
> +        vpbroadcastd (9 * 4)(INPUT), X0;
> +        PLUS(X9, X0);
> +        vpbroadcastd (10 * 4)(INPUT), X0;
> +        PLUS(X10, X0);
> +        vpbroadcastd (11 * 4)(INPUT), X0;
> +        PLUS(X11, X0);
> +        vmovdqa (STACK_VEC_X12)(%rsp), X0;
> +        PLUS(X12, X0);
> +        vmovdqa (STACK_VEC_X13)(%rsp), X0;
> +        PLUS(X13, X0);
> +        vpbroadcastd (14 * 4)(INPUT), X0;
> +        PLUS(X14, X0);
> +        vpbroadcastd (15 * 4)(INPUT), X0;
> +        PLUS(X15, X0);
> +        vmovdqu X2, (64 * 2 + 16 * 0)(DST)
> +        vmovdqu X3, (64 * 3 + 16 * 0)(DST)
> +
> +        /* Update counter */
> +        addq $8, (12 * 4)(INPUT);
> +
> +        transpose_4x4(X8, X9, X10, X11, X0, X1);
> +        transpose_4x4(X12, X13, X14, X15, X0, X1);
> +        vmovdqu X4, (64 * 4 + 16 * 0)(DST)
> +        vmovdqu X5, (64 * 5 + 16 * 0)(DST)
> +        transpose_16byte_2x2(X8, X12, X0);
> +        transpose_16byte_2x2(X9, X13, X0);
> +        transpose_16byte_2x2(X10, X14, X0);
> +        transpose_16byte_2x2(X11, X15, X0);
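
One subtlety in the hunk above worth spelling out is the counter setup
at the top of L(loop8): lane i of X12/X13 has to hold the 64-bit block
counter plus i, and since AVX2 has no unsigned dword compare, the carry
into word 13 is detected by biasing both sides with L(unsigned_cmp)
(0x80000000) and using the signed vpcmpgtd.  A scalar model of what
those instructions compute (illustrative only, not code from the
patch):

    #include <stdint.h>

    /* Scalar model of the X12/X13 setup: lane i gets counter + i, with
       the carry propagated into the high word.  */
    static void
    counter_lanes (const uint32_t state[16], uint32_t lo[8], uint32_t hi[8])
    {
      for (uint32_t i = 0; i < 8; i++)
        {
          lo[i] = state[12] + i;                  /* vpaddd X0, X12, X12 */
          /* Biasing by 0x80000000 turns the signed compare into an
             unsigned one: i > lo[i] iff the 32-bit addition wrapped.  */
          uint32_t carry = (int32_t) (i ^ 0x80000000u)
                           > (int32_t) (lo[i] ^ 0x80000000u);
          hi[i] = state[13] + carry;              /* vpsubd of the -1 mask */
        }
    }
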
> +        vmovdqu X6, (64 * 6 + 16 * 0)(DST)
> +        vmovdqu X7, (64 * 7 + 16 * 0)(DST)
> +        vmovdqu X8, (64 * 0 + 16 * 2)(DST)
> +        vmovdqu X9, (64 * 1 + 16 * 2)(DST)
> +        vmovdqu X10, (64 * 2 + 16 * 2)(DST)
> +        vmovdqu X11, (64 * 3 + 16 * 2)(DST)
> +        vmovdqu X12, (64 * 4 + 16 * 2)(DST)
> +        vmovdqu X13, (64 * 5 + 16 * 2)(DST)
> +        vmovdqu X14, (64 * 6 + 16 * 2)(DST)
> +        vmovdqu X15, (64 * 7 + 16 * 2)(DST)
> +
> +        sub $8, NBLKS;
> +        lea (8 * 64)(DST), DST;
> +        lea (8 * 64)(SRC), SRC;
> +        jnz L(loop8);
> +
> +        /* clear the used vector registers and stack */
> +        vpxor X0, X0, X0;
> +        vmovdqa X0, (STACK_VEC_X12)(%rsp);
> +        vmovdqa X0, (STACK_VEC_X13)(%rsp);
> +        vmovdqa X0, (STACK_TMP)(%rsp);
> +        vmovdqa X0, (STACK_TMP1)(%rsp);
> +        vzeroall;
> +
> +        /* eax zeroed by round loop. */
> +        leave;
> +        cfi_adjust_cfa_offset(-8)
> +        cfi_def_cfa_register(%rsp);
> +        ret;
> +        int3;
> +END(__chacha20_avx2_blocks8)
> diff --git a/sysdeps/x86_64/chacha20_arch.h b/sysdeps/x86_64/chacha20_arch.h
> index 5738c840a9..bfdc6c0a36 100644
> --- a/sysdeps/x86_64/chacha20_arch.h
> +++ b/sysdeps/x86_64/chacha20_arch.h
> @@ -23,16 +23,26 @@
>  unsigned int __chacha20_sse2_blocks4 (uint32_t *state, uint8_t *dst,
>                                        const uint8_t *src, size_t nblks)
>       attribute_hidden;
> +unsigned int __chacha20_avx2_blocks8 (uint32_t *state, uint8_t *dst,
> +                                      const uint8_t *src, size_t nblks)
> +     attribute_hidden;
>
>  static inline void
>  chacha20_crypt (uint32_t *state, uint8_t *dst, const uint8_t *src,
>                  size_t bytes)
>  {
> -  _Static_assert (CHACHA20_BUFSIZE % 4 == 0,
> -                  "CHACHA20_BUFSIZE not multiple of 4");
> -  _Static_assert (CHACHA20_BUFSIZE >= CHACHA20_BLOCK_SIZE * 4,
> -                  "CHACHA20_BUFSIZE <= CHACHA20_BLOCK_SIZE * 4");
> +  _Static_assert (CHACHA20_BUFSIZE % 4 == 0 && CHACHA20_BUFSIZE % 8 == 0,
> +                  "CHACHA20_BUFSIZE not multiple of 4 or 8");
> +  _Static_assert (CHACHA20_BUFSIZE >= CHACHA20_BLOCK_SIZE * 8,
> +                  "CHACHA20_BUFSIZE < CHACHA20_BLOCK_SIZE * 8");
> +  const struct cpu_features* cpu_features = __get_cpu_features ();
>
> -  __chacha20_sse2_blocks4 (state, dst, src,
> -                           CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
> +  /* AVX2 version uses vzeroupper, so disable it if RTM is enabled.  */
> +  if (CPU_FEATURE_USABLE_P (cpu_features, AVX2)
> +      && !CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER))
> +    __chacha20_avx2_blocks8 (state, dst, src,
> +                             CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
> +  else
> +    __chacha20_sse2_blocks4 (state, dst, src,
> +                             CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
>  }
> --
> 2.34.1
>

Nothing to do now, but we may want to compare the perf of this version
against:
https://elixir.bootlin.com/linux/v5.18-rc4/source/arch/x86/crypto/chacha-avx2-x86_64.S
in the future.
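
If someone does run that comparison on the glibc side, note that the
MB/s figures quoted above are measured through the public arc4random
interfaces for the listed request sizes, which in this series is what
feeds chacha20_crypt.  A trivial caller, just to show the API shape
being measured (this is not the bench-arc4random harness itself; the
functions are declared in stdlib.h):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main (void)
    {
      unsigned char buf[128];

      uint32_t r = arc4random ();        /* The "arc4random" row.  */
      arc4random_buf (buf, sizeof buf);  /* The "arc4random_buf(128)" row.  */

      printf ("%u %02x\n", (unsigned int) r, buf[0]);
      return 0;
    }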