From: Noah Goldstein
Date: Wed, 13 Jul 2022 11:27:23 -0700
Subject: Re: [PATCH v9 5/9] x86: Add SSE2 optimized chacha20
To: Adhemerval Zanella Netto
Cc: GNU C Library, Florian Weimer

On Wed, Jul 13, 2022 at 11:22 AM Noah Goldstein wrote:
>
> On Wed, Jul 13, 2022 at 11:20 AM Adhemerval Zanella Netto
> wrote:
> >
> >
> >
> > On 13/07/22 15:12, Noah Goldstein wrote:
> > > On Wed, Jul 13, 2022 at 10:39 AM Adhemerval Zanella via Libc-alpha
> > > wrote:
> > >>
> > >> From: Adhemerval Zanella Netto
> > >>
> > >> It adds a vectorized ChaCha20 implementation based on libgcrypt
> > >> cipher/chacha20-amd64-ssse3.S.  It replaces ROTATE_SHUF_2 (which
> > >> uses pshufb) with ROTATE2, thus making the implementation SSE2
> > >> only.
> > >>
> > >> As with the generic implementation, the last step that XORs with
> > >> the input is omitted.  The final state register clearing is also
> > >> omitted.
> > >>
> > >> On a Ryzen 9 5900X it shows the following improvements (using
> > >> formatted bench-arc4random data):
> > >>
> > >> GENERIC                                    MB/s
> > >> -----------------------------------------------
> > >> arc4random [single-thread]               443.11
> > >> arc4random_buf(16) [single-thread]       552.27
> > >> arc4random_buf(32) [single-thread]       626.86
> > >> arc4random_buf(48) [single-thread]       649.81
> > >> arc4random_buf(64) [single-thread]       663.95
> > >> arc4random_buf(80) [single-thread]       674.78
> > >> arc4random_buf(96) [single-thread]       675.17
> > >> arc4random_buf(112) [single-thread]      680.69
> > >> arc4random_buf(128) [single-thread]      683.20
> > >> -----------------------------------------------
> > >>
> > >> SSE                                        MB/s
> > >> -----------------------------------------------
> > >> arc4random [single-thread]               704.25
> > >> arc4random_buf(16) [single-thread]      1018.17
> > >> arc4random_buf(32) [single-thread]      1315.27
> > >> arc4random_buf(48) [single-thread]      1449.36
> > >> arc4random_buf(64) [single-thread]      1511.16
> > >> arc4random_buf(80) [single-thread]      1539.48
> > >> arc4random_buf(96) [single-thread]      1571.06
> > >> arc4random_buf(112) [single-thread]     1596.16
> > >> arc4random_buf(128) [single-thread]     1613.48
> > >> -----------------------------------------------
> > >>
> > >> Checked on x86_64-linux-gnu.
> > >> ---
> > >>  LICENSES                             |   4 +-
> > >>  sysdeps/x86_64/Makefile              |   6 +
> > >>  sysdeps/x86_64/chacha20-amd64-sse2.S | 306 +++++++++++++++++++++++++++
> > >>  sysdeps/x86_64/chacha20_arch.h       |  38 ++++
> > >>  4 files changed, 352 insertions(+), 2 deletions(-)
> > >>  create mode 100644 sysdeps/x86_64/chacha20-amd64-sse2.S
> > >>  create mode 100644 sysdeps/x86_64/chacha20_arch.h
> > >>
> > >> diff --git a/LICENSES b/LICENSES
> > >> index a94ea89d0d..47e9cd8e31 100644
> > >> --- a/LICENSES
> > >> +++ b/LICENSES
> > >> @@ -390,8 +390,8 @@ Copyright 2001 by Stephen L. Moshier
> > >>      License along with this library; if not, see
> > >>      . */
> > >>
> > >> -sysdeps/aarch64/chacha20-aarch64.S imports code from libgcrypt, with
> > >> -the following notices:
> > >> +sysdeps/aarch64/chacha20-aarch64.S and sysdeps/x86_64/chacha20-amd64-sse2.S
> > >> +imports code from libgcrypt, with the following notices:
> > >>
> > >>  Copyright (C) 2017-2019 Jussi Kivilinna
> > >>
> > >> diff --git a/sysdeps/x86_64/Makefile b/sysdeps/x86_64/Makefile
> > >> index e597a4855f..a2e5af3ca9 100644
> > >> --- a/sysdeps/x86_64/Makefile
> > >> +++ b/sysdeps/x86_64/Makefile
> > >> @@ -5,6 +5,12 @@ ifeq ($(subdir),csu)
> > >>  gen-as-const-headers += link-defines.sym
> > >>  endif
> > >>
> > >> +ifeq ($(subdir),stdlib)
> > >> +sysdep_routines += \
> > >> +  chacha20-amd64-sse2 \
> > >> +  # sysdep_routines
> > >> +endif
> > >> +
> > >>  ifeq ($(subdir),gmon)
> > >>  sysdep_routines += _mcount
> > >>  # We cannot compile _mcount.S with -pg because that would create
> > >> diff --git a/sysdeps/x86_64/chacha20-amd64-sse2.S b/sysdeps/x86_64/chacha20-amd64-sse2.S
> > >> new file mode 100644
> > >> index 0000000000..7b30f61446
> > >> --- /dev/null
> > >> +++ b/sysdeps/x86_64/chacha20-amd64-sse2.S
> > >> @@ -0,0 +1,306 @@
> > >> +/* Optimized SSE2 implementation of ChaCha20 cipher.
> > >> +   Copyright (C) 2022 Free Software Foundation, Inc.
> > >> +   This file is part of the GNU C Library.
> > >> +
> > >> +   The GNU C Library is free software; you can redistribute it and/or
> > >> +   modify it under the terms of the GNU Lesser General Public
> > >> +   License as published by the Free Software Foundation; either
> > >> +   version 2.1 of the License, or (at your option) any later version.
> > >> +
> > >> +   The GNU C Library is distributed in the hope that it will be useful,
> > >> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > >> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > >> +   Lesser General Public License for more details.
> > >> +
> > >> +   You should have received a copy of the GNU Lesser General Public
> > >> +   License along with the GNU C Library; if not, see
> > >> +   . */
> > >> +
> > >> +/* chacha20-amd64-ssse3.S - SSSE3 implementation of ChaCha20 cipher
> > >
> > > Should this be sse2?
> >
> > This is the original header from libgcrypt, my understanding it would be
> > better to keep as is.
> >
> > >> +
> > >> +   Copyright (C) 2017-2019 Jussi Kivilinna
> > >> +
> > >> +   This file is part of Libgcrypt.
> > >> +
> > >> +   Libgcrypt is free software; you can redistribute it and/or modify
> > >> +   it under the terms of the GNU Lesser General Public License as
> > >> +   published by the Free Software Foundation; either version 2.1 of
> > >> +   the License, or (at your option) any later version.
> > >> +
> > >> +   Libgcrypt is distributed in the hope that it will be useful,
> > >> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > >> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > >> +   GNU Lesser General Public License for more details.
> > >> +
> > >> +   You should have received a copy of the GNU Lesser General Public
> > >> +   License along with this program; if not, see .
> > >> +*/
> > >> +
> > >> +/* Based on D. J. Bernstein reference implementation at
> > >> +   http://cr.yp.to/chacha.html:
> > >> +
> > >> +   chacha-regs.c version 20080118
> > >> +   D. J. Bernstein
> > >> +   Public domain. */
> > >> +
> > >
> > > If you have time make the ifunc changes to avx2 can you add:
> > >
> > > #include
> > > #if MINIMUM_X86_ISA_LEVEL <= 1
> > >
> > > #endif
> > >
> > > as a build guard?
> >
> > Alright, I will add it.

Err it should be #if MINIMUM_X86_ISA_LEVEL <= 2 sorry.
>
> There will be a link error if you don't make the ifunc `X86_ISA_...`
> macro changes in ifunc-avx2.
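(A minimal sketch of the guard being discussed, assuming the include elided
above is <isa-level.h>, which provides MINIMUM_X86_ISA_LEVEL; whether the
guard wraps the whole file or just the entry point is up to the final patch.)

    #include <isa-level.h>

    /* Only assemble the SSE2 fallback when the build's minimum ISA level
       does not already guarantee AVX2 (x86-64-v3); otherwise the ifunc
       resolver would never select it and the symbol would be dead code.  */
    #if MINIMUM_X86_ISA_LEVEL <= 2
    /* ... __chacha20_sse2_blocks4 implementation ... */
    #endif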
>
>
> >
> >
> >
> > >> +#include
> > >> +
> > >> +#ifdef PIC
> > >> +# define rRIP (%rip)
> > >> +#else
> > >> +# define rRIP
> > >> +#endif
> > >> +
> > >> +/* 'ret' instruction replacement for straight-line speculation mitigation */
> > >> +#define ret_spec_stop \
> > >> +        ret; int3;
> > >> +
> > >> +/* register macros */
> > >> +#define INPUT %rdi
> > >> +#define DST %rsi
> > >> +#define SRC %rdx
> > >> +#define NBLKS %rcx
> > >> +#define ROUND %eax
> > >> +
> > >> +/* stack structure */
> > >> +#define STACK_VEC_X12 (16)
> > >> +#define STACK_VEC_X13 (16 + STACK_VEC_X12)
> > >> +#define STACK_TMP (16 + STACK_VEC_X13)
> > >> +#define STACK_TMP1 (16 + STACK_TMP)
> > >> +#define STACK_TMP2 (16 + STACK_TMP1)
> > >> +
> > >> +#define STACK_MAX (16 + STACK_TMP2)
> > >> +
> > >> +/* vector registers */
> > >> +#define X0 %xmm0
> > >> +#define X1 %xmm1
> > >> +#define X2 %xmm2
> > >> +#define X3 %xmm3
> > >> +#define X4 %xmm4
> > >> +#define X5 %xmm5
> > >> +#define X6 %xmm6
> > >> +#define X7 %xmm7
> > >> +#define X8 %xmm8
> > >> +#define X9 %xmm9
> > >> +#define X10 %xmm10
> > >> +#define X11 %xmm11
> > >> +#define X12 %xmm12
> > >> +#define X13 %xmm13
> > >> +#define X14 %xmm14
> > >> +#define X15 %xmm15
> > >> +
> > >> +/**********************************************************************
> > >> +  helper macros
> > >> + **********************************************************************/
> > >> +
> > >> +/* 4x4 32-bit integer matrix transpose */
> > >> +#define TRANSPOSE_4x4(x0, x1, x2, x3, t1, t2, t3) \
> > >> +        movdqa x0, t2; \
> > >> +        punpckhdq x1, t2; \
> > >> +        punpckldq x1, x0; \
> > >> +        \
> > >> +        movdqa x2, t1; \
> > >> +        punpckldq x3, t1; \
> > >> +        punpckhdq x3, x2; \
> > >> +        \
> > >> +        movdqa x0, x1; \
> > >> +        punpckhqdq t1, x1; \
> > >> +        punpcklqdq t1, x0; \
> > >> +        \
> > >> +        movdqa t2, x3; \
> > >> +        punpckhqdq x2, x3; \
> > >> +        punpcklqdq x2, t2; \
> > >> +        movdqa t2, x2;
> > >> +
> > >> +/* fill xmm register with 32-bit value from memory */
> > >> +#define PBROADCASTD(mem32, xreg) \
> > >> +        movd mem32, xreg; \
> > >> +        pshufd $0, xreg, xreg;
> > >> +
> > >> +/**********************************************************************
> > >> +  4-way chacha20
> > >> + **********************************************************************/
> > >> +
> > >> +#define ROTATE2(v1,v2,c,tmp1,tmp2) \
> > >> +        movdqa v1, tmp1; \
> > >> +        movdqa v2, tmp2; \
> > >> +        psrld $(32 - (c)), v1; \
> > >> +        pslld $(c), tmp1; \
> > >> +        paddb tmp1, v1; \
> > >> +        psrld $(32 - (c)), v2; \
> > >> +        pslld $(c), tmp2; \
> > >> +        paddb tmp2, v2;
> > >> +
> > >> +#define XOR(ds,s) \
> > >> +        pxor s, ds;
> > >> +
> > >> +#define PLUS(ds,s) \
> > >> +        paddd s, ds;
> > >> +
> > >> +#define QUARTERROUND2(a1,b1,c1,d1,a2,b2,c2,d2,ign,tmp1,tmp2) \
> > >> +        PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
> > >> +            ROTATE2(d1, d2, 16, tmp1, tmp2); \
> > >> +        PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
> > >> +            ROTATE2(b1, b2, 12, tmp1, tmp2); \
> > >> +        PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
> > >> +            ROTATE2(d1, d2, 8, tmp1, tmp2); \
> > >> +        PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
> > >> +            ROTATE2(b1, b2, 7, tmp1, tmp2);
> > >> +
> > >> +        .section .text.sse2,"ax",@progbits
> > >> +
> > >> +chacha20_data:
> > >> +        .align 16
> > >> +L(counter1):
> > >> +        .long 1,0,0,0
> > >> +L(inc_counter):
> > >> +        .long 0,1,2,3
> > >> +L(unsigned_cmp):
> > >> +        .long 0x80000000,0x80000000,0x80000000,0x80000000
> > >> +
> > >> +        .hidden __chacha20_sse2_blocks4
> > >> +ENTRY (__chacha20_sse2_blocks4)
> > >> +        /* input:
> > >> +         *      %rdi: input
> > >> +         *      %rsi: dst
> > >> +         *      %rdx: src
> > >> +         *      %rcx: nblks (multiple of 4)
> > >> +         */
> > >> +
> > >> +        pushq %rbp;
> > >> +        cfi_adjust_cfa_offset(8);
> > >> +        cfi_rel_offset(rbp, 0)
> > >> +        movq %rsp, %rbp;
> > >> +        cfi_def_cfa_register(%rbp);
> > >> +
> > >> +        subq $STACK_MAX, %rsp;
> > >> +        andq $~15, %rsp;
> > >> +
> > >> +L(loop4):
> > >> +        mov $20, ROUND;
> > >> +
> > >> +        /* Construct counter vectors X12 and X13 */
> > >> +        movdqa L(inc_counter) rRIP, X0;
> > >> +        movdqa L(unsigned_cmp) rRIP, X2;
> > >> +        PBROADCASTD((12 * 4)(INPUT), X12);
> > >> +        PBROADCASTD((13 * 4)(INPUT), X13);
> > >> +        paddd X0, X12;
> > >> +        movdqa X12, X1;
> > >> +        pxor X2, X0;
> > >> +        pxor X2, X1;
> > >> +        pcmpgtd X1, X0;
> > >> +        psubd X0, X13;
> > >> +        movdqa X12, (STACK_VEC_X12)(%rsp);
> > >> +        movdqa X13, (STACK_VEC_X13)(%rsp);
> > >> +
> > >> +        /* Load vectors */
> > >> +        PBROADCASTD((0 * 4)(INPUT), X0);
> > >> +        PBROADCASTD((1 * 4)(INPUT), X1);
> > >> +        PBROADCASTD((2 * 4)(INPUT), X2);
> > >> +        PBROADCASTD((3 * 4)(INPUT), X3);
> > >> +        PBROADCASTD((4 * 4)(INPUT), X4);
> > >> +        PBROADCASTD((5 * 4)(INPUT), X5);
> > >> +        PBROADCASTD((6 * 4)(INPUT), X6);
> > >> +        PBROADCASTD((7 * 4)(INPUT), X7);
> > >> +        PBROADCASTD((8 * 4)(INPUT), X8);
> > >> +        PBROADCASTD((9 * 4)(INPUT), X9);
> > >> +        PBROADCASTD((10 * 4)(INPUT), X10);
> > >> +        PBROADCASTD((11 * 4)(INPUT), X11);
> > >> +        PBROADCASTD((14 * 4)(INPUT), X14);
> > >> +        PBROADCASTD((15 * 4)(INPUT), X15);
> > >> +        movdqa X11, (STACK_TMP)(%rsp);
> > >> +        movdqa X15, (STACK_TMP1)(%rsp);
> > >> +
> > >> +L(round2_4):
> > >> +        QUARTERROUND2(X0, X4, X8, X12, X1, X5, X9, X13, tmp:=,X11,X15)
> > >> +        movdqa (STACK_TMP)(%rsp), X11;
> > >> +        movdqa (STACK_TMP1)(%rsp), X15;
> > >> +        movdqa X8, (STACK_TMP)(%rsp);
> > >> +        movdqa X9, (STACK_TMP1)(%rsp);
> > >> +        QUARTERROUND2(X2, X6, X10, X14, X3, X7, X11, X15, tmp:=,X8,X9)
> > >> +        QUARTERROUND2(X0, X5, X10, X15, X1, X6, X11, X12, tmp:=,X8,X9)
> > >> +        movdqa (STACK_TMP)(%rsp), X8;
> > >> +        movdqa (STACK_TMP1)(%rsp), X9;
> > >> +        movdqa X11, (STACK_TMP)(%rsp);
> > >> +        movdqa X15, (STACK_TMP1)(%rsp);
> > >> +        QUARTERROUND2(X2, X7, X8, X13, X3, X4, X9, X14, tmp:=,X11,X15)
> > >> +        sub $2, ROUND;
> > >> +        jnz L(round2_4);
> > >> +
> > >> +        /* tmp := X15 */
> > >> +        movdqa (STACK_TMP)(%rsp), X11;
> > >> +        PBROADCASTD((0 * 4)(INPUT), X15);
> > >> +        PLUS(X0, X15);
> > >> +        PBROADCASTD((1 * 4)(INPUT), X15);
> > >> +        PLUS(X1, X15);
> > >> +        PBROADCASTD((2 * 4)(INPUT), X15);
> > >> +        PLUS(X2, X15);
> > >> +        PBROADCASTD((3 * 4)(INPUT), X15);
> > >> +        PLUS(X3, X15);
> > >> +        PBROADCASTD((4 * 4)(INPUT), X15);
> > >> +        PLUS(X4, X15);
> > >> +        PBROADCASTD((5 * 4)(INPUT), X15);
> > >> +        PLUS(X5, X15);
> > >> +        PBROADCASTD((6 * 4)(INPUT), X15);
> > >> +        PLUS(X6, X15);
> > >> +        PBROADCASTD((7 * 4)(INPUT), X15);
> > >> +        PLUS(X7, X15);
> > >> +        PBROADCASTD((8 * 4)(INPUT), X15);
> > >> +        PLUS(X8, X15);
> > >> +        PBROADCASTD((9 * 4)(INPUT), X15);
> > >> +        PLUS(X9, X15);
> > >> +        PBROADCASTD((10 * 4)(INPUT), X15);
> > >> +        PLUS(X10, X15);
> > >> +        PBROADCASTD((11 * 4)(INPUT), X15);
> > >> +        PLUS(X11, X15);
> > >> +        movdqa (STACK_VEC_X12)(%rsp), X15;
> > >> +        PLUS(X12, X15);
> > >> +        movdqa (STACK_VEC_X13)(%rsp), X15;
> > >> +        PLUS(X13, X15);
> > >> +        movdqa X13, (STACK_TMP)(%rsp);
> > >> +        PBROADCASTD((14 * 4)(INPUT), X15);
> > >> +        PLUS(X14, X15);
> > >> +        movdqa (STACK_TMP1)(%rsp), X15;
> > >> +        movdqa X14, (STACK_TMP1)(%rsp);
> > >> +        PBROADCASTD((15 * 4)(INPUT), X13);
> > >> +        PLUS(X15, X13);
> > >> +        movdqa X15, (STACK_TMP2)(%rsp);
> > >> +
> > >> +        /* Update counter */
> > >> +        addq $4, (12 * 4)(INPUT);
> > >> +
> > >> +        TRANSPOSE_4x4(X0, X1, X2, X3, X13, X14, X15);
> > >> +        movdqu X0, (64 * 0 + 16 * 0)(DST)
> > >> +        movdqu X1, (64 * 1 + 16 * 0)(DST)
> > >> +        movdqu X2, (64 * 2 + 16 * 0)(DST)
> > >> +        movdqu X3, (64 * 3 + 16 * 0)(DST)
> > >> +        TRANSPOSE_4x4(X4, X5, X6, X7, X0, X1, X2);
> > >> +        movdqa (STACK_TMP)(%rsp), X13;
> > >> +        movdqa (STACK_TMP1)(%rsp), X14;
> > >> +        movdqa (STACK_TMP2)(%rsp), X15;
> > >> +        movdqu X4, (64 * 0 + 16 * 1)(DST)
> > >> +        movdqu X5, (64 * 1 + 16 * 1)(DST)
> > >> +        movdqu X6, (64 * 2 + 16 * 1)(DST)
> > >> +        movdqu X7, (64 * 3 + 16 * 1)(DST)
> > >> +        TRANSPOSE_4x4(X8, X9, X10, X11, X0, X1, X2);
> > >> +        movdqu X8, (64 * 0 + 16 * 2)(DST)
> > >> +        movdqu X9, (64 * 1 + 16 * 2)(DST)
> > >> +        movdqu X10, (64 * 2 + 16 * 2)(DST)
> > >> +        movdqu X11, (64 * 3 + 16 * 2)(DST)
> > >> +        TRANSPOSE_4x4(X12, X13, X14, X15, X0, X1, X2);
> > >> +        movdqu X12, (64 * 0 + 16 * 3)(DST)
> > >> +        movdqu X13, (64 * 1 + 16 * 3)(DST)
> > >> +        movdqu X14, (64 * 2 + 16 * 3)(DST)
> > >> +        movdqu X15, (64 * 3 + 16 * 3)(DST)
> > >> +
> > >> +        sub $4, NBLKS;
> > >> +        lea (4 * 64)(DST), DST;
> > >> +        lea (4 * 64)(SRC), SRC;
> > >> +        jnz L(loop4);
> > >> +
> > >> +        /* eax zeroed by round loop. */
> > >> +        leave;
> > >> +        cfi_adjust_cfa_offset(-8)
> > >> +        cfi_def_cfa_register(%rsp);
> > >> +        ret_spec_stop;
> > >> +END (__chacha20_sse2_blocks4)
> > >> diff --git a/sysdeps/x86_64/chacha20_arch.h b/sysdeps/x86_64/chacha20_arch.h
> > >> new file mode 100644
> > >> index 0000000000..5738c840a9
> > >> --- /dev/null
> > >> +++ b/sysdeps/x86_64/chacha20_arch.h
> > >> @@ -0,0 +1,38 @@
> > >> +/* Chacha20 implementation, used on arc4random.
> > >> +   Copyright (C) 2022 Free Software Foundation, Inc.
> > >> +   This file is part of the GNU C Library.
> > >> +
> > >> +   The GNU C Library is free software; you can redistribute it and/or
> > >> +   modify it under the terms of the GNU Lesser General Public
> > >> +   License as published by the Free Software Foundation; either
> > >> +   version 2.1 of the License, or (at your option) any later version.
> > >> +
> > >> +   The GNU C Library is distributed in the hope that it will be useful,
> > >> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > >> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > >> +   Lesser General Public License for more details.
> > >> +
> > >> +   You should have received a copy of the GNU Lesser General Public
> > >> +   License along with the GNU C Library; if not, see
> > >> +   . */
> > >> +
> > >> +#include
> > >> +#include
> > >> +#include
> > >> +
> > >> +unsigned int __chacha20_sse2_blocks4 (uint32_t *state, uint8_t *dst,
> > >> +                                      const uint8_t *src, size_t nblks)
> > >> +     attribute_hidden;
> > >> +
> > >> +static inline void
> > >> +chacha20_crypt (uint32_t *state, uint8_t *dst, const uint8_t *src,
> > >> +                size_t bytes)
> > >> +{
> > >> +  _Static_assert (CHACHA20_BUFSIZE % 4 == 0,
> > >> +                  "CHACHA20_BUFSIZE not multiple of 4");
> > >> +  _Static_assert (CHACHA20_BUFSIZE >= CHACHA20_BLOCK_SIZE * 4,
> > >> +                  "CHACHA20_BUFSIZE <= CHACHA20_BLOCK_SIZE * 4");
> > >> +
> > >> +  __chacha20_sse2_blocks4 (state, dst, src,
> > >> +                           CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
> > >> +}
> > >> --
> > >> 2.34.1
> > >>
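(For reference, the operation that QUARTERROUND2 and ROTATE2 vectorize above,
written as plain C.  This follows the public-domain ChaCha20 reference by
D. J. Bernstein cited in the file header; it is only an illustration, not part
of the patch.)

    #include <stdint.h>

    /* 32-bit left rotate.  SSE2 has no vector rotate instruction, so the
       ROTATE2 macro builds it from pslld/psrld on two registers and then
       combines the halves (paddb works as well as por here because the
       shifted halves have no overlapping bits).  The SSSE3 original instead
       uses pshufb for the byte-sized 16- and 8-bit rotates, which is what
       the commit message refers to as ROTATE_SHUF_2.  */
    static inline uint32_t
    rotl32 (uint32_t x, unsigned int c)
    {
      return (x << c) | (x >> (32 - c));
    }

    /* One ChaCha20 quarter round over four state words.  The assembly runs
       two quarter rounds at a time (QUARTERROUND2) and processes four
       64-byte blocks in parallel, one block per 32-bit lane of each XMM
       register, which is why PBROADCASTD replicates every input word
       across all four lanes.  */
    static inline void
    quarterround (uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
    {
      *a += *b; *d ^= *a; *d = rotl32 (*d, 16);
      *c += *d; *b ^= *c; *b = rotl32 (*b, 12);
      *a += *b; *d ^= *a; *d = rotl32 (*d, 8);
      *c += *d; *b ^= *c; *b = rotl32 (*b, 7);
    }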