public inbox for libc-alpha@sourceware.org
From: Noah Goldstein <goldstein.w.n@gmail.com>
To: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Cc: GNU C Library <libc-alpha@sourceware.org>,
	Florian Weimer <fweimer@redhat.com>
Subject: Re: [PATCH v9 5/9] x86: Add SSE2 optimized chacha20
Date: Wed, 13 Jul 2022 11:22:42 -0700	[thread overview]
Message-ID: <CAFUsyfJuT6DqS6sNX96-VrnvUxyx8z7S8mvgLMvxdxwCW873nw@mail.gmail.com> (raw)
In-Reply-To: <c9dd98ad-a4ef-a9c3-2f0e-cb35d631ef72@linaro.org>

On Wed, Jul 13, 2022 at 11:20 AM Adhemerval Zanella Netto
<adhemerval.zanella@linaro.org> wrote:
>
>
>
> On 13/07/22 15:12, Noah Goldstein wrote:
> > On Wed, Jul 13, 2022 at 10:39 AM Adhemerval Zanella via Libc-alpha
> > <libc-alpha@sourceware.org> wrote:
> >>
> >> From: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
> >>
> >> It adds a vectorized ChaCha20 implementation based on libgcrypt
> >> cipher/chacha20-amd64-ssse3.S.  It replaces ROTATE_SHUF_2 (which
> >> uses pshufb) with ROTATE2, making the implementation SSE2-only.
> >>
> >> As with the generic implementation, the last step that XORs with
> >> the input is omitted.  The final state register clearing is also
> >> omitted.
> >>
> >> On a Ryzen 9 5900X it shows the following improvements (using
> >> formatted bench-arc4random data):
> >>
> >> GENERIC                                    MB/s
> >> -----------------------------------------------
> >> arc4random [single-thread]               443.11
> >> arc4random_buf(16) [single-thread]       552.27
> >> arc4random_buf(32) [single-thread]       626.86
> >> arc4random_buf(48) [single-thread]       649.81
> >> arc4random_buf(64) [single-thread]       663.95
> >> arc4random_buf(80) [single-thread]       674.78
> >> arc4random_buf(96) [single-thread]       675.17
> >> arc4random_buf(112) [single-thread]      680.69
> >> arc4random_buf(128) [single-thread]      683.20
> >> -----------------------------------------------
> >>
> >> SSE                                        MB/s
> >> -----------------------------------------------
> >> arc4random [single-thread]               704.25
> >> arc4random_buf(16) [single-thread]      1018.17
> >> arc4random_buf(32) [single-thread]      1315.27
> >> arc4random_buf(48) [single-thread]      1449.36
> >> arc4random_buf(64) [single-thread]      1511.16
> >> arc4random_buf(80) [single-thread]      1539.48
> >> arc4random_buf(96) [single-thread]      1571.06
> >> arc4random_buf(112) [single-thread]     1596.16
> >> arc4random_buf(128) [single-thread]     1613.48
> >> -----------------------------------------------
> >>
> >> Checked on x86_64-linux-gnu.
> >> ---
> >>  LICENSES                             |   4 +-
> >>  sysdeps/x86_64/Makefile              |   6 +
> >>  sysdeps/x86_64/chacha20-amd64-sse2.S | 306 +++++++++++++++++++++++++++
> >>  sysdeps/x86_64/chacha20_arch.h       |  38 ++++
> >>  4 files changed, 352 insertions(+), 2 deletions(-)
> >>  create mode 100644 sysdeps/x86_64/chacha20-amd64-sse2.S
> >>  create mode 100644 sysdeps/x86_64/chacha20_arch.h
> >>
> >> diff --git a/LICENSES b/LICENSES
> >> index a94ea89d0d..47e9cd8e31 100644
> >> --- a/LICENSES
> >> +++ b/LICENSES
> >> @@ -390,8 +390,8 @@ Copyright 2001 by Stephen L. Moshier <moshier@na-net.ornl.gov>
> >>   License along with this library; if not, see
> >>   <https://www.gnu.org/licenses/>.  */
> >>
> >> -sysdeps/aarch64/chacha20-aarch64.S imports code from libgcrypt, with
> >> -the following notices:
> >> +sysdeps/aarch64/chacha20-aarch64.S and sysdeps/x86_64/chacha20-amd64-sse2.S
> >> +import code from libgcrypt, with the following notices:
> >>
> >>  Copyright (C) 2017-2019 Jussi Kivilinna <jussi.kivilinna@iki.fi>
> >>
> >> diff --git a/sysdeps/x86_64/Makefile b/sysdeps/x86_64/Makefile
> >> index e597a4855f..a2e5af3ca9 100644
> >> --- a/sysdeps/x86_64/Makefile
> >> +++ b/sysdeps/x86_64/Makefile
> >> @@ -5,6 +5,12 @@ ifeq ($(subdir),csu)
> >>  gen-as-const-headers += link-defines.sym
> >>  endif
> >>
> >> +ifeq ($(subdir),stdlib)
> >> +sysdep_routines += \
> >> +  chacha20-amd64-sse2 \
> >> +  # sysdep_routines
> >> +endif
> >> +
> >>  ifeq ($(subdir),gmon)
> >>  sysdep_routines += _mcount
> >>  # We cannot compile _mcount.S with -pg because that would create
> >> diff --git a/sysdeps/x86_64/chacha20-amd64-sse2.S b/sysdeps/x86_64/chacha20-amd64-sse2.S
> >> new file mode 100644
> >> index 0000000000..7b30f61446
> >> --- /dev/null
> >> +++ b/sysdeps/x86_64/chacha20-amd64-sse2.S
> >> @@ -0,0 +1,306 @@
> >> +/* Optimized SSE2 implementation of ChaCha20 cipher.
> >> +   Copyright (C) 2022 Free Software Foundation, Inc.
> >> +   This file is part of the GNU C Library.
> >> +
> >> +   The GNU C Library is free software; you can redistribute it and/or
> >> +   modify it under the terms of the GNU Lesser General Public
> >> +   License as published by the Free Software Foundation; either
> >> +   version 2.1 of the License, or (at your option) any later version.
> >> +
> >> +   The GNU C Library is distributed in the hope that it will be useful,
> >> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> >> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> >> +   Lesser General Public License for more details.
> >> +
> >> +   You should have received a copy of the GNU Lesser General Public
> >> +   License along with the GNU C Library; if not, see
> >> +   <https://www.gnu.org/licenses/>.  */
> >> +
> >> +/* chacha20-amd64-ssse3.S  -  SSSE3 implementation of ChaCha20 cipher
> >
> > Should this be sse2?
>
> This is the original header from libgcrypt, my understanding it would be
> better to keep as is.
>
> >> +
> >> +   Copyright (C) 2017-2019 Jussi Kivilinna <jussi.kivilinna@iki.fi>
> >> +
> >> +   This file is part of Libgcrypt.
> >> +
> >> +   Libgcrypt is free software; you can redistribute it and/or modify
> >> +   it under the terms of the GNU Lesser General Public License as
> >> +   published by the Free Software Foundation; either version 2.1 of
> >> +   the License, or (at your option) any later version.
> >> +
> >> +   Libgcrypt is distributed in the hope that it will be useful,
> >> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> >> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> >> +   GNU Lesser General Public License for more details.
> >> +
> >> +   You should have received a copy of the GNU Lesser General Public
> >> +   License along with this program; if not, see <http://www.gnu.org/licenses/>.
> >> +*/
> >> +
> >> +/* Based on D. J. Bernstein reference implementation at
> >> +   http://cr.yp.to/chacha.html:
> >> +
> >> +   chacha-regs.c version 20080118
> >> +   D. J. Bernstein
> >> +   Public domain.  */
> >> +
> >
> > If you have time to make the ifunc changes to avx2, can you add:
> >
> > #include <isa-level.h>
> > #if MINIMUM_X86_ISA_LEVEL <= 1
> > <contents of file>
> > #endif
> >
> > as a build guard?
>
> Alright, I will add it.

There will be a link error if you don't make the ifunc `X86_ISA_...`
macro changes in ifunc-avx2.


>
> >
> >
> >> +#include <sysdep.h>
> >> +
> >> +#ifdef PIC
> >> +#  define rRIP (%rip)
> >> +#else
> >> +#  define rRIP
> >> +#endif
> >> +
> >> +/* 'ret' instruction replacement for straight-line speculation mitigation */
> >> +#define ret_spec_stop \
> >> +        ret; int3;
> >> +
> >> +/* register macros */
> >> +#define INPUT %rdi
> >> +#define DST   %rsi
> >> +#define SRC   %rdx
> >> +#define NBLKS %rcx
> >> +#define ROUND %eax
> >> +
> >> +/* stack structure */
> >> +#define STACK_VEC_X12 (16)
> >> +#define STACK_VEC_X13 (16 + STACK_VEC_X12)
> >> +#define STACK_TMP     (16 + STACK_VEC_X13)
> >> +#define STACK_TMP1    (16 + STACK_TMP)
> >> +#define STACK_TMP2    (16 + STACK_TMP1)
> >> +
> >> +#define STACK_MAX     (16 + STACK_TMP2)
> >> +
> >> +/* vector registers */
> >> +#define X0 %xmm0
> >> +#define X1 %xmm1
> >> +#define X2 %xmm2
> >> +#define X3 %xmm3
> >> +#define X4 %xmm4
> >> +#define X5 %xmm5
> >> +#define X6 %xmm6
> >> +#define X7 %xmm7
> >> +#define X8 %xmm8
> >> +#define X9 %xmm9
> >> +#define X10 %xmm10
> >> +#define X11 %xmm11
> >> +#define X12 %xmm12
> >> +#define X13 %xmm13
> >> +#define X14 %xmm14
> >> +#define X15 %xmm15
> >> +
> >> +/**********************************************************************
> >> +  helper macros
> >> + **********************************************************************/
> >> +
> >> +/* 4x4 32-bit integer matrix transpose */
> >> +#define TRANSPOSE_4x4(x0, x1, x2, x3, t1, t2, t3) \
> >> +       movdqa    x0, t2; \
> >> +       punpckhdq x1, t2; \
> >> +       punpckldq x1, x0; \
> >> +       \
> >> +       movdqa    x2, t1; \
> >> +       punpckldq x3, t1; \
> >> +       punpckhdq x3, x2; \
> >> +       \
> >> +       movdqa     x0, x1; \
> >> +       punpckhqdq t1, x1; \
> >> +       punpcklqdq t1, x0; \
> >> +       \
> >> +       movdqa     t2, x3; \
> >> +       punpckhqdq x2, x3; \
> >> +       punpcklqdq x2, t2; \
> >> +       movdqa     t2, x2;
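
The punpckldq/punpckhdq/punpcklqdq/punpckhqdq sequence above is a
standard 4x4 dword transpose: interleave 32-bit lanes first, then
64-bit halves.  A sketch of the same steps with SSE2 intrinsics
(the function name is mine, not from the patch):

```c
#include <emmintrin.h>

/* Intrinsics mirror of TRANSPOSE_4x4: x0..x3 hold the four rows of a
   4x4 matrix of 32-bit integers; on return they hold the columns.  */
static void
transpose_4x4 (__m128i *x0, __m128i *x1, __m128i *x2, __m128i *x3)
{
  __m128i t2 = _mm_unpackhi_epi32 (*x0, *x1);  /* movdqa/punpckhdq  */
  __m128i a  = _mm_unpacklo_epi32 (*x0, *x1);  /* punpckldq         */
  __m128i t1 = _mm_unpacklo_epi32 (*x2, *x3);
  __m128i b  = _mm_unpackhi_epi32 (*x2, *x3);
  *x0 = _mm_unpacklo_epi64 (a, t1);            /* punpcklqdq        */
  *x1 = _mm_unpackhi_epi64 (a, t1);            /* punpckhqdq        */
  *x2 = _mm_unpacklo_epi64 (t2, b);
  *x3 = _mm_unpackhi_epi64 (t2, b);
}
```

(Note the AT&T operand order in the assembly: `punpckhdq x1, t2` is
t2 = unpackhi(t2, x1) in the Intel/intrinsics reading.)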
> >> +
> >> +/* fill xmm register with 32-bit value from memory */
> >> +#define PBROADCASTD(mem32, xreg) \
> >> +       movd mem32, xreg; \
> >> +       pshufd $0, xreg, xreg;
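
The movd + pshufd pair above is the SSE2 idiom for splatting one
32-bit word to all four lanes.  A small intrinsics sketch of the same
idea (function name is mine, not from the patch):

```c
#include <stdint.h>
#include <emmintrin.h>

/* SSE2 equivalent of PBROADCASTD: load one 32-bit word (movd) and
   replicate it into every lane (pshufd with immediate 0).  */
static __m128i
broadcast32 (const uint32_t *p)
{
  __m128i v = _mm_cvtsi32_si128 ((int) *p);
  return _mm_shuffle_epi32 (v, 0x00);
}
```

`_mm_set1_epi32` typically lowers to the same movd/pshufd sequence on
SSE2-only targets.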
> >> +
> >> +/**********************************************************************
> >> +  4-way chacha20
> >> + **********************************************************************/
> >> +
> >> +#define ROTATE2(v1,v2,c,tmp1,tmp2)     \
> >> +       movdqa v1, tmp1;                \
> >> +       movdqa v2, tmp2;                \
> >> +       psrld $(32 - (c)), v1;          \
> >> +       pslld $(c), tmp1;               \
> >> +       paddb tmp1, v1;                 \
> >> +       psrld $(32 - (c)), v2;          \
> >> +       pslld $(c), tmp2;               \
> >> +       paddb tmp2, v2;
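
ROTATE2 above builds each 32-bit rotate from two shifts, and notably
combines the halves with paddb rather than por.  That is safe because
the two shifted halves never have a set bit in common, so byte-wise
addition cannot carry and equals the bitwise OR (the add is presumably
chosen for execution-port balance).  A scalar C sketch of the
equivalence (function names are mine):

```c
#include <stdint.h>

/* 32-bit rotate left from two shifts, combined with OR.  */
static inline uint32_t
rotl32_or (uint32_t x, unsigned c)
{
  return (x << c) | (x >> (32 - c));
}

/* Same rotate, combined with addition as ROTATE2 does with paddb;
   the halves are bit-disjoint, so this never carries.  */
static inline uint32_t
rotl32_add (uint32_t x, unsigned c)
{
  return (x << c) + (x >> (32 - c));
}
```

ChaCha20 only ever rotates by 16, 12, 8, and 7.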
> >> +
> >> +#define XOR(ds,s) \
> >> +       pxor s, ds;
> >> +
> >> +#define PLUS(ds,s) \
> >> +       paddd s, ds;
> >> +
> >> +#define QUARTERROUND2(a1,b1,c1,d1,a2,b2,c2,d2,ign,tmp1,tmp2)   \
> >> +       PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2);       \
> >> +           ROTATE2(d1, d2, 16, tmp1, tmp2);                    \
> >> +       PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2);       \
> >> +           ROTATE2(b1, b2, 12, tmp1, tmp2);                    \
> >> +       PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2);       \
> >> +           ROTATE2(d1, d2, 8, tmp1, tmp2);                     \
> >> +       PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2);       \
> >> +           ROTATE2(b1, b2,  7, tmp1, tmp2);
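
For reference, this is the scalar operation QUARTERROUND2 performs:
each invocation runs two ChaCha20 quarter rounds, and each vector
register carries the same state word from four blocks.  A plain C
reference (names are mine), checkable against the RFC 8439 quarter
round test vector:

```c
#include <stdint.h>

static inline uint32_t
rotl32 (uint32_t x, unsigned c)
{
  return (x << c) | (x >> (32 - c));
}

/* One ChaCha20 quarter round on four state words.  */
static void
quarterround (uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
{
  *a += *b; *d ^= *a; *d = rotl32 (*d, 16);
  *c += *d; *b ^= *c; *b = rotl32 (*b, 12);
  *a += *b; *d ^= *a; *d = rotl32 (*d, 8);
  *c += *d; *b ^= *c; *b = rotl32 (*b, 7);
}
```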
> >> +
> >> +       .section .text.sse2,"ax",@progbits
> >> +
> >> +chacha20_data:
> >> +       .align 16
> >> +L(counter1):
> >> +       .long 1,0,0,0
> >> +L(inc_counter):
> >> +       .long 0,1,2,3
> >> +L(unsigned_cmp):
> >> +       .long 0x80000000,0x80000000,0x80000000,0x80000000
> >> +
> >> +       .hidden __chacha20_sse2_blocks4
> >> +ENTRY (__chacha20_sse2_blocks4)
> >> +       /* input:
> >> +        *      %rdi: input
> >> +        *      %rsi: dst
> >> +        *      %rdx: src
> >> +        *      %rcx: nblks (multiple of 4)
> >> +        */
> >> +
> >> +       pushq %rbp;
> >> +       cfi_adjust_cfa_offset(8);
> >> +       cfi_rel_offset(rbp, 0)
> >> +       movq %rsp, %rbp;
> >> +       cfi_def_cfa_register(%rbp);
> >> +
> >> +       subq $STACK_MAX, %rsp;
> >> +       andq $~15, %rsp;
> >> +
> >> +L(loop4):
> >> +       mov $20, ROUND;
> >> +
> >> +       /* Construct counter vectors X12 and X13 */
> >> +       movdqa L(inc_counter) rRIP, X0;
> >> +       movdqa L(unsigned_cmp) rRIP, X2;
> >> +       PBROADCASTD((12 * 4)(INPUT), X12);
> >> +       PBROADCASTD((13 * 4)(INPUT), X13);
> >> +       paddd X0, X12;
> >> +       movdqa X12, X1;
> >> +       pxor X2, X0;
> >> +       pxor X2, X1;
> >> +       pcmpgtd X1, X0;
> >> +       psubd X0, X13;
> >> +       movdqa X12, (STACK_VEC_X12)(%rsp);
> >> +       movdqa X13, (STACK_VEC_X13)(%rsp);
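
The block above adds the per-lane increments 0..3 to the low counter
word (X12) and must propagate a carry into the high word (X13) when
that 32-bit addition wraps.  The wrap test is "sum < increment"
unsigned, but SSE2 only has the signed pcmpgtd, so both operands are
biased by 0x80000000 first; the resulting all-ones mask (-1) is then
subtracted from X13, which adds the carry.  A scalar sketch of one
lane (function name is mine):

```c
#include <stdint.h>

/* Add INC to the low counter word LO, writing the wrapped sum to
   *SUM and propagating any carry into *HI, using the biased signed
   compare that the SSE2 code uses in place of an unsigned compare.  */
static void
counter_add (uint32_t *hi, uint32_t lo, uint32_t inc, uint32_t *sum)
{
  *sum = lo + inc;
  /* Unsigned *sum < inc means the addition wrapped; pcmpgtd is
     signed, so flip the sign bit of both sides first.  */
  int32_t mask = ((int32_t) (inc ^ 0x80000000u)
                  > (int32_t) (*sum ^ 0x80000000u)) ? -1 : 0;
  *hi -= (uint32_t) mask;   /* psubd of the -1 mask, i.e. +1 on carry.  */
}
```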
> >> +
> >> +       /* Load vectors */
> >> +       PBROADCASTD((0 * 4)(INPUT), X0);
> >> +       PBROADCASTD((1 * 4)(INPUT), X1);
> >> +       PBROADCASTD((2 * 4)(INPUT), X2);
> >> +       PBROADCASTD((3 * 4)(INPUT), X3);
> >> +       PBROADCASTD((4 * 4)(INPUT), X4);
> >> +       PBROADCASTD((5 * 4)(INPUT), X5);
> >> +       PBROADCASTD((6 * 4)(INPUT), X6);
> >> +       PBROADCASTD((7 * 4)(INPUT), X7);
> >> +       PBROADCASTD((8 * 4)(INPUT), X8);
> >> +       PBROADCASTD((9 * 4)(INPUT), X9);
> >> +       PBROADCASTD((10 * 4)(INPUT), X10);
> >> +       PBROADCASTD((11 * 4)(INPUT), X11);
> >> +       PBROADCASTD((14 * 4)(INPUT), X14);
> >> +       PBROADCASTD((15 * 4)(INPUT), X15);
> >> +       movdqa X11, (STACK_TMP)(%rsp);
> >> +       movdqa X15, (STACK_TMP1)(%rsp);
> >> +
> >> +L(round2_4):
> >> +       QUARTERROUND2(X0, X4,  X8, X12,   X1, X5,  X9, X13, tmp:=,X11,X15)
> >> +       movdqa (STACK_TMP)(%rsp), X11;
> >> +       movdqa (STACK_TMP1)(%rsp), X15;
> >> +       movdqa X8, (STACK_TMP)(%rsp);
> >> +       movdqa X9, (STACK_TMP1)(%rsp);
> >> +       QUARTERROUND2(X2, X6, X10, X14,   X3, X7, X11, X15, tmp:=,X8,X9)
> >> +       QUARTERROUND2(X0, X5, X10, X15,   X1, X6, X11, X12, tmp:=,X8,X9)
> >> +       movdqa (STACK_TMP)(%rsp), X8;
> >> +       movdqa (STACK_TMP1)(%rsp), X9;
> >> +       movdqa X11, (STACK_TMP)(%rsp);
> >> +       movdqa X15, (STACK_TMP1)(%rsp);
> >> +       QUARTERROUND2(X2, X7,  X8, X13,   X3, X4,  X9, X14, tmp:=,X11,X15)
> >> +       sub $2, ROUND;
> >> +       jnz L(round2_4);
> >> +
> >> +       /* tmp := X15 */
> >> +       movdqa (STACK_TMP)(%rsp), X11;
> >> +       PBROADCASTD((0 * 4)(INPUT), X15);
> >> +       PLUS(X0, X15);
> >> +       PBROADCASTD((1 * 4)(INPUT), X15);
> >> +       PLUS(X1, X15);
> >> +       PBROADCASTD((2 * 4)(INPUT), X15);
> >> +       PLUS(X2, X15);
> >> +       PBROADCASTD((3 * 4)(INPUT), X15);
> >> +       PLUS(X3, X15);
> >> +       PBROADCASTD((4 * 4)(INPUT), X15);
> >> +       PLUS(X4, X15);
> >> +       PBROADCASTD((5 * 4)(INPUT), X15);
> >> +       PLUS(X5, X15);
> >> +       PBROADCASTD((6 * 4)(INPUT), X15);
> >> +       PLUS(X6, X15);
> >> +       PBROADCASTD((7 * 4)(INPUT), X15);
> >> +       PLUS(X7, X15);
> >> +       PBROADCASTD((8 * 4)(INPUT), X15);
> >> +       PLUS(X8, X15);
> >> +       PBROADCASTD((9 * 4)(INPUT), X15);
> >> +       PLUS(X9, X15);
> >> +       PBROADCASTD((10 * 4)(INPUT), X15);
> >> +       PLUS(X10, X15);
> >> +       PBROADCASTD((11 * 4)(INPUT), X15);
> >> +       PLUS(X11, X15);
> >> +       movdqa (STACK_VEC_X12)(%rsp), X15;
> >> +       PLUS(X12, X15);
> >> +       movdqa (STACK_VEC_X13)(%rsp), X15;
> >> +       PLUS(X13, X15);
> >> +       movdqa X13, (STACK_TMP)(%rsp);
> >> +       PBROADCASTD((14 * 4)(INPUT), X15);
> >> +       PLUS(X14, X15);
> >> +       movdqa (STACK_TMP1)(%rsp), X15;
> >> +       movdqa X14, (STACK_TMP1)(%rsp);
> >> +       PBROADCASTD((15 * 4)(INPUT), X13);
> >> +       PLUS(X15, X13);
> >> +       movdqa X15, (STACK_TMP2)(%rsp);
> >> +
> >> +       /* Update counter */
> >> +       addq $4, (12 * 4)(INPUT);
> >> +
> >> +       TRANSPOSE_4x4(X0, X1, X2, X3, X13, X14, X15);
> >> +       movdqu X0, (64 * 0 + 16 * 0)(DST)
> >> +       movdqu X1, (64 * 1 + 16 * 0)(DST)
> >> +       movdqu X2, (64 * 2 + 16 * 0)(DST)
> >> +       movdqu X3, (64 * 3 + 16 * 0)(DST)
> >> +       TRANSPOSE_4x4(X4, X5, X6, X7, X0, X1, X2);
> >> +       movdqa (STACK_TMP)(%rsp), X13;
> >> +       movdqa (STACK_TMP1)(%rsp), X14;
> >> +       movdqa (STACK_TMP2)(%rsp), X15;
> >> +       movdqu X4, (64 * 0 + 16 * 1)(DST)
> >> +       movdqu X5, (64 * 1 + 16 * 1)(DST)
> >> +       movdqu X6, (64 * 2 + 16 * 1)(DST)
> >> +       movdqu X7, (64 * 3 + 16 * 1)(DST)
> >> +       TRANSPOSE_4x4(X8, X9, X10, X11, X0, X1, X2);
> >> +       movdqu X8,  (64 * 0 + 16 * 2)(DST)
> >> +       movdqu X9,  (64 * 1 + 16 * 2)(DST)
> >> +       movdqu X10, (64 * 2 + 16 * 2)(DST)
> >> +       movdqu X11, (64 * 3 + 16 * 2)(DST)
> >> +       TRANSPOSE_4x4(X12, X13, X14, X15, X0, X1, X2);
> >> +       movdqu X12, (64 * 0 + 16 * 3)(DST)
> >> +       movdqu X13, (64 * 1 + 16 * 3)(DST)
> >> +       movdqu X14, (64 * 2 + 16 * 3)(DST)
> >> +       movdqu X15, (64 * 3 + 16 * 3)(DST)
> >> +
> >> +       sub $4, NBLKS;
> >> +       lea (4 * 64)(DST), DST;
> >> +       lea (4 * 64)(SRC), SRC;
> >> +       jnz L(loop4);
> >> +
> >> +       /* eax zeroed by round loop. */
> >> +       leave;
> >> +       cfi_adjust_cfa_offset(-8)
> >> +       cfi_def_cfa_register(%rsp);
> >> +       ret_spec_stop;
> >> +END (__chacha20_sse2_blocks4)
> >> diff --git a/sysdeps/x86_64/chacha20_arch.h b/sysdeps/x86_64/chacha20_arch.h
> >> new file mode 100644
> >> index 0000000000..5738c840a9
> >> --- /dev/null
> >> +++ b/sysdeps/x86_64/chacha20_arch.h
> >> @@ -0,0 +1,38 @@
> >> +/* Chacha20 implementation, used on arc4random.
> >> +   Copyright (C) 2022 Free Software Foundation, Inc.
> >> +   This file is part of the GNU C Library.
> >> +
> >> +   The GNU C Library is free software; you can redistribute it and/or
> >> +   modify it under the terms of the GNU Lesser General Public
> >> +   License as published by the Free Software Foundation; either
> >> +   version 2.1 of the License, or (at your option) any later version.
> >> +
> >> +   The GNU C Library is distributed in the hope that it will be useful,
> >> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> >> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> >> +   Lesser General Public License for more details.
> >> +
> >> +   You should have received a copy of the GNU Lesser General Public
> >> +   License along with the GNU C Library; if not, see
> >> +   <http://www.gnu.org/licenses/>.  */
> >> +
> >> +#include <ldsodefs.h>
> >> +#include <cpu-features.h>
> >> +#include <sys/param.h>
> >> +
> >> +unsigned int __chacha20_sse2_blocks4 (uint32_t *state, uint8_t *dst,
> >> +                                     const uint8_t *src, size_t nblks)
> >> +     attribute_hidden;
> >> +
> >> +static inline void
> >> +chacha20_crypt (uint32_t *state, uint8_t *dst, const uint8_t *src,
> >> +               size_t bytes)
> >> +{
> >> +  _Static_assert (CHACHA20_BUFSIZE % 4 == 0,
> >> +                 "CHACHA20_BUFSIZE not multiple of 4");
> >> +  _Static_assert (CHACHA20_BUFSIZE >= CHACHA20_BLOCK_SIZE * 4,
> >> +                 "CHACHA20_BUFSIZE <= CHACHA20_BLOCK_SIZE * 4");
> >> +
> >> +  __chacha20_sse2_blocks4 (state, dst, src,
> >> +                          CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
> >> +}
> >> --
> >> 2.34.1
> >>


Thread overview: 23+ messages
2022-07-13 17:36 [PATCH v9 0/9] Add arc4random support Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 1/9] stdlib: Add arc4random, arc4random_buf, and arc4random_uniform (BZ #4417) Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 2/9] stdlib: Add arc4random tests Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 3/9] benchtests: Add arc4random benchtest Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 4/9] aarch64: Add optimized chacha20 Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 5/9] x86: Add SSE2 " Adhemerval Zanella
2022-07-13 18:12   ` Noah Goldstein
2022-07-13 18:20     ` Adhemerval Zanella Netto
2022-07-13 18:22       ` Noah Goldstein [this message]
2022-07-13 18:27         ` Noah Goldstein
2022-07-13 18:29           ` Adhemerval Zanella Netto
2022-07-13 18:53             ` Noah Goldstein
2022-07-13 17:36 ` [PATCH v9 6/9] x86: Add AVX2 " Adhemerval Zanella
2022-07-13 18:07   ` Noah Goldstein
2022-07-13 19:31     ` Adhemerval Zanella Netto
2022-07-13 20:24       ` Noah Goldstein
2022-07-13 20:16     ` Florian Weimer
2022-07-13 20:23       ` Noah Goldstein
2022-07-13 17:36 ` [PATCH v9 7/9] powerpc64: Add " Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 8/9] s390x: " Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 9/9] manual: Add documentation for arc4random functions Adhemerval Zanella
2022-07-14 10:03   ` Mark Harris
2022-07-14 11:08     ` Adhemerval Zanella Netto
