public inbox for libc-alpha@sourceware.org
From: Noah Goldstein <goldstein.w.n@gmail.com>
To: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Cc: GNU C Library <libc-alpha@sourceware.org>
Subject: Re: [PATCH v2 6/8] x86: Add AVX2 optimized chacha20
Date: Mon, 18 Apr 2022 10:58:30 -0500	[thread overview]
Message-ID: <CAFUsyfKk1ppCFsfV_uNE8EtoPQ+zjqVhpafJ4sx7ni3SVdSiag@mail.gmail.com> (raw)
In-Reply-To: <20220418120203.3185943-7-adhemerval.zanella@linaro.org>

On Mon, Apr 18, 2022 at 7:07 AM Adhemerval Zanella via Libc-alpha
<libc-alpha@sourceware.org> wrote:
>
> This adds a vectorized ChaCha20 implementation based on libgcrypt's
> cipher/chacha20-amd64-avx2.S.  It is used only if AVX2 is supported
> and enabled by the architecture.
>
> As with the generic implementation, the last step that XORs the
> keystream with the input is omitted.
>
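(For context, here is a rough scalar sketch of the tail of a generic
ChaCha20 block, just to show what "the last step ... is omitted" means.
This is illustrative C of my own, not code from the patch; x is the
working state after the 20 rounds.)

    #include <stdint.h>
    #include <string.h>

    /* Sketch: finalize one block as keystream only.  A normal cipher
       would also compute dst[i] ^= src[i] here; arc4random consumes the
       keystream directly, so the XOR with src is dropped.  */
    static void
    store_keystream (const uint32_t x[16], const uint32_t state[16],
                     uint8_t *dst)
    {
      for (int i = 0; i < 16; i++)
        {
          uint32_t v = x[i] + state[i];  /* add the input state back in  */
          memcpy (dst + 4 * i, &v, 4);   /* store keystream, no XOR with src  */
        }
    }
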
> On a Ryzen 9 5900X it shows the following improvements (using
> formatted bench-arc4random data):
>
> SSE2:
> Function                                 MB/s
> --------------------------------------------------
> arc4random [single-thread]               601.86
> arc4random_buf(16) [single-thread]       880.53
> arc4random_buf(32) [single-thread]       1182.72
> arc4random_buf(48) [single-thread]       1307.22
> arc4random_buf(64) [single-thread]       1381.01
> arc4random_buf(80) [single-thread]       1399.29
> arc4random_buf(96) [single-thread]       1445.00
> arc4random_buf(112) [single-thread]      1465.05
> arc4random_buf(128) [single-thread]      1497.05
> --------------------------------------------------
>
> AVX2:
> Function                                 MB/s
> --------------------------------------------------
> arc4random [single-thread]               744.84
> arc4random_buf(16) [single-thread]       1298.39
> arc4random_buf(32) [single-thread]       1969.08
> arc4random_buf(48) [single-thread]       2327.11
> arc4random_buf(64) [single-thread]       2549.97
> arc4random_buf(80) [single-thread]       2631.39
> arc4random_buf(96) [single-thread]       2802.66
> arc4random_buf(112) [single-thread]      2897.42
> arc4random_buf(128) [single-thread]      2976.55
> --------------------------------------------------
>
> Checked on x86_64-linux-gnu.
> ---
>  LICENSES                       |   5 +-
>  sysdeps/x86_64/Makefile        |   1 +
>  sysdeps/x86_64/chacha20-avx2.S | 313 +++++++++++++++++++++++++++++++++
>  sysdeps/x86_64/chacha20_arch.h |  18 +-
>  4 files changed, 331 insertions(+), 6 deletions(-)
>  create mode 100644 sysdeps/x86_64/chacha20-avx2.S
>
> diff --git a/LICENSES b/LICENSES
> index 415991e208..05a5c07fcf 100644
> --- a/LICENSES
> +++ b/LICENSES
> @@ -390,8 +390,9 @@ Copyright 2001 by Stephen L. Moshier <moshier@na-net.ornl.gov>
>   License along with this library; if not, see
>   <https://www.gnu.org/licenses/>.  */
>
> -sysdeps/aarch64/chacha20.S and sysdeps/x86_64/chacha20-sse2.S
> -import code from libgcrypt, with the following notices:
> +sysdeps/aarch64/chacha20.S, sysdeps/x86_64/chacha20-sse2.S, and
> +sysdeps/x86_64/chacha20-avx2.S import code from libgcrypt, with the
> +following notices:
>
>  Copyright (C) 2017-2019 Jussi Kivilinna <jussi.kivilinna@iki.fi>
>
> diff --git a/sysdeps/x86_64/Makefile b/sysdeps/x86_64/Makefile
> index c8fbc30857..0fa8897404 100644
> --- a/sysdeps/x86_64/Makefile
> +++ b/sysdeps/x86_64/Makefile
> @@ -8,6 +8,7 @@ endif
>  ifeq ($(subdir),stdlib)
>  sysdep_routines += \
>    chacha20-sse2 \
> +  chacha20-avx2 \
>    # sysdep_routines
>  endif
>
> diff --git a/sysdeps/x86_64/chacha20-avx2.S b/sysdeps/x86_64/chacha20-avx2.S
> new file mode 100644
> index 0000000000..fb76865890
> --- /dev/null
> +++ b/sysdeps/x86_64/chacha20-avx2.S
> @@ -0,0 +1,313 @@
> +/* Optimized AVX2 implementation of ChaCha20 cipher.
> +   Copyright (C) 2022 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <https://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +
> +/* Based on D. J. Bernstein reference implementation at
> +   http://cr.yp.to/chacha.html:
> +
> +   chacha-regs.c version 20080118
> +   D. J. Bernstein
> +   Public domain.  */
> +
> +#ifdef PIC
> +#  define rRIP (%rip)
> +#else
> +#  define rRIP
> +#endif
> +
> +/* register macros */
> +#define INPUT %rdi
> +#define DST   %rsi
> +#define SRC   %rdx
> +#define NBLKS %rcx
> +#define ROUND %eax
> +
> +/* stack structure */
> +#define STACK_VEC_X12 (32)
> +#define STACK_VEC_X13 (32 + STACK_VEC_X12)
> +#define STACK_TMP     (32 + STACK_VEC_X13)
> +#define STACK_TMP1    (32 + STACK_TMP)
> +
> +#define STACK_MAX     (32 + STACK_TMP1)
> +
> +/* vector registers */
> +#define X0 %ymm0
> +#define X1 %ymm1
> +#define X2 %ymm2
> +#define X3 %ymm3
> +#define X4 %ymm4
> +#define X5 %ymm5
> +#define X6 %ymm6
> +#define X7 %ymm7
> +#define X8 %ymm8
> +#define X9 %ymm9
> +#define X10 %ymm10
> +#define X11 %ymm11
> +#define X12 %ymm12
> +#define X13 %ymm13
> +#define X14 %ymm14
> +#define X15 %ymm15
> +
> +#define X0h %xmm0
> +#define X1h %xmm1
> +#define X2h %xmm2
> +#define X3h %xmm3
> +#define X4h %xmm4
> +#define X5h %xmm5
> +#define X6h %xmm6
> +#define X7h %xmm7
> +#define X8h %xmm8
> +#define X9h %xmm9
> +#define X10h %xmm10
> +#define X11h %xmm11
> +#define X12h %xmm12
> +#define X13h %xmm13
> +#define X14h %xmm14
> +#define X15h %xmm15
> +
> +/**********************************************************************
> +  helper macros
> + **********************************************************************/
> +
> +/* 4x4 32-bit integer matrix transpose */
> +#define transpose_4x4(x0,x1,x2,x3,t1,t2) \
> +       vpunpckhdq x1, x0, t2; \
> +       vpunpckldq x1, x0, x0; \
> +       \
> +       vpunpckldq x3, x2, t1; \
> +       vpunpckhdq x3, x2, x2; \
> +       \
> +       vpunpckhqdq t1, x0, x1; \
> +       vpunpcklqdq t1, x0, x0; \
> +       \
> +       vpunpckhqdq x2, t2, x3; \
> +       vpunpcklqdq x2, t2, x2;
> +
> +/* 2x2 128-bit matrix transpose */
> +#define transpose_16byte_2x2(x0,x1,t1) \
> +       vmovdqa    x0, t1; \
> +       vperm2i128 $0x20, x1, x0, x0; \
> +       vperm2i128 $0x31, x1, t1, x1;
> +
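(Side note, mostly for other readers: each ymm register in this kernel
holds the same state word for 8 different blocks, so after the rounds
the results have to be transposed before they can be stored as
contiguous 64-byte blocks.  A scalar sketch of the net effect of
transpose_4x4 -- not what the vpunpck sequence literally does lane by
lane:)

    /* Illustrative only: swap rows and columns of a 4x4 word matrix.  */
    static void
    transpose4x4 (uint32_t m[4][4])
    {
      for (int i = 0; i < 4; i++)
        for (int j = i + 1; j < 4; j++)
          {
            uint32_t t = m[i][j];
            m[i][j] = m[j][i];
            m[j][i] = t;
          }
    }
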
> +/**********************************************************************
> +  8-way chacha20
> + **********************************************************************/
> +
> +#define ROTATE2(v1,v2,c,tmp)   \
> +       vpsrld $(32 - (c)), v1, tmp;    \
> +       vpslld $(c), v1, v1;            \
> +       vpaddb tmp, v1, v1;             \
> +       vpsrld $(32 - (c)), v2, tmp;    \
> +       vpslld $(c), v2, v2;            \
> +       vpaddb tmp, v2, v2;
> +
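(Note for other readers: after the vpsrld/vpslld pair the two partial
results have no bits in common within a 32-bit lane, so merging them
with vpaddb is equivalent to an OR.  A scalar sketch of what ROTATE2
computes per lane:)

    /* Illustrative rotate-left; valid for 0 < c < 32.  */
    static inline uint32_t
    rotl32 (uint32_t v, int c)
    {
      return (v << c) | (v >> (32 - c));
    }
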
> +#define ROTATE_SHUF_2(v1,v2,shuf)      \
> +       vpshufb shuf, v1, v1;           \
> +       vpshufb shuf, v2, v2;
> +
> +#define XOR(ds,s) \
> +       vpxor s, ds, ds;
> +
> +#define PLUS(ds,s) \
> +       vpaddd s, ds, ds;
> +
> +#define QUARTERROUND2(a1,b1,c1,d1,a2,b2,c2,d2,ign,tmp1,\
> +                     interleave_op1,interleave_op2,\
> +                     interleave_op3,interleave_op4)            \
> +       vbroadcasti128 .Lshuf_rol16 rRIP, tmp1;                 \
> +               interleave_op1;                                 \
> +       PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2);       \
> +           ROTATE_SHUF_2(d1, d2, tmp1);                        \
> +               interleave_op2;                                 \
> +       PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2);       \
> +           ROTATE2(b1, b2, 12, tmp1);                          \
> +       vbroadcasti128 .Lshuf_rol8 rRIP, tmp1;                  \
> +               interleave_op3;                                 \
> +       PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2);       \
> +           ROTATE_SHUF_2(d1, d2, tmp1);                        \
> +               interleave_op4;                                 \
> +       PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2);       \
> +           ROTATE2(b1, b2,  7, tmp1);
> +
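(For reference, the scalar ChaCha20 quarterround that QUARTERROUND2
performs twice in parallel -- the 16- and 8-bit rotations are the ones
done with vpshufb above.  Sketch only, reusing rotl32 from the note
above:)

    #define QUARTERROUND(a, b, c, d)              \
      do {                                        \
        a += b; d ^= a; d = rotl32 (d, 16);       \
        c += d; b ^= c; b = rotl32 (b, 12);       \
        a += b; d ^= a; d = rotl32 (d, 8);        \
        c += d; b ^= c; b = rotl32 (b, 7);        \
      } while (0)
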
> +       .section .text.avx2, "ax", @progbits
> +       .align 32
> +chacha20_data:
> +L(shuf_rol16):
> +       .byte 2,3,0,1,6,7,4,5,10,11,8,9,14,15,12,13
> +L(shuf_rol8):
> +       .byte 3,0,1,2,7,4,5,6,11,8,9,10,15,12,13,14
> +L(inc_counter):
> +       .byte 0,1,2,3,4,5,6,7
> +L(unsigned_cmp):
> +       .long 0x80000000
> +
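(The two .byte patterns above are vpshufb masks: within each
little-endian 32-bit lane they permute bytes (b0,b1,b2,b3) to
(b2,b3,b0,b1) and (b3,b0,b1,b2), i.e. rotate-left by 16 and by 8.  A
sketch of the rol8 case, same headers as the earlier snippet:)

    /* Illustrative check of L(shuf_rol8) on a little-endian host:
       result byte i is source byte idx[i].  */
    static uint32_t
    shuffle_rol8 (uint32_t v)
    {
      static const unsigned char idx[4] = { 3, 0, 1, 2 };
      unsigned char in[4], out[4];
      uint32_t r;
      memcpy (in, &v, 4);
      for (int i = 0; i < 4; i++)
        out[i] = in[idx[i]];
      memcpy (&r, out, 4);
      return r;                      /* equals (v << 8) | (v >> 24)  */
    }
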
> +       .hidden __chacha20_avx2_blocks8
> +ENTRY (__chacha20_avx2_blocks8)
> +       /* input:
> +        *      %rdi: input
> +        *      %rsi: dst
> +        *      %rdx: src
> +        *      %rcx: nblks (multiple of 8)
> +        */
> +       vzeroupper;
> +
> +       pushq %rbp;
> +       cfi_adjust_cfa_offset(8);
> +       cfi_rel_offset(rbp, 0)
> +       movq %rsp, %rbp;
> +       cfi_def_cfa_register(rbp);
> +
> +       subq $STACK_MAX, %rsp;
> +       andq $~31, %rsp;
> +
> +L(loop8):
> +       mov $20, ROUND;
> +
> +       /* Construct counter vectors X12 and X13 */
> +       vpmovzxbd L(inc_counter) rRIP, X0;
> +       vpbroadcastd L(unsigned_cmp) rRIP, X2;
> +       vpbroadcastd (12 * 4)(INPUT), X12;
> +       vpbroadcastd (13 * 4)(INPUT), X13;
> +       vpaddd X0, X12, X12;
> +       vpxor X2, X0, X0;
> +       vpxor X2, X12, X1;
> +       vpcmpgtd X1, X0, X0;
> +       vpsubd X0, X13, X13;
> +       vmovdqa X12, (STACK_VEC_X12)(%rsp);
> +       vmovdqa X13, (STACK_VEC_X13)(%rsp);
> +
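(What this block computes, in scalar terms: lane i of X12 gets
counter_lo + i, and lane i of X13 gets counter_hi plus the carry out of
that 32-bit add.  The carry is detected with a signed vpcmpgtd after
XORing both operands with 0x80000000, which turns the comparison into
an unsigned less-than test.  Rough sketch of mine:)

    /* Illustrative only: per-lane 64-bit counter setup for 8 blocks.  */
    static void
    setup_counters (const uint32_t state[16],
                    uint32_t x12[8], uint32_t x13[8])
    {
      for (int i = 0; i < 8; i++)
        {
          uint32_t lo = state[12] + (uint32_t) i;
          x12[i] = lo;
          /* Carry into the high word when the 32-bit add wrapped.  */
          x13[i] = state[13] + (lo < (uint32_t) i);
        }
    }
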
> +       /* Load vectors */
> +       vpbroadcastd (0 * 4)(INPUT), X0;
> +       vpbroadcastd (1 * 4)(INPUT), X1;
> +       vpbroadcastd (2 * 4)(INPUT), X2;
> +       vpbroadcastd (3 * 4)(INPUT), X3;
> +       vpbroadcastd (4 * 4)(INPUT), X4;
> +       vpbroadcastd (5 * 4)(INPUT), X5;
> +       vpbroadcastd (6 * 4)(INPUT), X6;
> +       vpbroadcastd (7 * 4)(INPUT), X7;
> +       vpbroadcastd (8 * 4)(INPUT), X8;
> +       vpbroadcastd (9 * 4)(INPUT), X9;
> +       vpbroadcastd (10 * 4)(INPUT), X10;
> +       vpbroadcastd (11 * 4)(INPUT), X11;
> +       vpbroadcastd (14 * 4)(INPUT), X14;
> +       vpbroadcastd (15 * 4)(INPUT), X15;
> +       vmovdqa X15, (STACK_TMP)(%rsp);
> +
> +L(round2):
> +       QUARTERROUND2(X0, X4,  X8, X12,   X1, X5,  X9, X13, tmp:=,X15,,,,)
> +       vmovdqa (STACK_TMP)(%rsp), X15;
> +       vmovdqa X8, (STACK_TMP)(%rsp);
> +       QUARTERROUND2(X2, X6, X10, X14,   X3, X7, X11, X15, tmp:=,X8,,,,)
> +       QUARTERROUND2(X0, X5, X10, X15,   X1, X6, X11, X12, tmp:=,X8,,,,)
> +       vmovdqa (STACK_TMP)(%rsp), X8;
> +       vmovdqa X15, (STACK_TMP)(%rsp);
> +       QUARTERROUND2(X2, X7,  X8, X13,   X3, X4,  X9, X14, tmp:=,X15,,,,)
> +       sub $2, ROUND;
> +       jnz L(round2);
> +
> +       vmovdqa X8, (STACK_TMP1)(%rsp);
> +
> +       /* tmp := X15 */
> +       vpbroadcastd (0 * 4)(INPUT), X15;
> +       PLUS(X0, X15);
> +       vpbroadcastd (1 * 4)(INPUT), X15;
> +       PLUS(X1, X15);
> +       vpbroadcastd (2 * 4)(INPUT), X15;
> +       PLUS(X2, X15);
> +       vpbroadcastd (3 * 4)(INPUT), X15;
> +       PLUS(X3, X15);
> +       vpbroadcastd (4 * 4)(INPUT), X15;
> +       PLUS(X4, X15);
> +       vpbroadcastd (5 * 4)(INPUT), X15;
> +       PLUS(X5, X15);
> +       vpbroadcastd (6 * 4)(INPUT), X15;
> +       PLUS(X6, X15);
> +       vpbroadcastd (7 * 4)(INPUT), X15;
> +       PLUS(X7, X15);
> +       transpose_4x4(X0, X1, X2, X3, X8, X15);
> +       transpose_4x4(X4, X5, X6, X7, X8, X15);
> +       vmovdqa (STACK_TMP1)(%rsp), X8;
> +       transpose_16byte_2x2(X0, X4, X15);
> +       transpose_16byte_2x2(X1, X5, X15);
> +       transpose_16byte_2x2(X2, X6, X15);
> +       transpose_16byte_2x2(X3, X7, X15);
> +       vmovdqa (STACK_TMP)(%rsp), X15;
> +       vmovdqu X0, (64 * 0 + 16 * 0)(DST)
> +       vmovdqu X1, (64 * 1 + 16 * 0)(DST)
> +       vpbroadcastd (8 * 4)(INPUT), X0;
> +       PLUS(X8, X0);
> +       vpbroadcastd (9 * 4)(INPUT), X0;
> +       PLUS(X9, X0);
> +       vpbroadcastd (10 * 4)(INPUT), X0;
> +       PLUS(X10, X0);
> +       vpbroadcastd (11 * 4)(INPUT), X0;
> +       PLUS(X11, X0);
> +       vmovdqa (STACK_VEC_X12)(%rsp), X0;
> +       PLUS(X12, X0);
> +       vmovdqa (STACK_VEC_X13)(%rsp), X0;
> +       PLUS(X13, X0);
> +       vpbroadcastd (14 * 4)(INPUT), X0;
> +       PLUS(X14, X0);
> +       vpbroadcastd (15 * 4)(INPUT), X0;
> +       PLUS(X15, X0);
> +       vmovdqu X2, (64 * 2 + 16 * 0)(DST)
> +       vmovdqu X3, (64 * 3 + 16 * 0)(DST)
> +
> +       /* Update counter */
> +       addq $8, (12 * 4)(INPUT);
> +
> +       transpose_4x4(X8, X9, X10, X11, X0, X1);
> +       transpose_4x4(X12, X13, X14, X15, X0, X1);
> +       vmovdqu X4, (64 * 4 + 16 * 0)(DST)
> +       vmovdqu X5, (64 * 5 + 16 * 0)(DST)
> +       transpose_16byte_2x2(X8, X12, X0);
> +       transpose_16byte_2x2(X9, X13, X0);
> +       transpose_16byte_2x2(X10, X14, X0);
> +       transpose_16byte_2x2(X11, X15, X0);
> +       vmovdqu X6,  (64 * 6 + 16 * 0)(DST)
> +       vmovdqu X7,  (64 * 7 + 16 * 0)(DST)
> +       vmovdqu X8,  (64 * 0 + 16 * 2)(DST)
> +       vmovdqu X9,  (64 * 1 + 16 * 2)(DST)
> +       vmovdqu X10, (64 * 2 + 16 * 2)(DST)
> +       vmovdqu X11, (64 * 3 + 16 * 2)(DST)
> +       vmovdqu X12, (64 * 4 + 16 * 2)(DST)
> +       vmovdqu X13, (64 * 5 + 16 * 2)(DST)
> +       vmovdqu X14, (64 * 6 + 16 * 2)(DST)
> +       vmovdqu X15, (64 * 7 + 16 * 2)(DST)
> +
> +       sub $8, NBLKS;
> +       lea (8 * 64)(DST), DST;
> +       lea (8 * 64)(SRC), SRC;
> +       jnz L(loop8);
> +
> +       /* clear the used vector registers and stack */
> +       vpxor X0, X0, X0;
> +       vmovdqa X0, (STACK_VEC_X12)(%rsp);
> +       vmovdqa X0, (STACK_VEC_X13)(%rsp);
> +       vmovdqa X0, (STACK_TMP)(%rsp);
> +       vmovdqa X0, (STACK_TMP1)(%rsp);
> +       vzeroall;
> +
> +       /* eax zeroed by round loop. */
> +       leave;
> +       cfi_adjust_cfa_offset(-8)
> +       cfi_def_cfa_register(%rsp);
> +       ret;
> +       int3;
> +END(__chacha20_avx2_blocks8)
> diff --git a/sysdeps/x86_64/chacha20_arch.h b/sysdeps/x86_64/chacha20_arch.h
> index 6fe5f77889..5b3ec7bbc4 100644
> --- a/sysdeps/x86_64/chacha20_arch.h
> +++ b/sysdeps/x86_64/chacha20_arch.h
> @@ -23,16 +23,26 @@
>  unsigned int __chacha20_sse2_blocks4 (uint32_t *state, uint8_t *dst,
>                                       const uint8_t *src, size_t nblks)
>       attribute_hidden;
> +unsigned int __chacha20_avx2_blocks8 (uint32_t *state, uint8_t *dst,
> +                                     const uint8_t *src, size_t nblks)
> +     attribute_hidden;
>
>  static inline void
>  chacha20_crypt (struct chacha20_state *state, uint8_t *dst, const uint8_t *src,
>                 size_t bytes)
>  {
> -  _Static_assert (CHACHA20_BUFSIZE % 4 == 0,
> -                 "CHACHA20_BUFSIZE not multiple of 4");
> +  _Static_assert (CHACHA20_BUFSIZE % 4 == 0 && CHACHA20_BUFSIZE % 8 == 0,
> +                 "CHACHA20_BUFSIZE not multiple of 4 or 8");
>    _Static_assert (CHACHA20_BUFSIZE > CHACHA20_BLOCK_SIZE * 8,
>                   "CHACHA20_BUFSIZE <= CHACHA20_BLOCK_SIZE * 8");
> +  const struct cpu_features* cpu_features = __get_cpu_features ();
>
> -  __chacha20_sse2_blocks4 (state->ctx, dst, src,
> -                          CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
> +  /* AVX2 version uses vzeroupper, so disable it if RTM is enabled.  */
> +  if (CPU_FEATURE_USABLE_P (cpu_features, AVX2)
> +      && !CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER))
> +    __chacha20_avx2_blocks8 (state->ctx, dst, src,
> +                            CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
> +  else
> +    __chacha20_sse2_blocks4 (state->ctx, dst, src,
> +                            CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
>  }
> --
> 2.32.0
>

LGTM as a first draft, but the changes we should look into for SSE2 also apply here.


Thread overview: 11+ messages
2022-04-18 12:01 [PATCH v2 0/8] Add arc4random support Adhemerval Zanella
2022-04-18 12:01 ` [PATCH v2 1/8] stdlib: Add arc4random, arc4random_buf, and arc4random_uniform (BZ #4417) Adhemerval Zanella
2022-04-18 12:01 ` [PATCH v2 2/8] stdlib: Add arc4random tests Adhemerval Zanella
2022-04-18 12:01 ` [PATCH v2 3/8] benchtests: Add arc4random benchtest Adhemerval Zanella
2022-04-18 12:01 ` [PATCH v2 4/8] aarch64: Add optimized chacha20 Adhemerval Zanella
2022-04-18 12:02 ` [PATCH v2 5/8] x86: Add SSE2 " Adhemerval Zanella
2022-04-18 15:56   ` Noah Goldstein
2022-04-18 12:02 ` [PATCH v2 6/8] x86: Add AVX2 " Adhemerval Zanella
2022-04-18 15:58   ` Noah Goldstein [this message]
2022-04-18 12:02 ` [PATCH v2 7/8] powerpc64: Add " Adhemerval Zanella
2022-04-18 12:02 ` [PATCH v2 8/8] s390x: " Adhemerval Zanella
