From: Noah Goldstein <goldstein.w.n@gmail.com>
To: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Cc: GNU C Library <libc-alpha@sourceware.org>,
Florian Weimer <fweimer@redhat.com>
Subject: Re: [PATCH v9 6/9] x86: Add AVX2 optimized chacha20
Date: Wed, 13 Jul 2022 11:07:27 -0700
Message-ID: <CAFUsyfKKu7ExiKVc6XDPvhJdROJpSshSBdW1hfXhsiW9s1VbxA@mail.gmail.com>
In-Reply-To: <20220713173657.516725-7-adhemerval.zanella@linaro.org>
On Wed, Jul 13, 2022 at 10:40 AM Adhemerval Zanella via Libc-alpha
<libc-alpha@sourceware.org> wrote:
>
> From: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
>
> It adds a vectorized ChaCha20 implementation based on libgcrypt
> cipher/chacha20-amd64-avx2.S. It is used only if AVX2 is supported
> and enabled by the architecture.
>
> As with the generic implementation, the final step that XORs with the
> input is omitted. The final state register clearing is also
> omitted.
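Just to check my reading of the "final XOR with the input is omitted"
part: since arc4random only consumes keystream, the block function
serializes state + input directly instead of XORing it with src. In
scalar C terms, roughly (sketch only; `doubleround' is a stand-in for
the 20 rounds, not a real helper in this patch):

    uint32_t x[16];
    memcpy (x, input, sizeof x);            /* Working copy of the state.  */
    for (int i = 0; i < 10; i++)
      doubleround (x);                      /* 20 ChaCha20 rounds.  */
    for (int i = 0; i < 16; i++)
      {
        uint32_t v = x[i] + input[i];       /* Add back the input words.  */
        memcpy (dst + 4 * i, &v, sizeof v); /* Keystream only, no ^ src.  */
      }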
>
> On a Ryzen 9 5900X it shows the following improvements (using
> formatted bench-arc4random data):
>
> SSE MB/s
> -----------------------------------------------
> arc4random [single-thread] 704.25
> arc4random_buf(16) [single-thread] 1018.17
> arc4random_buf(32) [single-thread] 1315.27
> arc4random_buf(48) [single-thread] 1449.36
> arc4random_buf(64) [single-thread] 1511.16
> arc4random_buf(80) [single-thread] 1539.48
> arc4random_buf(96) [single-thread] 1571.06
> arc4random_buf(112) [single-thread] 1596.16
> arc4random_buf(128) [single-thread] 1613.48
> -----------------------------------------------
>
> AVX2 MB/s
> -----------------------------------------------
> arc4random [single-thread] 922.61
> arc4random_buf(16) [single-thread] 1478.70
> arc4random_buf(32) [single-thread] 2241.80
> arc4random_buf(48) [single-thread] 2681.28
> arc4random_buf(64) [single-thread] 2913.43
> arc4random_buf(80) [single-thread] 3009.73
> arc4random_buf(96) [single-thread] 3141.16
> arc4random_buf(112) [single-thread] 3254.46
> arc4random_buf(128) [single-thread] 3305.02
> -----------------------------------------------
>
> Checked on x86_64-linux-gnu.
> ---
> LICENSES | 5 +-
> sysdeps/x86_64/Makefile | 1 +
> sysdeps/x86_64/chacha20-amd64-avx2.S | 328 +++++++++++++++++++++++++++
> sysdeps/x86_64/chacha20_arch.h | 22 +-
> 4 files changed, 348 insertions(+), 8 deletions(-)
> create mode 100644 sysdeps/x86_64/chacha20-amd64-avx2.S
>
> diff --git a/LICENSES b/LICENSES
> index 47e9cd8e31..1617648813 100644
> --- a/LICENSES
> +++ b/LICENSES
> @@ -390,8 +390,9 @@ Copyright 2001 by Stephen L. Moshier <moshier@na-net.ornl.gov>
> License along with this library; if not, see
> <https://www.gnu.org/licenses/>. */
>
> -sysdeps/aarch64/chacha20-aarch64.S and sysdeps/x86_64/chacha20-amd64-sse2.S
> -imports code from libgcrypt, with the following notices:
> +sysdeps/aarch64/chacha20-aarch64.S, sysdeps/x86_64/chacha20-amd64-sse2.S,
> +and sysdeps/x86_64/chacha20-amd64-avx2.S import code from libgcrypt,
> +with the following notices:
>
> Copyright (C) 2017-2019 Jussi Kivilinna <jussi.kivilinna@iki.fi>
>
> diff --git a/sysdeps/x86_64/Makefile b/sysdeps/x86_64/Makefile
> index a2e5af3ca9..a02fb9a114 100644
> --- a/sysdeps/x86_64/Makefile
> +++ b/sysdeps/x86_64/Makefile
> @@ -8,6 +8,7 @@ endif
> ifeq ($(subdir),stdlib)
> sysdep_routines += \
> chacha20-amd64-sse2 \
> + chacha20-amd64-avx2 \
> # sysdep_routines
> endif
>
> diff --git a/sysdeps/x86_64/chacha20-amd64-avx2.S b/sysdeps/x86_64/chacha20-amd64-avx2.S
> new file mode 100644
> index 0000000000..eb07b99f48
> --- /dev/null
> +++ b/sysdeps/x86_64/chacha20-amd64-avx2.S
> @@ -0,0 +1,328 @@
> +/* Optimized AVX2 implementation of ChaCha20 cipher.
> + Copyright (C) 2022 Free Software Foundation, Inc.
> +
> + This file is part of the GNU C Library.
> +
> + The GNU C Library is free software; you can redistribute it and/or
> + modify it under the terms of the GNU Lesser General Public
> + License as published by the Free Software Foundation; either
> + version 2.1 of the License, or (at your option) any later version.
> +
> + The GNU C Library is distributed in the hope that it will be useful,
> + but WITHOUT ANY WARRANTY; without even the implied warranty of
> + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> + Lesser General Public License for more details.
> +
> + You should have received a copy of the GNU Lesser General Public
> + License along with the GNU C Library; if not, see
> + <https://www.gnu.org/licenses/>. */
> +
> +/* chacha20-amd64-avx2.S - AVX2 implementation of ChaCha20 cipher
> +
> + Copyright (C) 2017-2019 Jussi Kivilinna <jussi.kivilinna@iki.fi>
> +
> + This file is part of Libgcrypt.
> +
> + Libgcrypt is free software; you can redistribute it and/or modify
> + it under the terms of the GNU Lesser General Public License as
> + published by the Free Software Foundation; either version 2.1 of
> + the License, or (at your option) any later version.
> +
> + Libgcrypt is distributed in the hope that it will be useful,
> + but WITHOUT ANY WARRANTY; without even the implied warranty of
> + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + GNU Lesser General Public License for more details.
> +
> + You should have received a copy of the GNU Lesser General Public
> + License along with this program; if not, see <http://www.gnu.org/licenses/>.
> +*/
> +
> +/* Based on D. J. Bernstein reference implementation at
> + http://cr.yp.to/chacha.html:
> +
> + chacha-regs.c version 20080118
> + D. J. Bernstein
> + Public domain. */
> +
> +#include <sysdep.h>
> +
> +#ifdef PIC
> +# define rRIP (%rip)
> +#else
> +# define rRIP
> +#endif
> +
> +/* register macros */
> +#define INPUT %rdi
> +#define DST %rsi
> +#define SRC %rdx
> +#define NBLKS %rcx
> +#define ROUND %eax
> +
> +/* stack structure */
> +#define STACK_VEC_X12 (32)
> +#define STACK_VEC_X13 (32 + STACK_VEC_X12)
> +#define STACK_TMP (32 + STACK_VEC_X13)
> +#define STACK_TMP1 (32 + STACK_TMP)
> +
> +#define STACK_MAX (32 + STACK_TMP1)
> +
> +/* vector registers */
> +#define X0 %ymm0
> +#define X1 %ymm1
> +#define X2 %ymm2
> +#define X3 %ymm3
> +#define X4 %ymm4
> +#define X5 %ymm5
> +#define X6 %ymm6
> +#define X7 %ymm7
> +#define X8 %ymm8
> +#define X9 %ymm9
> +#define X10 %ymm10
> +#define X11 %ymm11
> +#define X12 %ymm12
> +#define X13 %ymm13
> +#define X14 %ymm14
> +#define X15 %ymm15
> +
> +#define X0h %xmm0
> +#define X1h %xmm1
> +#define X2h %xmm2
> +#define X3h %xmm3
> +#define X4h %xmm4
> +#define X5h %xmm5
> +#define X6h %xmm6
> +#define X7h %xmm7
> +#define X8h %xmm8
> +#define X9h %xmm9
> +#define X10h %xmm10
> +#define X11h %xmm11
> +#define X12h %xmm12
> +#define X13h %xmm13
> +#define X14h %xmm14
> +#define X15h %xmm15
> +
> +/**********************************************************************
> + helper macros
> + **********************************************************************/
> +
> +/* 4x4 32-bit integer matrix transpose */
> +#define transpose_4x4(x0,x1,x2,x3,t1,t2) \
> + vpunpckhdq x1, x0, t2; \
> + vpunpckldq x1, x0, x0; \
> + \
> + vpunpckldq x3, x2, t1; \
> + vpunpckhdq x3, x2, x2; \
> + \
> + vpunpckhqdq t1, x0, x1; \
> + vpunpcklqdq t1, x0, x0; \
> + \
> + vpunpckhqdq x2, t2, x3; \
> + vpunpcklqdq x2, t2, x2;
> +
> +/* 2x2 128-bit matrix transpose */
> +#define transpose_16byte_2x2(x0,x1,t1) \
> + vmovdqa x0, t1; \
> + vperm2i128 $0x20, x1, x0, x0; \
> + vperm2i128 $0x31, x1, t1, x1;
> +
> +/**********************************************************************
> + 8-way chacha20
> + **********************************************************************/
> +
> +#define ROTATE2(v1,v2,c,tmp) \
> + vpsrld $(32 - (c)), v1, tmp; \
> + vpslld $(c), v1, v1; \
> + vpaddb tmp, v1, v1; \
> + vpsrld $(32 - (c)), v2, tmp; \
> + vpslld $(c), v2, v2; \
> + vpaddb tmp, v2, v2;
> +
> +#define ROTATE_SHUF_2(v1,v2,shuf) \
> + vpshufb shuf, v1, v1; \
> + vpshufb shuf, v2, v2;
> +
> +#define XOR(ds,s) \
> + vpxor s, ds, ds;
> +
> +#define PLUS(ds,s) \
> + vpaddd s, ds, ds;
> +
> +#define QUARTERROUND2(a1,b1,c1,d1,a2,b2,c2,d2,ign,tmp1,\
> + interleave_op1,interleave_op2,\
> + interleave_op3,interleave_op4) \
> + vbroadcasti128 .Lshuf_rol16 rRIP, tmp1; \
> + interleave_op1; \
> + PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
> + ROTATE_SHUF_2(d1, d2, tmp1); \
> + interleave_op2; \
> + PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
> + ROTATE2(b1, b2, 12, tmp1); \
> + vbroadcasti128 .Lshuf_rol8 rRIP, tmp1; \
> + interleave_op3; \
> + PLUS(a1,b1); PLUS(a2,b2); XOR(d1,a1); XOR(d2,a2); \
> + ROTATE_SHUF_2(d1, d2, tmp1); \
> + interleave_op4; \
> + PLUS(c1,d1); PLUS(c2,d2); XOR(b1,c1); XOR(b2,c2); \
> + ROTATE2(b1, b2, 7, tmp1);
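(For reference while reviewing: the two interleaved quarter rounds above
are the standard RFC 8439 quarter round, i.e. in scalar C roughly:

    #define ROTL32(v, c)  (((v) << (c)) | ((v) >> (32 - (c))))

    static inline void
    quarterround (uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
    {
      *a += *b; *d ^= *a; *d = ROTL32 (*d, 16);
      *c += *d; *b ^= *c; *b = ROTL32 (*b, 12);
      *a += *b; *d ^= *a; *d = ROTL32 (*d, 8);
      *c += *d; *b ^= *c; *b = ROTL32 (*b, 7);
    }

with the 16/8-bit rotates done via vpshufb and the 12/7 rotates via
shift+add, which matches what the macros do.)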
> +
> + .section .text.avx2, "ax", @progbits
> + .align 32
> +chacha20_data:
> +L(shuf_rol16):
> + .byte 2,3,0,1,6,7,4,5,10,11,8,9,14,15,12,13
> +L(shuf_rol8):
> + .byte 3,0,1,2,7,4,5,6,11,8,9,10,15,12,13,14
> +L(inc_counter):
> + .byte 0,1,2,3,4,5,6,7
> +L(unsigned_cmp):
> + .long 0x80000000
> +
> + .hidden __chacha20_avx2_blocks8
> +ENTRY (__chacha20_avx2_blocks8)
> + /* input:
> + * %rdi: input
> + * %rsi: dst
> + * %rdx: src
> + * %rcx: nblks (multiple of 8)
> + */
> + vzeroupper;
> +
> + pushq %rbp;
> + cfi_adjust_cfa_offset(8);
> + cfi_rel_offset(rbp, 0)
> + movq %rsp, %rbp;
> + cfi_def_cfa_register(rbp);
> +
> + subq $STACK_MAX, %rsp;
> + andq $~31, %rsp;
> +
> +L(loop8):
> + mov $20, ROUND;
> +
> + /* Construct counter vectors X12 and X13 */
> + vpmovzxbd L(inc_counter) rRIP, X0;
> + vpbroadcastd L(unsigned_cmp) rRIP, X2;
> + vpbroadcastd (12 * 4)(INPUT), X12;
> + vpbroadcastd (13 * 4)(INPUT), X13;
> + vpaddd X0, X12, X12;
> + vpxor X2, X0, X0;
> + vpxor X2, X12, X1;
> + vpcmpgtd X1, X0, X0;
> + vpsubd X0, X13, X13;
> + vmovdqa X12, (STACK_VEC_X12)(%rsp);
> + vmovdqa X13, (STACK_VEC_X13)(%rsp);
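The 0x80000000 bias for the unsigned overflow check took me a second to
follow; in scalar terms each lane is doing (illustrative sketch only,
the lane arrays are just stand-ins for X12/X13):

    for (int i = 0; i < 8; i++)
      {
        uint32_t lo = input[12] + (uint32_t) i;
        uint32_t hi = input[13] + (lo < (uint32_t) i); /* Carry on wrap.  */
        x12_lane[i] = lo;
        x13_lane[i] = hi;
      }

which looks right to me.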
> +
> + /* Load vectors */
> + vpbroadcastd (0 * 4)(INPUT), X0;
> + vpbroadcastd (1 * 4)(INPUT), X1;
> + vpbroadcastd (2 * 4)(INPUT), X2;
> + vpbroadcastd (3 * 4)(INPUT), X3;
> + vpbroadcastd (4 * 4)(INPUT), X4;
> + vpbroadcastd (5 * 4)(INPUT), X5;
> + vpbroadcastd (6 * 4)(INPUT), X6;
> + vpbroadcastd (7 * 4)(INPUT), X7;
> + vpbroadcastd (8 * 4)(INPUT), X8;
> + vpbroadcastd (9 * 4)(INPUT), X9;
> + vpbroadcastd (10 * 4)(INPUT), X10;
> + vpbroadcastd (11 * 4)(INPUT), X11;
> + vpbroadcastd (14 * 4)(INPUT), X14;
> + vpbroadcastd (15 * 4)(INPUT), X15;
> + vmovdqa X15, (STACK_TMP)(%rsp);
> +
> +L(round2):
> + QUARTERROUND2(X0, X4, X8, X12, X1, X5, X9, X13, tmp:=,X15,,,,)
> + vmovdqa (STACK_TMP)(%rsp), X15;
> + vmovdqa X8, (STACK_TMP)(%rsp);
> + QUARTERROUND2(X2, X6, X10, X14, X3, X7, X11, X15, tmp:=,X8,,,,)
> + QUARTERROUND2(X0, X5, X10, X15, X1, X6, X11, X12, tmp:=,X8,,,,)
> + vmovdqa (STACK_TMP)(%rsp), X8;
> + vmovdqa X15, (STACK_TMP)(%rsp);
> + QUARTERROUND2(X2, X7, X8, X13, X3, X4, X9, X14, tmp:=,X15,,,,)
> + sub $2, ROUND;
> + jnz L(round2);
> +
> + vmovdqa X8, (STACK_TMP1)(%rsp);
> +
> + /* tmp := X15 */
> + vpbroadcastd (0 * 4)(INPUT), X15;
> + PLUS(X0, X15);
> + vpbroadcastd (1 * 4)(INPUT), X15;
> + PLUS(X1, X15);
> + vpbroadcastd (2 * 4)(INPUT), X15;
> + PLUS(X2, X15);
> + vpbroadcastd (3 * 4)(INPUT), X15;
> + PLUS(X3, X15);
> + vpbroadcastd (4 * 4)(INPUT), X15;
> + PLUS(X4, X15);
> + vpbroadcastd (5 * 4)(INPUT), X15;
> + PLUS(X5, X15);
> + vpbroadcastd (6 * 4)(INPUT), X15;
> + PLUS(X6, X15);
> + vpbroadcastd (7 * 4)(INPUT), X15;
> + PLUS(X7, X15);
> + transpose_4x4(X0, X1, X2, X3, X8, X15);
> + transpose_4x4(X4, X5, X6, X7, X8, X15);
> + vmovdqa (STACK_TMP1)(%rsp), X8;
> + transpose_16byte_2x2(X0, X4, X15);
> + transpose_16byte_2x2(X1, X5, X15);
> + transpose_16byte_2x2(X2, X6, X15);
> + transpose_16byte_2x2(X3, X7, X15);
> + vmovdqa (STACK_TMP)(%rsp), X15;
> + vmovdqu X0, (64 * 0 + 16 * 0)(DST)
> + vmovdqu X1, (64 * 1 + 16 * 0)(DST)
> + vpbroadcastd (8 * 4)(INPUT), X0;
> + PLUS(X8, X0);
> + vpbroadcastd (9 * 4)(INPUT), X0;
> + PLUS(X9, X0);
> + vpbroadcastd (10 * 4)(INPUT), X0;
> + PLUS(X10, X0);
> + vpbroadcastd (11 * 4)(INPUT), X0;
> + PLUS(X11, X0);
> + vmovdqa (STACK_VEC_X12)(%rsp), X0;
> + PLUS(X12, X0);
> + vmovdqa (STACK_VEC_X13)(%rsp), X0;
> + PLUS(X13, X0);
> + vpbroadcastd (14 * 4)(INPUT), X0;
> + PLUS(X14, X0);
> + vpbroadcastd (15 * 4)(INPUT), X0;
> + PLUS(X15, X0);
> + vmovdqu X2, (64 * 2 + 16 * 0)(DST)
> + vmovdqu X3, (64 * 3 + 16 * 0)(DST)
> +
> + /* Update counter */
> + addq $8, (12 * 4)(INPUT);
> +
> + transpose_4x4(X8, X9, X10, X11, X0, X1);
> + transpose_4x4(X12, X13, X14, X15, X0, X1);
> + vmovdqu X4, (64 * 4 + 16 * 0)(DST)
> + vmovdqu X5, (64 * 5 + 16 * 0)(DST)
> + transpose_16byte_2x2(X8, X12, X0);
> + transpose_16byte_2x2(X9, X13, X0);
> + transpose_16byte_2x2(X10, X14, X0);
> + transpose_16byte_2x2(X11, X15, X0);
> + vmovdqu X6, (64 * 6 + 16 * 0)(DST)
> + vmovdqu X7, (64 * 7 + 16 * 0)(DST)
> + vmovdqu X8, (64 * 0 + 16 * 2)(DST)
> + vmovdqu X9, (64 * 1 + 16 * 2)(DST)
> + vmovdqu X10, (64 * 2 + 16 * 2)(DST)
> + vmovdqu X11, (64 * 3 + 16 * 2)(DST)
> + vmovdqu X12, (64 * 4 + 16 * 2)(DST)
> + vmovdqu X13, (64 * 5 + 16 * 2)(DST)
> + vmovdqu X14, (64 * 6 + 16 * 2)(DST)
> + vmovdqu X15, (64 * 7 + 16 * 2)(DST)
> +
> + sub $8, NBLKS;
> + lea (8 * 64)(DST), DST;
> + lea (8 * 64)(SRC), SRC;
> + jnz L(loop8);
> +
> + vzeroupper;
> +
> + /* eax zeroed by round loop. */
> + leave;
> + cfi_adjust_cfa_offset(-8)
> + cfi_def_cfa_register(%rsp);
> + ret;
> + int3;
> +END(__chacha20_avx2_blocks8)
> diff --git a/sysdeps/x86_64/chacha20_arch.h b/sysdeps/x86_64/chacha20_arch.h
> index 5738c840a9..bfdc6c0a36 100644
> --- a/sysdeps/x86_64/chacha20_arch.h
> +++ b/sysdeps/x86_64/chacha20_arch.h
> @@ -23,16 +23,26 @@
> unsigned int __chacha20_sse2_blocks4 (uint32_t *state, uint8_t *dst,
> const uint8_t *src, size_t nblks)
> attribute_hidden;
> +unsigned int __chacha20_avx2_blocks8 (uint32_t *state, uint8_t *dst,
> + const uint8_t *src, size_t nblks)
> + attribute_hidden;
>
> static inline void
> chacha20_crypt (uint32_t *state, uint8_t *dst, const uint8_t *src,
> size_t bytes)
> {
> - _Static_assert (CHACHA20_BUFSIZE % 4 == 0,
> - "CHACHA20_BUFSIZE not multiple of 4");
> - _Static_assert (CHACHA20_BUFSIZE >= CHACHA20_BLOCK_SIZE * 4,
> - "CHACHA20_BUFSIZE <= CHACHA20_BLOCK_SIZE * 4");
> + _Static_assert (CHACHA20_BUFSIZE % 4 == 0 && CHACHA20_BUFSIZE % 8 == 0,
> + "CHACHA20_BUFSIZE not multiple of 4 or 8");
> + _Static_assert (CHACHA20_BUFSIZE >= CHACHA20_BLOCK_SIZE * 8,
> + "CHACHA20_BUFSIZE < CHACHA20_BLOCK_SIZE * 8");
> + const struct cpu_features* cpu_features = __get_cpu_features ();
>
> - __chacha20_sse2_blocks4 (state, dst, src,
> - CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
> + /* AVX2 version uses vzeroupper, so disable it if RTM is enabled. */
Since `arc4random ()` might need to read from /dev/urandom, I don't
think this function could ever truly be RTM-safe, so we may not care.
But if I'm missing something and we do want to support RTM, should
there be a '!CPU_FEATURE_USABLE_P (cpu_features, RTM)' check for the
AVX2 implementation?
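i.e. (untested, just to illustrate the check I mean):

    if (CPU_FEATURE_USABLE_P (cpu_features, AVX2)
        && !CPU_FEATURE_USABLE_P (cpu_features, RTM)
        && !CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER))
      __chacha20_avx2_blocks8 (state, dst, src,
                               CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);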
> + if (CPU_FEATURE_USABLE_P (cpu_features, AVX2)
> + && !CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER))
Can you use the X86_ISA_* macro?
In this case the code would be:
if (X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, AVX2)
&& X86_ISA_CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER, !))
> + __chacha20_avx2_blocks8 (state, dst, src,
> + CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
> + else
> + __chacha20_sse2_blocks4 (state, dst, src,
> + CHACHA20_BUFSIZE / CHACHA20_BLOCK_SIZE);
> }
> --
> 2.34.1
>
Thread overview: 23+ messages
2022-07-13 17:36 [PATCH v9 0/9] Add arc4random support Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 1/9] stdlib: Add arc4random, arc4random_buf, and arc4random_uniform (BZ #4417) Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 2/9] stdlib: Add arc4random tests Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 3/9] benchtests: Add arc4random benchtest Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 4/9] aarch64: Add optimized chacha20 Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 5/9] x86: Add SSE2 " Adhemerval Zanella
2022-07-13 18:12 ` Noah Goldstein
2022-07-13 18:20 ` Adhemerval Zanella Netto
2022-07-13 18:22 ` Noah Goldstein
2022-07-13 18:27 ` Noah Goldstein
2022-07-13 18:29 ` Adhemerval Zanella Netto
2022-07-13 18:53 ` Noah Goldstein
2022-07-13 17:36 ` [PATCH v9 6/9] x86: Add AVX2 " Adhemerval Zanella
2022-07-13 18:07 ` Noah Goldstein [this message]
2022-07-13 19:31 ` Adhemerval Zanella Netto
2022-07-13 20:24 ` Noah Goldstein
2022-07-13 20:16 ` Florian Weimer
2022-07-13 20:23 ` Noah Goldstein
2022-07-13 17:36 ` [PATCH v9 7/9] powerpc64: Add " Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 8/9] s390x: " Adhemerval Zanella
2022-07-13 17:36 ` [PATCH v9 9/9] manual: Add documentation for arc4random functions Adhemerval Zanella
2022-07-14 10:03 ` Mark Harris
2022-07-14 11:08 ` Adhemerval Zanella Netto