public inbox for libc-alpha@sourceware.org
* [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX
@ 2021-03-17  2:28 Naohiro Tamura
  2021-03-17  2:33 ` [PATCH 1/5] config: Added HAVE_SVE_ASM_SUPPORT for aarch64 Naohiro Tamura
                   ` (7 more replies)
  0 siblings, 8 replies; 36+ messages in thread
From: Naohiro Tamura @ 2021-03-17  2:28 UTC (permalink / raw)
  To: libc-alpha

Fujitsu is in the process of signing the copyright assignment paper.
We'd like to have some feedback in advance.

This series of patches optimizes the performance of
memcpy/memmove/memset for A64FX [1], which implements ARMv8-A SVE and
has a 64KB L1 cache per core and an 8MB L2 cache per NUMA node.

The first patch updates autoconf to check whether the assembler is
capable of generating ARMv8-A SVE code, and defines the
HAVE_SVE_ASM_SUPPORT macro accordingly.

The second patch optimizes memcpy/memmove performance by making use
of Scalable Vector Registers together with several techniques such as
loop unrolling, memory access alignment, cache zero fill, prefetch,
and software pipelining.

The third patch optimizes memset performance by making use of
Scalable Vector Registers together with several techniques such as
loop unrolling, memory access alignment, cache zero fill, and
prefetch.

The fourth patch adds a test helper script that changes the Vector
Length for a child process.  This script can be used as a
test-wrapper for 'make check'.
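
The helper added by that patch is scripts/vltest.py.  As a rough
illustration only (a minimal C sketch, not the actual script), the
underlying mechanism on Linux is the PR_SVE_SET_VL prctl with the
PR_SVE_VL_INHERIT flag, so that the requested vector length survives
the exec of the test binary:

  /* Hypothetical C sketch; assumes kernel headers that provide
     PR_SVE_SET_VL and PR_SVE_VL_INHERIT.  */
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>
  #include <sys/prctl.h>

  int
  main (int argc, char **argv)
  {
    if (argc < 3)
      {
        fprintf (stderr, "usage: %s <vl-bytes> <command> [args...]\n",
                 argv[0]);
        return 1;
      }
    unsigned long vl = strtoul (argv[1], NULL, 0);
    /* Set the SVE vector length and let child processes inherit it
       across execve.  */
    if (prctl (PR_SVE_SET_VL, vl | PR_SVE_VL_INHERIT, 0, 0, 0) < 0)
      {
        perror ("prctl (PR_SVE_SET_VL)");
        return 1;
      }
    execvp (argv[2], &argv[2]);
    perror ("execvp");
    return 1;
  }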

The fifth patch adds generic_memcpy and generic_memmove to
bench-memcpy-large.c and bench-memmove-large.c respectively, so that
the 512-bit scalable vector register implementation can be compared
consistently against the scalar 64-bit register implementation across
the default and large memcpy/memmove/memset benchtests.


The SVE assembler code for memcpy/memmove/memset is implemented as
Vector Length Agnostic (VLA) code, so in principle it can run on any
SoC that supports the ARMv8-A SVE standard.
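
For reference, the following C sketch (using SVE ACLE intrinsics
rather than the hand-written assembly in the patches) shows what
Vector Length Agnostic means here: the same loop works for any vector
length because the whilelt predicate masks off the tail lanes.

  /* Minimal VLA byte-copy sketch, compiled with e.g.
     -march=armv8.2-a+sve; not the code in the patches.  */
  #include <arm_sve.h>
  #include <stddef.h>
  #include <stdint.h>

  static void
  vla_copy (uint8_t *dst, const uint8_t *src, size_t n)
  {
    for (size_t i = 0; i < n; i += svcntb ())     /* bytes per vector */
      {
        svbool_t pg = svwhilelt_b8_u64 (i, n);    /* lanes i .. n-1   */
        svst1_u8 (pg, dst + i, svld1_u8 (pg, src + i));
      }
  }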

We confirmed that all test cases pass with 'make check' and
'make xcheck', not only on A64FX but also on ThunderX2.

We also confirmed with 'make bench' that the SVE 512-bit vector
register implementation is roughly 4 times faster than the Advanced
SIMD 128-bit implementation and 8 times faster than the scalar 64-bit
implementation.

[1] https://github.com/fujitsu/A64FX


Naohiro Tamura (5):
  config: Added HAVE_SVE_ASM_SUPPORT for aarch64
  aarch64: Added optimized memcpy and memmove for A64FX
  aarch64: Added optimized memset for A64FX
  scripts: Added Vector Length Set test helper script
  benchtests: Added generic_memcpy and generic_memmove to large
    benchtests

 benchtests/bench-memcpy-large.c               |   9 +
 benchtests/bench-memmove-large.c              |   9 +
 config.h.in                                   |   3 +
 manual/tunables.texi                          |   3 +-
 scripts/vltest.py                             |  82 ++
 sysdeps/aarch64/configure                     |  28 +
 sysdeps/aarch64/configure.ac                  |  15 +
 sysdeps/aarch64/multiarch/Makefile            |   3 +-
 sysdeps/aarch64/multiarch/ifunc-impl-list.c   |  17 +-
 sysdeps/aarch64/multiarch/init-arch.h         |   4 +-
 sysdeps/aarch64/multiarch/memcpy.c            |  12 +-
 sysdeps/aarch64/multiarch/memcpy_a64fx.S      | 979 ++++++++++++++++++
 sysdeps/aarch64/multiarch/memmove.c           |  12 +-
 sysdeps/aarch64/multiarch/memset.c            |  11 +-
 sysdeps/aarch64/multiarch/memset_a64fx.S      | 574 ++++++++++
 .../unix/sysv/linux/aarch64/cpu-features.c    |   4 +
 .../unix/sysv/linux/aarch64/cpu-features.h    |   4 +
 17 files changed, 1759 insertions(+), 10 deletions(-)
 create mode 100755 scripts/vltest.py
 create mode 100644 sysdeps/aarch64/multiarch/memcpy_a64fx.S
 create mode 100644 sysdeps/aarch64/multiarch/memset_a64fx.S

-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH 1/5] config: Added HAVE_SVE_ASM_SUPPORT for aarch64
  2021-03-17  2:28 [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX Naohiro Tamura
@ 2021-03-17  2:33 ` Naohiro Tamura
  2021-03-29 12:11   ` Szabolcs Nagy
  2021-03-17  2:34 ` [PATCH 2/5] aarch64: Added optimized memcpy and memmove for A64FX Naohiro Tamura
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 36+ messages in thread
From: Naohiro Tamura @ 2021-03-17  2:33 UTC (permalink / raw)
  To: libc-alpha; +Cc: Naohiro Tamura

From: Naohiro Tamura <naohirot@jp.fujitsu.com>

This patch checks whether the assembler supports '-march=armv8.2-a+sve'
to generate SVE code, and defines the HAVE_SVE_ASM_SUPPORT macro
accordingly.
---
 config.h.in                  |  3 +++
 sysdeps/aarch64/configure    | 28 ++++++++++++++++++++++++++++
 sysdeps/aarch64/configure.ac | 15 +++++++++++++++
 3 files changed, 46 insertions(+)

diff --git a/config.h.in b/config.h.in
index f21bf04e47..2073816af8 100644
--- a/config.h.in
+++ b/config.h.in
@@ -118,6 +118,9 @@
 /* AArch64 PAC-RET code generation is enabled.  */
 #define HAVE_AARCH64_PAC_RET 0
 
+/* Assembler support ARMv8.2-A SVE */
+#define HAVE_SVE_ASM_SUPPORT 0
+
 /* ARC big endian ABI */
 #undef HAVE_ARC_BE
 
diff --git a/sysdeps/aarch64/configure b/sysdeps/aarch64/configure
index 83c3a23e44..ac16250f8a 100644
--- a/sysdeps/aarch64/configure
+++ b/sysdeps/aarch64/configure
@@ -304,3 +304,31 @@ fi
 $as_echo "$libc_cv_aarch64_variant_pcs" >&6; }
 config_vars="$config_vars
 aarch64-variant-pcs = $libc_cv_aarch64_variant_pcs"
+
+# Check if asm support armv8.2-a+sve
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for SVE support in assembler" >&5
+$as_echo_n "checking for SVE support in assembler... " >&6; }
+if ${libc_cv_asm_sve+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat > conftest.s <<\EOF
+        ptrue p0.b
+EOF
+if { ac_try='${CC-cc} -c -march=armv8.2-a+sve conftest.s 1>&5'
+  { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_try\""; } >&5
+  (eval $ac_try) 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; }; then
+  libc_cv_asm_sve=yes
+else
+  libc_cv_asm_sve=no
+fi
+rm -f conftest*
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $libc_cv_asm_sve" >&5
+$as_echo "$libc_cv_asm_sve" >&6; }
+if test $libc_cv_asm_sve = yes; then
+  $as_echo "#define HAVE_SVE_ASM_SUPPORT 1" >>confdefs.h
+
+fi
diff --git a/sysdeps/aarch64/configure.ac b/sysdeps/aarch64/configure.ac
index 66f755078a..389a0b4e8d 100644
--- a/sysdeps/aarch64/configure.ac
+++ b/sysdeps/aarch64/configure.ac
@@ -90,3 +90,18 @@ EOF
   fi
   rm -rf conftest.*])
 LIBC_CONFIG_VAR([aarch64-variant-pcs], [$libc_cv_aarch64_variant_pcs])
+
+# Check if asm support armv8.2-a+sve
+AC_CACHE_CHECK(for SVE support in assembler, libc_cv_asm_sve, [dnl
+cat > conftest.s <<\EOF
+        ptrue p0.b
+EOF
+if AC_TRY_COMMAND(${CC-cc} -c -march=armv8.2-a+sve conftest.s 1>&AS_MESSAGE_LOG_FD); then
+  libc_cv_asm_sve=yes
+else
+  libc_cv_asm_sve=no
+fi
+rm -f conftest*])
+if test $libc_cv_asm_sve = yes; then
+  AC_DEFINE(HAVE_SVE_ASM_SUPPORT)
+fi
-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH 2/5] aarch64: Added optimized memcpy and memmove for A64FX
  2021-03-17  2:28 [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX Naohiro Tamura
  2021-03-17  2:33 ` [PATCH 1/5] config: Added HAVE_SVE_ASM_SUPPORT for aarch64 Naohiro Tamura
@ 2021-03-17  2:34 ` Naohiro Tamura
  2021-03-29 12:44   ` Szabolcs Nagy
  2021-03-17  2:34 ` [PATCH 3/5] aarch64: Added optimized memset " Naohiro Tamura
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 36+ messages in thread
From: Naohiro Tamura @ 2021-03-17  2:34 UTC (permalink / raw)
  To: libc-alpha; +Cc: Naohiro Tamura

From: Naohiro Tamura <naohirot@jp.fujitsu.com>

This patch optimizes the performance of memcpy/memmove for A64FX [1],
which implements ARMv8-A SVE and has a 64KB L1 cache per core and an
8MB L2 cache per NUMA node.

The optimization makes use of Scalable Vector Registers together with
several techniques such as loop unrolling, memory access alignment,
cache zero fill, prefetch, and software pipelining.
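
As a rough illustration of the software-pipelining pattern used in the
inner loops below (an intrinsics sketch only, assuming n is a non-zero
multiple of the vector length; the real code unrolls by 8 vectors and
adds prefetch and 'dc zva'):

  #include <arm_sve.h>
  #include <stddef.h>
  #include <stdint.h>

  static void
  pipelined_copy (uint8_t *dst, const uint8_t *src, size_t n)
  {
    svbool_t pg = svptrue_b8 ();
    size_t vl = svcntb ();
    svuint8_t v = svld1_u8 (pg, src);    /* prologue: load first block */
    size_t i;
    for (i = vl; i < n; i += vl)
      {
        svst1_u8 (pg, dst + i - vl, v);  /* store previously loaded block */
        v = svld1_u8 (pg, src + i);      /* load the next block           */
      }
    svst1_u8 (pg, dst + i - vl, v);      /* epilogue: store last block    */
  }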

The SVE assembler code for memcpy/memmove is implemented as Vector
Length Agnostic (VLA) code, so in principle it can run on any SoC that
supports the ARMv8-A SVE standard.

We confirmed that all test cases pass with 'make check' and
'make xcheck', not only on A64FX but also on ThunderX2.

We also confirmed with 'make bench' that the SVE 512-bit vector
register implementation is roughly 4 times faster than the Advanced
SIMD 128-bit implementation and 8 times faster than the scalar 64-bit
implementation.

[1] https://github.com/fujitsu/A64FX
---
 manual/tunables.texi                          |   3 +-
 sysdeps/aarch64/multiarch/Makefile            |   2 +-
 sysdeps/aarch64/multiarch/ifunc-impl-list.c   |  12 +-
 sysdeps/aarch64/multiarch/init-arch.h         |   4 +-
 sysdeps/aarch64/multiarch/memcpy.c            |  12 +-
 sysdeps/aarch64/multiarch/memcpy_a64fx.S      | 979 ++++++++++++++++++
 sysdeps/aarch64/multiarch/memmove.c           |  12 +-
 .../unix/sysv/linux/aarch64/cpu-features.c    |   4 +
 .../unix/sysv/linux/aarch64/cpu-features.h    |   4 +
 9 files changed, 1024 insertions(+), 8 deletions(-)
 create mode 100644 sysdeps/aarch64/multiarch/memcpy_a64fx.S

diff --git a/manual/tunables.texi b/manual/tunables.texi
index 1b746c0fa1..81ed5366fc 100644
--- a/manual/tunables.texi
+++ b/manual/tunables.texi
@@ -453,7 +453,8 @@ This tunable is specific to powerpc, powerpc64 and powerpc64le.
 The @code{glibc.cpu.name=xxx} tunable allows the user to tell @theglibc{} to
 assume that the CPU is @code{xxx} where xxx may have one of these values:
 @code{generic}, @code{falkor}, @code{thunderxt88}, @code{thunderx2t99},
-@code{thunderx2t99p1}, @code{ares}, @code{emag}, @code{kunpeng}.
+@code{thunderx2t99p1}, @code{ares}, @code{emag}, @code{kunpeng},
+@code{a64fx}.
 
 This tunable is specific to aarch64.
 @end deftp
diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
index dc3efffb36..04c3f17121 100644
--- a/sysdeps/aarch64/multiarch/Makefile
+++ b/sysdeps/aarch64/multiarch/Makefile
@@ -1,6 +1,6 @@
 ifeq ($(subdir),string)
 sysdep_routines += memcpy_generic memcpy_advsimd memcpy_thunderx memcpy_thunderx2 \
-		   memcpy_falkor \
+		   memcpy_falkor memcpy_a64fx \
 		   memset_generic memset_falkor memset_emag memset_kunpeng \
 		   memchr_generic memchr_nosimd \
 		   strlen_mte strlen_asimd
diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
index 99a8c68aac..cb78da9692 100644
--- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
@@ -25,7 +25,11 @@
 #include <stdio.h>
 
 /* Maximum number of IFUNC implementations.  */
-#define MAX_IFUNC	4
+#if HAVE_SVE_ASM_SUPPORT
+# define MAX_IFUNC	7
+#else
+# define MAX_IFUNC	6
+#endif
 
 size_t
 __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
@@ -43,12 +47,18 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 	      IFUNC_IMPL_ADD (array, i, memcpy, !bti, __memcpy_thunderx2)
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_falkor)
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_simd)
+#if HAVE_SVE_ASM_SUPPORT
+	      IFUNC_IMPL_ADD (array, i, memcpy, sve, __memcpy_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_generic))
   IFUNC_IMPL (i, name, memmove,
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_thunderx)
 	      IFUNC_IMPL_ADD (array, i, memmove, !bti, __memmove_thunderx2)
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_falkor)
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_simd)
+#if HAVE_SVE_ASM_SUPPORT
+	      IFUNC_IMPL_ADD (array, i, memmove, sve, __memmove_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_generic))
   IFUNC_IMPL (i, name, memset,
 	      /* Enable this on non-falkor processors too so that other cores
diff --git a/sysdeps/aarch64/multiarch/init-arch.h b/sysdeps/aarch64/multiarch/init-arch.h
index a167699e74..d20e7e1b8e 100644
--- a/sysdeps/aarch64/multiarch/init-arch.h
+++ b/sysdeps/aarch64/multiarch/init-arch.h
@@ -33,4 +33,6 @@
   bool __attribute__((unused)) bti =					      \
     HAVE_AARCH64_BTI && GLRO(dl_aarch64_cpu_features).bti;		      \
   bool __attribute__((unused)) mte =					      \
-    MTE_ENABLED ();
+    MTE_ENABLED ();							      \
+  unsigned __attribute__((unused)) sve =				      \
+    GLRO(dl_aarch64_cpu_features).sve;
diff --git a/sysdeps/aarch64/multiarch/memcpy.c b/sysdeps/aarch64/multiarch/memcpy.c
index 0e0a5cbcfb..0006f38eb0 100644
--- a/sysdeps/aarch64/multiarch/memcpy.c
+++ b/sysdeps/aarch64/multiarch/memcpy.c
@@ -33,6 +33,9 @@ extern __typeof (__redirect_memcpy) __memcpy_simd attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_thunderx attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_thunderx2 attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_falkor attribute_hidden;
+#if HAVE_SVE_ASM_SUPPORT
+extern __typeof (__redirect_memcpy) __memcpy_a64fx attribute_hidden;
+#endif
 
 libc_ifunc (__libc_memcpy,
             (IS_THUNDERX (midr)
@@ -44,8 +47,13 @@ libc_ifunc (__libc_memcpy,
 		  : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
 		     || IS_NEOVERSE_V1 (midr)
 		     ? __memcpy_simd
-		     : __memcpy_generic)))));
-
+#if HAVE_SVE_ASM_SUPPORT
+                     : (IS_A64FX (midr)
+                        ? __memcpy_a64fx
+                        : __memcpy_generic))))));
+#else
+                     : __memcpy_generic)))));
+#endif
 # undef memcpy
 strong_alias (__libc_memcpy, memcpy);
 #endif
diff --git a/sysdeps/aarch64/multiarch/memcpy_a64fx.S b/sysdeps/aarch64/multiarch/memcpy_a64fx.S
new file mode 100644
index 0000000000..23438e4e3d
--- /dev/null
+++ b/sysdeps/aarch64/multiarch/memcpy_a64fx.S
@@ -0,0 +1,979 @@
+/* Optimized memcpy for Fujitsu A64FX processor.
+   Copyright (C) 2012-2021 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+
+#if HAVE_SVE_ASM_SUPPORT
+#if IS_IN (libc)
+# define MEMCPY __memcpy_a64fx
+# define MEMMOVE __memmove_a64fx
+
+/* Assumptions:
+ *
+ * ARMv8.2-a, AArch64, unaligned accesses, sve
+ *
+ */
+
+#define L1_SIZE (64*1024)/2     // L1 64KB
+#define L2_SIZE (7*1024*1024)/2 // L2 8MB - 1MB
+#define CACHE_LINE_SIZE 256
+#define PF_DIST_L1 (CACHE_LINE_SIZE * 16)
+#define PF_DIST_L2 (CACHE_LINE_SIZE * 64)
+#define dest            x0
+#define src             x1
+#define n               x2      // size
+#define tmp1            x3
+#define tmp2            x4
+#define rest            x5
+#define dest_ptr        x6
+#define src_ptr         x7
+#define vector_length   x8
+#define vl_remainder    x9      // vector_length remainder
+#define cl_remainder    x10     // CACHE_LINE_SIZE remainder
+
+    .arch armv8.2-a+sve
+
+ENTRY_ALIGN (MEMCPY, 6)
+
+    PTR_ARG (0)
+    SIZE_ARG (2)
+
+L(fwd_start):
+    cmp         n, 0
+    ccmp        dest, src, 4, ne
+    b.ne        L(init)
+    ret
+
+L(init):
+    mov         rest, n
+    mov         dest_ptr, dest
+    mov         src_ptr, src
+    cntb        vector_length
+    ptrue       p0.b
+
+L(L2):
+    // get block_size
+    mrs         tmp1, dczid_el0
+    cmp         tmp1, 6         // CACHE_LINE_SIZE 256
+    b.ne        L(vl_agnostic)
+
+    // if rest >= L2_SIZE
+    cmp         rest, L2_SIZE
+    b.cc        L(L1_prefetch)
+    // align dest address at vector_length byte boundary
+    sub         tmp1, vector_length, 1
+    and         tmp2, dest_ptr, tmp1
+    // if vl_remainder == 0
+    cmp         tmp2, 0
+    b.eq        1f
+    sub         vl_remainder, vector_length, tmp2
+    // process remainder until the first vector_length boundary
+    whilelt     p0.b, xzr, vl_remainder
+    ld1b        z0.b, p0/z, [src_ptr]
+    st1b        z0.b, p0, [dest_ptr]
+    add         dest_ptr, dest_ptr, vl_remainder
+    add         src_ptr, src_ptr, vl_remainder
+    sub         rest, rest, vl_remainder
+    // align dest address at CACHE_LINE_SIZE byte boundary
+1:  mov         tmp1, CACHE_LINE_SIZE
+    and         tmp2, dest_ptr, CACHE_LINE_SIZE - 1
+    // if cl_remainder == 0
+    cmp         tmp2, 0
+    b.eq        L(L2_dc_zva)
+    sub         cl_remainder, tmp1, tmp2
+    // process remainder until the first CACHE_LINE_SIZE boundary
+    mov         tmp1, xzr       // index
+2:  whilelt     p0.b, tmp1, cl_remainder
+    ld1b        z0.b, p0/z, [src_ptr, tmp1]
+    st1b        z0.b, p0, [dest_ptr, tmp1]
+    incb        tmp1
+    cmp         tmp1, cl_remainder
+    b.lo        2b
+    add         dest_ptr, dest_ptr, cl_remainder
+    add         src_ptr, src_ptr, cl_remainder
+    sub         rest, rest, cl_remainder
+
+L(L2_dc_zva): // unroll zero fill
+    and         tmp1, dest, 0xffffffffffffff
+    and         tmp2, src, 0xffffffffffffff
+    sub         tmp1, tmp2, tmp1        // diff
+    mov         tmp2, CACHE_LINE_SIZE * 20
+    cmp         tmp1, tmp2
+    b.lo        L(L1_prefetch)
+    mov         tmp1, dest_ptr
+    dc          zva, tmp1               // 1
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 2
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 3
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 4
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 5
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 6
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 7
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 8
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 9
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 10
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 11
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 12
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 13
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 14
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 15
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 16
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 17
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 18
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 19
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 20
+
+L(L2_vl_64): // VL64 unroll8
+    cmp         vector_length, 64
+    b.ne        L(L2_vl_32)
+    ptrue       p0.b
+    .p2align 3
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE * 2
+    sub         rest, rest, CACHE_LINE_SIZE * 2
+1:  st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    mov         tmp1, PF_DIST_L1
+    prfm        pstl1keep, [dest_ptr, tmp1]
+    mov         tmp1, PF_DIST_L2
+    prfm        pstl2keep, [dest_ptr, tmp1]
+    mov         tmp2, CACHE_LINE_SIZE * 19
+    add         tmp2, dest_ptr, tmp2
+    dc          zva, tmp2       // distance CACHE_LINE_SIZE * 19
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    mov         tmp1, PF_DIST_L1 + CACHE_LINE_SIZE
+    prfm        pstl1keep, [dest_ptr, tmp1]
+    mov         tmp1, PF_DIST_L2 + CACHE_LINE_SIZE
+    prfm        pstl2keep, [dest_ptr, tmp1]
+    add         tmp2, tmp2, CACHE_LINE_SIZE
+    dc          zva, tmp2       // distance CACHE_LINE_SIZE * 20
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE * 2
+    sub         rest, rest, CACHE_LINE_SIZE * 2
+    cmp         rest, L2_SIZE
+    b.ge        1b
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2
+
+L(L2_vl_32): // VL32 unroll6
+    cmp         vector_length, 32
+    b.ne        L(L2_vl_16)
+    ptrue       p0.b
+    .p2align 3
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE
+    sub         rest, rest, CACHE_LINE_SIZE
+1:  st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    mov         tmp1, PF_DIST_L1
+    prfm        pstl1keep, [dest_ptr, tmp1]
+    mov         tmp1, PF_DIST_L2
+    prfm        pstl2keep, [dest_ptr, tmp1]
+    mov         tmp2, CACHE_LINE_SIZE * 19
+    add         tmp2, dest_ptr, tmp2
+    dc          zva, tmp2       // distance CACHE_LINE_SIZE * 19
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    mov         tmp1, PF_DIST_L1 + CACHE_LINE_SIZE
+    prfm        pstl1keep, [dest_ptr, tmp1]
+    mov         tmp1, PF_DIST_L2 + CACHE_LINE_SIZE
+    prfm        pstl2keep, [dest_ptr, tmp1]
+    add         tmp2, tmp2, CACHE_LINE_SIZE
+    dc          zva, tmp2       // distance CACHE_LINE_SIZE * 20
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE
+    sub         rest, rest, CACHE_LINE_SIZE * 2
+    cmp         rest, L2_SIZE
+    b.ge        1b
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE
+
+L(L2_vl_16): // VL16 unroll32
+    cmp         vector_length, 16
+    b.ne        L(L1_prefetch)
+    ptrue       p0.b
+    .p2align 3
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE / 2
+    ld1b        z16.b,  p0/z, [src_ptr, #-8, mul vl]
+    ld1b        z17.b,  p0/z, [src_ptr, #-7, mul vl]
+    ld1b        z18.b, p0/z, [src_ptr,  #-6, mul vl]
+    ld1b        z19.b, p0/z, [src_ptr,  #-5, mul vl]
+    ld1b        z20.b, p0/z, [src_ptr,  #-4, mul vl]
+    ld1b        z21.b, p0/z, [src_ptr,  #-3, mul vl]
+    ld1b        z22.b, p0/z, [src_ptr,  #-2, mul vl]
+    ld1b        z23.b, p0/z, [src_ptr,  #-1, mul vl]
+    ld1b        z0.b,  p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b,  p0/z, [src_ptr,  #1, mul vl]
+    ld1b        z2.b,  p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b,  p0/z, [src_ptr,  #3, mul vl]
+    ld1b        z4.b,  p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b,  p0/z, [src_ptr,  #5, mul vl]
+    ld1b        z6.b,  p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b,  p0/z, [src_ptr,  #7, mul vl]
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE / 2
+    sub         rest, rest, CACHE_LINE_SIZE
+1:  add         dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE / 2
+    st1b        z16.b, p0,   [dest_ptr, #-8, mul vl]
+    st1b        z17.b, p0,   [dest_ptr, #-7, mul vl]
+    ld1b        z16.b, p0/z, [src_ptr,  #-8, mul vl]
+    ld1b        z17.b, p0/z, [src_ptr,  #-7, mul vl]
+    st1b        z18.b, p0,   [dest_ptr, #-6, mul vl]
+    st1b        z19.b, p0,   [dest_ptr, #-5, mul vl]
+    ld1b        z18.b, p0/z, [src_ptr,  #-6, mul vl]
+    ld1b        z19.b, p0/z, [src_ptr,  #-5, mul vl]
+    st1b        z20.b, p0,   [dest_ptr, #-4, mul vl]
+    st1b        z21.b, p0,   [dest_ptr, #-3, mul vl]
+    ld1b        z20.b, p0/z, [src_ptr,  #-4, mul vl]
+    ld1b        z21.b, p0/z, [src_ptr,  #-3, mul vl]
+    st1b        z22.b, p0,   [dest_ptr, #-2, mul vl]
+    st1b        z23.b, p0,   [dest_ptr, #-1, mul vl]
+    ld1b        z22.b, p0/z, [src_ptr,  #-2, mul vl]
+    ld1b        z23.b, p0/z, [src_ptr,  #-1, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    mov         tmp1, PF_DIST_L1
+    prfm        pstl1keep, [dest_ptr, tmp1]
+    mov         tmp1, PF_DIST_L2
+    prfm        pstl2keep, [dest_ptr, tmp1]
+    mov         tmp2, CACHE_LINE_SIZE * 19
+    add         tmp2, dest_ptr, tmp2
+    dc          zva, tmp2       // distance CACHE_LINE_SIZE * 19
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE
+    st1b        z16.b, p0,   [dest_ptr, #-8, mul vl]
+    st1b        z17.b, p0,   [dest_ptr, #-7, mul vl]
+    ld1b        z16.b, p0/z, [src_ptr,  #-8, mul vl]
+    ld1b        z17.b, p0/z, [src_ptr,  #-7, mul vl]
+    st1b        z18.b, p0,   [dest_ptr, #-6, mul vl]
+    st1b        z19.b, p0,   [dest_ptr, #-5, mul vl]
+    ld1b        z18.b, p0/z, [src_ptr,  #-6, mul vl]
+    ld1b        z19.b, p0/z, [src_ptr,  #-5, mul vl]
+    st1b        z20.b, p0,   [dest_ptr, #-4, mul vl]
+    st1b        z21.b, p0,   [dest_ptr, #-3, mul vl]
+    ld1b        z20.b, p0/z, [src_ptr,  #-4, mul vl]
+    ld1b        z21.b, p0/z, [src_ptr,  #-3, mul vl]
+    st1b        z22.b, p0,   [dest_ptr, #-2, mul vl]
+    st1b        z23.b, p0,   [dest_ptr, #-1, mul vl]
+    ld1b        z22.b, p0/z, [src_ptr,  #-2, mul vl]
+    ld1b        z23.b, p0/z, [src_ptr,  #-1, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    mov         tmp1, PF_DIST_L1 + CACHE_LINE_SIZE
+    prfm        pstl1keep, [dest_ptr, tmp1]
+    mov         tmp1, PF_DIST_L2 + CACHE_LINE_SIZE
+    prfm        pstl2keep, [dest_ptr, tmp1]
+    add         tmp2, tmp2, CACHE_LINE_SIZE
+    dc          zva, tmp2       // distance CACHE_LINE_SIZE * 20
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE / 2
+    sub         rest, rest, CACHE_LINE_SIZE * 2
+    cmp         rest, L2_SIZE
+    b.ge        1b
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2
+    st1b        z16.b, p0, [dest_ptr, #-8, mul vl]
+    st1b        z17.b, p0, [dest_ptr, #-7, mul vl]
+    st1b        z18.b, p0, [dest_ptr, #-6, mul vl]
+    st1b        z19.b, p0, [dest_ptr, #-5, mul vl]
+    st1b        z20.b, p0, [dest_ptr, #-4, mul vl]
+    st1b        z21.b, p0, [dest_ptr, #-3, mul vl]
+    st1b        z22.b, p0, [dest_ptr, #-2, mul vl]
+    st1b        z23.b, p0, [dest_ptr, #-1, mul vl]
+    st1b        z0.b, p0,  [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,  [dest_ptr, #1, mul vl]
+    st1b        z2.b, p0,  [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,  [dest_ptr, #3, mul vl]
+    st1b        z4.b, p0,  [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,  [dest_ptr, #5, mul vl]
+    st1b        z6.b, p0,  [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,  [dest_ptr, #7, mul vl]
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2
+
+L(L1_prefetch): // if rest >= L1_SIZE
+    cmp         rest, L1_SIZE
+    b.cc        L(vl_agnostic)
+L(L1_vl_64):
+    cmp         vector_length, 64
+    b.ne        L(L1_vl_32)
+    ptrue       p0.b
+    .p2align 3
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE * 2
+    sub         rest, rest, CACHE_LINE_SIZE * 2
+1:  st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    mov         tmp1, PF_DIST_L1
+    prfm        pstl1keep, [dest_ptr, tmp1]
+    mov         tmp1, PF_DIST_L2
+    prfm        pstl2keep, [dest_ptr, tmp1]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    mov         tmp1, PF_DIST_L1 + CACHE_LINE_SIZE
+    prfm        pstl1keep, [dest_ptr, tmp1]
+    mov         tmp1, PF_DIST_L2 + CACHE_LINE_SIZE
+    prfm        pstl2keep, [dest_ptr, tmp1]
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE * 2
+    sub         rest, rest, CACHE_LINE_SIZE * 2
+    cmp         rest, L1_SIZE
+    b.ge        1b
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2
+
+L(L1_vl_32):
+    cmp         vector_length, 32
+    b.ne        L(L1_vl_16)
+    ptrue       p0.b
+    .p2align 3
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE
+    sub         rest, rest, CACHE_LINE_SIZE
+1:  st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    mov         tmp1, PF_DIST_L1
+    prfm        pstl1keep, [dest_ptr, tmp1]
+    mov         tmp1, PF_DIST_L2
+    prfm        pstl2keep, [dest_ptr, tmp1]
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    mov         tmp1, PF_DIST_L1 + CACHE_LINE_SIZE
+    prfm        pstl1keep, [dest_ptr, tmp1]
+    mov         tmp1, PF_DIST_L2 + CACHE_LINE_SIZE
+    prfm        pstl2keep, [dest_ptr, tmp1]
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE
+    sub         rest, rest, CACHE_LINE_SIZE * 2
+    cmp         rest, L1_SIZE
+    b.ge        1b
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE
+
+L(L1_vl_16):
+    cmp         vector_length, 16
+    b.ne        L(vl_agnostic)
+    ptrue       p0.b
+    .p2align 3
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE / 2
+    ld1b        z16.b,  p0/z, [src_ptr, #-8, mul vl]
+    ld1b        z17.b,  p0/z, [src_ptr, #-7, mul vl]
+    ld1b        z18.b, p0/z, [src_ptr,  #-6, mul vl]
+    ld1b        z19.b, p0/z, [src_ptr,  #-5, mul vl]
+    ld1b        z20.b, p0/z, [src_ptr,  #-4, mul vl]
+    ld1b        z21.b, p0/z, [src_ptr,  #-3, mul vl]
+    ld1b        z22.b, p0/z, [src_ptr,  #-2, mul vl]
+    ld1b        z23.b, p0/z, [src_ptr,  #-1, mul vl]
+    ld1b        z0.b,  p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b,  p0/z, [src_ptr,  #1, mul vl]
+    ld1b        z2.b,  p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b,  p0/z, [src_ptr,  #3, mul vl]
+    ld1b        z4.b,  p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b,  p0/z, [src_ptr,  #5, mul vl]
+    ld1b        z6.b,  p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b,  p0/z, [src_ptr,  #7, mul vl]
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE / 2
+    sub         rest, rest, CACHE_LINE_SIZE
+1:  add         dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE / 2
+    st1b        z16.b, p0,   [dest_ptr, #-8, mul vl]
+    st1b        z17.b, p0,   [dest_ptr, #-7, mul vl]
+    ld1b        z16.b, p0/z, [src_ptr,  #-8, mul vl]
+    ld1b        z17.b, p0/z, [src_ptr,  #-7, mul vl]
+    st1b        z18.b, p0,   [dest_ptr, #-6, mul vl]
+    st1b        z19.b, p0,   [dest_ptr, #-5, mul vl]
+    ld1b        z18.b, p0/z, [src_ptr,  #-6, mul vl]
+    ld1b        z19.b, p0/z, [src_ptr,  #-5, mul vl]
+    st1b        z20.b, p0,   [dest_ptr, #-4, mul vl]
+    st1b        z21.b, p0,   [dest_ptr, #-3, mul vl]
+    ld1b        z20.b, p0/z, [src_ptr,  #-4, mul vl]
+    ld1b        z21.b, p0/z, [src_ptr,  #-3, mul vl]
+    st1b        z22.b, p0,   [dest_ptr, #-2, mul vl]
+    st1b        z23.b, p0,   [dest_ptr, #-1, mul vl]
+    ld1b        z22.b, p0/z, [src_ptr,  #-2, mul vl]
+    ld1b        z23.b, p0/z, [src_ptr,  #-1, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    mov         tmp1, PF_DIST_L1
+    prfm        pstl1keep, [dest_ptr, tmp1]
+    mov         tmp1, PF_DIST_L2
+    prfm        pstl2keep, [dest_ptr, tmp1]
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE
+    st1b        z16.b, p0,   [dest_ptr, #-8, mul vl]
+    st1b        z17.b, p0,   [dest_ptr, #-7, mul vl]
+    ld1b        z16.b, p0/z, [src_ptr,  #-8, mul vl]
+    ld1b        z17.b, p0/z, [src_ptr,  #-7, mul vl]
+    st1b        z18.b, p0,   [dest_ptr, #-6, mul vl]
+    st1b        z19.b, p0,   [dest_ptr, #-5, mul vl]
+    ld1b        z18.b, p0/z, [src_ptr,  #-6, mul vl]
+    ld1b        z19.b, p0/z, [src_ptr,  #-5, mul vl]
+    st1b        z20.b, p0,   [dest_ptr, #-4, mul vl]
+    st1b        z21.b, p0,   [dest_ptr, #-3, mul vl]
+    ld1b        z20.b, p0/z, [src_ptr,  #-4, mul vl]
+    ld1b        z21.b, p0/z, [src_ptr,  #-3, mul vl]
+    st1b        z22.b, p0,   [dest_ptr, #-2, mul vl]
+    st1b        z23.b, p0,   [dest_ptr, #-1, mul vl]
+    ld1b        z22.b, p0/z, [src_ptr,  #-2, mul vl]
+    ld1b        z23.b, p0/z, [src_ptr,  #-1, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    mov         tmp1, PF_DIST_L1 + CACHE_LINE_SIZE
+    prfm        pstl1keep, [dest_ptr, tmp1]
+    mov         tmp1, PF_DIST_L2 + CACHE_LINE_SIZE
+    prfm        pstl2keep, [dest_ptr, tmp1]
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE / 2
+    sub         rest, rest, CACHE_LINE_SIZE * 2
+    cmp         rest, L1_SIZE
+    b.ge        1b
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2
+    st1b        z16.b, p0, [dest_ptr, #-8, mul vl]
+    st1b        z17.b, p0, [dest_ptr, #-7, mul vl]
+    st1b        z18.b, p0, [dest_ptr, #-6, mul vl]
+    st1b        z19.b, p0, [dest_ptr, #-5, mul vl]
+    st1b        z20.b, p0, [dest_ptr, #-4, mul vl]
+    st1b        z21.b, p0, [dest_ptr, #-3, mul vl]
+    st1b        z22.b, p0, [dest_ptr, #-2, mul vl]
+    st1b        z23.b, p0, [dest_ptr, #-1, mul vl]
+    st1b        z0.b, p0,  [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,  [dest_ptr, #1, mul vl]
+    st1b        z2.b, p0,  [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,  [dest_ptr, #3, mul vl]
+    st1b        z4.b, p0,  [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,  [dest_ptr, #5, mul vl]
+    st1b        z6.b, p0,  [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,  [dest_ptr, #7, mul vl]
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE / 2
+
+L(vl_agnostic): // VL Agnostic
+
+L(unroll32): // unrolling and software pipeline
+    lsl         tmp1, vector_length, 3  // vector_length * 8
+    lsl         tmp2, vector_length, 5  // vector_length * 32
+    ptrue       p0.b
+    .p2align 3
+1:  cmp         rest, tmp2
+    b.cc        L(unroll8)
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    add         dest_ptr, dest_ptr, tmp1
+    add         src_ptr, src_ptr, tmp1
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    add         dest_ptr, dest_ptr, tmp1
+    add         src_ptr, src_ptr, tmp1
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    add         dest_ptr, dest_ptr, tmp1
+    add         src_ptr, src_ptr, tmp1
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    add         dest_ptr, dest_ptr, tmp1
+    add         src_ptr, src_ptr, tmp1
+    sub         rest, rest, tmp2
+    b           1b
+
+L(unroll8): // unrolling and software pipeline
+    lsl         tmp1, vector_length, 3  // vector_length * 8
+    ptrue       p0.b
+    .p2align 3
+1:  cmp         rest, tmp1
+    b.cc        L(unroll1)
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    add         dest_ptr, dest_ptr, tmp1
+    add         src_ptr, src_ptr, tmp1
+    sub         rest, rest, tmp1
+    b           1b
+
+ L(unroll1):
+    ptrue       p0.b
+    .p2align 3
+1:  cmp         rest, vector_length
+    b.cc        L(last)
+    ld1b        z0.b, p0/z, [src_ptr]
+    st1b        z0.b, p0,   [dest_ptr]
+    add         dest_ptr, dest_ptr, vector_length
+    add         src_ptr, src_ptr, vector_length
+    sub         rest, rest, vector_length
+    b           1b
+
+L(last):
+    whilelt     p0.b, xzr, rest
+    ld1b        z0.b, p0/z, [src_ptr]
+    st1b        z0.b, p0, [dest_ptr]
+    ret
+
+END (MEMCPY)
+libc_hidden_builtin_def (MEMCPY)
+
+
+    .p2align 4
+ENTRY_ALIGN (MEMMOVE, 6)
+
+    // remove tag address
+    and         tmp1, dest, 0xffffffffffffff
+    and         tmp2, src, 0xffffffffffffff
+    sub         tmp1, tmp1, tmp2         // diff
+    // if diff <= 0 || diff >= n then memcpy
+    cmp         tmp1, 0
+    ccmp        tmp1, n, 2, gt
+    b.cs        L(fwd_start)
+
+L(bwd_start):
+    mov         rest, n
+    add         dest_ptr, dest, n       // dest_end
+    add         src_ptr, src, n         // src_end
+    cntb        vector_length
+    ptrue       p0.b
+    udiv        tmp1, n, vector_length          // quotient
+    mul         tmp1, tmp1, vector_length       // product
+    sub         vl_remainder, n, tmp1
+    // if bwd_remainder == 0 then skip vl_remainder bwd copy
+    cmp         vl_remainder, 0
+    b.eq        L(bwd_main)
+    // vl_remainder bwd copy
+    whilelt     p0.b, xzr, vl_remainder
+    sub         src_ptr, src_ptr, vl_remainder
+    sub         dest_ptr, dest_ptr, vl_remainder
+    ld1b        z0.b, p0/z, [src_ptr]
+    st1b        z0.b, p0, [dest_ptr]
+    sub         rest, rest, vl_remainder
+
+L(bwd_main):
+
+    // VL Agnostic
+L(bwd_unroll32): // unrolling and software pipeline
+    lsl         tmp1, vector_length, 3  // vector_length * 8
+    lsl         tmp2, vector_length, 5  // vector_length * 32
+    ptrue       p0.b
+    .p2align 3
+1:  cmp         rest, tmp2
+    b.cc        L(bwd_unroll8)
+    sub         src_ptr, src_ptr, tmp1
+    sub         dest_ptr, dest_ptr, tmp1
+    ld1b        z0.b, p0/z, [src_ptr,  #7, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #6, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #7, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #6, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #5, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #4, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #5, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #4, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #3, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #2, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #3, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #2, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #1, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #0, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #1, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #0, mul vl]
+    sub         src_ptr, src_ptr, tmp1
+    sub         dest_ptr, dest_ptr, tmp1
+    ld1b        z0.b, p0/z, [src_ptr,  #7, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #6, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #7, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #6, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #5, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #4, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #5, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #4, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #3, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #2, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #3, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #2, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #1, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #0, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #1, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #0, mul vl]
+    sub         src_ptr, src_ptr, tmp1
+    sub         dest_ptr, dest_ptr, tmp1
+    ld1b        z0.b, p0/z, [src_ptr,  #7, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #6, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #7, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #6, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #5, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #4, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #5, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #4, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #3, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #2, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #3, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #2, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #1, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #0, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #1, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #0, mul vl]
+    sub         src_ptr, src_ptr, tmp1
+    sub         dest_ptr, dest_ptr, tmp1
+    ld1b        z0.b, p0/z, [src_ptr,  #7, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #6, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #7, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #6, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #5, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #4, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #5, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #4, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #3, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #2, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #3, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #2, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #1, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #0, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #1, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #0, mul vl]
+    sub         rest, rest, tmp2
+    b           1b
+
+L(bwd_unroll8): // unrolling and software pipeline
+    lsl         tmp1, vector_length, 3  // vector_length * 8
+    ptrue       p0.b
+    .p2align 3
+1:  cmp         rest, tmp1
+    b.cc        L(bwd_unroll1)
+    sub         src_ptr, src_ptr, tmp1
+    sub         dest_ptr, dest_ptr, tmp1
+    ld1b        z0.b, p0/z, [src_ptr,  #7, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #6, mul vl]
+    st1b        z0.b, p0,   [dest_ptr, #7, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #6, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #5, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #4, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #5, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #4, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #3, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #2, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #3, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #2, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #1, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #0, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #1, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #0, mul vl]
+    sub         rest, rest, tmp1
+    b           1b
+
+    .p2align 3
+L(bwd_unroll1):
+    ptrue       p0.b
+1:  cmp         rest, vector_length
+    b.cc        L(bwd_last)
+    sub         src_ptr, src_ptr, vector_length
+    sub         dest_ptr, dest_ptr, vector_length
+    ld1b        z0.b, p0/z, [src_ptr]
+    st1b        z0.b, p0, [dest_ptr]
+    sub         rest, rest, vector_length
+    b           1b
+
+L(bwd_last):
+    whilelt     p0.b, xzr, rest
+    sub         src_ptr, src_ptr, rest
+    sub         dest_ptr, dest_ptr, rest
+    ld1b        z0.b, p0/z, [src_ptr]
+    st1b        z0.b, p0, [dest_ptr]
+    ret
+
+END (MEMMOVE)
+libc_hidden_builtin_def (MEMMOVE)
+#endif /* IS_IN (libc) */
+#endif /* HAVE_SVE_ASM_SUPPORT */
+
diff --git a/sysdeps/aarch64/multiarch/memmove.c b/sysdeps/aarch64/multiarch/memmove.c
index 12d77818a9..1e5ee1c934 100644
--- a/sysdeps/aarch64/multiarch/memmove.c
+++ b/sysdeps/aarch64/multiarch/memmove.c
@@ -33,6 +33,9 @@ extern __typeof (__redirect_memmove) __memmove_simd attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_thunderx attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_thunderx2 attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_falkor attribute_hidden;
+#if HAVE_SVE_ASM_SUPPORT
+extern __typeof (__redirect_memmove) __memmove_a64fx attribute_hidden;
+#endif
 
 libc_ifunc (__libc_memmove,
             (IS_THUNDERX (midr)
@@ -44,8 +47,13 @@ libc_ifunc (__libc_memmove,
 		  : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
 		     || IS_NEOVERSE_V1 (midr)
 		     ? __memmove_simd
-		     : __memmove_generic)))));
-
+#if HAVE_SVE_ASM_SUPPORT
+                     : (IS_A64FX (midr)
+                        ? __memmove_a64fx
+                        : __memmove_generic))))));
+#else
+                        : __memmove_generic)))));
+#endif
 # undef memmove
 strong_alias (__libc_memmove, memmove);
 #endif
diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
index db6aa3516c..6206a2f618 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
@@ -46,6 +46,7 @@ static struct cpu_list cpu_list[] = {
       {"ares",		 0x411FD0C0},
       {"emag",		 0x503F0001},
       {"kunpeng920", 	 0x481FD010},
+      {"a64fx",		 0x460F0010},
       {"generic", 	 0x0}
 };
 
@@ -116,4 +117,7 @@ init_cpu_features (struct cpu_features *cpu_features)
 	     (PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_ASYNC | MTE_ALLOWED_TAGS),
 	     0, 0, 0);
 #endif
+
+  /* Check if SVE is supported.  */
+  cpu_features->sve = GLRO (dl_hwcap) & HWCAP_SVE;
 }
diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
index 3b9bfed134..2b322e5414 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
@@ -65,6 +65,9 @@
 #define IS_KUNPENG920(midr) (MIDR_IMPLEMENTOR(midr) == 'H'			   \
                         && MIDR_PARTNUM(midr) == 0xd01)
 
+#define IS_A64FX(midr) (MIDR_IMPLEMENTOR(midr) == 'F'			      \
+			&& MIDR_PARTNUM(midr) == 0x001)
+
 struct cpu_features
 {
   uint64_t midr_el1;
@@ -72,6 +75,7 @@ struct cpu_features
   bool bti;
   /* Currently, the GLIBC memory tagging tunable only defines 8 bits.  */
   uint8_t mte_state;
+  bool sve;
 };
 
 #endif /* _CPU_FEATURES_AARCH64_H  */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH 3/5] aarch64: Added optimized memset for A64FX
  2021-03-17  2:28 [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX Naohiro Tamura
  2021-03-17  2:33 ` [PATCH 1/5] config: Added HAVE_SVE_ASM_SUPPORT for aarch64 Naohiro Tamura
  2021-03-17  2:34 ` [PATCH 2/5] aarch64: Added optimized memcpy and memmove for A64FX Naohiro Tamura
@ 2021-03-17  2:34 ` Naohiro Tamura
  2021-03-17  2:35 ` [PATCH 4/5] scripts: Added Vector Length Set test helper script Naohiro Tamura
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 36+ messages in thread
From: Naohiro Tamura @ 2021-03-17  2:34 UTC (permalink / raw)
  To: libc-alpha; +Cc: Naohiro Tamura

From: Naohiro Tamura <naohirot@jp.fujitsu.com>

This patch optimizes the performance of memset for A64FX [1], which
implements ARMv8-A SVE and has a 64 KB L1 cache per core and an 8 MB
L2 cache per NUMA node.

The optimization makes use of the Scalable Vector Registers together
with several techniques such as loop unrolling, memory access
alignment, cache zero fill and prefetch.

The SVE assembler code for memset is implemented as Vector Length
Agnostic (VLA) code, so in principle it can run on any SoC that
supports the ARMv8-A SVE standard.
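
For reference, the vector-length-agnostic idea behind the hand-written
assembler can be sketched in C with the ACLE SVE intrinsics.  This is
an illustration only, not the code in this patch, and it assumes a
compiler that provides arm_sve.h:

#include <arm_sve.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal VLA sketch: svcntb () returns the vector length in bytes at
   run time, and WHILELT builds a partial predicate for the tail, so
   the same code runs unchanged for VL = 16, 32 or 64 bytes.  */
static void
sve_memset_sketch (void *dst, int c, size_t n)
{
  uint8_t *p = dst;
  svuint8_t v = svdup_n_u8 ((uint8_t) c);
  size_t vl = svcntb ();
  size_t i = 0;
  for (; i + vl <= n; i += vl)
    svst1_u8 (svptrue_b8 (), p + i, v);
  svst1_u8 (svwhilelt_b8_u64 (i, n), p + i, v);   /* tail */
}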

We confirmed that all test cases pass by running 'make check' and
'make xcheck', not only on A64FX but also on ThunderX2.

We also confirmed with 'make bench' that the SVE 512-bit vector
register implementation is roughly 4 times faster than the Advanced
SIMD 128-bit implementation and 8 times faster than the scalar 64-bit
implementation.

[1] https://github.com/fujitsu/A64FX
---
 sysdeps/aarch64/multiarch/Makefile          |   1 +
 sysdeps/aarch64/multiarch/ifunc-impl-list.c |   5 +-
 sysdeps/aarch64/multiarch/memset.c          |  11 +-
 sysdeps/aarch64/multiarch/memset_a64fx.S    | 574 ++++++++++++++++++++
 4 files changed, 589 insertions(+), 2 deletions(-)
 create mode 100644 sysdeps/aarch64/multiarch/memset_a64fx.S

diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
index 04c3f17121..7500cf1e93 100644
--- a/sysdeps/aarch64/multiarch/Makefile
+++ b/sysdeps/aarch64/multiarch/Makefile
@@ -2,6 +2,7 @@ ifeq ($(subdir),string)
 sysdep_routines += memcpy_generic memcpy_advsimd memcpy_thunderx memcpy_thunderx2 \
 		   memcpy_falkor memcpy_a64fx \
 		   memset_generic memset_falkor memset_emag memset_kunpeng \
+		   memset_a64fx \
 		   memchr_generic memchr_nosimd \
 		   strlen_mte strlen_asimd
 endif
diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
index cb78da9692..e252a10d88 100644
--- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
@@ -41,7 +41,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 
   INIT_ARCH ();
 
-  /* Support sysdeps/aarch64/multiarch/memcpy.c and memmove.c.  */
+  /* Support sysdeps/aarch64/multiarch/memcpy.c, memmove.c and memset.c.  */
   IFUNC_IMPL (i, name, memcpy,
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_thunderx)
 	      IFUNC_IMPL_ADD (array, i, memcpy, !bti, __memcpy_thunderx2)
@@ -66,6 +66,9 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 	      IFUNC_IMPL_ADD (array, i, memset, (zva_size == 64), __memset_falkor)
 	      IFUNC_IMPL_ADD (array, i, memset, (zva_size == 64), __memset_emag)
 	      IFUNC_IMPL_ADD (array, i, memset, 1, __memset_kunpeng)
+#if HAVE_SVE_ASM_SUPPORT
+	      IFUNC_IMPL_ADD (array, i, memset, sve, __memset_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memset, 1, __memset_generic))
   IFUNC_IMPL (i, name, memchr,
 	      IFUNC_IMPL_ADD (array, i, memchr, !mte, __memchr_nosimd)
diff --git a/sysdeps/aarch64/multiarch/memset.c b/sysdeps/aarch64/multiarch/memset.c
index 28d3926bc2..df075edddb 100644
--- a/sysdeps/aarch64/multiarch/memset.c
+++ b/sysdeps/aarch64/multiarch/memset.c
@@ -31,6 +31,9 @@ extern __typeof (__redirect_memset) __libc_memset;
 extern __typeof (__redirect_memset) __memset_falkor attribute_hidden;
 extern __typeof (__redirect_memset) __memset_emag attribute_hidden;
 extern __typeof (__redirect_memset) __memset_kunpeng attribute_hidden;
+#if HAVE_SVE_ASM_SUPPORT
+extern __typeof (__redirect_memset) __memset_a64fx attribute_hidden;
+#endif
 extern __typeof (__redirect_memset) __memset_generic attribute_hidden;
 
 libc_ifunc (__libc_memset,
@@ -40,7 +43,13 @@ libc_ifunc (__libc_memset,
 	     ? __memset_falkor
 	     : (IS_EMAG (midr) && zva_size == 64
 	       ? __memset_emag
-	       : __memset_generic)));
+#if HAVE_SVE_ASM_SUPPORT
+	       : (IS_A64FX (midr)
+		  ? __memset_a64fx
+	          : __memset_generic))));
+#else
+	          : __memset_generic)));
+#endif
 
 # undef memset
 strong_alias (__libc_memset, memset);
diff --git a/sysdeps/aarch64/multiarch/memset_a64fx.S b/sysdeps/aarch64/multiarch/memset_a64fx.S
new file mode 100644
index 0000000000..02ae7caab0
--- /dev/null
+++ b/sysdeps/aarch64/multiarch/memset_a64fx.S
@@ -0,0 +1,574 @@
+/* Optimized memset for Fujitsu A64FX processor.
+   Copyright (C) 2012-2021 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include <sysdeps/aarch64/memset-reg.h>
+
+#if HAVE_SVE_ASM_SUPPORT
+#if IS_IN (libc)
+# define MEMSET __memset_a64fx
+
+/* Assumptions:
+ *
+ * ARMv8.2-a, AArch64, unaligned accesses, sve
+ *
+ */
+
+#define L1_SIZE         (64*1024)       // L1 64KB
+#define L2_SIZE         (8*1024*1024)   // L2 8MB - 1MB
+#define CACHE_LINE_SIZE 256
+#define PF_DIST_L1 (CACHE_LINE_SIZE * 16)
+#define PF_DIST_L2 (CACHE_LINE_SIZE * 128)
+#define rest            x8
+#define vector_length   x9
+#define vl_remainder    x10     // vector_length remainder
+#define cl_remainder    x11     // CACHE_LINE_SIZE remainder
+
+    .arch armv8.2-a+sve
+
+ENTRY_ALIGN (MEMSET, 6)
+
+    PTR_ARG (0)
+    SIZE_ARG (2)
+
+    cmp         count, 0
+    b.ne        L(init)
+    ret
+L(init):
+    mov         rest, count
+    mov         dst, dstin
+    add         dstend, dstin, count
+    cntb        vector_length
+    ptrue       p0.b
+    dup         z0.b, valw
+
+    cmp         count, 96
+    b.hi	L(set_long)
+    cmp         count, 16
+    b.hs	L(set_medium)
+    mov         val, v0.D[0]
+
+    /* Set 0..15 bytes.  */
+    tbz         count, 3, 1f
+    str         val, [dstin]
+    str         val, [dstend, -8]
+    ret
+    nop
+1:  tbz         count, 2, 2f
+    str         valw, [dstin]
+    str         valw, [dstend, -4]
+    ret
+2:  cbz         count, 3f
+    strb        valw, [dstin]
+    tbz         count, 1, 3f
+    strh        valw, [dstend, -2]
+3:  ret
+
+    /* Set 17..96 bytes.  */
+L(set_medium):
+    str         q0, [dstin]
+    tbnz        count, 6, L(set96)
+    str         q0, [dstend, -16]
+    tbz         count, 5, 1f
+    str         q0, [dstin, 16]
+    str         q0, [dstend, -32]
+1:  ret
+
+    .p2align 4
+    /* Set 64..96 bytes.  Write 64 bytes from the start and
+       32 bytes from the end.  */
+L(set96):
+    str         q0, [dstin, 16]
+    stp         q0, q0, [dstin, 32]
+    stp         q0, q0, [dstend, -32]
+    ret
+
+L(set_long):
+    // if count > 1280 && vector_length != 16 then L(L2)
+    cmp         count, 1280
+    ccmp        vector_length, 16, 4, gt
+    b.ne        L(L2)
+    bic         dst, dstin, 15
+    str         q0, [dstin]
+    sub         count, dstend, dst      /* Count is 16 too large.  */
+    sub         dst, dst, 16            /* Dst is biased by -32.  */
+    sub         count, count, 64 + 16   /* Adjust count and bias for loop.  */
+1:  stp         q0, q0, [dst, 32]
+    stp         q0, q0, [dst, 64]!
+    subs        count, count, 64
+    b.lo        2f
+    stp         q0, q0, [dst, 32]
+    stp         q0, q0, [dst, 64]!
+    subs        count, count, 64
+    b.lo	2f
+    stp         q0, q0, [dst, 32]
+    stp         q0, q0, [dst, 64]!
+    subs        count, count, 64
+    b.lo        2f
+    stp         q0, q0, [dst, 32]
+    stp         q0, q0, [dst, 64]!
+    subs        count, count, 64
+    b.hi        1b
+2:  stp         q0, q0, [dstend, -64]
+    stp         q0, q0, [dstend, -32]
+    ret
+
+L(L2):
+    // get block_size
+    mrs         tmp1, dczid_el0
+    cmp         tmp1, 6         // CACHE_LINE_SIZE 256
+    b.ne        L(vl_agnostic)
+
+    // if rest >= L2_SIZE
+    cmp         rest, L2_SIZE
+    b.cc        L(L1_prefetch)
+    // align dst address at vector_length byte boundary
+    sub         tmp1, vector_length, 1
+    and         tmp2, dst, tmp1
+    // if vl_remainder == 0
+    cmp         tmp2, 0
+    b.eq        1f
+    sub         vl_remainder, vector_length, tmp2
+    // process remainder until the first vector_length boundary
+    whilelt     p0.b, xzr, vl_remainder
+    st1b        z0.b, p0, [dst]
+    add         dst, dst, vl_remainder
+    sub         rest, rest, vl_remainder
+    // align dst address at CACHE_LINE_SIZE byte boundary
+1:  mov         tmp1, CACHE_LINE_SIZE
+    and         tmp2, dst, CACHE_LINE_SIZE - 1
+    // if cl_remainder == 0
+    cmp         tmp2, 0
+    b.eq        L(L2_dc_zva)
+    sub         cl_remainder, tmp1, tmp2
+    // process remainder until the first CACHE_LINE_SIZE boundary
+    mov         tmp1, xzr       // index
+2:  whilelt     p0.b, tmp1, cl_remainder
+    st1b        z0.b, p0, [dst, tmp1]
+    incb        tmp1
+    cmp         tmp1, cl_remainder
+    b.lo        2b
+    add         dst, dst, cl_remainder
+    sub         rest, rest, cl_remainder
+
+L(L2_dc_zva): // unroll zero fill
+    mov         tmp1, dst
+    dc          zva, tmp1               // 1
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 2
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 3
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 4
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 5
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 6
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 7
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 8
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 9
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 10
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 11
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 12
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 13
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 14
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 15
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 16
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 17
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 18
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 19
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    dc          zva, tmp1               // 20
+
+L(L2_vl_64): // VL64 unroll8
+    cmp         vector_length, 64
+    b.ne        L(L2_vl_32)
+    ptrue       p0.b
+    .p2align 4
+1:  st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    mov         tmp2, CACHE_LINE_SIZE * 20
+    add         tmp2, dst, tmp2
+    dc          zva, tmp2       // distance CACHE_LINE_SIZE * 20
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    add         tmp2, tmp2, CACHE_LINE_SIZE
+    dc          zva, tmp2       // distance CACHE_LINE_SIZE * 21
+    add         dst, dst, 512
+    sub         rest, rest, 512
+    cmp         rest, L2_SIZE
+    b.ge        1b
+
+L(L2_vl_32): // VL32 unroll16
+    cmp         vector_length, 32
+    b.ne        L(L2_vl_16)
+    ptrue       p0.b
+    .p2align 4
+1:  st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    mov         tmp2, CACHE_LINE_SIZE * 21
+    add         tmp2, dst, tmp2
+    dc          zva, tmp2       // distance CACHE_LINE_SIZE * 21
+    add         dst, dst, CACHE_LINE_SIZE
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    add         tmp2, tmp2, CACHE_LINE_SIZE
+    dc          zva, tmp2       // distance CACHE_LINE_SIZE * 22
+    add         dst, dst, CACHE_LINE_SIZE
+    sub         rest, rest, 512
+    cmp         rest, L2_SIZE
+    b.ge        1b
+
+L(L2_vl_16):  // VL16 unroll32
+    cmp         vector_length, 16
+    b.ne        L(L1_prefetch)
+    ptrue       p0.b
+    .p2align 4
+1:  add         dst, dst, 128
+    st1b        {z0.b}, p0, [dst, #-8, mul vl]
+    st1b        {z0.b}, p0, [dst, #-7, mul vl]
+    st1b        {z0.b}, p0, [dst, #-6, mul vl]
+    st1b        {z0.b}, p0, [dst, #-5, mul vl]
+    st1b        {z0.b}, p0, [dst, #-4, mul vl]
+    st1b        {z0.b}, p0, [dst, #-3, mul vl]
+    st1b        {z0.b}, p0, [dst, #-2, mul vl]
+    st1b        {z0.b}, p0, [dst, #-1, mul vl]
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    mov         tmp2, CACHE_LINE_SIZE * 20
+    add         tmp2, dst, tmp2
+    dc          zva, tmp2       // distance CACHE_LINE_SIZE * 20
+    add         dst, dst, CACHE_LINE_SIZE
+    st1b        {z0.b}, p0, [dst, #-8, mul vl]
+    st1b        {z0.b}, p0, [dst, #-7, mul vl]
+    st1b        {z0.b}, p0, [dst, #-6, mul vl]
+    st1b        {z0.b}, p0, [dst, #-5, mul vl]
+    st1b        {z0.b}, p0, [dst, #-4, mul vl]
+    st1b        {z0.b}, p0, [dst, #-3, mul vl]
+    st1b        {z0.b}, p0, [dst, #-2, mul vl]
+    st1b        {z0.b}, p0, [dst, #-1, mul vl]
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    add         tmp2, tmp2, CACHE_LINE_SIZE
+    dc          zva, tmp2       // distance CACHE_LINE_SIZE * 21
+    add         dst, dst, 128
+    sub         rest, rest, 512
+    cmp         rest, L2_SIZE
+    b.ge        1b
+
+L(L1_prefetch): // if rest >= L1_SIZE
+    cmp         rest, L1_SIZE
+    b.cc        L(vl_agnostic)
+L(L1_vl_64):
+    cmp         vector_length, 64
+    b.ne        L(L1_vl_32)
+    ptrue       p0.b
+    .p2align 4
+1:  st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    mov         tmp1, PF_DIST_L1
+    prfm        pstl1keep, [dst, tmp1]
+    mov         tmp1, PF_DIST_L2
+    prfm        pstl2keep, [dst, tmp1]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    mov         tmp1, PF_DIST_L1 + CACHE_LINE_SIZE
+    prfm        pstl1keep, [dst, tmp1]
+    mov         tmp1, PF_DIST_L2 + CACHE_LINE_SIZE
+    prfm        pstl2keep, [dst, tmp1]
+    add         dst, dst, 512
+    sub         rest, rest, 512
+    cmp         rest, L1_SIZE
+    b.ge        1b
+
+L(L1_vl_32):
+    cmp         vector_length, 32
+    b.ne        L(L1_vl_16)
+    ptrue       p0.b
+    .p2align 4
+1:  st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    mov         tmp1, PF_DIST_L1
+    prfm        pstl1keep, [dst, tmp1]
+    mov         tmp1, PF_DIST_L2
+    prfm        pstl2keep, [dst, tmp1]
+    add         dst, dst, CACHE_LINE_SIZE
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    mov         tmp1, PF_DIST_L1 + CACHE_LINE_SIZE
+    prfm        pstl1keep, [dst, tmp1]
+    mov         tmp1, PF_DIST_L2 + CACHE_LINE_SIZE
+    prfm        pstl2keep, [dst, tmp1]
+    add         dst, dst, CACHE_LINE_SIZE
+    sub         rest, rest, 512
+    cmp         rest, L1_SIZE
+    b.ge        1b
+
+L(L1_vl_16):  // VL16 unroll32
+    cmp         vector_length, 16
+    b.ne        L(vl_agnostic)
+    ptrue       p0.b
+    .p2align 4
+1:  mov         tmp1, dst
+    add         dst, dst, 128
+    st1b        {z0.b}, p0, [dst, #-8, mul vl]
+    st1b        {z0.b}, p0, [dst, #-7, mul vl]
+    st1b        {z0.b}, p0, [dst, #-6, mul vl]
+    st1b        {z0.b}, p0, [dst, #-5, mul vl]
+    st1b        {z0.b}, p0, [dst, #-4, mul vl]
+    st1b        {z0.b}, p0, [dst, #-3, mul vl]
+    st1b        {z0.b}, p0, [dst, #-2, mul vl]
+    st1b        {z0.b}, p0, [dst, #-1, mul vl]
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    mov         tmp1, PF_DIST_L1
+    prfm        pstl1keep, [dst, tmp1]
+    mov         tmp1, PF_DIST_L2
+    prfm        pstl2keep, [dst, tmp1]
+    add         dst, dst, CACHE_LINE_SIZE
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    st1b        {z0.b}, p0, [dst, #-8, mul vl]
+    st1b        {z0.b}, p0, [dst, #-7, mul vl]
+    st1b        {z0.b}, p0, [dst, #-6, mul vl]
+    st1b        {z0.b}, p0, [dst, #-5, mul vl]
+    st1b        {z0.b}, p0, [dst, #-4, mul vl]
+    st1b        {z0.b}, p0, [dst, #-3, mul vl]
+    st1b        {z0.b}, p0, [dst, #-2, mul vl]
+    st1b        {z0.b}, p0, [dst, #-1, mul vl]
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    mov         tmp1, PF_DIST_L1 + CACHE_LINE_SIZE
+    prfm        pstl1keep, [dst, tmp1]
+    mov         tmp1, PF_DIST_L2 + CACHE_LINE_SIZE
+    prfm        pstl2keep, [dst, tmp1]
+    add         dst, dst, 128
+    sub         rest, rest, 512
+    cmp         rest, L1_SIZE
+    b.ge        1b
+
+    // VL Agnostic
+L(vl_agnostic):
+L(unroll32):
+    ptrue       p0.b
+    lsl         tmp1, vector_length, 3  // vector_length * 8
+    lsl         tmp2, vector_length, 5  // vector_length * 32
+    .p2align 4
+1:  cmp         rest, tmp2
+    b.cc        L(unroll16)
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    add         dst, dst, tmp1
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    add         dst, dst, tmp1
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    add         dst, dst, tmp1
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    add         dst, dst, tmp1
+    sub         rest, rest, tmp2
+    b           1b
+
+L(unroll16):
+    ptrue       p0.b
+    lsl         tmp1, vector_length, 3  // vector_length * 8
+    lsl         tmp2, vector_length, 4  // vector_length * 16
+    .p2align 4
+1:  cmp         rest, tmp2
+    b.cc        L(unroll8)
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    add         dst, dst, tmp1
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    add         dst, dst, tmp1
+    sub         rest, rest, tmp2
+    b           1b
+
+L(unroll8):
+    lsl         tmp1, vector_length, 3
+    ptrue       p0.b
+    .p2align 4
+1:  cmp         rest, tmp1
+    b.cc        L(unroll4)
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    st1b        {z0.b}, p0, [dst, #4, mul vl]
+    st1b        {z0.b}, p0, [dst, #5, mul vl]
+    st1b        {z0.b}, p0, [dst, #6, mul vl]
+    st1b        {z0.b}, p0, [dst, #7, mul vl]
+    add         dst, dst, tmp1
+    sub         rest, rest, tmp1
+    b           1b
+
+L(unroll4):
+    lsl         tmp1, vector_length, 2
+    ptrue       p0.b
+    .p2align 4
+1:  cmp         rest, tmp1
+    b.cc        L(unroll2)
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    st1b        {z0.b}, p0, [dst, #2, mul vl]
+    st1b        {z0.b}, p0, [dst, #3, mul vl]
+    add         dst, dst, tmp1
+    sub         rest, rest, tmp1
+    b           1b
+
+L(unroll2):
+    lsl         tmp1, vector_length, 1
+    ptrue       p0.b
+    .p2align 4
+1:  cmp         rest, tmp1
+    b.cc        L(unroll1)
+    st1b        {z0.b}, p0, [dst]
+    st1b        {z0.b}, p0, [dst, #1, mul vl]
+    add         dst, dst, tmp1
+    sub         rest, rest, tmp1
+    b           1b
+
+L(unroll1):
+    ptrue       p0.b
+    .p2align 4
+1:  cmp         rest, vector_length
+    b.cc        L(last)
+    st1b        {z0.b}, p0, [dst]
+    sub         rest, rest, vector_length
+    add         dst, dst, vector_length
+    b           1b
+
+    .p2align 4
+L(last):
+    whilelt     p0.b, xzr, rest
+    st1b        z0.b, p0, [dst]
+    ret
+
+END (MEMSET)
+libc_hidden_builtin_def (MEMSET)
+
+#endif /* IS_IN (libc) */
+#endif /* HAVE_SVE_ASM_SUPPORT */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH 4/5] scripts: Added Vector Length Set test helper script
  2021-03-17  2:28 [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX Naohiro Tamura
                   ` (2 preceding siblings ...)
  2021-03-17  2:34 ` [PATCH 3/5] aarch64: Added optimized memset " Naohiro Tamura
@ 2021-03-17  2:35 ` Naohiro Tamura
  2021-03-29 13:20   ` Szabolcs Nagy
  2021-03-17  2:35 ` [PATCH 5/5] benchtests: Added generic_memcpy and generic_memmove to large benchtests Naohiro Tamura
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 36+ messages in thread
From: Naohiro Tamura @ 2021-03-17  2:35 UTC (permalink / raw)
  To: libc-alpha; +Cc: Naohiro Tamura

From: Naohiro Tamura <naohirot@jp.fujitsu.com>

This patch adds a test helper script that changes the Vector Length
for a child process. The script can be used as the test-wrapper for
'make check'.

Usage examples:

ubuntu@bionic:~/build$ make check subdirs=string \
test-wrapper='~/glibc/scripts/vltest.py 16'

ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 16 make test \
t=string/test-memcpy

ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 32 ./debugglibc.sh \
string/test-memmove

ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 64 ./testrun.sh
string/test-memset
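
To double-check that the child really runs with the requested VL, the
value can be read back with PR_SVE_GET_VL.  A minimal check program
(hypothetical sketch; the constants mirror the ones in the script, and
the fallback defines are only needed with old kernel headers):

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SVE_GET_VL
# define PR_SVE_GET_VL 51
#endif
#ifndef PR_SVE_VL_LEN_MASK
# define PR_SVE_VL_LEN_MASK 0xffff
#endif

int
main (void)
{
  int r = prctl (PR_SVE_GET_VL);
  if (r < 0)
    {
      perror ("prctl (PR_SVE_GET_VL)");
      return 1;
    }
  printf ("current vector length: %d bytes\n", r & PR_SVE_VL_LEN_MASK);
  return 0;
}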
---
 scripts/vltest.py | 82 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)
 create mode 100755 scripts/vltest.py

diff --git a/scripts/vltest.py b/scripts/vltest.py
new file mode 100755
index 0000000000..264dfa449f
--- /dev/null
+++ b/scripts/vltest.py
@@ -0,0 +1,82 @@
+#!/usr/bin/python3
+# Set Scalable Vector Length test helper
+# Copyright (C) 2019-2021 Free Software Foundation, Inc.
+# This file is part of the GNU C Library.
+#
+# The GNU C Library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# The GNU C Library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with the GNU C Library; if not, see
+# <https://www.gnu.org/licenses/>.
+"""Set Scalable Vector Length test helper.
+
+Set Scalable Vector Length for child process.
+
+examples:
+
+ubuntu@bionic:~/build$ make check subdirs=string \
+test-wrapper='~/glibc/scripts/vltest.py 16'
+
+ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 16 make test \
+t=string/test-memcpy
+
+ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 32 ./debugglibc.sh \
+string/test-memmove
+
+ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 64 ./testrun.sh \
+string/test-memset
+"""
+import argparse
+from ctypes import cdll, CDLL
+import os
+import sys
+
+EXIT_SUCCESS = 0
+EXIT_FAILURE = 1
+EXIT_UNSUPPORTED = 77
+
+AT_HWCAP = 16
+HWCAP_SVE = (1 << 22)
+
+PR_SVE_GET_VL = 51
+PR_SVE_SET_VL = 50
+PR_SVE_SET_VL_ONEXEC = (1 << 18)
+PR_SVE_VL_INHERIT = (1 << 17)
+PR_SVE_VL_LEN_MASK = 0xffff
+
+def main(args):
+    libc = CDLL("libc.so.6")
+    if not libc.getauxval(AT_HWCAP) & HWCAP_SVE:
+        print("CPU doesn't support SVE")
+        sys.exit(EXIT_UNSUPPORTED)
+
+    libc.prctl(PR_SVE_SET_VL,
+               args.vl[0] | PR_SVE_SET_VL_ONEXEC | PR_SVE_VL_INHERIT)
+    os.execvp(args.args[0], args.args)
+    print("exec system call failure")
+    sys.exit(EXIT_FAILURE)
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description=
+            "Set Scalable Vector Length test helper",
+            formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+
+    # positional argument
+    parser.add_argument("vl", nargs=1, type=int,
+                        choices=range(16, 257, 16),
+                        help=('vector length, '\
+                              'a multiple of 16 from 16 to 256'))
+    # remainder arguments
+    parser.add_argument('args', nargs=argparse.REMAINDER,
+                        help=('args '\
+                              'passed to the child process'))
+    args = parser.parse_args()
+    main(args)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH 5/5] benchtests: Added generic_memcpy and generic_memmove to large benchtests
  2021-03-17  2:28 [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX Naohiro Tamura
                   ` (3 preceding siblings ...)
  2021-03-17  2:35 ` [PATCH 4/5] scripts: Added Vector Length Set test helper script Naohiro Tamura
@ 2021-03-17  2:35 ` Naohiro Tamura
  2021-03-29 12:03 ` [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX Szabolcs Nagy
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 36+ messages in thread
From: Naohiro Tamura @ 2021-03-17  2:35 UTC (permalink / raw)
  To: libc-alpha

This patch adds generic_memcpy and generic_memmove to
bench-memcpy-large.c and bench-memmove-large.c respectively, so that
the performance of the 512-bit scalable vector register implementation
can be compared with the scalar 64-bit register implementation
consistently across the memcpy/memmove/memset default and large
benchtests.
---
 benchtests/bench-memcpy-large.c  | 9 +++++++++
 benchtests/bench-memmove-large.c | 9 +++++++++
 2 files changed, 18 insertions(+)

diff --git a/benchtests/bench-memcpy-large.c b/benchtests/bench-memcpy-large.c
index 3df1575514..4a87987202 100644
--- a/benchtests/bench-memcpy-large.c
+++ b/benchtests/bench-memcpy-large.c
@@ -25,7 +25,10 @@
 # define TIMEOUT (20 * 60)
 # include "bench-string.h"
 
+void *generic_memcpy (void *, const void *, size_t);
+
 IMPL (memcpy, 1)
+IMPL (generic_memcpy, 0)
 #endif
 
 #include "json-lib.h"
@@ -124,3 +127,9 @@ test_main (void)
 }
 
 #include <support/test-driver.c>
+
+#define libc_hidden_builtin_def(X)
+#undef MEMCPY
+#define MEMCPY generic_memcpy
+#include <string/memcpy.c>
+#include <string/wordcopy.c>
diff --git a/benchtests/bench-memmove-large.c b/benchtests/bench-memmove-large.c
index 9e2fcd50ab..151dd5a276 100644
--- a/benchtests/bench-memmove-large.c
+++ b/benchtests/bench-memmove-large.c
@@ -25,7 +25,10 @@
 #include "bench-string.h"
 #include "json-lib.h"
 
+void *generic_memmove (void *, const void *, size_t);
+
 IMPL (memmove, 1)
+IMPL (generic_memmove, 0)
 
 typedef char *(*proto_t) (char *, const char *, size_t);
 
@@ -123,3 +126,9 @@ test_main (void)
 }
 
 #include <support/test-driver.c>
+
+#define libc_hidden_builtin_def(X)
+#undef MEMMOVE
+#define MEMMOVE generic_memmove
+#include <string/memmove.c>
+#include <string/wordcopy.c>
-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX
  2021-03-17  2:28 [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX Naohiro Tamura
                   ` (4 preceding siblings ...)
  2021-03-17  2:35 ` [PATCH 5/5] benchtests: Added generic_memcpy and generic_memmove to large benchtests Naohiro Tamura
@ 2021-03-29 12:03 ` Szabolcs Nagy
  2021-05-10  1:45 ` naohirot
  2021-05-12  9:23 ` [PATCH v2 0/6] aarch64: " Naohiro Tamura
  7 siblings, 0 replies; 36+ messages in thread
From: Szabolcs Nagy @ 2021-03-29 12:03 UTC (permalink / raw)
  To: Naohiro Tamura; +Cc: libc-alpha

The 03/17/2021 02:28, Naohiro Tamura wrote:
> Fujitsu is in the process of signing the copyright assignment paper.
> We'd like to have some feedback in advance.

thanks for these patches, please let me know when the
copyright is sorted out. i will do some review now.


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 1/5] config: Added HAVE_SVE_ASM_SUPPORT for aarch64
  2021-03-17  2:33 ` [PATCH 1/5] config: Added HAVE_SVE_ASM_SUPPORT for aarch64 Naohiro Tamura
@ 2021-03-29 12:11   ` Szabolcs Nagy
  2021-03-30  6:19     ` naohirot
  0 siblings, 1 reply; 36+ messages in thread
From: Szabolcs Nagy @ 2021-03-29 12:11 UTC (permalink / raw)
  To: Naohiro Tamura; +Cc: libc-alpha, Naohiro Tamura

The 03/17/2021 02:33, Naohiro Tamura wrote:
> From: Naohiro Tamura <naohirot@jp.fujitsu.com>
> 
> This patch checks if assembler supports '-march=armv8.2-a+sve' to
> generate SVE code or not, and then define HAVE_SVE_ASM_SUPPORT macro.
> ---
>  config.h.in                  |  3 +++
>  sysdeps/aarch64/configure    | 28 ++++++++++++++++++++++++++++
>  sysdeps/aarch64/configure.ac | 15 +++++++++++++++
>  3 files changed, 46 insertions(+)
> 
> diff --git a/config.h.in b/config.h.in
> index f21bf04e47..2073816af8 100644
> --- a/config.h.in
> +++ b/config.h.in
> @@ -118,6 +118,9 @@
>  /* AArch64 PAC-RET code generation is enabled.  */
>  #define HAVE_AARCH64_PAC_RET 0
>  
> +/* Assembler support ARMv8.2-A SVE */
> +#define HAVE_SVE_ASM_SUPPORT 0
> +

i prefer to use HAVE_AARCH64_ prefix for aarch64 specific
macros in the global config.h, e.g. HAVE_AARCH64_SVE_ASM

and i'd like to have a comment here or in configure.ac with the
binutils version where this becomes obsolete (binutils 2.28 i
think). right now the minimum required version is 2.25, but
glibc may increase that soon to above 2.28.

> diff --git a/sysdeps/aarch64/configure.ac b/sysdeps/aarch64/configure.ac
> index 66f755078a..389a0b4e8d 100644
> --- a/sysdeps/aarch64/configure.ac
> +++ b/sysdeps/aarch64/configure.ac
> @@ -90,3 +90,18 @@ EOF
>    fi
>    rm -rf conftest.*])
>  LIBC_CONFIG_VAR([aarch64-variant-pcs], [$libc_cv_aarch64_variant_pcs])
> +
> +# Check if asm support armv8.2-a+sve
> +AC_CACHE_CHECK(for SVE support in assembler, libc_cv_asm_sve, [dnl
> +cat > conftest.s <<\EOF
> +        ptrue p0.b
> +EOF
> +if AC_TRY_COMMAND(${CC-cc} -c -march=armv8.2-a+sve conftest.s 1>&AS_MESSAGE_LOG_FD); then
> +  libc_cv_asm_sve=yes
> +else
> +  libc_cv_asm_sve=no
> +fi
> +rm -f conftest*])
> +if test $libc_cv_asm_sve = yes; then
> +  AC_DEFINE(HAVE_SVE_ASM_SUPPORT)
> +fi

i would use libc_cv_aarch64_sve_asm to make it obvious
that it's aarch64 specific setting.

otherwise OK.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 2/5] aarch64: Added optimized memcpy and memmove for A64FX
  2021-03-17  2:34 ` [PATCH 2/5] aarch64: Added optimized memcpy and memmove for A64FX Naohiro Tamura
@ 2021-03-29 12:44   ` Szabolcs Nagy
  2021-03-30  7:17     ` naohirot
  0 siblings, 1 reply; 36+ messages in thread
From: Szabolcs Nagy @ 2021-03-29 12:44 UTC (permalink / raw)
  To: Naohiro Tamura; +Cc: libc-alpha, Naohiro Tamura

The 03/17/2021 02:34, Naohiro Tamura wrote:
> And also we confirmed that the SVE 512 bit vector register performance
> is roughly 4 times better than Advanced SIMD 128 bit register and 8
> times better than scalar 64 bit register by running 'make bench'.

nice speed up. i won't comment on the memcpy asm now.

> diff --git a/manual/tunables.texi b/manual/tunables.texi
> index 1b746c0fa1..81ed5366fc 100644
> --- a/manual/tunables.texi
> +++ b/manual/tunables.texi
> @@ -453,7 +453,8 @@ This tunable is specific to powerpc, powerpc64 and powerpc64le.
>  The @code{glibc.cpu.name=xxx} tunable allows the user to tell @theglibc{} to
>  assume that the CPU is @code{xxx} where xxx may have one of these values:
>  @code{generic}, @code{falkor}, @code{thunderxt88}, @code{thunderx2t99},
> -@code{thunderx2t99p1}, @code{ares}, @code{emag}, @code{kunpeng}.
> +@code{thunderx2t99p1}, @code{ares}, @code{emag}, @code{kunpeng},
> +@code{a64fx}.

OK.

> --- a/sysdeps/aarch64/multiarch/Makefile
> +++ b/sysdeps/aarch64/multiarch/Makefile
> @@ -1,6 +1,6 @@
>  ifeq ($(subdir),string)
>  sysdep_routines += memcpy_generic memcpy_advsimd memcpy_thunderx memcpy_thunderx2 \
> -		   memcpy_falkor \
> +		   memcpy_falkor memcpy_a64fx \
>  		   memset_generic memset_falkor memset_emag memset_kunpeng \

OK.

> --- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c
> +++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
> @@ -25,7 +25,11 @@
>  #include <stdio.h>
>  
>  /* Maximum number of IFUNC implementations.  */
> -#define MAX_IFUNC	4
> +#if HAVE_SVE_ASM_SUPPORT
> +# define MAX_IFUNC	7
> +#else
> +# define MAX_IFUNC	6
> +#endif

hm this MAX_IFUNC looks a bit problematic: currently its only
use is to detect if a target requires more ifuncs than the
array passed to __libc_ifunc_impl_list, but for that ideally
it would be automatic, not manually maintained.

i would just define it to 7 unconditionally (the maximum over
valid configurations).

>  size_t
>  __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
> @@ -43,12 +47,18 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
>  	      IFUNC_IMPL_ADD (array, i, memcpy, !bti, __memcpy_thunderx2)
>  	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_falkor)
>  	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_simd)
> +#if HAVE_SVE_ASM_SUPPORT
> +	      IFUNC_IMPL_ADD (array, i, memcpy, sve, __memcpy_a64fx)
> +#endif

OK.

>  	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_generic))
>    IFUNC_IMPL (i, name, memmove,
>  	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_thunderx)
>  	      IFUNC_IMPL_ADD (array, i, memmove, !bti, __memmove_thunderx2)
>  	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_falkor)
>  	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_simd)
> +#if HAVE_SVE_ASM_SUPPORT
> +	      IFUNC_IMPL_ADD (array, i, memmove, sve, __memmove_a64fx)
> +#endif

OK.

>  	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_generic))
>    IFUNC_IMPL (i, name, memset,
>  	      /* Enable this on non-falkor processors too so that other cores
> diff --git a/sysdeps/aarch64/multiarch/init-arch.h b/sysdeps/aarch64/multiarch/init-arch.h
> index a167699e74..d20e7e1b8e 100644
> --- a/sysdeps/aarch64/multiarch/init-arch.h
> +++ b/sysdeps/aarch64/multiarch/init-arch.h
> @@ -33,4 +33,6 @@
>    bool __attribute__((unused)) bti =					      \
>      HAVE_AARCH64_BTI && GLRO(dl_aarch64_cpu_features).bti;		      \
>    bool __attribute__((unused)) mte =					      \
> -    MTE_ENABLED ();
> +    MTE_ENABLED ();							      \
> +  unsigned __attribute__((unused)) sve =				      \
> +    GLRO(dl_aarch64_cpu_features).sve;

i would use bool here.

> diff --git a/sysdeps/aarch64/multiarch/memcpy.c b/sysdeps/aarch64/multiarch/memcpy.c
> index 0e0a5cbcfb..0006f38eb0 100644
> --- a/sysdeps/aarch64/multiarch/memcpy.c
> +++ b/sysdeps/aarch64/multiarch/memcpy.c
> @@ -33,6 +33,9 @@ extern __typeof (__redirect_memcpy) __memcpy_simd attribute_hidden;
>  extern __typeof (__redirect_memcpy) __memcpy_thunderx attribute_hidden;
>  extern __typeof (__redirect_memcpy) __memcpy_thunderx2 attribute_hidden;
>  extern __typeof (__redirect_memcpy) __memcpy_falkor attribute_hidden;
> +#if HAVE_SVE_ASM_SUPPORT
> +extern __typeof (__redirect_memcpy) __memcpy_a64fx attribute_hidden;
> +#endif

OK.

>  libc_ifunc (__libc_memcpy,
>              (IS_THUNDERX (midr)
> @@ -44,8 +47,13 @@ libc_ifunc (__libc_memcpy,
>  		  : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
>  		     || IS_NEOVERSE_V1 (midr)
>  		     ? __memcpy_simd
> -		     : __memcpy_generic)))));
> -
> +#if HAVE_SVE_ASM_SUPPORT
> +                     : (IS_A64FX (midr)
> +                        ? __memcpy_a64fx
> +                        : __memcpy_generic))))));
> +#else
> +                     : __memcpy_generic)))));
> +#endif

OK.

> new file mode 100644
> index 0000000000..23438e4e3d
> --- /dev/null
> +++ b/sysdeps/aarch64/multiarch/memcpy_a64fx.S

skipping this.

> diff --git a/sysdeps/aarch64/multiarch/memmove.c b/sysdeps/aarch64/multiarch/memmove.c
> index 12d77818a9..1e5ee1c934 100644
> --- a/sysdeps/aarch64/multiarch/memmove.c
> +++ b/sysdeps/aarch64/multiarch/memmove.c
> @@ -33,6 +33,9 @@ extern __typeof (__redirect_memmove) __memmove_simd attribute_hidden;
>  extern __typeof (__redirect_memmove) __memmove_thunderx attribute_hidden;
>  extern __typeof (__redirect_memmove) __memmove_thunderx2 attribute_hidden;
>  extern __typeof (__redirect_memmove) __memmove_falkor attribute_hidden;
> +#if HAVE_SVE_ASM_SUPPORT
> +extern __typeof (__redirect_memmove) __memmove_a64fx attribute_hidden;
> +#endif

OK.

>  
>  libc_ifunc (__libc_memmove,
>              (IS_THUNDERX (midr)
> @@ -44,8 +47,13 @@ libc_ifunc (__libc_memmove,
>  		  : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
>  		     || IS_NEOVERSE_V1 (midr)
>  		     ? __memmove_simd
> -		     : __memmove_generic)))));
> -
> +#if HAVE_SVE_ASM_SUPPORT
> +                     : (IS_A64FX (midr)
> +                        ? __memmove_a64fx
> +                        : __memmove_generic))))));
> +#else
> +                        : __memmove_generic)))));
> +#endif

OK.

> diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
> index db6aa3516c..6206a2f618 100644
> --- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
> +++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
> @@ -46,6 +46,7 @@ static struct cpu_list cpu_list[] = {
>        {"ares",		 0x411FD0C0},
>        {"emag",		 0x503F0001},
>        {"kunpeng920", 	 0x481FD010},
> +      {"a64fx",		 0x460F0010},
>        {"generic", 	 0x0}

OK.

> +
> +  /* Check if SVE is supported.  */
> +  cpu_features->sve = GLRO (dl_hwcap) & HWCAP_SVE;

OK.

>  }
> diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
> index 3b9bfed134..2b322e5414 100644
> --- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
> +++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
> @@ -65,6 +65,9 @@
>  #define IS_KUNPENG920(midr) (MIDR_IMPLEMENTOR(midr) == 'H'			   \
>                          && MIDR_PARTNUM(midr) == 0xd01)
>  
> +#define IS_A64FX(midr) (MIDR_IMPLEMENTOR(midr) == 'F'			      \
> +			&& MIDR_PARTNUM(midr) == 0x001)
> +

OK.

>  struct cpu_features
>  {
>    uint64_t midr_el1;
> @@ -72,6 +75,7 @@ struct cpu_features
>    bool bti;
>    /* Currently, the GLIBC memory tagging tunable only defines 8 bits.  */
>    uint8_t mte_state;
> +  bool sve;
>  };

OK.


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 4/5] scripts: Added Vector Length Set test helper script
  2021-03-17  2:35 ` [PATCH 4/5] scripts: Added Vector Length Set test helper script Naohiro Tamura
@ 2021-03-29 13:20   ` Szabolcs Nagy
  2021-03-30  7:25     ` naohirot
  0 siblings, 1 reply; 36+ messages in thread
From: Szabolcs Nagy @ 2021-03-29 13:20 UTC (permalink / raw)
  To: Naohiro Tamura; +Cc: libc-alpha, Naohiro Tamura

The 03/17/2021 02:35, Naohiro Tamura wrote:
> +"""Set Scalable Vector Length test helper.
> +
> +Set Scalable Vector Length for child process.
> +
> +examples:
> +
> +ubuntu@bionic:~/build$ make check subdirs=string \
> +test-wrapper='~/glibc/scripts/vltest.py 16'
> +
> +ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 16 make test \
> +t=string/test-memcpy
> +
> +ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 32 ./debugglibc.sh \
> +string/test-memmove
> +
> +ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 64 ./testrun.sh \
> +string/test-memset
> +"""
> +import argparse
> +from ctypes import cdll, CDLL
> +import os
> +import sys
> +
> +EXIT_SUCCESS = 0
> +EXIT_FAILURE = 1
> +EXIT_UNSUPPORTED = 77
> +
> +AT_HWCAP = 16
> +HWCAP_SVE = (1 << 22)
> +
> +PR_SVE_GET_VL = 51
> +PR_SVE_SET_VL = 50
> +PR_SVE_SET_VL_ONEXEC = (1 << 18)
> +PR_SVE_VL_INHERIT = (1 << 17)
> +PR_SVE_VL_LEN_MASK = 0xffff
> +
> +def main(args):
> +    libc = CDLL("libc.so.6")
> +    if not libc.getauxval(AT_HWCAP) & HWCAP_SVE:
> +        print("CPU doesn't support SVE")
> +        sys.exit(EXIT_UNSUPPORTED)
> +
> +    libc.prctl(PR_SVE_SET_VL,
> +               args.vl[0] | PR_SVE_SET_VL_ONEXEC | PR_SVE_VL_INHERIT)
> +    os.execvp(args.args[0], args.args)
> +    print("exec system call failure")
> +    sys.exit(EXIT_FAILURE)


this only works on a (new enough) glibc based system and python's
CDLL path lookup can fail too (it does not follow the host system
configuration).

but i think there is no simple solution without compiling c code and
this seems useful, so i'm happy to have this script.
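
fwiw a compiled equivalent would only be a few lines, e.g. this
untested sketch (constants as in the script; HWCAP_SVE and getauxval
assumed to come from sys/auxv.h on aarch64):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/auxv.h>
#include <sys/prctl.h>

#ifndef PR_SVE_SET_VL
# define PR_SVE_SET_VL 50
# define PR_SVE_SET_VL_ONEXEC (1 << 18)
# define PR_SVE_VL_INHERIT (1 << 17)
#endif

int
main (int argc, char **argv)
{
  if (argc < 3)
    return 1;
  if (!(getauxval (AT_HWCAP) & HWCAP_SVE))
    return 77;                  /* unsupported */
  /* Set the requested VL for this process and everything it execs.  */
  prctl (PR_SVE_SET_VL,
         atoi (argv[1]) | PR_SVE_SET_VL_ONEXEC | PR_SVE_VL_INHERIT);
  execvp (argv[2], &argv[2]);
  perror ("execvp");
  return 1;
}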

^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH 1/5] config: Added HAVE_SVE_ASM_SUPPORT for aarch64
  2021-03-29 12:11   ` Szabolcs Nagy
@ 2021-03-30  6:19     ` naohirot
  0 siblings, 0 replies; 36+ messages in thread
From: naohirot @ 2021-03-30  6:19 UTC (permalink / raw)
  To: 'Szabolcs Nagy'; +Cc: libc-alpha

Szabolcs-san,

Thank you for your review.

> > +/* Assembler support ARMv8.2-A SVE */ #define
> HAVE_SVE_ASM_SUPPORT 0
> > +
> 
> i prefer to use HAVE_AARCH64_ prefix for aarch64 specific macros in the global
> config.h, e.g. HAVE_AARCH64_SVE_ASM

OK, I'll change it to HAVE_AARCH64_SVE_ASM.

> and i'd like to have a comment here or in configue.ac with the binutils version
> where this becomes obsolete (binutils 2.28 i think). right now the minimum
> required version is 2.25, but glibc may increase that soon to above 2.28.

I'll add the comment in config.h.in like this:

+/* Assembler supports ARMv8.2-A SVE.
+   This macro becomes obsolete when glibc increases the minimum
+   required version of GNU 'binutils' to 2.28 or later.  */
+#define HAVE_AARCH64_SVE_ASM 0

> > diff --git a/sysdeps/aarch64/configure.ac
> > b/sysdeps/aarch64/configure.ac index 66f755078a..389a0b4e8d 100644
> > --- a/sysdeps/aarch64/configure.ac
> > +++ b/sysdeps/aarch64/configure.ac
...
> > +if AC_TRY_COMMAND(${CC-cc} -c -march=armv8.2-a+sve conftest.s
> > +1>&AS_MESSAGE_LOG_FD); then
> > +  libc_cv_asm_sve=yes
> > +else
> > +  libc_cv_asm_sve=no
> > +fi
> > +rm -f conftest*])
> > +if test $libc_cv_asm_sve = yes; then
> > +  AC_DEFINE(HAVE_SVE_ASM_SUPPORT)
> > +fi
> 
> i would use libc_cv_aarch64_sve_asm to make it obvious that it's aarch64 specific
> setting.

OK, I'll change it to libc_cv_aarch64_sve_asm.

Thanks.
Naohiro


^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH 2/5] aarch64: Added optimized memcpy and memmove for A64FX
  2021-03-29 12:44   ` Szabolcs Nagy
@ 2021-03-30  7:17     ` naohirot
  0 siblings, 0 replies; 36+ messages in thread
From: naohirot @ 2021-03-30  7:17 UTC (permalink / raw)
  To: 'Szabolcs Nagy'; +Cc: libc-alpha

Szabolcs-san,

Thank you for your review.

> >  /* Maximum number of IFUNC implementations.  */
> > -#define MAX_IFUNC	4
> > +#if HAVE_SVE_ASM_SUPPORT
> > +# define MAX_IFUNC	7
> > +#else
> > +# define MAX_IFUNC	6
> > +#endif
> 
> hm this MAX_IFUNC looks a bit problematic: currently its only use is to detect if a
> target requires more ifuncs than the array passed to __libc_ifunc_impl_list, but for
> that ideally it would be automatic, not manually maintained.
> 
> i would just define it to 7 unconditionally (the maximum over valid configurations).

OK, I'll fix it to 7 unconditionally.

> > cores diff --git a/sysdeps/aarch64/multiarch/init-arch.h
> > b/sysdeps/aarch64/multiarch/init-arch.h
> > index a167699e74..d20e7e1b8e 100644
> > --- a/sysdeps/aarch64/multiarch/init-arch.h
> > +++ b/sysdeps/aarch64/multiarch/init-arch.h
> > @@ -33,4 +33,6 @@
> >    bool __attribute__((unused)) bti =
> \
> >      HAVE_AARCH64_BTI && GLRO(dl_aarch64_cpu_features).bti;
> 	      \
> >    bool __attribute__((unused)) mte =
> \
> > -    MTE_ENABLED ();
> > +    MTE_ENABLED ();
> 	      \
> > +  unsigned __attribute__((unused)) sve =
> \
> > +    GLRO(dl_aarch64_cpu_features).sve;
> 
> i would use bool here.

I'll change it to bool.

> > --- /dev/null
> > +++ b/sysdeps/aarch64/multiarch/memcpy_a64fx.S
> 
> skipping this.

I'll wait for your review.

Thanks.
Naohiro


^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH 4/5] scripts: Added Vector Length Set test helper script
  2021-03-29 13:20   ` Szabolcs Nagy
@ 2021-03-30  7:25     ` naohirot
  0 siblings, 0 replies; 36+ messages in thread
From: naohirot @ 2021-03-30  7:25 UTC (permalink / raw)
  To: 'Szabolcs Nagy'; +Cc: libc-alpha

Szabolcs-san,

Thank you for your review.

> > +def main(args):
> > +    libc = CDLL("libc.so.6")
> > +    if not libc.getauxval(AT_HWCAP) & HWCAP_SVE:
> > +        print("CPU doesn't support SVE")
> > +        sys.exit(EXIT_UNSUPPORTED)
> > +
> > +    libc.prctl(PR_SVE_SET_VL,
> > +               args.vl[0] | PR_SVE_SET_VL_ONEXEC |
> PR_SVE_VL_INHERIT)
> > +    os.execvp(args.args[0], args.args)
> > +    print("exec system call failure")
> > +    sys.exit(EXIT_FAILURE)
> 
> 
> this only works on a (new enough) glibc based system and python's CDLL path
> lookup can fail too (it does not follow the host system configuration).

I see, I didn't notice that.

> but i think there is no simple solution without compiling c code and this seems
> useful, so i'm happy to have this script.

OK, thanks!
Naohiro


^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX
  2021-03-17  2:28 [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX Naohiro Tamura
                   ` (5 preceding siblings ...)
  2021-03-29 12:03 ` [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX Szabolcs Nagy
@ 2021-05-10  1:45 ` naohirot
  2021-05-14 13:35   ` Szabolcs Nagy
  2021-05-12  9:23 ` [PATCH v2 0/6] aarch64: " Naohiro Tamura
  7 siblings, 1 reply; 36+ messages in thread
From: naohirot @ 2021-05-10  1:45 UTC (permalink / raw)
  To: Szabolcs Nagy, Wilco Dijkstra, Florian Weimer; +Cc: libc-alpha

Hi Szabolcs, Wilco, Florian,

> From: Naohiro Tamura <naohirot@fujitsu.com>
> Sent: Wednesday, March 17, 2021 11:29 AM
 
> Fujitsu is in the process of signing the copyright assignment paper.
> We'd like to have some feedback in advance.

FYI: Fujitsu has finally submitted the signed copyright assignment.

Thanks.
Naohiro


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 0/6] aarch64: Added optimized memcpy/memmove/memset for A64FX
  2021-03-17  2:28 [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX Naohiro Tamura
                   ` (6 preceding siblings ...)
  2021-05-10  1:45 ` naohirot
@ 2021-05-12  9:23 ` Naohiro Tamura
  2021-05-12  9:26   ` [PATCH v2 1/6] config: Added HAVE_AARCH64_SVE_ASM for aarch64 Naohiro Tamura
                     ` (8 more replies)
  7 siblings, 9 replies; 36+ messages in thread
From: Naohiro Tamura @ 2021-05-12  9:23 UTC (permalink / raw)
  To: libc-alpha

Hi Szabolcs, Wilco, Florian,

Thank you for reviewing Patch V1.

Patch V2 reflects all of the V1 comments, which were mainly related
to redundant assembler code.
Consequently the assembler code has been minimized, and each line of
the V2 assembler code has been justified by string bench performance
data.
In terms of assembler LOC (lines of code), memcpy/memmove was reduced
by 60% from 1,000 to 400 lines, and memset by 55% from 600 to 270
lines.

So please kindly review V2.

Thanks.
Naohiro

Naohiro Tamura (6):
  config: Added HAVE_AARCH64_SVE_ASM for aarch64
  aarch64: define BTI_C and BTI_J macros as NOP unless HAVE_AARCH64_BTI
  aarch64: Added optimized memcpy and memmove for A64FX
  aarch64: Added optimized memset for A64FX
  scripts: Added Vector Length Set test helper script
  benchtests: Fixed bench-memcpy-random: buf1: mprotect failed

 benchtests/bench-memcpy-random.c              |   4 +-
 config.h.in                                   |   5 +
 manual/tunables.texi                          |   3 +-
 scripts/vltest.py                             |  82 ++++
 sysdeps/aarch64/configure                     |  28 ++
 sysdeps/aarch64/configure.ac                  |  15 +
 sysdeps/aarch64/multiarch/Makefile            |   3 +-
 sysdeps/aarch64/multiarch/ifunc-impl-list.c   |  13 +-
 sysdeps/aarch64/multiarch/init-arch.h         |   4 +-
 sysdeps/aarch64/multiarch/memcpy.c            |  12 +-
 sysdeps/aarch64/multiarch/memcpy_a64fx.S      | 405 ++++++++++++++++++
 sysdeps/aarch64/multiarch/memmove.c           |  12 +-
 sysdeps/aarch64/multiarch/memset.c            |  11 +-
 sysdeps/aarch64/multiarch/memset_a64fx.S      | 268 ++++++++++++
 sysdeps/aarch64/sysdep.h                      |   9 +-
 .../unix/sysv/linux/aarch64/cpu-features.c    |   4 +
 .../unix/sysv/linux/aarch64/cpu-features.h    |   4 +
 17 files changed, 868 insertions(+), 14 deletions(-)
 create mode 100755 scripts/vltest.py
 create mode 100644 sysdeps/aarch64/multiarch/memcpy_a64fx.S
 create mode 100644 sysdeps/aarch64/multiarch/memset_a64fx.S

-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 1/6] config: Added HAVE_AARCH64_SVE_ASM for aarch64
  2021-05-12  9:23 ` [PATCH v2 0/6] aarch64: " Naohiro Tamura
@ 2021-05-12  9:26   ` Naohiro Tamura
  2021-05-26 10:05     ` Szabolcs Nagy
  2021-05-12  9:27   ` [PATCH v2 2/6] aarch64: define BTI_C and BTI_J macros as NOP unless HAVE_AARCH64_BTI Naohiro Tamura
                     ` (7 subsequent siblings)
  8 siblings, 1 reply; 36+ messages in thread
From: Naohiro Tamura @ 2021-05-12  9:26 UTC (permalink / raw)
  To: libc-alpha; +Cc: Naohiro Tamura

From: Naohiro Tamura <naohirot@jp.fujitsu.com>

This patch checks whether the assembler can generate SVE code with
'-march=armv8.2-a+sve', and if so defines the HAVE_AARCH64_SVE_ASM macro.
---
 config.h.in                  |  5 +++++
 sysdeps/aarch64/configure    | 28 ++++++++++++++++++++++++++++
 sysdeps/aarch64/configure.ac | 15 +++++++++++++++
 3 files changed, 48 insertions(+)

diff --git a/config.h.in b/config.h.in
index 99036b887f..13fba9bb8d 100644
--- a/config.h.in
+++ b/config.h.in
@@ -121,6 +121,11 @@
 /* AArch64 PAC-RET code generation is enabled.  */
 #define HAVE_AARCH64_PAC_RET 0
 
+/* Assembler supports ARMv8.2-A SVE.
+   This macro becomes obsolete once glibc increases the minimum
+   required version of GNU 'binutils' to 2.28 or later.  */
+#define HAVE_AARCH64_SVE_ASM 0
+
 /* ARC big endian ABI */
 #undef HAVE_ARC_BE
 
diff --git a/sysdeps/aarch64/configure b/sysdeps/aarch64/configure
index 83c3a23e44..4c1fac49f3 100644
--- a/sysdeps/aarch64/configure
+++ b/sysdeps/aarch64/configure
@@ -304,3 +304,31 @@ fi
 $as_echo "$libc_cv_aarch64_variant_pcs" >&6; }
 config_vars="$config_vars
 aarch64-variant-pcs = $libc_cv_aarch64_variant_pcs"
+
+# Check if the assembler supports armv8.2-a+sve
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for SVE support in assembler" >&5
+$as_echo_n "checking for SVE support in assembler... " >&6; }
+if ${libc_cv_aarch64_sve_asm+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  cat > conftest.s <<\EOF
+        ptrue p0.b
+EOF
+if { ac_try='${CC-cc} -c -march=armv8.2-a+sve conftest.s 1>&5'
+  { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_try\""; } >&5
+  (eval $ac_try) 2>&5
+  ac_status=$?
+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
+  test $ac_status = 0; }; }; then
+  libc_cv_aarch64_sve_asm=yes
+else
+  libc_cv_aarch64_sve_asm=no
+fi
+rm -f conftest*
+fi
+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $libc_cv_aarch64_sve_asm" >&5
+$as_echo "$libc_cv_aarch64_sve_asm" >&6; }
+if test $libc_cv_aarch64_sve_asm = yes; then
+  $as_echo "#define HAVE_AARCH64_SVE_ASM 1" >>confdefs.h
+
+fi
diff --git a/sysdeps/aarch64/configure.ac b/sysdeps/aarch64/configure.ac
index 66f755078a..3347c13fa1 100644
--- a/sysdeps/aarch64/configure.ac
+++ b/sysdeps/aarch64/configure.ac
@@ -90,3 +90,18 @@ EOF
   fi
   rm -rf conftest.*])
 LIBC_CONFIG_VAR([aarch64-variant-pcs], [$libc_cv_aarch64_variant_pcs])
+
+# Check if the assembler supports armv8.2-a+sve
+AC_CACHE_CHECK(for SVE support in assembler, libc_cv_aarch64_sve_asm, [dnl
+cat > conftest.s <<\EOF
+        ptrue p0.b
+EOF
+if AC_TRY_COMMAND(${CC-cc} -c -march=armv8.2-a+sve conftest.s 1>&AS_MESSAGE_LOG_FD); then
+  libc_cv_aarch64_sve_asm=yes
+else
+  libc_cv_aarch64_sve_asm=no
+fi
+rm -f conftest*])
+if test $libc_cv_aarch64_sve_asm = yes; then
+  AC_DEFINE(HAVE_AARCH64_SVE_ASM)
+fi
-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 2/6] aarch64: define BTI_C and BTI_J macros as NOP unless HAVE_AARCH64_BTI
  2021-05-12  9:23 ` [PATCH v2 0/6] aarch64: " Naohiro Tamura
  2021-05-12  9:26   ` [PATCH v2 1/6] config: Added HAVE_AARCH64_SVE_ASM for aarch64 Naohiro Tamura
@ 2021-05-12  9:27   ` Naohiro Tamura
  2021-05-26 10:06     ` Szabolcs Nagy
  2021-05-12  9:28   ` [PATCH v2 3/6] aarch64: Added optimized memcpy and memmove for A64FX Naohiro Tamura
                     ` (6 subsequent siblings)
  8 siblings, 1 reply; 36+ messages in thread
From: Naohiro Tamura @ 2021-05-12  9:27 UTC (permalink / raw)
  To: libc-alpha; +Cc: Naohiro Tamura

From: Naohiro Tamura <naohirot@jp.fujitsu.com>

This patch defines BTI_C and BTI_J macros conditionally for
performance.
If HAVE_AARCH64_BTI is true, BTI_C and BTI_J are defined as the HINT
instructions used for ARMv8.5 BTI (Branch Target Identification).
If HAVE_AARCH64_BTI is false, both BTI_C and BTI_J are defined as
NOP.
---
 sysdeps/aarch64/sysdep.h | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/sysdeps/aarch64/sysdep.h b/sysdeps/aarch64/sysdep.h
index 90acca4e42..b936e29cbd 100644
--- a/sysdeps/aarch64/sysdep.h
+++ b/sysdeps/aarch64/sysdep.h
@@ -62,8 +62,13 @@ strip_pac (void *p)
 #define ASM_SIZE_DIRECTIVE(name) .size name,.-name
 
 /* Branch Target Identitication support.  */
-#define BTI_C		hint	34
-#define BTI_J		hint	36
+#if HAVE_AARCH64_BTI
+# define BTI_C		hint	34
+# define BTI_J		hint	36
+#else
+# define BTI_C		nop
+# define BTI_J		nop
+#endif
 
 /* Return address signing support (pac-ret).  */
 #define PACIASP		hint	25
-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 3/6] aarch64: Added optimized memcpy and memmove for A64FX
  2021-05-12  9:23 ` [PATCH v2 0/6] aarch64: " Naohiro Tamura
  2021-05-12  9:26   ` [PATCH v2 1/6] config: Added HAVE_AARCH64_SVE_ASM for aarch64 Naohiro Tamura
  2021-05-12  9:27   ` [PATCH v2 2/6] aarch64: define BTI_C and BTI_J macros as NOP unless HAVE_AARCH64_BTI Naohiro Tamura
@ 2021-05-12  9:28   ` Naohiro Tamura
  2021-05-26 10:19     ` Szabolcs Nagy
  2021-05-12  9:28   ` [PATCH v2 4/6] aarch64: Added optimized memset " Naohiro Tamura
                     ` (5 subsequent siblings)
  8 siblings, 1 reply; 36+ messages in thread
From: Naohiro Tamura @ 2021-05-12  9:28 UTC (permalink / raw)
  To: libc-alpha; +Cc: Naohiro Tamura

From: Naohiro Tamura <naohirot@jp.fujitsu.com>

This patch optimizes the performance of memcpy/memmove for A64FX [1]
which implements ARMv8-A SVE and has L1 64KB cache per core and L2 8MB
cache per NUMA node.

The performance optimization makes use of Scalable Vector Register
with several techniques such as loop unrolling, memory access
alignment, cache zero fill, and software pipelining.

SVE assembler code for memcpy/memmove is implemented as Vector Length
Agnostic code so theoretically it can be run on any SOC which supports
ARMv8-A SVE standard.
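
To illustrate the Vector Length Agnostic idea, here is a minimal C
sketch using the SVE ACLE intrinsics (this is not the patch code, which
is hand-written assembly with unrolling, alignment handling, cache zero
fill and software pipelining on top of this basic pattern; the function
name is made up):

#include <arm_sve.h>
#include <stddef.h>
#include <stdint.h>

/* Build with -O2 -march=armv8.2-a+sve.  */
void *
sve_copy_sketch (void *dest, const void *src, size_t n)
{
  uint8_t *d = dest;
  const uint8_t *s = src;

  /* svcntb () returns the vector length in bytes at run time, so the
     same binary handles any VL from 16 up to 256 bytes.  */
  for (size_t i = 0; i < n; i += svcntb ())
    {
      svbool_t pg = svwhilelt_b8_u64 ((uint64_t) i, (uint64_t) n);
      svst1_u8 (pg, d + i, svld1_u8 (pg, s + i));
    }
  return dest;
}

The hand-written version unrolls this loop eight times and overlaps
loads with stores (software pipelining), and uses "dc zva" for the
large-copy path.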

We confirmed that all testcases have been passed by running 'make
check' and 'make xcheck' not only on A64FX but also on ThunderX2.

And also we confirmed that the SVE 512 bit vector register performance
is roughly 4 times better than Advanced SIMD 128 bit register and 8
times better than scalar 64 bit register by running 'make bench'.

[1] https://github.com/fujitsu/A64FX
---
 manual/tunables.texi                          |   3 +-
 sysdeps/aarch64/multiarch/Makefile            |   2 +-
 sysdeps/aarch64/multiarch/ifunc-impl-list.c   |   8 +-
 sysdeps/aarch64/multiarch/init-arch.h         |   4 +-
 sysdeps/aarch64/multiarch/memcpy.c            |  12 +-
 sysdeps/aarch64/multiarch/memcpy_a64fx.S      | 405 ++++++++++++++++++
 sysdeps/aarch64/multiarch/memmove.c           |  12 +-
 .../unix/sysv/linux/aarch64/cpu-features.c    |   4 +
 .../unix/sysv/linux/aarch64/cpu-features.h    |   4 +
 9 files changed, 446 insertions(+), 8 deletions(-)
 create mode 100644 sysdeps/aarch64/multiarch/memcpy_a64fx.S

diff --git a/manual/tunables.texi b/manual/tunables.texi
index 6de647b426..fe7c1313cc 100644
--- a/manual/tunables.texi
+++ b/manual/tunables.texi
@@ -454,7 +454,8 @@ This tunable is specific to powerpc, powerpc64 and powerpc64le.
 The @code{glibc.cpu.name=xxx} tunable allows the user to tell @theglibc{} to
 assume that the CPU is @code{xxx} where xxx may have one of these values:
 @code{generic}, @code{falkor}, @code{thunderxt88}, @code{thunderx2t99},
-@code{thunderx2t99p1}, @code{ares}, @code{emag}, @code{kunpeng}.
+@code{thunderx2t99p1}, @code{ares}, @code{emag}, @code{kunpeng},
+@code{a64fx}.
 
 This tunable is specific to aarch64.
 @end deftp
diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
index dc3efffb36..04c3f17121 100644
--- a/sysdeps/aarch64/multiarch/Makefile
+++ b/sysdeps/aarch64/multiarch/Makefile
@@ -1,6 +1,6 @@
 ifeq ($(subdir),string)
 sysdep_routines += memcpy_generic memcpy_advsimd memcpy_thunderx memcpy_thunderx2 \
-		   memcpy_falkor \
+		   memcpy_falkor memcpy_a64fx \
 		   memset_generic memset_falkor memset_emag memset_kunpeng \
 		   memchr_generic memchr_nosimd \
 		   strlen_mte strlen_asimd
diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
index 99a8c68aac..911393565c 100644
--- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
@@ -25,7 +25,7 @@
 #include <stdio.h>
 
 /* Maximum number of IFUNC implementations.  */
-#define MAX_IFUNC	4
+#define MAX_IFUNC	7
 
 size_t
 __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
@@ -43,12 +43,18 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 	      IFUNC_IMPL_ADD (array, i, memcpy, !bti, __memcpy_thunderx2)
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_falkor)
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_simd)
+#if HAVE_AARCH64_SVE_ASM
+	      IFUNC_IMPL_ADD (array, i, memcpy, sve, __memcpy_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_generic))
   IFUNC_IMPL (i, name, memmove,
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_thunderx)
 	      IFUNC_IMPL_ADD (array, i, memmove, !bti, __memmove_thunderx2)
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_falkor)
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_simd)
+#if HAVE_AARCH64_SVE_ASM
+	      IFUNC_IMPL_ADD (array, i, memmove, sve, __memmove_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_generic))
   IFUNC_IMPL (i, name, memset,
 	      /* Enable this on non-falkor processors too so that other cores
diff --git a/sysdeps/aarch64/multiarch/init-arch.h b/sysdeps/aarch64/multiarch/init-arch.h
index a167699e74..6d92c1bcff 100644
--- a/sysdeps/aarch64/multiarch/init-arch.h
+++ b/sysdeps/aarch64/multiarch/init-arch.h
@@ -33,4 +33,6 @@
   bool __attribute__((unused)) bti =					      \
     HAVE_AARCH64_BTI && GLRO(dl_aarch64_cpu_features).bti;		      \
   bool __attribute__((unused)) mte =					      \
-    MTE_ENABLED ();
+    MTE_ENABLED ();							      \
+  bool __attribute__((unused)) sve =					      \
+    GLRO(dl_aarch64_cpu_features).sve;
diff --git a/sysdeps/aarch64/multiarch/memcpy.c b/sysdeps/aarch64/multiarch/memcpy.c
index 0e0a5cbcfb..d90ee51ffc 100644
--- a/sysdeps/aarch64/multiarch/memcpy.c
+++ b/sysdeps/aarch64/multiarch/memcpy.c
@@ -33,6 +33,9 @@ extern __typeof (__redirect_memcpy) __memcpy_simd attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_thunderx attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_thunderx2 attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_falkor attribute_hidden;
+#if HAVE_AARCH64_SVE_ASM
+extern __typeof (__redirect_memcpy) __memcpy_a64fx attribute_hidden;
+#endif
 
 libc_ifunc (__libc_memcpy,
             (IS_THUNDERX (midr)
@@ -44,8 +47,13 @@ libc_ifunc (__libc_memcpy,
 		  : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
 		     || IS_NEOVERSE_V1 (midr)
 		     ? __memcpy_simd
-		     : __memcpy_generic)))));
-
+#if HAVE_AARCH64_SVE_ASM
+                     : (IS_A64FX (midr)
+                        ? __memcpy_a64fx
+                        : __memcpy_generic))))));
+#else
+                     : __memcpy_generic)))));
+#endif
 # undef memcpy
 strong_alias (__libc_memcpy, memcpy);
 #endif
diff --git a/sysdeps/aarch64/multiarch/memcpy_a64fx.S b/sysdeps/aarch64/multiarch/memcpy_a64fx.S
new file mode 100644
index 0000000000..e28afd708f
--- /dev/null
+++ b/sysdeps/aarch64/multiarch/memcpy_a64fx.S
@@ -0,0 +1,405 @@
+/* Optimized memcpy for Fujitsu A64FX processor.
+   Copyright (C) 2012-2021 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+
+#if HAVE_AARCH64_SVE_ASM
+#if IS_IN (libc)
+# define MEMCPY __memcpy_a64fx
+# define MEMMOVE __memmove_a64fx
+
+/* Assumptions:
+ *
+ * ARMv8.2-a, AArch64, unaligned accesses, sve
+ *
+ */
+
+#define L2_SIZE         (8*1024*1024)/2 // L2 8MB/2
+#define CACHE_LINE_SIZE 256
+#define ZF_DIST         (CACHE_LINE_SIZE * 21)  // Zerofill distance
+#define dest            x0
+#define src             x1
+#define n               x2      // size
+#define tmp1            x3
+#define tmp2            x4
+#define tmp3            x5
+#define rest            x6
+#define dest_ptr        x7
+#define src_ptr         x8
+#define vector_length   x9
+#define cl_remainder    x10     // CACHE_LINE_SIZE remainder
+
+    .arch armv8.2-a+sve
+
+    .macro dc_zva times
+    dc          zva, tmp1
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    .if \times-1
+    dc_zva "(\times-1)"
+    .endif
+    .endm
+
+    .macro ld1b_unroll8
+    ld1b        z0.b, p0/z, [src_ptr, #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr, #1, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr, #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr, #3, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr, #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr, #5, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr, #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr, #7, mul vl]
+    .endm
+
+    .macro stld1b_unroll4a
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    ld1b        z0.b, p0/z, [src_ptr,  #0, mul vl]
+    ld1b        z1.b, p0/z, [src_ptr,  #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    ld1b        z2.b, p0/z, [src_ptr,  #2, mul vl]
+    ld1b        z3.b, p0/z, [src_ptr,  #3, mul vl]
+    .endm
+
+    .macro stld1b_unroll4b
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    ld1b        z4.b, p0/z, [src_ptr,  #4, mul vl]
+    ld1b        z5.b, p0/z, [src_ptr,  #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    ld1b        z6.b, p0/z, [src_ptr,  #6, mul vl]
+    ld1b        z7.b, p0/z, [src_ptr,  #7, mul vl]
+    .endm
+
+    .macro stld1b_unroll8
+    stld1b_unroll4a
+    stld1b_unroll4b
+    .endm
+
+    .macro st1b_unroll8
+    st1b        z0.b, p0,   [dest_ptr, #0, mul vl]
+    st1b        z1.b, p0,   [dest_ptr, #1, mul vl]
+    st1b        z2.b, p0,   [dest_ptr, #2, mul vl]
+    st1b        z3.b, p0,   [dest_ptr, #3, mul vl]
+    st1b        z4.b, p0,   [dest_ptr, #4, mul vl]
+    st1b        z5.b, p0,   [dest_ptr, #5, mul vl]
+    st1b        z6.b, p0,   [dest_ptr, #6, mul vl]
+    st1b        z7.b, p0,   [dest_ptr, #7, mul vl]
+    .endm
+
+    .macro shortcut_for_small_size exit
+    // if rest <= vector_length * 2
+    whilelo     p0.b, xzr, n
+    whilelo     p1.b, vector_length, n
+    b.last      1f
+    ld1b        z0.b, p0/z, [src, #0, mul vl]
+    ld1b        z1.b, p1/z, [src, #1, mul vl]
+    st1b        z0.b, p0, [dest, #0, mul vl]
+    st1b        z1.b, p1, [dest, #1, mul vl]
+    ret
+1:  // if rest > vector_length * 8
+    cmp         n, vector_length, lsl 3 // vector_length * 8
+    b.hi        \exit
+    // if rest <= vector_length * 4
+    lsl         tmp1, vector_length, 1  // vector_length * 2
+    whilelo     p2.b, tmp1, n
+    incb        tmp1
+    whilelo     p3.b, tmp1, n
+    b.last      1f
+    ld1b        z0.b, p0/z, [src, #0, mul vl]
+    ld1b        z1.b, p1/z, [src, #1, mul vl]
+    ld1b        z2.b, p2/z, [src, #2, mul vl]
+    ld1b        z3.b, p3/z, [src, #3, mul vl]
+    st1b        z0.b, p0, [dest, #0, mul vl]
+    st1b        z1.b, p1, [dest, #1, mul vl]
+    st1b        z2.b, p2, [dest, #2, mul vl]
+    st1b        z3.b, p3, [dest, #3, mul vl]
+    ret
+1:  // if rest <= vector_length * 8
+    lsl         tmp1, vector_length, 2  // vector_length * 4
+    whilelo     p4.b, tmp1, n
+    incb        tmp1
+    whilelo     p5.b, tmp1, n
+    b.last      1f
+    ld1b        z0.b, p0/z, [src, #0, mul vl]
+    ld1b        z1.b, p1/z, [src, #1, mul vl]
+    ld1b        z2.b, p2/z, [src, #2, mul vl]
+    ld1b        z3.b, p3/z, [src, #3, mul vl]
+    ld1b        z4.b, p4/z, [src, #4, mul vl]
+    ld1b        z5.b, p5/z, [src, #5, mul vl]
+    st1b        z0.b, p0, [dest, #0, mul vl]
+    st1b        z1.b, p1, [dest, #1, mul vl]
+    st1b        z2.b, p2, [dest, #2, mul vl]
+    st1b        z3.b, p3, [dest, #3, mul vl]
+    st1b        z4.b, p4, [dest, #4, mul vl]
+    st1b        z5.b, p5, [dest, #5, mul vl]
+    ret
+1:  lsl         tmp1, vector_length, 2  // vector_length * 4
+    incb        tmp1                    // vector_length * 5
+    incb        tmp1                    // vector_length * 6
+    whilelo     p6.b, tmp1, n
+    incb        tmp1
+    whilelo     p7.b, tmp1, n
+    ld1b        z0.b, p0/z, [src, #0, mul vl]
+    ld1b        z1.b, p1/z, [src, #1, mul vl]
+    ld1b        z2.b, p2/z, [src, #2, mul vl]
+    ld1b        z3.b, p3/z, [src, #3, mul vl]
+    ld1b        z4.b, p4/z, [src, #4, mul vl]
+    ld1b        z5.b, p5/z, [src, #5, mul vl]
+    ld1b        z6.b, p6/z, [src, #6, mul vl]
+    ld1b        z7.b, p7/z, [src, #7, mul vl]
+    st1b        z0.b, p0, [dest, #0, mul vl]
+    st1b        z1.b, p1, [dest, #1, mul vl]
+    st1b        z2.b, p2, [dest, #2, mul vl]
+    st1b        z3.b, p3, [dest, #3, mul vl]
+    st1b        z4.b, p4, [dest, #4, mul vl]
+    st1b        z5.b, p5, [dest, #5, mul vl]
+    st1b        z6.b, p6, [dest, #6, mul vl]
+    st1b        z7.b, p7, [dest, #7, mul vl]
+    ret
+    .endm
+
+ENTRY (MEMCPY)
+
+    PTR_ARG (0)
+    PTR_ARG (1)
+    SIZE_ARG (2)
+
+L(memcpy):
+    cntb        vector_length
+    // shortcut for less than vector_length * 8
+    // gives a free ptrue to p0.b for n >= vector_length
+    shortcut_for_small_size L(vl_agnostic)
+    // end of shortcut
+
+L(vl_agnostic): // VL Agnostic
+    mov         rest, n
+    mov         dest_ptr, dest
+    mov         src_ptr, src
+    // if rest >= L2_SIZE && vector_length == 64 then L(L2)
+    mov         tmp1, 64
+    cmp         rest, L2_SIZE
+    ccmp        vector_length, tmp1, 0, cs
+    b.eq        L(L2)
+
+L(unroll8): // unrolling and software pipeline
+    lsl         tmp1, vector_length, 3  // vector_length * 8
+    .p2align 3
+    cmp         rest, tmp1
+    b.cc        L(last)
+    ld1b_unroll8
+    add         src_ptr, src_ptr, tmp1
+    sub         rest, rest, tmp1
+    cmp         rest, tmp1
+    b.cc        2f
+    .p2align 3
+1:  stld1b_unroll8
+    add         dest_ptr, dest_ptr, tmp1
+    add         src_ptr, src_ptr, tmp1
+    sub         rest, rest, tmp1
+    cmp         rest, tmp1
+    b.ge        1b
+2:  st1b_unroll8
+    add         dest_ptr, dest_ptr, tmp1
+
+    .p2align 3
+L(last):
+    whilelo     p0.b, xzr, rest
+    whilelo     p1.b, vector_length, rest
+    b.last      1f
+    ld1b        z0.b, p0/z, [src_ptr, #0, mul vl]
+    ld1b        z1.b, p1/z, [src_ptr, #1, mul vl]
+    st1b        z0.b, p0, [dest_ptr, #0, mul vl]
+    st1b        z1.b, p1, [dest_ptr, #1, mul vl]
+    ret
+1:  lsl         tmp1, vector_length, 1  // vector_length * 2
+    whilelo     p2.b, tmp1, rest
+    incb        tmp1
+    whilelo     p3.b, tmp1, rest
+    b.last      1f
+    ld1b        z0.b, p0/z, [src_ptr, #0, mul vl]
+    ld1b        z1.b, p1/z, [src_ptr, #1, mul vl]
+    ld1b        z2.b, p2/z, [src_ptr, #2, mul vl]
+    ld1b        z3.b, p3/z, [src_ptr, #3, mul vl]
+    st1b        z0.b, p0, [dest_ptr, #0, mul vl]
+    st1b        z1.b, p1, [dest_ptr, #1, mul vl]
+    st1b        z2.b, p2, [dest_ptr, #2, mul vl]
+    st1b        z3.b, p3, [dest_ptr, #3, mul vl]
+    ret
+1:  lsl         tmp1, vector_length, 2  // vector_length * 4
+    whilelo     p4.b, tmp1, rest
+    incb        tmp1
+    whilelo     p5.b, tmp1, rest
+    incb        tmp1
+    whilelo     p6.b, tmp1, rest
+    incb        tmp1
+    whilelo     p7.b, tmp1, rest
+    ld1b        z0.b, p0/z, [src_ptr, #0, mul vl]
+    ld1b        z1.b, p1/z, [src_ptr, #1, mul vl]
+    ld1b        z2.b, p2/z, [src_ptr, #2, mul vl]
+    ld1b        z3.b, p3/z, [src_ptr, #3, mul vl]
+    ld1b        z4.b, p4/z, [src_ptr, #4, mul vl]
+    ld1b        z5.b, p5/z, [src_ptr, #5, mul vl]
+    ld1b        z6.b, p6/z, [src_ptr, #6, mul vl]
+    ld1b        z7.b, p7/z, [src_ptr, #7, mul vl]
+    st1b        z0.b, p0, [dest_ptr, #0, mul vl]
+    st1b        z1.b, p1, [dest_ptr, #1, mul vl]
+    st1b        z2.b, p2, [dest_ptr, #2, mul vl]
+    st1b        z3.b, p3, [dest_ptr, #3, mul vl]
+    st1b        z4.b, p4, [dest_ptr, #4, mul vl]
+    st1b        z5.b, p5, [dest_ptr, #5, mul vl]
+    st1b        z6.b, p6, [dest_ptr, #6, mul vl]
+    st1b        z7.b, p7, [dest_ptr, #7, mul vl]
+    ret
+
+L(L2):
+    // align dest address at CACHE_LINE_SIZE byte boundary
+    mov         tmp1, CACHE_LINE_SIZE
+    ands        tmp2, dest_ptr, CACHE_LINE_SIZE - 1
+    // if cl_remainder == 0
+    b.eq        L(L2_dc_zva)
+    sub         cl_remainder, tmp1, tmp2
+    // process remainder until the first CACHE_LINE_SIZE boundary
+    whilelo     p1.b, xzr, cl_remainder        // keep p0.b all true
+    whilelo     p2.b, vector_length, cl_remainder
+    b.last      1f
+    ld1b        z1.b, p1/z, [src_ptr, #0, mul vl]
+    ld1b        z2.b, p2/z, [src_ptr, #1, mul vl]
+    st1b        z1.b, p1, [dest_ptr, #0, mul vl]
+    st1b        z2.b, p2, [dest_ptr, #1, mul vl]
+    b           2f
+1:  lsl         tmp1, vector_length, 1  // vector_length * 2
+    whilelo     p3.b, tmp1, cl_remainder
+    incb        tmp1
+    whilelo     p4.b, tmp1, cl_remainder
+    ld1b        z1.b, p1/z, [src_ptr, #0, mul vl]
+    ld1b        z2.b, p2/z, [src_ptr, #1, mul vl]
+    ld1b        z3.b, p3/z, [src_ptr, #2, mul vl]
+    ld1b        z4.b, p4/z, [src_ptr, #3, mul vl]
+    st1b        z1.b, p1, [dest_ptr, #0, mul vl]
+    st1b        z2.b, p2, [dest_ptr, #1, mul vl]
+    st1b        z3.b, p3, [dest_ptr, #2, mul vl]
+    st1b        z4.b, p4, [dest_ptr, #3, mul vl]
+2:  add         dest_ptr, dest_ptr, cl_remainder
+    add         src_ptr, src_ptr, cl_remainder
+    sub         rest, rest, cl_remainder
+
+L(L2_dc_zva):
+    // zero fill
+    and         tmp1, dest, 0xffffffffffffff
+    and         tmp2, src, 0xffffffffffffff
+    subs        tmp1, tmp1, tmp2     // diff
+    b.ge        1f
+    neg         tmp1, tmp1
+1:  mov         tmp3, ZF_DIST + CACHE_LINE_SIZE * 2
+    cmp         tmp1, tmp3
+    b.lo        L(unroll8)
+    mov         tmp1, dest_ptr
+    dc_zva      (ZF_DIST / CACHE_LINE_SIZE) - 1
+    // unroll
+    ld1b_unroll8        // this line has to be after "b.lo L(unroll8)"
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE * 2
+    sub         rest, rest, CACHE_LINE_SIZE * 2
+    mov         tmp1, ZF_DIST
+    .p2align 3
+1:  stld1b_unroll4a
+    add         tmp2, dest_ptr, tmp1    // dest_ptr + ZF_DIST
+    dc          zva, tmp2
+    stld1b_unroll4b
+    add         tmp2, tmp2, CACHE_LINE_SIZE
+    dc          zva, tmp2
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2
+    add         src_ptr, src_ptr, CACHE_LINE_SIZE * 2
+    sub         rest, rest, CACHE_LINE_SIZE * 2
+    cmp         rest, tmp3      // ZF_DIST + CACHE_LINE_SIZE * 2
+    b.ge        1b
+    st1b_unroll8
+    add         dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2
+    b           L(unroll8)
+
+END (MEMCPY)
+libc_hidden_builtin_def (MEMCPY)
+
+
+ENTRY (MEMMOVE)
+
+    PTR_ARG (0)
+    PTR_ARG (1)
+    SIZE_ARG (2)
+
+    // remove tag address
+    // dest has to be immutable because it is the return value
+    // src has to be immutable because it is used in L(bwd_last)
+    and         tmp2, dest, 0xffffffffffffff    // save dest_notag into tmp2
+    and         tmp3, src, 0xffffffffffffff     // save src_notag into tmp3
+    cmp         n, 0
+    ccmp        tmp2, tmp3, 4, ne
+    b.ne        1f
+    ret
+1:  cntb        vector_length
+    // shortcut for less than vector_length * 8
+    // gives a free ptrue to p0.b for n >= vector_length
+    // tmp2 and tmp3 should not be used in this macro to keep notag addresses
+    shortcut_for_small_size L(dispatch)
+    // end of shortcut
+
+L(dispatch):
+    // tmp2 = dest_notag, tmp3 = src_notag
+    // diff = dest_notag - src_notag
+    sub         tmp1, tmp2, tmp3
+    // if diff <= 0 || diff >= n then memcpy
+    cmp         tmp1, 0
+    ccmp        tmp1, n, 2, gt
+    b.cs        L(vl_agnostic)
+
+L(bwd_start):
+    mov         rest, n
+    add         dest_ptr, dest, n       // dest_end
+    add         src_ptr, src, n         // src_end
+
+L(bwd_unroll8): // unrolling and software pipeline
+    lsl         tmp1, vector_length, 3  // vector_length * 8
+    .p2align 3
+    cmp         rest, tmp1
+    b.cc        L(bwd_last)
+    sub         src_ptr, src_ptr, tmp1
+    ld1b_unroll8
+    sub         rest, rest, tmp1
+    cmp         rest, tmp1
+    b.cc        2f
+    .p2align 3
+1:  sub         src_ptr, src_ptr, tmp1
+    sub         dest_ptr, dest_ptr, tmp1
+    stld1b_unroll8
+    sub         rest, rest, tmp1
+    cmp         rest, tmp1
+    b.ge        1b
+2:  sub         dest_ptr, dest_ptr, tmp1
+    st1b_unroll8
+
+L(bwd_last):
+    mov         dest_ptr, dest
+    mov         src_ptr, src
+    b           L(last)
+
+END (MEMMOVE)
+libc_hidden_builtin_def (MEMMOVE)
+#endif /* IS_IN (libc) */
+#endif /* HAVE_AARCH64_SVE_ASM */
diff --git a/sysdeps/aarch64/multiarch/memmove.c b/sysdeps/aarch64/multiarch/memmove.c
index 12d77818a9..be2d35a251 100644
--- a/sysdeps/aarch64/multiarch/memmove.c
+++ b/sysdeps/aarch64/multiarch/memmove.c
@@ -33,6 +33,9 @@ extern __typeof (__redirect_memmove) __memmove_simd attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_thunderx attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_thunderx2 attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_falkor attribute_hidden;
+#if HAVE_AARCH64_SVE_ASM
+extern __typeof (__redirect_memmove) __memmove_a64fx attribute_hidden;
+#endif
 
 libc_ifunc (__libc_memmove,
             (IS_THUNDERX (midr)
@@ -44,8 +47,13 @@ libc_ifunc (__libc_memmove,
 		  : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
 		     || IS_NEOVERSE_V1 (midr)
 		     ? __memmove_simd
-		     : __memmove_generic)))));
-
+#if HAVE_AARCH64_SVE_ASM
+                     : (IS_A64FX (midr)
+                        ? __memmove_a64fx
+                        : __memmove_generic))))));
+#else
+                        : __memmove_generic)))));
+#endif
 # undef memmove
 strong_alias (__libc_memmove, memmove);
 #endif
diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
index db6aa3516c..6206a2f618 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
@@ -46,6 +46,7 @@ static struct cpu_list cpu_list[] = {
       {"ares",		 0x411FD0C0},
       {"emag",		 0x503F0001},
       {"kunpeng920", 	 0x481FD010},
+      {"a64fx",		 0x460F0010},
       {"generic", 	 0x0}
 };
 
@@ -116,4 +117,7 @@ init_cpu_features (struct cpu_features *cpu_features)
 	     (PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_ASYNC | MTE_ALLOWED_TAGS),
 	     0, 0, 0);
 #endif
+
+  /* Check if SVE is supported.  */
+  cpu_features->sve = GLRO (dl_hwcap) & HWCAP_SVE;
 }
diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
index 3b9bfed134..2b322e5414 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
@@ -65,6 +65,9 @@
 #define IS_KUNPENG920(midr) (MIDR_IMPLEMENTOR(midr) == 'H'			   \
                         && MIDR_PARTNUM(midr) == 0xd01)
 
+#define IS_A64FX(midr) (MIDR_IMPLEMENTOR(midr) == 'F'			      \
+			&& MIDR_PARTNUM(midr) == 0x001)
+
 struct cpu_features
 {
   uint64_t midr_el1;
@@ -72,6 +75,7 @@ struct cpu_features
   bool bti;
   /* Currently, the GLIBC memory tagging tunable only defines 8 bits.  */
   uint8_t mte_state;
+  bool sve;
 };
 
 #endif /* _CPU_FEATURES_AARCH64_H  */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 4/6] aarch64: Added optimized memset for A64FX
  2021-05-12  9:23 ` [PATCH v2 0/6] aarch64: " Naohiro Tamura
                     ` (2 preceding siblings ...)
  2021-05-12  9:28   ` [PATCH v2 3/6] aarch64: Added optimized memcpy and memmove for A64FX Naohiro Tamura
@ 2021-05-12  9:28   ` Naohiro Tamura
  2021-05-26 10:22     ` Szabolcs Nagy
  2021-05-12  9:29   ` [PATCH v2 5/6] scripts: Added Vector Length Set test helper script Naohiro Tamura
                     ` (4 subsequent siblings)
  8 siblings, 1 reply; 36+ messages in thread
From: Naohiro Tamura @ 2021-05-12  9:28 UTC (permalink / raw)
  To: libc-alpha; +Cc: Naohiro Tamura

From: Naohiro Tamura <naohirot@jp.fujitsu.com>

This patch optimizes the performance of memset for A64FX [1] which
implements ARMv8-A SVE and has L1 64KB cache per core and L2 8MB cache
per NUMA node.

The performance optimization makes use of Scalable Vector Register
with several techniques such as loop unrolling, memory access
alignment, cache zero fill and prefetch.
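
As a rough illustration of the cache zero fill technique, the sketch
below shows the idea in C with inline assembly (not the patch code;
the names are made up, it assumes DC ZVA is permitted, i.e.
DCZID_EL0.DZP is 0, and that dst is already aligned to the ZVA block
size):

#include <stddef.h>
#include <stdint.h>

/* Block size covered by one "dc zva", read from DCZID_EL0.  */
static inline size_t
zva_block_size (void)
{
  uint64_t dczid;
  __asm__ ("mrs %0, dczid_el0" : "=r" (dczid));
  return (size_t) 4 << (dczid & 0xf);
}

/* Zero dst[0..n) block by block without first fetching the lines
   from memory.  */
static void
zero_fill (char *dst, size_t n)
{
  size_t bs = zva_block_size ();
  for (size_t i = 0; i + bs <= n; i += bs)
    __asm__ volatile ("dc zva, %0" : : "r" (dst + i) : "memory");
}

In memset_a64fx.S the "dc zva" is issued ZF_DIST bytes ahead of the
store pointer, so the destination lines are already allocated (and
zeroed) in the cache by the time the SVE stores overwrite them with
the fill value.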

SVE assembler code for memset is implemented as Vector Length Agnostic
code so theoretically it can be run on any SOC which supports ARMv8-A
SVE standard.

We confirmed that all testcases have been passed by running 'make
check' and 'make xcheck' not only on A64FX but also on ThunderX2.

And also we confirmed that the SVE 512 bit vector register performance
is roughly 4 times better than Advanced SIMD 128 bit register and 8
times better than scalar 64 bit register by running 'make bench'.

[1] https://github.com/fujitsu/A64FX
---
 sysdeps/aarch64/multiarch/Makefile          |   1 +
 sysdeps/aarch64/multiarch/ifunc-impl-list.c |   5 +-
 sysdeps/aarch64/multiarch/memset.c          |  11 +-
 sysdeps/aarch64/multiarch/memset_a64fx.S    | 268 ++++++++++++++++++++
 4 files changed, 283 insertions(+), 2 deletions(-)
 create mode 100644 sysdeps/aarch64/multiarch/memset_a64fx.S

diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
index 04c3f17121..7500cf1e93 100644
--- a/sysdeps/aarch64/multiarch/Makefile
+++ b/sysdeps/aarch64/multiarch/Makefile
@@ -2,6 +2,7 @@ ifeq ($(subdir),string)
 sysdep_routines += memcpy_generic memcpy_advsimd memcpy_thunderx memcpy_thunderx2 \
 		   memcpy_falkor memcpy_a64fx \
 		   memset_generic memset_falkor memset_emag memset_kunpeng \
+		   memset_a64fx \
 		   memchr_generic memchr_nosimd \
 		   strlen_mte strlen_asimd
 endif
diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
index 911393565c..4e1a641d9f 100644
--- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
@@ -37,7 +37,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 
   INIT_ARCH ();
 
-  /* Support sysdeps/aarch64/multiarch/memcpy.c and memmove.c.  */
+  /* Support sysdeps/aarch64/multiarch/memcpy.c, memmove.c and memset.c.  */
   IFUNC_IMPL (i, name, memcpy,
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_thunderx)
 	      IFUNC_IMPL_ADD (array, i, memcpy, !bti, __memcpy_thunderx2)
@@ -62,6 +62,9 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 	      IFUNC_IMPL_ADD (array, i, memset, (zva_size == 64), __memset_falkor)
 	      IFUNC_IMPL_ADD (array, i, memset, (zva_size == 64), __memset_emag)
 	      IFUNC_IMPL_ADD (array, i, memset, 1, __memset_kunpeng)
+#if HAVE_AARCH64_SVE_ASM
+	      IFUNC_IMPL_ADD (array, i, memset, sve, __memset_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memset, 1, __memset_generic))
   IFUNC_IMPL (i, name, memchr,
 	      IFUNC_IMPL_ADD (array, i, memchr, !mte, __memchr_nosimd)
diff --git a/sysdeps/aarch64/multiarch/memset.c b/sysdeps/aarch64/multiarch/memset.c
index 28d3926bc2..48a59574dd 100644
--- a/sysdeps/aarch64/multiarch/memset.c
+++ b/sysdeps/aarch64/multiarch/memset.c
@@ -31,6 +31,9 @@ extern __typeof (__redirect_memset) __libc_memset;
 extern __typeof (__redirect_memset) __memset_falkor attribute_hidden;
 extern __typeof (__redirect_memset) __memset_emag attribute_hidden;
 extern __typeof (__redirect_memset) __memset_kunpeng attribute_hidden;
+#if HAVE_AARCH64_SVE_ASM
+extern __typeof (__redirect_memset) __memset_a64fx attribute_hidden;
+#endif
 extern __typeof (__redirect_memset) __memset_generic attribute_hidden;
 
 libc_ifunc (__libc_memset,
@@ -40,7 +43,13 @@ libc_ifunc (__libc_memset,
 	     ? __memset_falkor
 	     : (IS_EMAG (midr) && zva_size == 64
 	       ? __memset_emag
-	       : __memset_generic)));
+#if HAVE_AARCH64_SVE_ASM
+	       : (IS_A64FX (midr)
+		  ? __memset_a64fx
+	          : __memset_generic))));
+#else
+	          : __memset_generic)));
+#endif
 
 # undef memset
 strong_alias (__libc_memset, memset);
diff --git a/sysdeps/aarch64/multiarch/memset_a64fx.S b/sysdeps/aarch64/multiarch/memset_a64fx.S
new file mode 100644
index 0000000000..9bd58cab6d
--- /dev/null
+++ b/sysdeps/aarch64/multiarch/memset_a64fx.S
@@ -0,0 +1,268 @@
+/* Optimized memset for Fujitsu A64FX processor.
+   Copyright (C) 2012-2021 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include <sysdeps/aarch64/memset-reg.h>
+
+#if HAVE_AARCH64_SVE_ASM
+#if IS_IN (libc)
+# define MEMSET __memset_a64fx
+
+/* Assumptions:
+ *
+ * ARMv8.2-a, AArch64, unaligned accesses, sve
+ *
+ */
+
+#define L1_SIZE         (64*1024)       // L1 64KB
+#define L2_SIZE         (8*1024*1024)   // L2 8MB - 1MB
+#define CACHE_LINE_SIZE 256
+#define PF_DIST_L1      (CACHE_LINE_SIZE * 16)  // Prefetch distance L1
+#define ZF_DIST         (CACHE_LINE_SIZE * 21)  // Zerofill distance
+#define rest            x8
+#define vector_length   x9
+#define vl_remainder    x10     // vector_length remainder
+#define cl_remainder    x11     // CACHE_LINE_SIZE remainder
+
+    .arch armv8.2-a+sve
+
+    .macro dc_zva times
+    dc          zva, tmp1
+    add         tmp1, tmp1, CACHE_LINE_SIZE
+    .if \times-1
+    dc_zva "(\times-1)"
+    .endif
+    .endm
+
+    .macro st1b_unroll first=0, last=7
+    st1b        z0.b, p0, [dst, #\first, mul vl]
+    .if \last-\first
+    st1b_unroll "(\first+1)", \last
+    .endif
+    .endm
+
+    .macro shortcut_for_small_size exit
+    // if rest <= vector_length * 2
+    whilelo     p0.b, xzr, count
+    whilelo     p1.b, vector_length, count
+    b.last      1f
+    st1b        z0.b, p0, [dstin, #0, mul vl]
+    st1b        z0.b, p1, [dstin, #1, mul vl]
+    ret
+1:  // if rest > vector_length * 8
+    cmp         count, vector_length, lsl 3     // vector_length * 8
+    b.hi        \exit
+    // if rest <= vector_length * 4
+    lsl         tmp1, vector_length, 1  // vector_length * 2
+    whilelo     p2.b, tmp1, count
+    incb        tmp1
+    whilelo     p3.b, tmp1, count
+    b.last      1f
+    st1b        z0.b, p0, [dstin, #0, mul vl]
+    st1b        z0.b, p1, [dstin, #1, mul vl]
+    st1b        z0.b, p2, [dstin, #2, mul vl]
+    st1b        z0.b, p3, [dstin, #3, mul vl]
+    ret
+1:  // if rest <= vector_length * 8
+    lsl         tmp1, vector_length, 2  // vector_length * 4
+    whilelo     p4.b, tmp1, count
+    incb        tmp1
+    whilelo     p5.b, tmp1, count
+    b.last      1f
+    st1b        z0.b, p0, [dstin, #0, mul vl]
+    st1b        z0.b, p1, [dstin, #1, mul vl]
+    st1b        z0.b, p2, [dstin, #2, mul vl]
+    st1b        z0.b, p3, [dstin, #3, mul vl]
+    st1b        z0.b, p4, [dstin, #4, mul vl]
+    st1b        z0.b, p5, [dstin, #5, mul vl]
+    ret
+1:  lsl         tmp1, vector_length, 2  // vector_length * 4
+    incb        tmp1                    // vector_length * 5
+    incb        tmp1                    // vector_length * 6
+    whilelo     p6.b, tmp1, count
+    incb        tmp1
+    whilelo     p7.b, tmp1, count
+    st1b        z0.b, p0, [dstin, #0, mul vl]
+    st1b        z0.b, p1, [dstin, #1, mul vl]
+    st1b        z0.b, p2, [dstin, #2, mul vl]
+    st1b        z0.b, p3, [dstin, #3, mul vl]
+    st1b        z0.b, p4, [dstin, #4, mul vl]
+    st1b        z0.b, p5, [dstin, #5, mul vl]
+    st1b        z0.b, p6, [dstin, #6, mul vl]
+    st1b        z0.b, p7, [dstin, #7, mul vl]
+    ret
+    .endm
+
+ENTRY (MEMSET)
+
+    PTR_ARG (0)
+    SIZE_ARG (2)
+
+    cbnz        count, 1f
+    ret
+1:  dup         z0.b, valw
+    cntb        vector_length
+    // shortcut for less than vector_length * 8
+    // gives a free ptrue to p0.b for n >= vector_length
+    shortcut_for_small_size L(vl_agnostic)
+    // end of shortcut
+
+L(vl_agnostic): // VL Agnostic
+    mov         rest, count
+    mov         dst, dstin
+    add         dstend, dstin, count
+    // if rest >= L2_SIZE && vector_length == 64 then L(L2)
+    mov         tmp1, 64
+    cmp         rest, L2_SIZE
+    ccmp        vector_length, tmp1, 0, cs
+    b.eq        L(L2)
+    // if rest >= L1_SIZE && vector_length == 64 then L(L1_prefetch)
+    cmp         rest, L1_SIZE
+    ccmp        vector_length, tmp1, 0, cs
+    b.eq        L(L1_prefetch)
+
+L(unroll32):
+    lsl         tmp1, vector_length, 3  // vector_length * 8
+    lsl         tmp2, vector_length, 5  // vector_length * 32
+    .p2align 3
+1:  cmp         rest, tmp2
+    b.cc        L(unroll8)
+    st1b_unroll
+    add         dst, dst, tmp1
+    st1b_unroll
+    add         dst, dst, tmp1
+    st1b_unroll
+    add         dst, dst, tmp1
+    st1b_unroll
+    add         dst, dst, tmp1
+    sub         rest, rest, tmp2
+    b           1b
+
+L(unroll8):
+    lsl         tmp1, vector_length, 3
+    .p2align 3
+1:  cmp         rest, tmp1
+    b.cc        L(last)
+    st1b_unroll
+    add         dst, dst, tmp1
+    sub         rest, rest, tmp1
+    b           1b
+
+L(last):
+    whilelo     p0.b, xzr, rest
+    whilelo     p1.b, vector_length, rest
+    b.last      1f
+    st1b        z0.b, p0, [dst, #0, mul vl]
+    st1b        z0.b, p1, [dst, #1, mul vl]
+    ret
+1:  lsl         tmp1, vector_length, 1  // vector_length * 2
+    whilelo     p2.b, tmp1, rest
+    incb        tmp1
+    whilelo     p3.b, tmp1, rest
+    b.last      1f
+    st1b        z0.b, p0, [dst, #0, mul vl]
+    st1b        z0.b, p1, [dst, #1, mul vl]
+    st1b        z0.b, p2, [dst, #2, mul vl]
+    st1b        z0.b, p3, [dst, #3, mul vl]
+    ret
+1:  lsl         tmp1, vector_length, 2  // vector_length * 4
+    whilelo     p4.b, tmp1, rest
+    incb        tmp1
+    whilelo     p5.b, tmp1, rest
+    incb        tmp1
+    whilelo     p6.b, tmp1, rest
+    incb        tmp1
+    whilelo     p7.b, tmp1, rest
+    st1b        z0.b, p0, [dst, #0, mul vl]
+    st1b        z0.b, p1, [dst, #1, mul vl]
+    st1b        z0.b, p2, [dst, #2, mul vl]
+    st1b        z0.b, p3, [dst, #3, mul vl]
+    st1b        z0.b, p4, [dst, #4, mul vl]
+    st1b        z0.b, p5, [dst, #5, mul vl]
+    st1b        z0.b, p6, [dst, #6, mul vl]
+    st1b        z0.b, p7, [dst, #7, mul vl]
+    ret
+
+L(L1_prefetch): // if rest >= L1_SIZE
+    .p2align 3
+1:  st1b_unroll 0, 3
+    prfm        pstl1keep, [dst, PF_DIST_L1]
+    st1b_unroll 4, 7
+    prfm        pstl1keep, [dst, PF_DIST_L1 + CACHE_LINE_SIZE]
+    add         dst, dst, CACHE_LINE_SIZE * 2
+    sub         rest, rest, CACHE_LINE_SIZE * 2
+    cmp         rest, L1_SIZE
+    b.ge        1b
+    cbnz        rest, L(unroll32)
+    ret
+
+L(L2):
+    // align dst address at vector_length byte boundary
+    sub         tmp1, vector_length, 1
+    ands        tmp2, dst, tmp1
+    // if vl_remainder == 0
+    b.eq        1f
+    sub         vl_remainder, vector_length, tmp2
+    // process remainder until the first vector_length boundary
+    whilelt     p2.b, xzr, vl_remainder
+    st1b        z0.b, p2, [dst]
+    add         dst, dst, vl_remainder
+    sub         rest, rest, vl_remainder
+    // align dstin address at CACHE_LINE_SIZE byte boundary
+1:  mov         tmp1, CACHE_LINE_SIZE
+    ands        tmp2, dst, CACHE_LINE_SIZE - 1
+    // if cl_remainder == 0
+    b.eq        L(L2_dc_zva)
+    sub         cl_remainder, tmp1, tmp2
+    // process remainder until the first CACHE_LINE_SIZE boundary
+    mov         tmp1, xzr       // index
+2:  whilelt     p2.b, tmp1, cl_remainder
+    st1b        z0.b, p2, [dst, tmp1]
+    incb        tmp1
+    cmp         tmp1, cl_remainder
+    b.lo        2b
+    add         dst, dst, cl_remainder
+    sub         rest, rest, cl_remainder
+
+L(L2_dc_zva):
+    // zero fill
+    mov         tmp1, dst
+    dc_zva      (ZF_DIST / CACHE_LINE_SIZE) - 1
+    mov         zva_len, ZF_DIST
+    add         tmp1, zva_len, CACHE_LINE_SIZE * 2
+    // unroll
+    .p2align 3
+1:  st1b_unroll 0, 3
+    add         tmp2, dst, zva_len
+    dc          zva, tmp2
+    st1b_unroll 4, 7
+    add         tmp2, tmp2, CACHE_LINE_SIZE
+    dc          zva, tmp2
+    add         dst, dst, CACHE_LINE_SIZE * 2
+    sub         rest, rest, CACHE_LINE_SIZE * 2
+    cmp         rest, tmp1      // ZF_DIST + CACHE_LINE_SIZE * 2
+    b.ge        1b
+    cbnz        rest, L(unroll8)
+    ret
+
+END (MEMSET)
+libc_hidden_builtin_def (MEMSET)
+
+#endif /* IS_IN (libc) */
+#endif /* HAVE_AARCH64_SVE_ASM */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 5/6] scripts: Added Vector Length Set test helper script
  2021-05-12  9:23 ` [PATCH v2 0/6] aarch64: " Naohiro Tamura
                     ` (3 preceding siblings ...)
  2021-05-12  9:28   ` [PATCH v2 4/6] aarch64: Added optimized memset " Naohiro Tamura
@ 2021-05-12  9:29   ` Naohiro Tamura
  2021-05-12 16:58     ` Joseph Myers
  2021-05-20  7:34     ` Naohiro Tamura
  2021-05-12  9:29   ` [PATCH v2 6/6] benchtests: Fixed bench-memcpy-random: buf1: mprotect failed Naohiro Tamura
                     ` (3 subsequent siblings)
  8 siblings, 2 replies; 36+ messages in thread
From: Naohiro Tamura @ 2021-05-12  9:29 UTC (permalink / raw)
  To: libc-alpha; +Cc: Naohiro Tamura

From: Naohiro Tamura <naohirot@jp.fujitsu.com>

This patch adds a test helper script that changes the Vector Length for
a child process. The script can be used as test-wrapper for 'make check'.

Usage examples:

ubuntu@bionic:~/build$ make check subdirs=string \
test-wrapper='~/glibc/scripts/vltest.py 16'

ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 16 make test \
t=string/test-memcpy

ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 32 ./debugglibc.sh \
string/test-memmove

ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 64 ./testrun.sh
string/test-memset
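
For reference, a minimal C equivalent of what the script does before
exec'ing the child could look like the sketch below (assuming
<sys/prctl.h> exposes the PR_SVE_* constants; on older kernels/headers
they may need to come from <linux/prctl.h>; the wrapper name is made
up):

#include <stdio.h>
#include <sys/auxv.h>
#include <sys/prctl.h>
#include <unistd.h>

#ifndef HWCAP_SVE
# define HWCAP_SVE (1 << 22)
#endif

int
run_with_vl (int vl_bytes, char **argv)
{
  if (!(getauxval (AT_HWCAP) & HWCAP_SVE))
    {
      fputs ("CPU doesn't support SVE\n", stderr);
      return 77;        /* unsupported */
    }
  /* Apply the new vector length at the next execve and let children
     inherit it, mirroring PR_SVE_SET_VL_ONEXEC | PR_SVE_VL_INHERIT in
     the script.  */
  prctl (PR_SVE_SET_VL, vl_bytes | PR_SVE_SET_VL_ONEXEC | PR_SVE_VL_INHERIT);
  execvp (argv[0], argv);
  perror ("execvp");
  return 1;
}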
---
 scripts/vltest.py | 82 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)
 create mode 100755 scripts/vltest.py

diff --git a/scripts/vltest.py b/scripts/vltest.py
new file mode 100755
index 0000000000..264dfa449f
--- /dev/null
+++ b/scripts/vltest.py
@@ -0,0 +1,82 @@
+#!/usr/bin/python3
+# Set Scalable Vector Length test helper
+# Copyright (C) 2019-2021 Free Software Foundation, Inc.
+# This file is part of the GNU C Library.
+#
+# The GNU C Library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# The GNU C Library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with the GNU C Library; if not, see
+# <https://www.gnu.org/licenses/>.
+"""Set Scalable Vector Length test helper.
+
+Set Scalable Vector Length for child process.
+
+examples:
+
+ubuntu@bionic:~/build$ make check subdirs=string \
+test-wrapper='~/glibc/scripts/vltest.py 16'
+
+ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 16 make test \
+t=string/test-memcpy
+
+ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 32 ./debugglibc.sh \
+string/test-memmove
+
+ubuntu@bionic:~/build$ ~/glibc/scripts/vltest.py 64 ./testrun.sh \
+string/test-memset
+"""
+import argparse
+from ctypes import cdll, CDLL
+import os
+import sys
+
+EXIT_SUCCESS = 0
+EXIT_FAILURE = 1
+EXIT_UNSUPPORTED = 77
+
+AT_HWCAP = 16
+HWCAP_SVE = (1 << 22)
+
+PR_SVE_GET_VL = 51
+PR_SVE_SET_VL = 50
+PR_SVE_SET_VL_ONEXEC = (1 << 18)
+PR_SVE_VL_INHERIT = (1 << 17)
+PR_SVE_VL_LEN_MASK = 0xffff
+
+def main(args):
+    libc = CDLL("libc.so.6")
+    if not libc.getauxval(AT_HWCAP) & HWCAP_SVE:
+        print("CPU doesn't support SVE")
+        sys.exit(EXIT_UNSUPPORTED)
+
+    libc.prctl(PR_SVE_SET_VL,
+               args.vl[0] | PR_SVE_SET_VL_ONEXEC | PR_SVE_VL_INHERIT)
+    os.execvp(args.args[0], args.args)
+    print("exec system call failure")
+    sys.exit(EXIT_FAILURE)
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description=
+            "Set Scalable Vector Length test helper",
+            formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+
+    # positional argument
+    parser.add_argument("vl", nargs=1, type=int,
+                        choices=range(16, 257, 16),
+                        help=('vector length '\
+                              'which is multiples of 16 from 16 to 256'))
+    # remainder arguments
+    parser.add_argument('args', nargs=argparse.REMAINDER,
+                        help=('args '\
+                              'which is passed to child process'))
+    args = parser.parse_args()
+    main(args)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 6/6] benchtests: Fixed bench-memcpy-random: buf1: mprotect failed
  2021-05-12  9:23 ` [PATCH v2 0/6] aarch64: " Naohiro Tamura
                     ` (4 preceding siblings ...)
  2021-05-12  9:29   ` [PATCH v2 5/6] scripts: Added Vector Length Set test helper script Naohiro Tamura
@ 2021-05-12  9:29   ` Naohiro Tamura
  2021-05-26 10:25     ` Szabolcs Nagy
  2021-05-27  0:22   ` [PATCH v2 0/6] aarch64: Added optimized memcpy/memmove/memset for A64FX naohirot
                     ` (2 subsequent siblings)
  8 siblings, 1 reply; 36+ messages in thread
From: Naohiro Tamura @ 2021-05-12  9:29 UTC (permalink / raw)
  To: libc-alpha; +Cc: Naohiro Tamura

From: Naohiro Tamura <naohirot@jp.fujitsu.com>

This patch fixes an mprotect system call failure on AArch64.
The failure happened not only on A64FX but also on ThunderX2.

This patch also updates the JSON key from "max-size" to "length" so that
'plot_strings.py' can process 'bench-memcpy-random.out'.
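
The mprotect failure is presumably because the old MIN_PAGE_SIZE
constant hard-coded a 4096-byte page, while many AArch64 kernels run
with 64K pages; mprotect only operates on whole pages, so any guard
page arithmetic has to use the run-time page size, roughly as in this
sketch (illustrative only, not the benchmark code):

#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Protect the last page of a mapping as a guard page.  */
int
protect_tail (char *buf, size_t size)
{
  long page = sysconf (_SC_PAGESIZE);  /* 4096 on x86, often 65536 on AArch64 */
  if (size < (size_t) page)
    return -1;
  return mprotect (buf + size - page, page, PROT_NONE);
}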
---
 benchtests/bench-memcpy-random.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/benchtests/bench-memcpy-random.c b/benchtests/bench-memcpy-random.c
index 9b62033379..c490b73ed0 100644
--- a/benchtests/bench-memcpy-random.c
+++ b/benchtests/bench-memcpy-random.c
@@ -16,7 +16,7 @@
    License along with the GNU C Library; if not, see
    <https://www.gnu.org/licenses/>.  */
 
-#define MIN_PAGE_SIZE (512*1024+4096)
+#define MIN_PAGE_SIZE (512*1024+getpagesize())
 #define TEST_MAIN
 #define TEST_NAME "memcpy"
 #include "bench-string.h"
@@ -160,7 +160,7 @@ do_test (json_ctx_t *json_ctx, size_t max_size)
     }
 
   json_element_object_begin (json_ctx);
-  json_attr_uint (json_ctx, "max-size", (double) max_size);
+  json_attr_uint (json_ctx, "length", (double) max_size);
   json_array_begin (json_ctx, "timings");
 
   FOR_EACH_IMPL (impl, 0)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 5/6] scripts: Added Vector Length Set test helper script
  2021-05-12  9:29   ` [PATCH v2 5/6] scripts: Added Vector Length Set test helper script Naohiro Tamura
@ 2021-05-12 16:58     ` Joseph Myers
  2021-05-13  9:53       ` naohirot
  2021-05-20  7:34     ` Naohiro Tamura
  1 sibling, 1 reply; 36+ messages in thread
From: Joseph Myers @ 2021-05-12 16:58 UTC (permalink / raw)
  To: Naohiro Tamura; +Cc: libc-alpha, Naohiro Tamura

On Wed, 12 May 2021, Naohiro Tamura wrote:

> From: Naohiro Tamura <naohirot@jp.fujitsu.com>
> 
> This patch is a test helper script to change Vector Length for child
> process. This script can be used as test-wrapper for 'make check'.

This is specific to AArch64, so I think it would better go under 
sysdeps/unix/sysv/linux/aarch64/ rather than under scripts/.

There is also the question of how to make this discoverable to people 
developing glibc.  Maybe this script should be mentioned in install.texi 
(with INSTALL regenerated accordingly), with the documentation there 
clearly explaining that it's specific to AArch64 GNU/Linux.

-- 
Joseph S. Myers
joseph@codesourcery.com

^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v2 5/6] scripts: Added Vector Length Set test helper script
  2021-05-12 16:58     ` Joseph Myers
@ 2021-05-13  9:53       ` naohirot
  0 siblings, 0 replies; 36+ messages in thread
From: naohirot @ 2021-05-13  9:53 UTC (permalink / raw)
  To: 'Joseph Myers'; +Cc: libc-alpha

Hi Joseph,

Thank you for the review.

> From: Joseph Myers <joseph@codesourcery.com>

> > This patch is a test helper script to change Vector Length for child
> > process. This script can be used as test-wrapper for 'make check'.
> 
> This is specific to AArch64, so I think it would better go under
> sysdeps/unix/sysv/linux/aarch64/ rather than under scripts/.

OK, I moved it to sysdeps/unix/sysv/linux/aarch64/.

> There is also the question of how to make this discoverable to people developing
> glibc.  Maybe this script should be mentioned in install.texi (with INSTALL
> regenerated accordingly), with the documentation there clearly explaining that it's
> specific to AArch64 GNU/Linux.

OK, I updated install.texi, INSTALL, the vltest.py documentation, and the
commit message as shown below; see also my GitHub [1].

[1] https://github.com/NaohiroTamura/glibc/commit/37a5832fea109ab939ffdf58a2a19d5707849cc5

[commit message] aarch64: Added Vector Length Set test helper script

This patch adds a test helper script that changes the Vector Length for
a child process. The script can be used as test-wrapper for 'make check'.

Usage examples:

~/build$ make check subdirs=string \
test-wrapper='~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16'

~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16 \
make test t=string/test-memcpy

~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 32 \
./debugglibc.sh string/test-memmove

~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 64 \
./testrun.sh string/test-memset
---
 INSTALL                                   |  4 ++
 manual/install.texi                       |  3 +
 sysdeps/unix/sysv/linux/aarch64/vltest.py | 82 +++++++++++++++++++++++
 3 files changed, 89 insertions(+)
 create mode 100755 sysdeps/unix/sysv/linux/aarch64/vltest.py

diff --git a/INSTALL b/INSTALL
index 065a568585..bc761ab98b 100644
--- a/INSTALL
+++ b/INSTALL
@@ -380,6 +380,10 @@ the same syntax as 'test-wrapper-env', the only difference in its
 semantics being starting with an empty set of environment variables
 rather than the ambient set.

+   For AArch64 with SVE, when testing the GNU C Library, 'test-wrapper'
+may be set to "SRCDIR/sysdeps/unix/sysv/linux/aarch64/vltest.py
+VECTOR-LENGTH" to change Vector Length.
+
 Installing the C Library
 ========================

diff --git a/manual/install.texi b/manual/install.texi
index eb41fbd0b5..f1d858fb78 100644
--- a/manual/install.texi
+++ b/manual/install.texi
@@ -418,6 +418,9 @@ use has the same syntax as @samp{test-wrapper-env}, the only
 difference in its semantics being starting with an empty set of
 environment variables rather than the ambient set.

+For AArch64 with SVE, when testing @theglibc{}, @samp{test-wrapper}
+may be set to "@var{srcdir}/sysdeps/unix/sysv/linux/aarch64/vltest.py
+@var{vector-length}" to change Vector Length.

 @node Running make install
 @appendixsec Installing the C Library
diff --git a/sysdeps/unix/sysv/linux/aarch64/vltest.py b/sysdeps/unix/sysv/linux/aarch64/vltest.py
new file mode 100755
index 0000000000..bed62ad151
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/aarch64/vltest.py
@@ -0,0 +1,82 @@
+#!/usr/bin/python3
+# Set Scalable Vector Length test helper
+# Copyright (C) 2021 Free Software Foundation, Inc.
+# This file is part of the GNU C Library.
+#
+# The GNU C Library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# The GNU C Library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with the GNU C Library; if not, see
+# <https://www.gnu.org/licenses/>.
+"""Set Scalable Vector Length test helper.
+
+Set Scalable Vector Length for child process.
+
+examples:
+
+~/build$ make check subdirs=string \
+test-wrapper='~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16'
+
+~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16 \
+make test t=string/test-memcpy
+
+~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 32 \
+./debugglibc.sh string/test-memmove
+
+~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 64 \
+./testrun.sh string/test-memset
+"""

Thanks.
Naohiro


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX
  2021-05-10  1:45 ` naohirot
@ 2021-05-14 13:35   ` Szabolcs Nagy
  2021-05-19  0:11     ` naohirot
  0 siblings, 1 reply; 36+ messages in thread
From: Szabolcs Nagy @ 2021-05-14 13:35 UTC (permalink / raw)
  To: naohirot, Carlos O'Donell; +Cc: Wilco Dijkstra, Florian Weimer, libc-alpha

The 05/10/2021 01:45, naohirot@fujitsu.com wrote:
> FYI: Fujitsu has submitted the signed assignment finally.

Carlos, can we commit patches from fujitsu now?
(i dont know if we are still waiting for something)

^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX
  2021-05-14 13:35   ` Szabolcs Nagy
@ 2021-05-19  0:11     ` naohirot
  0 siblings, 0 replies; 36+ messages in thread
From: naohirot @ 2021-05-19  0:11 UTC (permalink / raw)
  To: 'Szabolcs Nagy', Carlos O'Donell
  Cc: Wilco Dijkstra, Florian Weimer, libc-alpha

Hi Szabolcs, Carlos,

> From: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
> Sent: Friday, May 14, 2021 10:36 PM
> 
> The 05/10/2021 01:45, naohirot@fujitsu.com wrote:
> > FYI: Fujitsu has submitted the signed assignment finally.
> 
> Carlos, can we commit patches from fujitsu now?
> (i don't know if we are still waiting for something)

Fujitsu has received the FSF-signed assignment.
So the contract process is complete.

Thanks.
Naohiro


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v2 5/6] scripts: Added Vector Length Set test helper script
  2021-05-12  9:29   ` [PATCH v2 5/6] scripts: Added Vector Length Set test helper script Naohiro Tamura
  2021-05-12 16:58     ` Joseph Myers
@ 2021-05-20  7:34     ` Naohiro Tamura
  2021-05-26 10:24       ` Szabolcs Nagy
  1 sibling, 1 reply; 36+ messages in thread
From: Naohiro Tamura @ 2021-05-20  7:34 UTC (permalink / raw)
  To: libc-alpha; +Cc: Naohiro Tamura

From: Naohiro Tamura <naohirot@jp.fujitsu.com>

Let me send the whole updated patch.
Thanks.
Naohiro

-- >8 --
Subject: [PATCH v2 5/6] aarch64: Added Vector Length Set test helper script

This patch adds a test helper script to change the Vector Length for a
child process. The script can be used as a test-wrapper for 'make check'.

Usage examples:

~/build$ make check subdirs=string \
test-wrapper='~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16'

~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16 \
make test t=string/test-memcpy

~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 32 \
./debugglibc.sh string/test-memmove

~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 64 \
./testrun.sh string/test-memset
---
 INSTALL                                   |  4 ++
 manual/install.texi                       |  3 +
 sysdeps/unix/sysv/linux/aarch64/vltest.py | 82 +++++++++++++++++++++++
 3 files changed, 89 insertions(+)
 create mode 100755 sysdeps/unix/sysv/linux/aarch64/vltest.py

diff --git a/INSTALL b/INSTALL
index 065a568585e6..bc761ab98bbf 100644
--- a/INSTALL
+++ b/INSTALL
@@ -380,6 +380,10 @@ the same syntax as 'test-wrapper-env', the only difference in its
 semantics being starting with an empty set of environment variables
 rather than the ambient set.
 
+   For AArch64 with SVE, when testing the GNU C Library, 'test-wrapper'
+may be set to "SRCDIR/sysdeps/unix/sysv/linux/aarch64/vltest.py
+VECTOR-LENGTH" to change Vector Length.
+
 Installing the C Library
 ========================
 
diff --git a/manual/install.texi b/manual/install.texi
index eb41fbd0b5ab..f1d858fb789c 100644
--- a/manual/install.texi
+++ b/manual/install.texi
@@ -418,6 +418,9 @@ use has the same syntax as @samp{test-wrapper-env}, the only
 difference in its semantics being starting with an empty set of
 environment variables rather than the ambient set.
 
+For AArch64 with SVE, when testing @theglibc{}, @samp{test-wrapper}
+may be set to "@var{srcdir}/sysdeps/unix/sysv/linux/aarch64/vltest.py
+@var{vector-length}" to change Vector Length.
 
 @node Running make install
 @appendixsec Installing the C Library
diff --git a/sysdeps/unix/sysv/linux/aarch64/vltest.py b/sysdeps/unix/sysv/linux/aarch64/vltest.py
new file mode 100755
index 000000000000..bed62ad151e0
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/aarch64/vltest.py
@@ -0,0 +1,82 @@
+#!/usr/bin/python3
+# Set Scalable Vector Length test helper
+# Copyright (C) 2021 Free Software Foundation, Inc.
+# This file is part of the GNU C Library.
+#
+# The GNU C Library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# The GNU C Library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with the GNU C Library; if not, see
+# <https://www.gnu.org/licenses/>.
+"""Set Scalable Vector Length test helper.
+
+Set Scalable Vector Length for child process.
+
+examples:
+
+~/build$ make check subdirs=string \
+test-wrapper='~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16'
+
+~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16 \
+make test t=string/test-memcpy
+
+~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 32 \
+./debugglibc.sh string/test-memmove
+
+~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 64 \
+./testrun.sh string/test-memset
+"""
+import argparse
+from ctypes import cdll, CDLL
+import os
+import sys
+
+EXIT_SUCCESS = 0
+EXIT_FAILURE = 1
+EXIT_UNSUPPORTED = 77
+
+AT_HWCAP = 16
+HWCAP_SVE = (1 << 22)
+
+PR_SVE_GET_VL = 51
+PR_SVE_SET_VL = 50
+PR_SVE_SET_VL_ONEXEC = (1 << 18)
+PR_SVE_VL_INHERIT = (1 << 17)
+PR_SVE_VL_LEN_MASK = 0xffff
+
+def main(args):
+    libc = CDLL("libc.so.6")
+    if not libc.getauxval(AT_HWCAP) & HWCAP_SVE:
+        print("CPU doesn't support SVE")
+        sys.exit(EXIT_UNSUPPORTED)
+
+    libc.prctl(PR_SVE_SET_VL,
+               args.vl[0] | PR_SVE_SET_VL_ONEXEC | PR_SVE_VL_INHERIT)
+    os.execvp(args.args[0], args.args)
+    print("exec system call failure")
+    sys.exit(EXIT_FAILURE)
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description=
+            "Set Scalable Vector Length test helper",
+            formatter_class=argparse.ArgumentDefaultsHelpFormatter)
+
+    # positional argument
+    parser.add_argument("vl", nargs=1, type=int,
+                        choices=range(16, 257, 16),
+                        help=('vector length '\
+                              'which is multiples of 16 from 16 to 256'))
+    # remainDer arguments
+    parser.add_argument('args', nargs=argparse.REMAINDER,
+                        help=('args '\
+                              'which is passed to child process'))
+    args = parser.parse_args()
+    main(args)
-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread
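
For reference, what vltest.py does before exec'ing the test amounts to one
getauxval() check and one prctl() call.  A minimal C sketch of the same
sequence follows; it is an illustration only (not part of the patch series)
and assumes an AArch64 Linux system whose headers provide PR_SVE_SET_VL and
HWCAP_SVE.

  /* Illustration only: set the SVE vector length for an exec'ed child,
     mirroring what vltest.py does through ctypes.  */
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/auxv.h>
  #include <sys/prctl.h>
  #include <asm/hwcap.h>	/* HWCAP_SVE (AArch64 only) */

  int
  main (int argc, char **argv)
  {
    long vl = 16;		/* requested vector length in bytes */

    if (argc < 2)
      return 1;
    if (!(getauxval (AT_HWCAP) & HWCAP_SVE))
      return 77;		/* EXIT_UNSUPPORTED */

    /* Set the VL for the program about to be exec'ed and let its children
       inherit it, the same flag combination the script uses.  */
    if (prctl (PR_SVE_SET_VL, vl | PR_SVE_SET_VL_ONEXEC | PR_SVE_VL_INHERIT) < 0)
      {
        perror ("prctl");
        return 1;
      }

    execvp (argv[1], &argv[1]);
    perror ("execvp");
    return 1;
  }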

* Re: [PATCH v2 1/6] config: Added HAVE_AARCH64_SVE_ASM for aarch64
  2021-05-12  9:26   ` [PATCH v2 1/6] config: Added HAVE_AARCH64_SVE_ASM for aarch64 Naohiro Tamura
@ 2021-05-26 10:05     ` Szabolcs Nagy
  0 siblings, 0 replies; 36+ messages in thread
From: Szabolcs Nagy @ 2021-05-26 10:05 UTC (permalink / raw)
  To: Naohiro Tamura; +Cc: libc-alpha, Naohiro Tamura

The 05/12/2021 09:26, Naohiro Tamura wrote:
> From: Naohiro Tamura <naohirot@jp.fujitsu.com>
> 
> This patch checks whether the assembler supports '-march=armv8.2-a+sve'
> for generating SVE code, and if so defines the HAVE_AARCH64_SVE_ASM macro.

this is ok for master.

i will commit it for you.

> ---
>  config.h.in                  |  5 +++++
>  sysdeps/aarch64/configure    | 28 ++++++++++++++++++++++++++++
>  sysdeps/aarch64/configure.ac | 15 +++++++++++++++
>  3 files changed, 48 insertions(+)
> 
> diff --git a/config.h.in b/config.h.in
> index 99036b887f..13fba9bb8d 100644
> --- a/config.h.in
> +++ b/config.h.in
> @@ -121,6 +121,11 @@
>  /* AArch64 PAC-RET code generation is enabled.  */
>  #define HAVE_AARCH64_PAC_RET 0
>  
> +/* Assembler supports ARMv8.2-A SVE.
> +   This macro becomes obsolete when glibc increases the minimum
> +   required version of GNU 'binutils' to 2.28 or later. */
> +#define HAVE_AARCH64_SVE_ASM 0
> +
>  /* ARC big endian ABI */
>  #undef HAVE_ARC_BE
>  
> diff --git a/sysdeps/aarch64/configure b/sysdeps/aarch64/configure
> index 83c3a23e44..4c1fac49f3 100644
> --- a/sysdeps/aarch64/configure
> +++ b/sysdeps/aarch64/configure
> @@ -304,3 +304,31 @@ fi
>  $as_echo "$libc_cv_aarch64_variant_pcs" >&6; }
>  config_vars="$config_vars
>  aarch64-variant-pcs = $libc_cv_aarch64_variant_pcs"
> +
> +# Check if the assembler supports armv8.2-a+sve
> +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for SVE support in assembler" >&5
> +$as_echo_n "checking for SVE support in assembler... " >&6; }
> +if ${libc_cv_asm_sve+:} false; then :
> +  $as_echo_n "(cached) " >&6
> +else
> +  cat > conftest.s <<\EOF
> +        ptrue p0.b
> +EOF
> +if { ac_try='${CC-cc} -c -march=armv8.2-a+sve conftest.s 1>&5'
> +  { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_try\""; } >&5
> +  (eval $ac_try) 2>&5
> +  ac_status=$?
> +  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
> +  test $ac_status = 0; }; }; then
> +  libc_cv_aarch64_sve_asm=yes
> +else
> +  libc_cv_aarch64_sve_asm=no
> +fi
> +rm -f conftest*
> +fi
> +{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $libc_cv_asm_sve" >&5
> +$as_echo "$libc_cv_asm_sve" >&6; }
> +if test $libc_cv_aarch64_sve_asm = yes; then
> +  $as_echo "#define HAVE_AARCH64_SVE_ASM 1" >>confdefs.h
> +
> +fi
> diff --git a/sysdeps/aarch64/configure.ac b/sysdeps/aarch64/configure.ac
> index 66f755078a..3347c13fa1 100644
> --- a/sysdeps/aarch64/configure.ac
> +++ b/sysdeps/aarch64/configure.ac
> @@ -90,3 +90,18 @@ EOF
>    fi
>    rm -rf conftest.*])
>  LIBC_CONFIG_VAR([aarch64-variant-pcs], [$libc_cv_aarch64_variant_pcs])
> +
> +# Check if the assembler supports armv8.2-a+sve
> +AC_CACHE_CHECK(for SVE support in assembler, libc_cv_asm_sve, [dnl
> +cat > conftest.s <<\EOF
> +        ptrue p0.b
> +EOF
> +if AC_TRY_COMMAND(${CC-cc} -c -march=armv8.2-a+sve conftest.s 1>&AS_MESSAGE_LOG_FD); then
> +  libc_cv_aarch64_sve_asm=yes
> +else
> +  libc_cv_aarch64_sve_asm=no
> +fi
> +rm -f conftest*])
> +if test $libc_cv_aarch64_sve_asm = yes; then
> +  AC_DEFINE(HAVE_AARCH64_SVE_ASM)
> +fi
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread
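
The check above amounts to asking whether the assembler accepts a single SVE
instruction when driven with '-march=armv8.2-a+sve'.  As a rough stand-alone
illustration (an approximation, not the conftest that configure actually
generates), the same probe can be written as a one-line C translation unit:

  /* sve-probe.c: emits one SVE instruction through a toplevel asm.
     Compiling it with "cc -c -march=armv8.2-a+sve sve-probe.c" succeeds
     only if the assembler understands SVE, which is the condition the
     HAVE_AARCH64_SVE_ASM macro records.  */
  __asm__ ("ptrue p0.b");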

* Re: [PATCH v2 2/6] aarch64: define BTI_C and BTI_J macros as NOP unless HAVE_AARCH64_BTI
  2021-05-12  9:27   ` [PATCH v2 2/6] aarch64: define BTI_C and BTI_J macros as NOP unless HAVE_AARCH64_BTI Naohiro Tamura
@ 2021-05-26 10:06     ` Szabolcs Nagy
  0 siblings, 0 replies; 36+ messages in thread
From: Szabolcs Nagy @ 2021-05-26 10:06 UTC (permalink / raw)
  To: Naohiro Tamura; +Cc: libc-alpha, Naohiro Tamura

The 05/12/2021 09:27, Naohiro Tamura wrote:
> From: Naohiro Tamura <naohirot@jp.fujitsu.com>
> 
> This patch defines BTI_C and BTI_J macros conditionally for
> performance.
> If HAVE_AARCH64_BTI is true, BTI_C and BTI_J are defined as HINT
> instruction for ARMv8.5 BTI (Branch Target Identification).
> If HAVE_AARCH64_BTI is false, both BTI_C and BTI_J are defined as
> NOP.

thanks. this is ok for master.

i will commit it.

> ---
>  sysdeps/aarch64/sysdep.h | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/sysdeps/aarch64/sysdep.h b/sysdeps/aarch64/sysdep.h
> index 90acca4e42..b936e29cbd 100644
> --- a/sysdeps/aarch64/sysdep.h
> +++ b/sysdeps/aarch64/sysdep.h
> @@ -62,8 +62,13 @@ strip_pac (void *p)
>  #define ASM_SIZE_DIRECTIVE(name) .size name,.-name
>  
>  /* Branch Target Identification support.  */
> -#define BTI_C		hint	34
> -#define BTI_J		hint	36
> +#if HAVE_AARCH64_BTI
> +# define BTI_C		hint	34
> +# define BTI_J		hint	36
> +#else
> +# define BTI_C		nop
> +# define BTI_J		nop
> +#endif
>  
>  /* Return address signing support (pac-ret).  */
>  #define PACIASP		hint	25
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 3/6] aarch64: Added optimized memcpy and memmove for A64FX
  2021-05-12  9:28   ` [PATCH v2 3/6] aarch64: Added optimized memcpy and memmove for A64FX Naohiro Tamura
@ 2021-05-26 10:19     ` Szabolcs Nagy
  0 siblings, 0 replies; 36+ messages in thread
From: Szabolcs Nagy @ 2021-05-26 10:19 UTC (permalink / raw)
  To: Naohiro Tamura; +Cc: libc-alpha, Naohiro Tamura

The 05/12/2021 09:28, Naohiro Tamura wrote:
> From: Naohiro Tamura <naohirot@jp.fujitsu.com>
> 
> This patch optimizes the performance of memcpy/memmove for A64FX [1]
> which implements ARMv8-A SVE and has L1 64KB cache per core and L2 8MB
> cache per NUMA node.
> 
> The performance optimization makes use of Scalable Vector Register
> with several techniques such as loop unrolling, memory access
> alignment, cache zero fill, and software pipelining.
> 
> SVE assembler code for memcpy/memmove is implemented as Vector Length
> Agnostic code so theoretically it can be run on any SOC which supports
> ARMv8-A SVE standard.
> 
> We confirmed that all testcases have been passed by running 'make
> check' and 'make xcheck' not only on A64FX but also on ThunderX2.
> 
> And also we confirmed that the SVE 512 bit vector register performance
> is roughly 4 times better than Advanced SIMD 128 bit register and 8
> times better than scalar 64 bit register by running 'make bench'.
> 
> [1] https://github.com/fujitsu/A64FX

thanks. this looks ok, except for whitespace usage.

can you please send a version with fixed whitespaces?

> --- a/sysdeps/aarch64/multiarch/memcpy.c
> +++ b/sysdeps/aarch64/multiarch/memcpy.c
> @@ -33,6 +33,9 @@ extern __typeof (__redirect_memcpy) __memcpy_simd attribute_hidden;
>  extern __typeof (__redirect_memcpy) __memcpy_thunderx attribute_hidden;
>  extern __typeof (__redirect_memcpy) __memcpy_thunderx2 attribute_hidden;
>  extern __typeof (__redirect_memcpy) __memcpy_falkor attribute_hidden;
> +#if HAVE_AARCH64_SVE_ASM
> +extern __typeof (__redirect_memcpy) __memcpy_a64fx attribute_hidden;
> +#endif
>  
>  libc_ifunc (__libc_memcpy,
>              (IS_THUNDERX (midr)
> @@ -44,8 +47,13 @@ libc_ifunc (__libc_memcpy,
>  		  : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
>  		     || IS_NEOVERSE_V1 (midr)
>  		     ? __memcpy_simd
> -		     : __memcpy_generic)))));
> -
> +#if HAVE_AARCH64_SVE_ASM
> +                     : (IS_A64FX (midr)
> +                        ? __memcpy_a64fx
> +                        : __memcpy_generic))))));
> +#else
> +                     : __memcpy_generic)))));
> +#endif

glibc uses a mix of tabs and spaces, you used space only.

> --- /dev/null
> +++ b/sysdeps/aarch64/multiarch/memcpy_a64fx.S
> @@ -0,0 +1,405 @@
> +/* Optimized memcpy for Fujitsu A64FX processor.
> +   Copyright (C) 2012-2021 Free Software Foundation, Inc.
> +
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library.  If not, see
> +   <https://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +
> +#if HAVE_AARCH64_SVE_ASM
> +#if IS_IN (libc)
> +# define MEMCPY __memcpy_a64fx
> +# define MEMMOVE __memmove_a64fx
> +
> +/* Assumptions:
> + *
> + * ARMv8.2-a, AArch64, unaligned accesses, sve
> + *
> + */
> +
> +#define L2_SIZE         (8*1024*1024)/2 // L2 8MB/2
> +#define CACHE_LINE_SIZE 256
> +#define ZF_DIST         (CACHE_LINE_SIZE * 21)  // Zerofill distance
> +#define dest            x0
> +#define src             x1
> +#define n               x2      // size
> +#define tmp1            x3
> +#define tmp2            x4
> +#define tmp3            x5
> +#define rest            x6
> +#define dest_ptr        x7
> +#define src_ptr         x8
> +#define vector_length   x9
> +#define cl_remainder    x10     // CACHE_LINE_SIZE remainder
> +
> +    .arch armv8.2-a+sve
> +
> +    .macro dc_zva times
> +    dc          zva, tmp1
> +    add         tmp1, tmp1, CACHE_LINE_SIZE
> +    .if \times-1
> +    dc_zva "(\times-1)"
> +    .endif
> +    .endm
> +
> +    .macro ld1b_unroll8
> +    ld1b        z0.b, p0/z, [src_ptr, #0, mul vl]
> +    ld1b        z1.b, p0/z, [src_ptr, #1, mul vl]
> +    ld1b        z2.b, p0/z, [src_ptr, #2, mul vl]
> +    ld1b        z3.b, p0/z, [src_ptr, #3, mul vl]
> +    ld1b        z4.b, p0/z, [src_ptr, #4, mul vl]
> +    ld1b        z5.b, p0/z, [src_ptr, #5, mul vl]
> +    ld1b        z6.b, p0/z, [src_ptr, #6, mul vl]
> +    ld1b        z7.b, p0/z, [src_ptr, #7, mul vl]
> +    .endm
...

please indent all asm code with one tab, see other asm files.

> --- a/sysdeps/aarch64/multiarch/memmove.c
> +++ b/sysdeps/aarch64/multiarch/memmove.c
> @@ -33,6 +33,9 @@ extern __typeof (__redirect_memmove) __memmove_simd attribute_hidden;
>  extern __typeof (__redirect_memmove) __memmove_thunderx attribute_hidden;
>  extern __typeof (__redirect_memmove) __memmove_thunderx2 attribute_hidden;
>  extern __typeof (__redirect_memmove) __memmove_falkor attribute_hidden;
> +#if HAVE_AARCH64_SVE_ASM
> +extern __typeof (__redirect_memmove) __memmove_a64fx attribute_hidden;
> +#endif
>  
>  libc_ifunc (__libc_memmove,
>              (IS_THUNDERX (midr)
> @@ -44,8 +47,13 @@ libc_ifunc (__libc_memmove,
>  		  : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
>  		     || IS_NEOVERSE_V1 (midr)
>  		     ? __memmove_simd
> -		     : __memmove_generic)))));
> -
> +#if HAVE_AARCH64_SVE_ASM
> +                     : (IS_A64FX (midr)
> +                        ? __memmove_a64fx
> +                        : __memmove_generic))))));
> +#else
> +                        : __memmove_generic)))));
> +#endif

same as above.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 4/6] aarch64: Added optimized memset for A64FX
  2021-05-12  9:28   ` [PATCH v2 4/6] aarch64: Added optimized memset " Naohiro Tamura
@ 2021-05-26 10:22     ` Szabolcs Nagy
  0 siblings, 0 replies; 36+ messages in thread
From: Szabolcs Nagy @ 2021-05-26 10:22 UTC (permalink / raw)
  To: Naohiro Tamura; +Cc: libc-alpha, Naohiro Tamura

The 05/12/2021 09:28, Naohiro Tamura wrote:
> From: Naohiro Tamura <naohirot@jp.fujitsu.com>
> 
> This patch optimizes the performance of memset for A64FX [1] which
> implements ARMv8-A SVE and has L1 64KB cache per core and L2 8MB cache
> per NUMA node.
> 
> The performance optimization makes use of Scalable Vector Register
> with several techniques such as loop unrolling, memory access
> alignment, cache zero fill and prefetch.
> 
> SVE assembler code for memset is implemented as Vector Length Agnostic
> code so theoretically it can be run on any SOC which supports ARMv8-A
> SVE standard.
> 
> We confirmed that all testcases have been passed by running 'make
> check' and 'make xcheck' not only on A64FX but also on ThunderX2.
> 
> And also we confirmed that the SVE 512 bit vector register performance
> is roughly 4 times better than Advanced SIMD 128 bit register and 8
> times better than scalar 64 bit register by running 'make bench'.
> 
> [1] https://github.com/fujitsu/A64FX

thanks, this looks good, except for whitespace.

can you please send a version with fixed whitespaces?

> --- a/sysdeps/aarch64/multiarch/memset.c
> +++ b/sysdeps/aarch64/multiarch/memset.c
...
> -	       : __memset_generic)));
> +#if HAVE_AARCH64_SVE_ASM
> +	       : (IS_A64FX (midr)
> +		  ? __memset_a64fx
> +	          : __memset_generic))));
> +#else
> +	          : __memset_generic)));
> +#endif

replace 8 spaces with 1 tab.

> --- /dev/null
> +++ b/sysdeps/aarch64/multiarch/memset_a64fx.S
...
> +    .arch armv8.2-a+sve
> +
> +    .macro dc_zva times
> +    dc          zva, tmp1
> +    add         tmp1, tmp1, CACHE_LINE_SIZE
> +    .if \times-1
> +    dc_zva "(\times-1)"
> +    .endif
> +    .endm

use 1 tab indentation throughout.

^ permalink raw reply	[flat|nested] 36+ messages in thread
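
The key property called out in the description is that the memset kernel is
Vector Length Agnostic: a single predicated loop handles any SVE vector
length, with the unrolling, the L1/L2 size thresholds and the DC ZVA
zero-fill layered on top.  A rough C sketch of that core idea using the ACLE
SVE intrinsics (illustration only, not the submitted code) looks like this:

  /* Rough VLA memset sketch with ACLE SVE intrinsics; build with
     -march=armv8.2-a+sve.  Only the predicated loop is shown, none of
     the unrolling or cache management done by memset_a64fx.S.  */
  #include <arm_sve.h>
  #include <stddef.h>
  #include <stdint.h>

  void *
  vla_memset (void *dst, int c, size_t n)
  {
    uint8_t *d = dst;
    svuint8_t v = svdup_n_u8 ((uint8_t) c);

    /* svwhilelt builds a predicate for the bytes still to be written, so
       the same loop works for any vector length reported by svcntb ().  */
    for (size_t i = 0; i < n; i += svcntb ())
      {
        svbool_t pg = svwhilelt_b8_u64 (i, n);
        svst1_u8 (pg, d + i, v);
      }
    return dst;
  }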

* Re: [PATCH v2 5/6] scripts: Added Vector Length Set test helper script
  2021-05-20  7:34     ` Naohiro Tamura
@ 2021-05-26 10:24       ` Szabolcs Nagy
  0 siblings, 0 replies; 36+ messages in thread
From: Szabolcs Nagy @ 2021-05-26 10:24 UTC (permalink / raw)
  To: Naohiro Tamura; +Cc: libc-alpha, Naohiro Tamura

The 05/20/2021 07:34, Naohiro Tamura wrote:
> From: Naohiro Tamura <naohirot@jp.fujitsu.com>
> 
> Let me send the whole updated patch.
> Thanks.
> Naohiro
> 
> -- >8 --
> Subject: [PATCH v2 5/6] aarch64: Added Vector Length Set test helper script
> 
> This patch adds a test helper script to change the Vector Length for a
> child process. The script can be used as a test-wrapper for 'make check'.
> 
> Usage examples:
> 
> ~/build$ make check subdirs=string \
> test-wrapper='~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16'
> 
> ~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16 \
> make test t=string/test-memcpy
> 
> ~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 32 \
> ./debugglibc.sh string/test-memmove
> 
> ~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 64 \
> ./testrun.sh string/test-memset

thanks, this is ok for master.
i will commit it.


> ---
>  INSTALL                                   |  4 ++
>  manual/install.texi                       |  3 +
>  sysdeps/unix/sysv/linux/aarch64/vltest.py | 82 +++++++++++++++++++++++
>  3 files changed, 89 insertions(+)
>  create mode 100755 sysdeps/unix/sysv/linux/aarch64/vltest.py
> 
> diff --git a/INSTALL b/INSTALL
> index 065a568585e6..bc761ab98bbf 100644
> --- a/INSTALL
> +++ b/INSTALL
> @@ -380,6 +380,10 @@ the same syntax as 'test-wrapper-env', the only difference in its
>  semantics being starting with an empty set of environment variables
>  rather than the ambient set.
>  
> +   For AArch64 with SVE, when testing the GNU C Library, 'test-wrapper'
> +may be set to "SRCDIR/sysdeps/unix/sysv/linux/aarch64/vltest.py
> +VECTOR-LENGTH" to change Vector Length.
> +
>  Installing the C Library
>  ========================
>  
> diff --git a/manual/install.texi b/manual/install.texi
> index eb41fbd0b5ab..f1d858fb789c 100644
> --- a/manual/install.texi
> +++ b/manual/install.texi
> @@ -418,6 +418,9 @@ use has the same syntax as @samp{test-wrapper-env}, the only
>  difference in its semantics being starting with an empty set of
>  environment variables rather than the ambient set.
>  
> +For AArch64 with SVE, when testing @theglibc{}, @samp{test-wrapper}
> +may be set to "@var{srcdir}/sysdeps/unix/sysv/linux/aarch64/vltest.py
> +@var{vector-length}" to change Vector Length.
>  
>  @node Running make install
>  @appendixsec Installing the C Library
> diff --git a/sysdeps/unix/sysv/linux/aarch64/vltest.py b/sysdeps/unix/sysv/linux/aarch64/vltest.py
> new file mode 100755
> index 000000000000..bed62ad151e0
> --- /dev/null
> +++ b/sysdeps/unix/sysv/linux/aarch64/vltest.py
> @@ -0,0 +1,82 @@
> +#!/usr/bin/python3
> +# Set Scalable Vector Length test helper
> +# Copyright (C) 2021 Free Software Foundation, Inc.
> +# This file is part of the GNU C Library.
> +#
> +# The GNU C Library is free software; you can redistribute it and/or
> +# modify it under the terms of the GNU Lesser General Public
> +# License as published by the Free Software Foundation; either
> +# version 2.1 of the License, or (at your option) any later version.
> +#
> +# The GNU C Library is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +# Lesser General Public License for more details.
> +#
> +# You should have received a copy of the GNU Lesser General Public
> +# License along with the GNU C Library; if not, see
> +# <https://www.gnu.org/licenses/>.
> +"""Set Scalable Vector Length test helper.
> +
> +Set Scalable Vector Length for child process.
> +
> +examples:
> +
> +~/build$ make check subdirs=string \
> +test-wrapper='~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16'
> +
> +~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 16 \
> +make test t=string/test-memcpy
> +
> +~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 32 \
> +./debugglibc.sh string/test-memmove
> +
> +~/build$ ~/glibc/sysdeps/unix/sysv/linux/aarch64/vltest.py 64 \
> +./testrun.sh string/test-memset
> +"""
> +import argparse
> +from ctypes import cdll, CDLL
> +import os
> +import sys
> +
> +EXIT_SUCCESS = 0
> +EXIT_FAILURE = 1
> +EXIT_UNSUPPORTED = 77
> +
> +AT_HWCAP = 16
> +HWCAP_SVE = (1 << 22)
> +
> +PR_SVE_GET_VL = 51
> +PR_SVE_SET_VL = 50
> +PR_SVE_SET_VL_ONEXEC = (1 << 18)
> +PR_SVE_VL_INHERIT = (1 << 17)
> +PR_SVE_VL_LEN_MASK = 0xffff
> +
> +def main(args):
> +    libc = CDLL("libc.so.6")
> +    if not libc.getauxval(AT_HWCAP) & HWCAP_SVE:
> +        print("CPU doesn't support SVE")
> +        sys.exit(EXIT_UNSUPPORTED)
> +
> +    libc.prctl(PR_SVE_SET_VL,
> +               args.vl[0] | PR_SVE_SET_VL_ONEXEC | PR_SVE_VL_INHERIT)
> +    os.execvp(args.args[0], args.args)
> +    print("exec system call failure")
> +    sys.exit(EXIT_FAILURE)
> +
> +if __name__ == '__main__':
> +    parser = argparse.ArgumentParser(description=
> +            "Set Scalable Vector Length test helper",
> +            formatter_class=argparse.ArgumentDefaultsHelpFormatter)
> +
> +    # positional argument
> +    parser.add_argument("vl", nargs=1, type=int,
> +                        choices=range(16, 257, 16),
> +                        help=('vector length '\
> +                              'which is multiples of 16 from 16 to 256'))
> +    # remainDer arguments
> +    parser.add_argument('args', nargs=argparse.REMAINDER,
> +                        help=('args '\
> +                              'which is passed to child process'))
> +    args = parser.parse_args()
> +    main(args)
> -- 
> 2.17.1
> 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 6/6] benchtests: Fixed bench-memcpy-random: buf1: mprotect failed
  2021-05-12  9:29   ` [PATCH v2 6/6] benchtests: Fixed bench-memcpy-random: buf1: mprotect failed Naohiro Tamura
@ 2021-05-26 10:25     ` Szabolcs Nagy
  0 siblings, 0 replies; 36+ messages in thread
From: Szabolcs Nagy @ 2021-05-26 10:25 UTC (permalink / raw)
  To: Naohiro Tamura; +Cc: libc-alpha, Naohiro Tamura

The 05/12/2021 09:29, Naohiro Tamura wrote:
> From: Naohiro Tamura <naohirot@jp.fujitsu.com>
> 
> This patch fixed an mprotect system call failure on AArch64.
> The failure happened not only on A64FX but also on ThunderX2.
> 
> This patch also updated the JSON key from "max-size" to "length" so that
> 'plot_strings.py' can process 'bench-memcpy-random.out'.

thanks, this is ok for master.
i will commit it.

> ---
>  benchtests/bench-memcpy-random.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/benchtests/bench-memcpy-random.c b/benchtests/bench-memcpy-random.c
> index 9b62033379..c490b73ed0 100644
> --- a/benchtests/bench-memcpy-random.c
> +++ b/benchtests/bench-memcpy-random.c
> @@ -16,7 +16,7 @@
>     License along with the GNU C Library; if not, see
>     <https://www.gnu.org/licenses/>.  */
>  
> -#define MIN_PAGE_SIZE (512*1024+4096)
> +#define MIN_PAGE_SIZE (512*1024+getpagesize())
>  #define TEST_MAIN
>  #define TEST_NAME "memcpy"
>  #include "bench-string.h"
> @@ -160,7 +160,7 @@ do_test (json_ctx_t *json_ctx, size_t max_size)
>      }
>  
>    json_element_object_begin (json_ctx);
> -  json_attr_uint (json_ctx, "max-size", (double) max_size);
> +  json_attr_uint (json_ctx, "length", (double) max_size);
>    json_array_begin (json_ctx, "timings");
>  
>    FOR_EACH_IMPL (impl, 0)
> -- 
> 2.17.1
> 

-- 

^ permalink raw reply	[flat|nested] 36+ messages in thread

* RE: [PATCH v2 0/6] aarch64: Added optimized memcpy/memmove/memset for A64FX
  2021-05-12  9:23 ` [PATCH v2 0/6] aarch64: " Naohiro Tamura
                     ` (5 preceding siblings ...)
  2021-05-12  9:29   ` [PATCH v2 6/6] benchtests: Fixed bench-memcpy-random: buf1: mprotect failed Naohiro Tamura
@ 2021-05-27  0:22   ` naohirot
  2021-05-27 23:50     ` naohirot
  2021-05-27  7:42   ` [PATCH v3 1/2] aarch64: Added optimized memcpy and memmove " Naohiro Tamura
  2021-05-27  7:44   ` [PATCH v3 2/2] aarch64: Added optimized memset " Naohiro Tamura
  8 siblings, 1 reply; 36+ messages in thread
From: naohirot @ 2021-05-27  0:22 UTC (permalink / raw)
  To: 'Szabolcs Nagy', libc-alpha

Hi Szabolcs,

>   config: Added HAVE_AARCH64_SVE_ASM for aarch64
>   aarch64: define BTI_C and BTI_J macros as NOP unless HAVE_AARCH64_BTI
>   scripts: Added Vector Length Set test helper script
>   benchtests: Fixed bench-memcpy-random: buf1: mprotect failed

Thank you for the merges!

>   aarch64: Added optimized memcpy and memmove for A64FX
>   aarch64: Added optimized memset for A64FX

I'll fix the whitespaces.

Thanks
Naohiro



^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v3 1/2] aarch64: Added optimized memcpy and memmove for A64FX
  2021-05-12  9:23 ` [PATCH v2 0/6] aarch64: " Naohiro Tamura
                     ` (6 preceding siblings ...)
  2021-05-27  0:22   ` [PATCH v2 0/6] aarch64: Added optimized memcpy/memmove/memset for A64FX naohirot
@ 2021-05-27  7:42   ` Naohiro Tamura
  2021-05-27  7:44   ` [PATCH v3 2/2] aarch64: Added optimized memset " Naohiro Tamura
  8 siblings, 0 replies; 36+ messages in thread
From: Naohiro Tamura @ 2021-05-27  7:42 UTC (permalink / raw)
  To: libc-alpha; +Cc: Naohiro Tamura

From: Naohiro Tamura <naohirot@jp.fujitsu.com>

This patch optimizes the performance of memcpy/memmove for A64FX [1]
which implements ARMv8-A SVE and has L1 64KB cache per core and L2 8MB
cache per NUMA node.

The performance optimization makes use of Scalable Vector Register
with several techniques such as loop unrolling, memory access
alignment, cache zero fill, and software pipelining.

SVE assembler code for memcpy/memmove is implemented as Vector Length
Agnostic code so theoretically it can be run on any SOC which supports
ARMv8-A SVE standard.

We confirmed that all testcases have been passed by running 'make
check' and 'make xcheck' not only on A64FX but also on ThunderX2.

And also we confirmed that the SVE 512 bit vector register performance
is roughly 4 times better than Advanced SIMD 128 bit register and 8
times better than scalar 64 bit register by running 'make bench'.

[1] https://github.com/fujitsu/A64FX

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Reviewed-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
---
 manual/tunables.texi                          |   3 +-
 sysdeps/aarch64/multiarch/Makefile            |   2 +-
 sysdeps/aarch64/multiarch/ifunc-impl-list.c   |   8 +-
 sysdeps/aarch64/multiarch/init-arch.h         |   4 +-
 sysdeps/aarch64/multiarch/memcpy.c            |  18 +-
 sysdeps/aarch64/multiarch/memcpy_a64fx.S      | 406 ++++++++++++++++++
 sysdeps/aarch64/multiarch/memmove.c           |  18 +-
 .../unix/sysv/linux/aarch64/cpu-features.c    |   4 +
 .../unix/sysv/linux/aarch64/cpu-features.h    |   4 +
 9 files changed, 453 insertions(+), 14 deletions(-)
 create mode 100644 sysdeps/aarch64/multiarch/memcpy_a64fx.S

diff --git a/manual/tunables.texi b/manual/tunables.texi
index 6de647b4262c..fe7c1313ccc4 100644
--- a/manual/tunables.texi
+++ b/manual/tunables.texi
@@ -454,7 +454,8 @@ This tunable is specific to powerpc, powerpc64 and powerpc64le.
 The @code{glibc.cpu.name=xxx} tunable allows the user to tell @theglibc{} to
 assume that the CPU is @code{xxx} where xxx may have one of these values:
 @code{generic}, @code{falkor}, @code{thunderxt88}, @code{thunderx2t99},
-@code{thunderx2t99p1}, @code{ares}, @code{emag}, @code{kunpeng}.
+@code{thunderx2t99p1}, @code{ares}, @code{emag}, @code{kunpeng},
+@code{a64fx}.
 
 This tunable is specific to aarch64.
 @end deftp
diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
index dc3efffb36b6..04c3f171215e 100644
--- a/sysdeps/aarch64/multiarch/Makefile
+++ b/sysdeps/aarch64/multiarch/Makefile
@@ -1,6 +1,6 @@
 ifeq ($(subdir),string)
 sysdep_routines += memcpy_generic memcpy_advsimd memcpy_thunderx memcpy_thunderx2 \
-		   memcpy_falkor \
+		   memcpy_falkor memcpy_a64fx \
 		   memset_generic memset_falkor memset_emag memset_kunpeng \
 		   memchr_generic memchr_nosimd \
 		   strlen_mte strlen_asimd
diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
index 99a8c68aaca0..911393565c21 100644
--- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
@@ -25,7 +25,7 @@
 #include <stdio.h>
 
 /* Maximum number of IFUNC implementations.  */
-#define MAX_IFUNC	4
+#define MAX_IFUNC	7
 
 size_t
 __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
@@ -43,12 +43,18 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 	      IFUNC_IMPL_ADD (array, i, memcpy, !bti, __memcpy_thunderx2)
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_falkor)
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_simd)
+#if HAVE_AARCH64_SVE_ASM
+	      IFUNC_IMPL_ADD (array, i, memcpy, sve, __memcpy_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_generic))
   IFUNC_IMPL (i, name, memmove,
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_thunderx)
 	      IFUNC_IMPL_ADD (array, i, memmove, !bti, __memmove_thunderx2)
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_falkor)
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_simd)
+#if HAVE_AARCH64_SVE_ASM
+	      IFUNC_IMPL_ADD (array, i, memmove, sve, __memmove_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_generic))
   IFUNC_IMPL (i, name, memset,
 	      /* Enable this on non-falkor processors too so that other cores
diff --git a/sysdeps/aarch64/multiarch/init-arch.h b/sysdeps/aarch64/multiarch/init-arch.h
index a167699e74f4..6d92c1bcff6a 100644
--- a/sysdeps/aarch64/multiarch/init-arch.h
+++ b/sysdeps/aarch64/multiarch/init-arch.h
@@ -33,4 +33,6 @@
   bool __attribute__((unused)) bti =					      \
     HAVE_AARCH64_BTI && GLRO(dl_aarch64_cpu_features).bti;		      \
   bool __attribute__((unused)) mte =					      \
-    MTE_ENABLED ();
+    MTE_ENABLED ();							      \
+  bool __attribute__((unused)) sve =					      \
+    GLRO(dl_aarch64_cpu_features).sve;
diff --git a/sysdeps/aarch64/multiarch/memcpy.c b/sysdeps/aarch64/multiarch/memcpy.c
index 0e0a5cbcfb1b..25e0081eeb51 100644
--- a/sysdeps/aarch64/multiarch/memcpy.c
+++ b/sysdeps/aarch64/multiarch/memcpy.c
@@ -33,6 +33,9 @@ extern __typeof (__redirect_memcpy) __memcpy_simd attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_thunderx attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_thunderx2 attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_falkor attribute_hidden;
+# if HAVE_AARCH64_SVE_ASM
+extern __typeof (__redirect_memcpy) __memcpy_a64fx attribute_hidden;
+# endif
 
 libc_ifunc (__libc_memcpy,
             (IS_THUNDERX (midr)
@@ -40,12 +43,17 @@ libc_ifunc (__libc_memcpy,
 	     : (IS_FALKOR (midr) || IS_PHECDA (midr)
 		? __memcpy_falkor
 		: (IS_THUNDERX2 (midr) || IS_THUNDERX2PA (midr)
-		  ? __memcpy_thunderx2
-		  : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
-		     || IS_NEOVERSE_V1 (midr)
-		     ? __memcpy_simd
+		   ? __memcpy_thunderx2
+		   : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
+		      || IS_NEOVERSE_V1 (midr)
+		      ? __memcpy_simd
+# if HAVE_AARCH64_SVE_ASM
+		     : (IS_A64FX (midr)
+			? __memcpy_a64fx
+			: __memcpy_generic))))));
+# else
 		     : __memcpy_generic)))));
-
+# endif
 # undef memcpy
 strong_alias (__libc_memcpy, memcpy);
 #endif
diff --git a/sysdeps/aarch64/multiarch/memcpy_a64fx.S b/sysdeps/aarch64/multiarch/memcpy_a64fx.S
new file mode 100644
index 000000000000..65528405bb12
--- /dev/null
+++ b/sysdeps/aarch64/multiarch/memcpy_a64fx.S
@@ -0,0 +1,406 @@
+/* Optimized memcpy for Fujitsu A64FX processor.
+   Copyright (C) 2021 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+
+/* Assumptions:
+ *
+ * ARMv8.2-a, AArch64, unaligned accesses, sve
+ *
+ */
+
+#define L2_SIZE		(8*1024*1024)/2	// L2 8MB/2
+#define CACHE_LINE_SIZE	256
+#define ZF_DIST		(CACHE_LINE_SIZE * 21)	// Zerofill distance
+#define dest		x0
+#define src		x1
+#define n		x2	// size
+#define tmp1		x3
+#define tmp2		x4
+#define tmp3		x5
+#define rest		x6
+#define dest_ptr	x7
+#define src_ptr		x8
+#define vector_length	x9
+#define cl_remainder	x10	// CACHE_LINE_SIZE remainder
+
+#if HAVE_AARCH64_SVE_ASM
+# if IS_IN (libc)
+#  define MEMCPY __memcpy_a64fx
+#  define MEMMOVE __memmove_a64fx
+
+	.arch armv8.2-a+sve
+
+	.macro dc_zva times
+	dc	zva, tmp1
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	.if \times-1
+	dc_zva "(\times-1)"
+	.endif
+	.endm
+
+	.macro ld1b_unroll8
+	ld1b	z0.b, p0/z, [src_ptr, #0, mul vl]
+	ld1b	z1.b, p0/z, [src_ptr, #1, mul vl]
+	ld1b	z2.b, p0/z, [src_ptr, #2, mul vl]
+	ld1b	z3.b, p0/z, [src_ptr, #3, mul vl]
+	ld1b	z4.b, p0/z, [src_ptr, #4, mul vl]
+	ld1b	z5.b, p0/z, [src_ptr, #5, mul vl]
+	ld1b	z6.b, p0/z, [src_ptr, #6, mul vl]
+	ld1b	z7.b, p0/z, [src_ptr, #7, mul vl]
+	.endm
+
+	.macro stld1b_unroll4a
+	st1b	z0.b, p0,   [dest_ptr, #0, mul vl]
+	st1b	z1.b, p0,   [dest_ptr, #1, mul vl]
+	ld1b	z0.b, p0/z, [src_ptr,  #0, mul vl]
+	ld1b	z1.b, p0/z, [src_ptr,  #1, mul vl]
+	st1b	z2.b, p0,   [dest_ptr, #2, mul vl]
+	st1b	z3.b, p0,   [dest_ptr, #3, mul vl]
+	ld1b	z2.b, p0/z, [src_ptr,  #2, mul vl]
+	ld1b	z3.b, p0/z, [src_ptr,  #3, mul vl]
+	.endm
+
+	.macro stld1b_unroll4b
+	st1b	z4.b, p0,   [dest_ptr, #4, mul vl]
+	st1b	z5.b, p0,   [dest_ptr, #5, mul vl]
+	ld1b	z4.b, p0/z, [src_ptr,  #4, mul vl]
+	ld1b	z5.b, p0/z, [src_ptr,  #5, mul vl]
+	st1b	z6.b, p0,   [dest_ptr, #6, mul vl]
+	st1b	z7.b, p0,   [dest_ptr, #7, mul vl]
+	ld1b	z6.b, p0/z, [src_ptr,  #6, mul vl]
+	ld1b	z7.b, p0/z, [src_ptr,  #7, mul vl]
+	.endm
+
+	.macro stld1b_unroll8
+	stld1b_unroll4a
+	stld1b_unroll4b
+	.endm
+
+	.macro st1b_unroll8
+	st1b	z0.b, p0, [dest_ptr, #0, mul vl]
+	st1b	z1.b, p0, [dest_ptr, #1, mul vl]
+	st1b	z2.b, p0, [dest_ptr, #2, mul vl]
+	st1b	z3.b, p0, [dest_ptr, #3, mul vl]
+	st1b	z4.b, p0, [dest_ptr, #4, mul vl]
+	st1b	z5.b, p0, [dest_ptr, #5, mul vl]
+	st1b	z6.b, p0, [dest_ptr, #6, mul vl]
+	st1b	z7.b, p0, [dest_ptr, #7, mul vl]
+	.endm
+
+	.macro shortcut_for_small_size exit
+	// if rest <= vector_length * 2
+	whilelo	p0.b, xzr, n
+	whilelo	p1.b, vector_length, n
+	b.last	1f
+	ld1b	z0.b, p0/z, [src, #0, mul vl]
+	ld1b	z1.b, p1/z, [src, #1, mul vl]
+	st1b	z0.b, p0, [dest, #0, mul vl]
+	st1b	z1.b, p1, [dest, #1, mul vl]
+	ret
+1:	// if rest > vector_length * 8
+	cmp	n, vector_length, lsl 3 // vector_length * 8
+	b.hi	\exit
+	// if rest <= vector_length * 4
+	lsl	tmp1, vector_length, 1  // vector_length * 2
+	whilelo	p2.b, tmp1, n
+	incb	tmp1
+	whilelo	p3.b, tmp1, n
+	b.last	1f
+	ld1b	z0.b, p0/z, [src, #0, mul vl]
+	ld1b	z1.b, p1/z, [src, #1, mul vl]
+	ld1b	z2.b, p2/z, [src, #2, mul vl]
+	ld1b	z3.b, p3/z, [src, #3, mul vl]
+	st1b	z0.b, p0, [dest, #0, mul vl]
+	st1b	z1.b, p1, [dest, #1, mul vl]
+	st1b	z2.b, p2, [dest, #2, mul vl]
+	st1b	z3.b, p3, [dest, #3, mul vl]
+	ret
+1:	// if rest <= vector_length * 8
+	lsl	tmp1, vector_length, 2  // vector_length * 4
+	whilelo	p4.b, tmp1, n
+	incb	tmp1
+	whilelo	p5.b, tmp1, n
+	b.last	1f
+	ld1b	z0.b, p0/z, [src, #0, mul vl]
+	ld1b	z1.b, p1/z, [src, #1, mul vl]
+	ld1b	z2.b, p2/z, [src, #2, mul vl]
+	ld1b	z3.b, p3/z, [src, #3, mul vl]
+	ld1b	z4.b, p4/z, [src, #4, mul vl]
+	ld1b	z5.b, p5/z, [src, #5, mul vl]
+	st1b	z0.b, p0, [dest, #0, mul vl]
+	st1b	z1.b, p1, [dest, #1, mul vl]
+	st1b	z2.b, p2, [dest, #2, mul vl]
+	st1b	z3.b, p3, [dest, #3, mul vl]
+	st1b	z4.b, p4, [dest, #4, mul vl]
+	st1b	z5.b, p5, [dest, #5, mul vl]
+	ret
+1:	lsl	tmp1, vector_length, 2	// vector_length * 4
+	incb	tmp1			// vector_length * 5
+	incb	tmp1			// vector_length * 6
+	whilelo	p6.b, tmp1, n
+	incb	tmp1
+	whilelo	p7.b, tmp1, n
+	ld1b	z0.b, p0/z, [src, #0, mul vl]
+	ld1b	z1.b, p1/z, [src, #1, mul vl]
+	ld1b	z2.b, p2/z, [src, #2, mul vl]
+	ld1b	z3.b, p3/z, [src, #3, mul vl]
+	ld1b	z4.b, p4/z, [src, #4, mul vl]
+	ld1b	z5.b, p5/z, [src, #5, mul vl]
+	ld1b	z6.b, p6/z, [src, #6, mul vl]
+	ld1b	z7.b, p7/z, [src, #7, mul vl]
+	st1b	z0.b, p0, [dest, #0, mul vl]
+	st1b	z1.b, p1, [dest, #1, mul vl]
+	st1b	z2.b, p2, [dest, #2, mul vl]
+	st1b	z3.b, p3, [dest, #3, mul vl]
+	st1b	z4.b, p4, [dest, #4, mul vl]
+	st1b	z5.b, p5, [dest, #5, mul vl]
+	st1b	z6.b, p6, [dest, #6, mul vl]
+	st1b	z7.b, p7, [dest, #7, mul vl]
+	ret
+	.endm
+
+ENTRY (MEMCPY)
+
+	PTR_ARG (0)
+	PTR_ARG (1)
+	SIZE_ARG (2)
+
+L(memcpy):
+	cntb	vector_length
+	// shortcut for less than vector_length * 8
+	// gives a free ptrue to p0.b for n >= vector_length
+	shortcut_for_small_size L(vl_agnostic)
+	// end of shortcut
+
+L(vl_agnostic): // VL Agnostic
+	mov	rest, n
+	mov	dest_ptr, dest
+	mov	src_ptr, src
+	// if rest >= L2_SIZE && vector_length == 64 then L(L2)
+	mov	tmp1, 64
+	cmp	rest, L2_SIZE
+	ccmp	vector_length, tmp1, 0, cs
+	b.eq	L(L2)
+
+L(unroll8): // unrolling and software pipeline
+	lsl	tmp1, vector_length, 3	// vector_length * 8
+	.p2align 3
+	cmp	 rest, tmp1
+	b.cc	L(last)
+	ld1b_unroll8
+	add	src_ptr, src_ptr, tmp1
+	sub	rest, rest, tmp1
+	cmp	rest, tmp1
+	b.cc	2f
+	.p2align 3
+1:	stld1b_unroll8
+	add	dest_ptr, dest_ptr, tmp1
+	add	src_ptr, src_ptr, tmp1
+	sub	rest, rest, tmp1
+	cmp	rest, tmp1
+	b.ge	1b
+2:	st1b_unroll8
+	add	dest_ptr, dest_ptr, tmp1
+
+	.p2align 3
+L(last):
+	whilelo	p0.b, xzr, rest
+	whilelo	p1.b, vector_length, rest
+	b.last	1f
+	ld1b	z0.b, p0/z, [src_ptr, #0, mul vl]
+	ld1b	z1.b, p1/z, [src_ptr, #1, mul vl]
+	st1b	z0.b, p0, [dest_ptr, #0, mul vl]
+	st1b	z1.b, p1, [dest_ptr, #1, mul vl]
+	ret
+1:	lsl	tmp1, vector_length, 1	// vector_length * 2
+	whilelo	p2.b, tmp1, rest
+	incb	tmp1
+	whilelo	p3.b, tmp1, rest
+	b.last	1f
+	ld1b	z0.b, p0/z, [src_ptr, #0, mul vl]
+	ld1b	z1.b, p1/z, [src_ptr, #1, mul vl]
+	ld1b	z2.b, p2/z, [src_ptr, #2, mul vl]
+	ld1b	z3.b, p3/z, [src_ptr, #3, mul vl]
+	st1b	z0.b, p0, [dest_ptr, #0, mul vl]
+	st1b	z1.b, p1, [dest_ptr, #1, mul vl]
+	st1b	z2.b, p2, [dest_ptr, #2, mul vl]
+	st1b	z3.b, p3, [dest_ptr, #3, mul vl]
+	ret
+1:	lsl	tmp1, vector_length, 2	// vector_length * 4
+	whilelo	p4.b, tmp1, rest
+	incb	tmp1
+	whilelo	p5.b, tmp1, rest
+	incb	tmp1
+	whilelo	p6.b, tmp1, rest
+	incb	tmp1
+	whilelo	p7.b, tmp1, rest
+	ld1b	z0.b, p0/z, [src_ptr, #0, mul vl]
+	ld1b	z1.b, p1/z, [src_ptr, #1, mul vl]
+	ld1b	z2.b, p2/z, [src_ptr, #2, mul vl]
+	ld1b	z3.b, p3/z, [src_ptr, #3, mul vl]
+	ld1b	z4.b, p4/z, [src_ptr, #4, mul vl]
+	ld1b	z5.b, p5/z, [src_ptr, #5, mul vl]
+	ld1b	z6.b, p6/z, [src_ptr, #6, mul vl]
+	ld1b	z7.b, p7/z, [src_ptr, #7, mul vl]
+	st1b	z0.b, p0, [dest_ptr, #0, mul vl]
+	st1b	z1.b, p1, [dest_ptr, #1, mul vl]
+	st1b	z2.b, p2, [dest_ptr, #2, mul vl]
+	st1b	z3.b, p3, [dest_ptr, #3, mul vl]
+	st1b	z4.b, p4, [dest_ptr, #4, mul vl]
+	st1b	z5.b, p5, [dest_ptr, #5, mul vl]
+	st1b	z6.b, p6, [dest_ptr, #6, mul vl]
+	st1b	z7.b, p7, [dest_ptr, #7, mul vl]
+	ret
+
+L(L2):
+	// align dest address at CACHE_LINE_SIZE byte boundary
+	mov	tmp1, CACHE_LINE_SIZE
+	ands	tmp2, dest_ptr, CACHE_LINE_SIZE - 1
+	// if cl_remainder == 0
+	b.eq	L(L2_dc_zva)
+	sub	cl_remainder, tmp1, tmp2
+	// process remainder until the first CACHE_LINE_SIZE boundary
+	whilelo	p1.b, xzr, cl_remainder	// keep p0.b all true
+	whilelo	p2.b, vector_length, cl_remainder
+	b.last	1f
+	ld1b	z1.b, p1/z, [src_ptr, #0, mul vl]
+	ld1b	z2.b, p2/z, [src_ptr, #1, mul vl]
+	st1b	z1.b, p1, [dest_ptr, #0, mul vl]
+	st1b	z2.b, p2, [dest_ptr, #1, mul vl]
+	b	2f
+1:	lsl	tmp1, vector_length, 1	// vector_length * 2
+	whilelo	p3.b, tmp1, cl_remainder
+	incb	tmp1
+	whilelo	p4.b, tmp1, cl_remainder
+	ld1b	z1.b, p1/z, [src_ptr, #0, mul vl]
+	ld1b	z2.b, p2/z, [src_ptr, #1, mul vl]
+	ld1b	z3.b, p3/z, [src_ptr, #2, mul vl]
+	ld1b	z4.b, p4/z, [src_ptr, #3, mul vl]
+	st1b	z1.b, p1, [dest_ptr, #0, mul vl]
+	st1b	z2.b, p2, [dest_ptr, #1, mul vl]
+	st1b	z3.b, p3, [dest_ptr, #2, mul vl]
+	st1b	z4.b, p4, [dest_ptr, #3, mul vl]
+2:	add	dest_ptr, dest_ptr, cl_remainder
+	add	src_ptr, src_ptr, cl_remainder
+	sub	rest, rest, cl_remainder
+
+L(L2_dc_zva):
+	// zero fill
+	and	tmp1, dest, 0xffffffffffffff
+	and	tmp2, src, 0xffffffffffffff
+	subs	tmp1, tmp1, tmp2	// diff
+	b.ge	1f
+	neg	tmp1, tmp1
+1:	mov	tmp3, ZF_DIST + CACHE_LINE_SIZE * 2
+	cmp	tmp1, tmp3
+	b.lo	L(unroll8)
+	mov	tmp1, dest_ptr
+	dc_zva	(ZF_DIST / CACHE_LINE_SIZE) - 1
+	// unroll
+	ld1b_unroll8	// this line has to be after "b.lo L(unroll8)"
+	add	 src_ptr, src_ptr, CACHE_LINE_SIZE * 2
+	sub	 rest, rest, CACHE_LINE_SIZE * 2
+	mov	 tmp1, ZF_DIST
+	.p2align 3
+1:	stld1b_unroll4a
+	add	tmp2, dest_ptr, tmp1	// dest_ptr + ZF_DIST
+	dc	zva, tmp2
+	stld1b_unroll4b
+	add	tmp2, tmp2, CACHE_LINE_SIZE
+	dc	zva, tmp2
+	add	dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2
+	add	src_ptr, src_ptr, CACHE_LINE_SIZE * 2
+	sub	rest, rest, CACHE_LINE_SIZE * 2
+	cmp	rest, tmp3	// ZF_DIST + CACHE_LINE_SIZE * 2
+	b.ge	1b
+	st1b_unroll8
+	add	dest_ptr, dest_ptr, CACHE_LINE_SIZE * 2
+	b	L(unroll8)
+
+END (MEMCPY)
+libc_hidden_builtin_def (MEMCPY)
+
+
+ENTRY (MEMMOVE)
+
+	PTR_ARG (0)
+	PTR_ARG (1)
+	SIZE_ARG (2)
+
+	// remove tag address
+	// dest has to be immutable because it is the return value
+	// src has to be immutable because it is used in L(bwd_last)
+	and	tmp2, dest, 0xffffffffffffff	// save dest_notag into tmp2
+	and	tmp3, src, 0xffffffffffffff	// save src_notag into tmp3
+	cmp	n, 0
+	ccmp	tmp2, tmp3, 4, ne
+	b.ne	1f
+	ret
+1:	cntb	vector_length
+	// shortcut for less than vector_length * 8
+	// gives a free ptrue to p0.b for n >= vector_length
+	// tmp2 and tmp3 should not be used in this macro to keep
+	// notag addresses
+	shortcut_for_small_size L(dispatch)
+	// end of shortcut
+
+L(dispatch):
+	// tmp2 = dest_notag, tmp3 = src_notag
+	// diff = dest_notag - src_notag
+	sub	tmp1, tmp2, tmp3
+	// if diff <= 0 || diff >= n then memcpy
+	cmp	tmp1, 0
+	ccmp	tmp1, n, 2, gt
+	b.cs	L(vl_agnostic)
+
+L(bwd_start):
+	mov	rest, n
+	add	dest_ptr, dest, n	// dest_end
+	add	src_ptr, src, n		// src_end
+
+L(bwd_unroll8): // unrolling and software pipeline
+	lsl	tmp1, vector_length, 3	// vector_length * 8
+	.p2align 3
+	cmp	rest, tmp1
+	b.cc	L(bwd_last)
+	sub	src_ptr, src_ptr, tmp1
+	ld1b_unroll8
+	sub	rest, rest, tmp1
+	cmp	rest, tmp1
+	b.cc	2f
+	.p2align 3
+1:	sub	src_ptr, src_ptr, tmp1
+	sub	dest_ptr, dest_ptr, tmp1
+	stld1b_unroll8
+	sub	rest, rest, tmp1
+	cmp	rest, tmp1
+	b.ge	1b
+2:	sub	dest_ptr, dest_ptr, tmp1
+	st1b_unroll8
+
+L(bwd_last):
+	mov	dest_ptr, dest
+	mov	src_ptr, src
+	b	L(last)
+
+END (MEMMOVE)
+libc_hidden_builtin_def (MEMMOVE)
+# endif /* IS_IN (libc) */
+#endif /* HAVE_AARCH64_SVE_ASM */
diff --git a/sysdeps/aarch64/multiarch/memmove.c b/sysdeps/aarch64/multiarch/memmove.c
index 12d77818a999..d0adefc547f6 100644
--- a/sysdeps/aarch64/multiarch/memmove.c
+++ b/sysdeps/aarch64/multiarch/memmove.c
@@ -33,6 +33,9 @@ extern __typeof (__redirect_memmove) __memmove_simd attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_thunderx attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_thunderx2 attribute_hidden;
 extern __typeof (__redirect_memmove) __memmove_falkor attribute_hidden;
+# if HAVE_AARCH64_SVE_ASM
+extern __typeof (__redirect_memmove) __memmove_a64fx attribute_hidden;
+# endif
 
 libc_ifunc (__libc_memmove,
             (IS_THUNDERX (midr)
@@ -40,12 +43,17 @@ libc_ifunc (__libc_memmove,
 	     : (IS_FALKOR (midr) || IS_PHECDA (midr)
 		? __memmove_falkor
 		: (IS_THUNDERX2 (midr) || IS_THUNDERX2PA (midr)
-		  ? __memmove_thunderx2
-		  : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
-		     || IS_NEOVERSE_V1 (midr)
-		     ? __memmove_simd
+		   ? __memmove_thunderx2
+		   : (IS_NEOVERSE_N1 (midr) || IS_NEOVERSE_N2 (midr)
+		      || IS_NEOVERSE_V1 (midr)
+		      ? __memmove_simd
+# if HAVE_AARCH64_SVE_ASM
+		     : (IS_A64FX (midr)
+			? __memmove_a64fx
+			: __memmove_generic))))));
+# else
 		     : __memmove_generic)))));
-
+# endif
 # undef memmove
 strong_alias (__libc_memmove, memmove);
 #endif
diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
index db6aa3516c1b..6206a2f618b0 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c
@@ -46,6 +46,7 @@ static struct cpu_list cpu_list[] = {
       {"ares",		 0x411FD0C0},
       {"emag",		 0x503F0001},
       {"kunpeng920", 	 0x481FD010},
+      {"a64fx",		 0x460F0010},
       {"generic", 	 0x0}
 };
 
@@ -116,4 +117,7 @@ init_cpu_features (struct cpu_features *cpu_features)
 	     (PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_ASYNC | MTE_ALLOWED_TAGS),
 	     0, 0, 0);
 #endif
+
+  /* Check if SVE is supported.  */
+  cpu_features->sve = GLRO (dl_hwcap) & HWCAP_SVE;
 }
diff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
index 3b9bfed1349c..2b322e5414be 100644
--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.h
@@ -65,6 +65,9 @@
 #define IS_KUNPENG920(midr) (MIDR_IMPLEMENTOR(midr) == 'H'			   \
                         && MIDR_PARTNUM(midr) == 0xd01)
 
+#define IS_A64FX(midr) (MIDR_IMPLEMENTOR(midr) == 'F'			      \
+			&& MIDR_PARTNUM(midr) == 0x001)
+
 struct cpu_features
 {
   uint64_t midr_el1;
@@ -72,6 +75,7 @@ struct cpu_features
   bool bti;
   /* Currently, the GLIBC memory tagging tunable only defines 8 bits.  */
   uint8_t mte_state;
+  bool sve;
 };
 
 #endif /* _CPU_FEATURES_AARCH64_H  */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 36+ messages in thread
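
Stripped of the unrolling, the alignment handling, the DC ZVA zero-fill and
the software pipelining, the structure of the code above is a predicated
Vector Length Agnostic copy loop plus the overlap dispatch performed at
L(dispatch).  A rough C sketch with the ACLE SVE intrinsics (illustration
only, not the committed implementation):

  /* Rough sketch of the VLA copy loop and the memmove forward/backward
     dispatch; build with -march=armv8.2-a+sve.  */
  #include <arm_sve.h>
  #include <stddef.h>
  #include <stdint.h>

  static void
  copy_fwd (uint8_t *d, const uint8_t *s, size_t n)
  {
    for (size_t i = 0; i < n; i += svcntb ())
      {
        svbool_t pg = svwhilelt_b8_u64 (i, n);
        svst1_u8 (pg, d + i, svld1_u8 (pg, s + i));
      }
  }

  static void
  copy_bwd (uint8_t *d, const uint8_t *s, size_t n)
  {
    /* Walk from the end in whole-vector steps; the topmost (possibly
       partial) chunk is covered by the predicate.  Each chunk is loaded
       before it is stored, so overlapping tails are handled correctly.  */
    size_t step = svcntb ();
    size_t i = n;
    while (i > 0)
      {
        size_t chunk = (i % step) ? (i % step) : step;
        i -= chunk;
        svbool_t pg = svwhilelt_b8_u64 (0, chunk);
        svst1_u8 (pg, d + i, svld1_u8 (pg, s + i));
      }
  }

  void *
  vla_memmove (void *dst, const void *src, size_t n)
  {
    uint8_t *d = dst;
    const uint8_t *s = src;
    /* Same test as L(dispatch): if dst - src <= 0 or >= n, a plain
       forward copy cannot clobber source bytes that are still unread.  */
    ptrdiff_t diff = d - s;
    if (diff <= 0 || (size_t) diff >= n)
      copy_fwd (d, s, n);
    else
      copy_bwd (d, s, n);
    return dst;
  }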

* [PATCH v3 2/2] aarch64: Added optimized memset for A64FX
  2021-05-12  9:23 ` [PATCH v2 0/6] aarch64: " Naohiro Tamura
                     ` (7 preceding siblings ...)
  2021-05-27  7:42   ` [PATCH v3 1/2] aarch64: Added optimized memcpy and memmove " Naohiro Tamura
@ 2021-05-27  7:44   ` Naohiro Tamura
  8 siblings, 0 replies; 36+ messages in thread
From: Naohiro Tamura @ 2021-05-27  7:44 UTC (permalink / raw)
  To: libc-alpha; +Cc: Naohiro Tamura

From: Naohiro Tamura <naohirot@jp.fujitsu.com>

This patch optimizes the performance of memset for A64FX [1] which
implements ARMv8-A SVE and has L1 64KB cache per core and L2 8MB cache
per NUMA node.

The performance optimization makes use of Scalable Vector Register
with several techniques such as loop unrolling, memory access
alignment, cache zero fill and prefetch.

SVE assembler code for memset is implemented as Vector Length Agnostic
code so theoretically it can be run on any SOC which supports ARMv8-A
SVE standard.

We confirmed that all testcases have been passed by running 'make
check' and 'make xcheck' not only on A64FX but also on ThunderX2.

And also we confirmed that the SVE 512 bit vector register performance
is roughly 4 times better than Advanced SIMD 128 bit register and 8
times better than scalar 64 bit register by running 'make bench'.

[1] https://github.com/fujitsu/A64FX

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Reviewed-by: Szabolcs Nagy <Szabolcs.Nagy@arm.com>
---
 sysdeps/aarch64/multiarch/Makefile          |   1 +
 sysdeps/aarch64/multiarch/ifunc-impl-list.c |   5 +-
 sysdeps/aarch64/multiarch/memset.c          |  17 +-
 sysdeps/aarch64/multiarch/memset_a64fx.S    | 268 ++++++++++++++++++++
 4 files changed, 286 insertions(+), 5 deletions(-)
 create mode 100644 sysdeps/aarch64/multiarch/memset_a64fx.S

diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
index 04c3f171215e..7500cf1e9369 100644
--- a/sysdeps/aarch64/multiarch/Makefile
+++ b/sysdeps/aarch64/multiarch/Makefile
@@ -2,6 +2,7 @@ ifeq ($(subdir),string)
 sysdep_routines += memcpy_generic memcpy_advsimd memcpy_thunderx memcpy_thunderx2 \
 		   memcpy_falkor memcpy_a64fx \
 		   memset_generic memset_falkor memset_emag memset_kunpeng \
+		   memset_a64fx \
 		   memchr_generic memchr_nosimd \
 		   strlen_mte strlen_asimd
 endif
diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
index 911393565c21..4e1a641d9fe9 100644
--- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
@@ -37,7 +37,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 
   INIT_ARCH ();
 
-  /* Support sysdeps/aarch64/multiarch/memcpy.c and memmove.c.  */
+  /* Support sysdeps/aarch64/multiarch/memcpy.c, memmove.c and memset.c.  */
   IFUNC_IMPL (i, name, memcpy,
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_thunderx)
 	      IFUNC_IMPL_ADD (array, i, memcpy, !bti, __memcpy_thunderx2)
@@ -62,6 +62,9 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 	      IFUNC_IMPL_ADD (array, i, memset, (zva_size == 64), __memset_falkor)
 	      IFUNC_IMPL_ADD (array, i, memset, (zva_size == 64), __memset_emag)
 	      IFUNC_IMPL_ADD (array, i, memset, 1, __memset_kunpeng)
+#if HAVE_AARCH64_SVE_ASM
+	      IFUNC_IMPL_ADD (array, i, memset, sve, __memset_a64fx)
+#endif
 	      IFUNC_IMPL_ADD (array, i, memset, 1, __memset_generic))
   IFUNC_IMPL (i, name, memchr,
 	      IFUNC_IMPL_ADD (array, i, memchr, !mte, __memchr_nosimd)
diff --git a/sysdeps/aarch64/multiarch/memset.c b/sysdeps/aarch64/multiarch/memset.c
index 28d3926bc2e6..d7d9bbbda095 100644
--- a/sysdeps/aarch64/multiarch/memset.c
+++ b/sysdeps/aarch64/multiarch/memset.c
@@ -31,16 +31,25 @@ extern __typeof (__redirect_memset) __libc_memset;
 extern __typeof (__redirect_memset) __memset_falkor attribute_hidden;
 extern __typeof (__redirect_memset) __memset_emag attribute_hidden;
 extern __typeof (__redirect_memset) __memset_kunpeng attribute_hidden;
+# if HAVE_AARCH64_SVE_ASM
+extern __typeof (__redirect_memset) __memset_a64fx attribute_hidden;
+# endif
 extern __typeof (__redirect_memset) __memset_generic attribute_hidden;
 
 libc_ifunc (__libc_memset,
 	    IS_KUNPENG920 (midr)
 	    ?__memset_kunpeng
 	    : ((IS_FALKOR (midr) || IS_PHECDA (midr)) && zva_size == 64
-	     ? __memset_falkor
-	     : (IS_EMAG (midr) && zva_size == 64
-	       ? __memset_emag
-	       : __memset_generic)));
+	      ? __memset_falkor
+	      : (IS_EMAG (midr) && zva_size == 64
+		? __memset_emag
+# if HAVE_AARCH64_SVE_ASM
+		: (IS_A64FX (midr)
+		  ? __memset_a64fx
+		  : __memset_generic))));
+# else
+		  : __memset_generic)));
+# endif
 
 # undef memset
 strong_alias (__libc_memset, memset);
diff --git a/sysdeps/aarch64/multiarch/memset_a64fx.S b/sysdeps/aarch64/multiarch/memset_a64fx.S
new file mode 100644
index 000000000000..ce54e5418b08
--- /dev/null
+++ b/sysdeps/aarch64/multiarch/memset_a64fx.S
@@ -0,0 +1,268 @@
+/* Optimized memset for Fujitsu A64FX processor.
+   Copyright (C) 2021 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include <sysdeps/aarch64/memset-reg.h>
+
+/* Assumptions:
+ *
+ * ARMv8.2-a, AArch64, unaligned accesses, sve
+ *
+ */
+
+#define L1_SIZE		(64*1024)	// L1 64KB
+#define L2_SIZE         (8*1024*1024)	// L2 8MB - 1MB
+#define CACHE_LINE_SIZE	256
+#define PF_DIST_L1	(CACHE_LINE_SIZE * 16)	// Prefetch distance L1
+#define ZF_DIST		(CACHE_LINE_SIZE * 21)	// Zerofill distance
+#define rest		x8
+#define vector_length	x9
+#define vl_remainder	x10	// vector_length remainder
+#define cl_remainder	x11	// CACHE_LINE_SIZE remainder
+
+#if HAVE_AARCH64_SVE_ASM
+# if IS_IN (libc)
+#  define MEMSET __memset_a64fx
+
+	.arch armv8.2-a+sve
+
+	.macro dc_zva times
+	dc	zva, tmp1
+	add	tmp1, tmp1, CACHE_LINE_SIZE
+	.if \times-1
+	dc_zva "(\times-1)"
+	.endif
+	.endm
+
+	.macro st1b_unroll first=0, last=7
+	st1b	z0.b, p0, [dst, #\first, mul vl]
+	.if \last-\first
+	st1b_unroll "(\first+1)", \last
+	.endif
+	.endm
+
+	.macro shortcut_for_small_size exit
+	// if rest <= vector_length * 2
+	whilelo	p0.b, xzr, count
+	whilelo	p1.b, vector_length, count
+	b.last	1f
+	st1b	z0.b, p0, [dstin, #0, mul vl]
+	st1b	z0.b, p1, [dstin, #1, mul vl]
+	ret
+1:	// if rest > vector_length * 8
+	cmp	count, vector_length, lsl 3	// vector_length * 8
+	b.hi	\exit
+	// if rest <= vector_length * 4
+	lsl	tmp1, vector_length, 1	// vector_length * 2
+	whilelo	p2.b, tmp1, count
+	incb	tmp1
+	whilelo	p3.b, tmp1, count
+	b.last	1f
+	st1b	z0.b, p0, [dstin, #0, mul vl]
+	st1b	z0.b, p1, [dstin, #1, mul vl]
+	st1b	z0.b, p2, [dstin, #2, mul vl]
+	st1b	z0.b, p3, [dstin, #3, mul vl]
+	ret
+1:	// if rest <= vector_length * 8
+	lsl	tmp1, vector_length, 2	// vector_length * 4
+	whilelo	p4.b, tmp1, count
+	incb	tmp1
+	whilelo	p5.b, tmp1, count
+	b.last	1f
+	st1b	z0.b, p0, [dstin, #0, mul vl]
+	st1b	z0.b, p1, [dstin, #1, mul vl]
+	st1b	z0.b, p2, [dstin, #2, mul vl]
+	st1b	z0.b, p3, [dstin, #3, mul vl]
+	st1b	z0.b, p4, [dstin, #4, mul vl]
+	st1b	z0.b, p5, [dstin, #5, mul vl]
+	ret
+1:	lsl	tmp1, vector_length, 2	// vector_length * 4
+	incb	tmp1			// vector_length * 5
+	incb	tmp1			// vector_length * 6
+	whilelo	p6.b, tmp1, count
+	incb	tmp1
+	whilelo	p7.b, tmp1, count
+	st1b	z0.b, p0, [dstin, #0, mul vl]
+	st1b	z0.b, p1, [dstin, #1, mul vl]
+	st1b	z0.b, p2, [dstin, #2, mul vl]
+	st1b	z0.b, p3, [dstin, #3, mul vl]
+	st1b	z0.b, p4, [dstin, #4, mul vl]
+	st1b	z0.b, p5, [dstin, #5, mul vl]
+	st1b	z0.b, p6, [dstin, #6, mul vl]
+	st1b	z0.b, p7, [dstin, #7, mul vl]
+	ret
+	.endm
+
+ENTRY (MEMSET)
+
+	PTR_ARG (0)
+	SIZE_ARG (2)
+
+	cbnz	count, 1f
+	ret
+1:	dup	z0.b, valw
+	cntb	vector_length
+	// shortcut for less than vector_length * 8
+	// gives a free ptrue to p0.b for n >= vector_length
+	shortcut_for_small_size L(vl_agnostic)
+	// end of shortcut
+
+L(vl_agnostic): // VL Agnostic
+	mov	rest, count
+	mov	dst, dstin
+	add	dstend, dstin, count
+	// if rest >= L2_SIZE && vector_length == 64 then L(L2)
+	mov	tmp1, 64
+	cmp	rest, L2_SIZE
+	ccmp	vector_length, tmp1, 0, cs
+	b.eq	L(L2)
+	// if rest >= L1_SIZE && vector_length == 64 then L(L1_prefetch)
+	cmp	rest, L1_SIZE
+	ccmp	vector_length, tmp1, 0, cs
+	b.eq	L(L1_prefetch)
+
+L(unroll32):
+	lsl	tmp1, vector_length, 3	// vector_length * 8
+	lsl	tmp2, vector_length, 5	// vector_length * 32
+	.p2align 3
+1:	cmp	rest, tmp2
+	b.cc	L(unroll8)
+	st1b_unroll
+	add	dst, dst, tmp1
+	st1b_unroll
+	add	dst, dst, tmp1
+	st1b_unroll
+	add	dst, dst, tmp1
+	st1b_unroll
+	add	dst, dst, tmp1
+	sub	rest, rest, tmp2
+	b	1b
+
+L(unroll8):
+	lsl	tmp1, vector_length, 3
+	.p2align 3
+1:	cmp	rest, tmp1
+	b.cc	L(last)
+	st1b_unroll
+	add	dst, dst, tmp1
+	sub	rest, rest, tmp1
+	b	1b
+
+L(last):
+	whilelo	p0.b, xzr, rest
+	whilelo	p1.b, vector_length, rest
+	b.last	1f
+	st1b	z0.b, p0, [dst, #0, mul vl]
+	st1b	z0.b, p1, [dst, #1, mul vl]
+	ret
+1:	lsl	tmp1, vector_length, 1	// vector_length * 2
+	whilelo	p2.b, tmp1, rest
+	incb	tmp1
+	whilelo	p3.b, tmp1, rest
+	b.last	1f
+	st1b	z0.b, p0, [dst, #0, mul vl]
+	st1b	z0.b, p1, [dst, #1, mul vl]
+	st1b	z0.b, p2, [dst, #2, mul vl]
+	st1b	z0.b, p3, [dst, #3, mul vl]
+	ret
+1:	lsl	tmp1, vector_length, 2	// vector_length * 4
+	whilelo	p4.b, tmp1, rest
+	incb	tmp1
+	whilelo	p5.b, tmp1, rest
+	incb	tmp1
+	whilelo	p6.b, tmp1, rest
+	incb	tmp1
+	whilelo	p7.b, tmp1, rest
+	st1b	z0.b, p0, [dst, #0, mul vl]
+	st1b	z0.b, p1, [dst, #1, mul vl]
+	st1b	z0.b, p2, [dst, #2, mul vl]
+	st1b	z0.b, p3, [dst, #3, mul vl]
+	st1b	z0.b, p4, [dst, #4, mul vl]
+	st1b	z0.b, p5, [dst, #5, mul vl]
+	st1b	z0.b, p6, [dst, #6, mul vl]
+	st1b	z0.b, p7, [dst, #7, mul vl]
+	ret
+
+L(L1_prefetch): // if rest >= L1_SIZE
+	.p2align 3
+1:	st1b_unroll 0, 3
+	prfm	pstl1keep, [dst, PF_DIST_L1]
+	st1b_unroll 4, 7
+	prfm	pstl1keep, [dst, PF_DIST_L1 + CACHE_LINE_SIZE]
+	add	dst, dst, CACHE_LINE_SIZE * 2
+	sub	rest, rest, CACHE_LINE_SIZE * 2
+	cmp	rest, L1_SIZE
+	b.ge	1b
+	cbnz	rest, L(unroll32)
+	ret
+
+L(L2):
+	// align dst address at vector_length byte boundary
+	sub	tmp1, vector_length, 1
+	ands	tmp2, dst, tmp1
+	// if vl_remainder == 0
+	b.eq	1f
+	sub	vl_remainder, vector_length, tmp2
+	// process remainder until the first vector_length boundary
+	whilelt	p2.b, xzr, vl_remainder
+	st1b	z0.b, p2, [dst]
+	add	dst, dst, vl_remainder
+	sub	rest, rest, vl_remainder
+	// align dstin address at CACHE_LINE_SIZE byte boundary
+1:	mov	tmp1, CACHE_LINE_SIZE
+	ands	tmp2, dst, CACHE_LINE_SIZE - 1
+	// if cl_remainder == 0
+	b.eq	L(L2_dc_zva)
+	sub	cl_remainder, tmp1, tmp2
+	// process remainder until the first CACHE_LINE_SIZE boundary
+	mov	tmp1, xzr       // index
+2:	whilelt	p2.b, tmp1, cl_remainder
+	st1b	z0.b, p2, [dst, tmp1]
+	incb	tmp1
+	cmp	tmp1, cl_remainder
+	b.lo	2b
+	add	dst, dst, cl_remainder
+	sub	rest, rest, cl_remainder
+
+L(L2_dc_zva):
+	// zero fill
+	mov	tmp1, dst
+	dc_zva	(ZF_DIST / CACHE_LINE_SIZE) - 1
+	mov	zva_len, ZF_DIST
+	add	tmp1, zva_len, CACHE_LINE_SIZE * 2
+	// unroll
+	.p2align 3
+1:	st1b_unroll 0, 3
+	add	tmp2, dst, zva_len
+	dc	zva, tmp2
+	st1b_unroll 4, 7
+	add	tmp2, tmp2, CACHE_LINE_SIZE
+	dc	zva, tmp2
+	add	dst, dst, CACHE_LINE_SIZE * 2
+	sub	rest, rest, CACHE_LINE_SIZE * 2
+	cmp	rest, tmp1	// ZF_DIST + CACHE_LINE_SIZE * 2
+	b.ge	1b
+	cbnz	rest, L(unroll8)
+	ret
+
+END (MEMSET)
+libc_hidden_builtin_def (MEMSET)
+
+#endif /* IS_IN (libc) */
+#endif /* HAVE_AARCH64_SVE_ASM */
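
As a reading aid (illustration only, not part of the patch), the following C sketch models the size-tier selection implemented above. memset_a64fx_model is a made-up name; every tier is stubbed out with plain memset so the sketch compiles, vector_length is fixed at the 64 bytes that cntb reports on A64FX, and the constants mirror the #defines at the top of the file.

#include <stddef.h>
#include <string.h>

#define L1_SIZE (64 * 1024)           /* L1 64KB per core */
#define L2_SIZE (8 * 1024 * 1024)     /* L2 8MB per NUMA node */

/* Sketch of the control flow in __memset_a64fx; the real code uses SVE
   predicated stores, PRFM prefetches and DC ZVA instead of memset.  */
static void *
memset_a64fx_model (void *dstin, int val, size_t count)
{
  const size_t vector_length = 64;    /* cntb on A64FX (512-bit SVE) */

  if (count <= vector_length * 8)
    /* shortcut_for_small_size: at most 8 predicated st1b stores.  */
    return memset (dstin, val, count);

  /* The assembly additionally requires vector_length == 64 for the two
     cache-aware paths below.  */
  if (count >= L2_SIZE)
    /* L(L2): align dst to the vector and then to the 256-byte cache
       line, then zero-fill with DC ZVA 21 lines ahead of the stores,
       finishing any remainder in L(unroll8).  */
    return memset (dstin, val, count);

  if (count >= L1_SIZE)
    /* L(L1_prefetch): two cache lines per iteration with PRFM
       PSTL1KEEP 16 lines ahead, remainder handled by L(unroll32).  */
    return memset (dstin, val, count);

  /* L(unroll32), then L(unroll8), then a predicated tail in L(last).  */
  return memset (dstin, val, count);
}

The thresholds simply reuse the L1_SIZE and L2_SIZE values defined above, so the DC ZVA path only kicks in for fills that clearly exceed the L2 cache.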
-- 
2.17.1


* RE: [PATCH v2 0/6] aarch64: Added optimized memcpy/memmove/memset for A64FX
  2021-05-27  0:22   ` [PATCH v2 0/6] aarch64: Added optimized memcpy/memmove/memset for A64FX naohirot
@ 2021-05-27 23:50     ` naohirot
  0 siblings, 0 replies; 36+ messages in thread
From: naohirot @ 2021-05-27 23:50 UTC (permalink / raw)
  To: 'Szabolcs Nagy', libc-alpha

Hi Szabolcs,

> >   aarch64: Added optimized memcpy and memmove for A64FX
> >   aarch64: Added optimized memset for A64FX
> 
> I'll fix the whitespaces.

Great, thank you for the merges!
Naohiro


end of thread, other threads:[~2021-05-27 23:50 UTC | newest]

Thread overview: 36+ messages
2021-03-17  2:28 [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX Naohiro Tamura
2021-03-17  2:33 ` [PATCH 1/5] config: Added HAVE_SVE_ASM_SUPPORT for aarch64 Naohiro Tamura
2021-03-29 12:11   ` Szabolcs Nagy
2021-03-30  6:19     ` naohirot
2021-03-17  2:34 ` [PATCH 2/5] aarch64: Added optimized memcpy and memmove for A64FX Naohiro Tamura
2021-03-29 12:44   ` Szabolcs Nagy
2021-03-30  7:17     ` naohirot
2021-03-17  2:34 ` [PATCH 3/5] aarch64: Added optimized memset " Naohiro Tamura
2021-03-17  2:35 ` [PATCH 4/5] scripts: Added Vector Length Set test helper script Naohiro Tamura
2021-03-29 13:20   ` Szabolcs Nagy
2021-03-30  7:25     ` naohirot
2021-03-17  2:35 ` [PATCH 5/5] benchtests: Added generic_memcpy and generic_memmove to large benchtests Naohiro Tamura
2021-03-29 12:03 ` [PATCH 0/5] Added optimized memcpy/memmove/memset for A64FX Szabolcs Nagy
2021-05-10  1:45 ` naohirot
2021-05-14 13:35   ` Szabolcs Nagy
2021-05-19  0:11     ` naohirot
2021-05-12  9:23 ` [PATCH v2 0/6] aarch64: " Naohiro Tamura
2021-05-12  9:26   ` [PATCH v2 1/6] config: Added HAVE_AARCH64_SVE_ASM for aarch64 Naohiro Tamura
2021-05-26 10:05     ` Szabolcs Nagy
2021-05-12  9:27   ` [PATCH v2 2/6] aarch64: define BTI_C and BTI_J macros as NOP unless HAVE_AARCH64_BTI Naohiro Tamura
2021-05-26 10:06     ` Szabolcs Nagy
2021-05-12  9:28   ` [PATCH v2 3/6] aarch64: Added optimized memcpy and memmove for A64FX Naohiro Tamura
2021-05-26 10:19     ` Szabolcs Nagy
2021-05-12  9:28   ` [PATCH v2 4/6] aarch64: Added optimized memset " Naohiro Tamura
2021-05-26 10:22     ` Szabolcs Nagy
2021-05-12  9:29   ` [PATCH v2 5/6] scripts: Added Vector Length Set test helper script Naohiro Tamura
2021-05-12 16:58     ` Joseph Myers
2021-05-13  9:53       ` naohirot
2021-05-20  7:34     ` Naohiro Tamura
2021-05-26 10:24       ` Szabolcs Nagy
2021-05-12  9:29   ` [PATCH v2 6/6] benchtests: Fixed bench-memcpy-random: buf1: mprotect failed Naohiro Tamura
2021-05-26 10:25     ` Szabolcs Nagy
2021-05-27  0:22   ` [PATCH v2 0/6] aarch64: Added optimized memcpy/memmove/memset for A64FX naohirot
2021-05-27 23:50     ` naohirot
2021-05-27  7:42   ` [PATCH v3 1/2] aarch64: Added optimized memcpy and memmove " Naohiro Tamura
2021-05-27  7:44   ` [PATCH v3 2/2] aarch64: Added optimized memset " Naohiro Tamura
