* Ping: [Patch] aarch64: Thunderx specific memcpy and memmove @ 2017-05-01 18:27 Steve Ellcey 2017-05-01 21:20 ` Wainer dos Santos Moschetta 2017-05-03 14:01 ` Szabolcs Nagy 0 siblings, 2 replies; 19+ messages in thread From: Steve Ellcey @ 2017-05-01 18:27 UTC (permalink / raw) To: libc-alpha This is a patch ping for the aarch64 IFUNC memcpy/memmove patch. https://sourceware.org/ml/libc-alpha/2017-03/msg00596.html I sent the glibc memcpy/memmove benchmark outputs to show the speedup and responded to Wainer's include syntax question.  I don't have an answer for Szabolcs on whether we should just be doing prefetches on all platforms because I don't have other platforms to test on. Steve Ellcey sellcey@cavium.com ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove 2017-05-01 18:27 Ping: [Patch] aarch64: Thunderx specific memcpy and memmove Steve Ellcey @ 2017-05-01 21:20 ` Wainer dos Santos Moschetta 2017-05-03 14:01 ` Szabolcs Nagy 1 sibling, 0 replies; 19+ messages in thread From: Wainer dos Santos Moschetta @ 2017-05-01 21:20 UTC (permalink / raw) To: libc-alpha On 01/05/2017 15:27, Steve Ellcey wrote: > This is a patch ping for the aarch64 IFUNC memcpy/memmove patch. > > https://sourceware.org/ml/libc-alpha/2017-03/msg00596.html > > I sent the glibc memcpy/memmove benchmark outputs to show the speedup > and responded to Wainer's include syntax question. I don't have an > answer for Szabolcs on whether we should just be doing prefetches on > all platforms because I don't have other platforms to test on. I'm ok with the response you gave to my question. I don't have further comments or questions about this patch. > Steve Ellcey > sellcey@cavium.com
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove 2017-05-01 18:27 Ping: [Patch] aarch64: Thunderx specific memcpy and memmove Steve Ellcey 2017-05-01 21:20 ` Wainer dos Santos Moschetta @ 2017-05-03 14:01 ` Szabolcs Nagy 2017-05-09 3:17 ` Siddhesh Poyarekar 1 sibling, 1 reply; 19+ messages in thread From: Szabolcs Nagy @ 2017-05-03 14:01 UTC (permalink / raw) To: sellcey, libc-alpha; +Cc: nd On 01/05/17 19:27, Steve Ellcey wrote: > This is a patch ping for the aarch64 IFUNC memcpy/memmove patch. > > https://sourceware.org/ml/libc-alpha/2017-03/msg00596.html > > I sent the glibc memcpy/memmove benchmark outputs to show the speedup > and responded to Wainer's include syntax question. I don't have an > answer for Szabolcs on whether we should just be doing prefetches on > all platforms because I don't have other platforms to test on. > Wilco is still investigating how to add the prefetches to the generic code (so thunderx does not need a separate memcpy). we will first post the patches on the newlib list (to avoid copyright assignment issues) if we find a way to improve the generic code. if it turns out that a single generic memcpy does not work it makes more sense to me to organize the code differently: if we expect the generic memcpy to diverge from the thunderx one then it's better not to use the same code with ifdefs, but keep them separate, so the thunderx variant can be maintained independently by whoever cares about thunderx.
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove 2017-05-03 14:01 ` Szabolcs Nagy @ 2017-05-09 3:17 ` Siddhesh Poyarekar 2017-05-09 21:45 ` Steve Ellcey 0 siblings, 1 reply; 19+ messages in thread From: Siddhesh Poyarekar @ 2017-05-09 3:17 UTC (permalink / raw) To: Szabolcs Nagy, sellcey, libc-alpha; +Cc: nd On Wednesday 03 May 2017 07:31 PM, Szabolcs Nagy wrote: > if it turns out that a single generic memcpy does not work > it makes more sense to me to organize the code differently: > if we expect the generic memcpy to diverge from the thunderx > one then it's better not to use the same code with ifdefs, but > keep them separate, so the thunderx variant can be maintained > independently by whoever cares about thunderx. If that is the case then I think Steve might be better off posting a patch with the thunderx implementation being independent of the stock aarch64 implementation while Wilco does his investigation. That way we don't scramble for a patch late in the 2.26 cycle - there's about a month and a half left. Siddhesh
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove 2017-05-09 3:17 ` Siddhesh Poyarekar @ 2017-05-09 21:45 ` Steve Ellcey 2017-05-18 21:48 ` Steve Ellcey 2017-05-19 7:41 ` Siddhesh Poyarekar 0 siblings, 2 replies; 19+ messages in thread From: Steve Ellcey @ 2017-05-09 21:45 UTC (permalink / raw) To: Siddhesh Poyarekar, Szabolcs Nagy, libc-alpha; +Cc: nd [-- Attachment #1: Type: text/plain, Size: 2657 bytes --] On Tue, 2017-05-09 at 08:45 +0530, Siddhesh Poyarekar wrote: > On Wednesday 03 May 2017 07:31 PM, Szabolcs Nagy wrote: > > > > if it turns out that a single generic memcpy does not work > > it makes more sense to me to organize the code differently: > > if we expect the generic memcpy to diverge from the thunderx > > one then it's better not to use the same code with ifdefs, but > > keep them separate, so the thunderx variant can be maintained > > independently by whoever cares about thunderx. > If that is the case then I think Steve might be better off posting a > patch with the thunderx implementation being independent of the stock > aarch64 implementation while Wilco does his investigation.  That way > we > don't scramble for a patch late in the 2.26 cycle - there's about a > month and a half left. > > Siddhesh That sounds reasonable to me.  Here is a patch that contains a separate memcpy_thunderx implementation.  I still have some (minor) changes to the generic memcpy.S file.  One change is to use macros for the function names so that the generic multiarch memcpy can include the standard non-multiarch version.  The other is to change a couple of internal labels to external labels.  This change isn't absolutely necessary but it is helpful in the thunderx memcpy where the branches are slightly different and I would like to keep the thunderx memcpy and the generic memcpy as similar as possible so that when a change happens in one or the other it is easy to compare the two versions.  
I don't believe using different label types affects the generated code at all and personally, I find named labels easier to read than the internal numbered labels.  Being able to compare the two memcpy's is also why I kept the THUNDERX ifdef in memcpy_thunderx.S even though it is always defined there, so that the intended differences are explicit when comparing the two versions of memcpy. Tested on the top-of-tree sources with no regressions. Steve Ellcey sellcey@cavium.com 2017-05-09  Steve Ellcey  <sellcey@caviumnetworks.com> * sysdeps/aarch64/memcpy.S (MEMMOVE, MEMCPY): New macros. (memmove): Use MEMMOVE for name. (memcpy): Use MEMCPY for name.  Change internal labels to external labels. * sysdeps/aarch64/multiarch/Makefile: New file. * sysdeps/aarch64/multiarch/ifunc-impl-list.c: Likewise. * sysdeps/aarch64/multiarch/init-arch.h: Likewise. * sysdeps/aarch64/multiarch/memcpy.c: Likewise. * sysdeps/aarch64/multiarch/memcpy_generic.S: Likewise. * sysdeps/aarch64/multiarch/memcpy_thunderx.S: Likewise. * sysdeps/aarch64/multiarch/memmove.c: Likewise. [-- Attachment #2: ifunc.patch --] [-- Type: text/x-patch, Size: 19320 bytes --] diff --git a/sysdeps/aarch64/memcpy.S b/sysdeps/aarch64/memcpy.S index 29af8b1..88a3b90 100644 --- a/sysdeps/aarch64/memcpy.S +++ b/sysdeps/aarch64/memcpy.S @@ -59,7 +59,14 @@ Overlapping large forward memmoves use a loop that copies backwards. */ -ENTRY_ALIGN (memmove, 6) +#ifndef MEMMOVE +# define MEMMOVE memmove +#endif +#ifndef MEMCPY +# define MEMCPY memcpy +#endif + +ENTRY_ALIGN (MEMMOVE, 6) DELOUSE (0) DELOUSE (1) @@ -71,9 +78,9 @@ ENTRY_ALIGN (memmove, 6) b.lo L(move_long) /* Common case falls through into memcpy. */ -END (memmove) -libc_hidden_builtin_def (memmove) -ENTRY (memcpy) +END (MEMMOVE) +libc_hidden_builtin_def (MEMMOVE) +ENTRY (MEMCPY) DELOUSE (0) DELOUSE (1) @@ -169,8 +176,8 @@ L(copy_long): ldp C_l, C_h, [src, 48] ldp D_l, D_h, [src, 64]! subs count, count, 128 + 16 /* Test and readjust count. 
*/ - b.ls 2f -1: + b.ls L(last64) +L(loop64): stp A_l, A_h, [dst, 16] ldp A_l, A_h, [src, 16] stp B_l, B_h, [dst, 32] @@ -180,12 +187,12 @@ L(copy_long): stp D_l, D_h, [dst, 64]! ldp D_l, D_h, [src, 64]! subs count, count, 64 - b.hi 1b + b.hi L(loop64) /* Write the last full set of 64 bytes. The remainder is at most 64 bytes, so it is safe to always copy 64 bytes from the end even if there is just 1 byte left. */ -2: +L(last64): ldp E_l, E_h, [srcend, -64] stp A_l, A_h, [dst, 16] ldp A_l, A_h, [srcend, -48] @@ -256,5 +263,5 @@ L(move_long): stp C_l, C_h, [dstin] 3: ret -END (memcpy) -libc_hidden_builtin_def (memcpy) +END (MEMCPY) +libc_hidden_builtin_def (MEMCPY) diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile index e69de29..78d52c7 100644 --- a/sysdeps/aarch64/multiarch/Makefile +++ b/sysdeps/aarch64/multiarch/Makefile @@ -0,0 +1,3 @@ +ifeq ($(subdir),string) +sysdep_routines += memcpy_generic memcpy_thunderx +endif diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c index e69de29..c4f23df 100644 --- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c +++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c @@ -0,0 +1,51 @@ +/* Enumerate available IFUNC implementations of a function. AARCH64 version. + Copyright (C) 2017 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. 
+ + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <http://www.gnu.org/licenses/>. */ + +#include <assert.h> +#include <string.h> +#include <wchar.h> +#include <ldsodefs.h> +#include <ifunc-impl-list.h> +#include <init-arch.h> +#include <stdio.h> + +/* Maximum number of IFUNC implementations. */ +#define MAX_IFUNC 2 + +size_t +__libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array, + size_t max) +{ + assert (max >= MAX_IFUNC); + + size_t i = 0; + + INIT_ARCH (); + + /* Support sysdeps/aarch64/multiarch/memcpy.c and memmove.c. */ + IFUNC_IMPL (i, name, memcpy, + IFUNC_IMPL_ADD (array, i, memcpy, IS_THUNDERX (midr), + __memcpy_thunderx) + IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_generic)) + IFUNC_IMPL (i, name, memmove, + IFUNC_IMPL_ADD (array, i, memmove, IS_THUNDERX (midr), + __memmove_thunderx) + IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_generic)) + + return i; +} diff --git a/sysdeps/aarch64/multiarch/init-arch.h b/sysdeps/aarch64/multiarch/init-arch.h index e69de29..3af442c 100644 --- a/sysdeps/aarch64/multiarch/init-arch.h +++ b/sysdeps/aarch64/multiarch/init-arch.h @@ -0,0 +1,23 @@ +/* Define INIT_ARCH so that midr is initialized before use by IFUNCs. + This file is part of the GNU C Library. + Copyright (C) 2017 Free Software Foundation, Inc. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. 
+ + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <http://www.gnu.org/licenses/>. */ + +#include <ldsodefs.h> + +#define INIT_ARCH() \ + uint64_t __attribute__((unused)) midr = \ + GLRO(dl_aarch64_cpu_features).midr_el1; diff --git a/sysdeps/aarch64/multiarch/memcpy.c b/sysdeps/aarch64/multiarch/memcpy.c index e69de29..9f73efb 100644 --- a/sysdeps/aarch64/multiarch/memcpy.c +++ b/sysdeps/aarch64/multiarch/memcpy.c @@ -0,0 +1,39 @@ +/* Multiple versions of memcpy. AARCH64 version. + Copyright (C) 2017 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <http://www.gnu.org/licenses/>. */ + +/* Define multiple versions only for the definition in libc. */ + +#if IS_IN (libc) +/* Redefine memcpy so that the compiler won't complain about the type + mismatch with the IFUNC selector in strong_alias, below. */ +# undef memcpy +# define memcpy __redirect_memcpy +# include <string.h> +# include <init-arch.h> + +extern __typeof (__redirect_memcpy) __libc_memcpy; + +extern __typeof (__redirect_memcpy) __memcpy_generic attribute_hidden; +extern __typeof (__redirect_memcpy) __memcpy_thunderx attribute_hidden; + +libc_ifunc (__libc_memcpy, + IS_THUNDERX (midr) ? 
__memcpy_thunderx : __memcpy_generic); + +# undef memcpy +strong_alias (__libc_memcpy, memcpy); +#endif diff --git a/sysdeps/aarch64/multiarch/memcpy_generic.S b/sysdeps/aarch64/multiarch/memcpy_generic.S index e69de29..041a779 100644 --- a/sysdeps/aarch64/multiarch/memcpy_generic.S +++ b/sysdeps/aarch64/multiarch/memcpy_generic.S @@ -0,0 +1,42 @@ +/* A Generic Optimized memcpy implementation for AARCH64. + Copyright (C) 2017 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <http://www.gnu.org/licenses/>. */ + +/* The actual memcpy and memmove code is in ../memcpy.S. If we are + building libc this file defines __memcpy_generic and __memmove_generic. + Otherwise the include of ../memcpy.S will define the normal __memcpy + and__memmove entry points. */ + +#include <sysdep.h> + +#if IS_IN (libc) + +# define MEMCPY __memcpy_generic +# define MEMMOVE __memmove_generic + +/* Do not hide the generic versions of memcpy and memmove, we use them + internally. */ +# undef libc_hidden_builtin_def +# define libc_hidden_builtin_def(name) + +/* It doesn't make sense to send libc-internal memcpy calls through a PLT. 
*/ + .globl __GI_memcpy; __GI_memcpy = __memcpy_generic + .globl __GI_memmove; __GI_memmove = __memmove_generic + +#endif + +#include "../memcpy.S" diff --git a/sysdeps/aarch64/multiarch/memcpy_thunderx.S b/sysdeps/aarch64/multiarch/memcpy_thunderx.S index e69de29..5ac9e34 100644 --- a/sysdeps/aarch64/multiarch/memcpy_thunderx.S +++ b/sysdeps/aarch64/multiarch/memcpy_thunderx.S @@ -0,0 +1,326 @@ +/* A Thunderx Optimized memcpy implementation for AARCH64. + Copyright (C) 2017 Free Software Foundation, Inc. + + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <http://www.gnu.org/licenses/>. */ + +/* The actual code in this memcpy and memmove should be identical to the + generic version except for the code under '#ifdef THUNDERX'. This is + to make is easier to keep this version and the generic version in sync + for changes that are not specific to thunderx. */ + +#include <sysdep.h> + +/* Assumptions: + * + * ARMv8-a, AArch64, unaligned accesses. 
+ * + */ + +#define dstin x0 +#define src x1 +#define count x2 +#define dst x3 +#define srcend x4 +#define dstend x5 +#define A_l x6 +#define A_lw w6 +#define A_h x7 +#define A_hw w7 +#define B_l x8 +#define B_lw w8 +#define B_h x9 +#define C_l x10 +#define C_h x11 +#define D_l x12 +#define D_h x13 +#define E_l src +#define E_h count +#define F_l srcend +#define F_h dst +#define G_l count +#define G_h dst +#define tmp1 x14 + +/* Copies are split into 3 main cases: small copies of up to 16 bytes, + medium copies of 17..96 bytes which are fully unrolled. Large copies + of more than 96 bytes align the destination and use an unrolled loop + processing 64 bytes per iteration. + In order to share code with memmove, small and medium copies read all + data before writing, allowing any kind of overlap. So small, medium + and large backwards memmoves are handled by falling through into memcpy. + Overlapping large forward memmoves use a loop that copies backwards. +*/ + +#ifndef MEMMOVE +# define MEMMOVE memmove +#endif +#ifndef MEMCPY +# define MEMCPY memcpy +#endif + +#if IS_IN (libc) + +# undef MEMCPY +# define MEMCPY __memcpy_thunderx +# undef MEMMOVE +# define MEMMOVE __memmove_thunderx +# define USE_THUNDERX + +ENTRY_ALIGN (MEMMOVE, 6) + + DELOUSE (0) + DELOUSE (1) + DELOUSE (2) + + sub tmp1, dstin, src + cmp count, 96 + ccmp tmp1, count, 2, hi + b.lo L(move_long) + + /* Common case falls through into memcpy. */ +END (MEMMOVE) +libc_hidden_builtin_def (MEMMOVE) +ENTRY (MEMCPY) + + DELOUSE (0) + DELOUSE (1) + DELOUSE (2) + + prfm PLDL1KEEP, [src] + add srcend, src, count + add dstend, dstin, count + cmp count, 16 + b.ls L(copy16) + cmp count, 96 + b.hi L(copy_long) + + /* Medium copies: 17..96 bytes. 
*/ + sub tmp1, count, 1 + ldp A_l, A_h, [src] + tbnz tmp1, 6, L(copy96) + ldp D_l, D_h, [srcend, -16] + tbz tmp1, 5, 1f + ldp B_l, B_h, [src, 16] + ldp C_l, C_h, [srcend, -32] + stp B_l, B_h, [dstin, 16] + stp C_l, C_h, [dstend, -32] +1: + stp A_l, A_h, [dstin] + stp D_l, D_h, [dstend, -16] + ret + + .p2align 4 + /* Small copies: 0..16 bytes. */ +L(copy16): + cmp count, 8 + b.lo 1f + ldr A_l, [src] + ldr A_h, [srcend, -8] + str A_l, [dstin] + str A_h, [dstend, -8] + ret + .p2align 4 +1: + tbz count, 2, 1f + ldr A_lw, [src] + ldr A_hw, [srcend, -4] + str A_lw, [dstin] + str A_hw, [dstend, -4] + ret + + /* Copy 0..3 bytes. Use a branchless sequence that copies the same + byte 3 times if count==1, or the 2nd byte twice if count==2. */ +1: + cbz count, 2f + lsr tmp1, count, 1 + ldrb A_lw, [src] + ldrb A_hw, [srcend, -1] + ldrb B_lw, [src, tmp1] + strb A_lw, [dstin] + strb B_lw, [dstin, tmp1] + strb A_hw, [dstend, -1] +2: ret + + .p2align 4 + /* Copy 64..96 bytes. Copy 64 bytes from the start and + 32 bytes from the end. */ +L(copy96): + ldp B_l, B_h, [src, 16] + ldp C_l, C_h, [src, 32] + ldp D_l, D_h, [src, 48] + ldp E_l, E_h, [srcend, -32] + ldp F_l, F_h, [srcend, -16] + stp A_l, A_h, [dstin] + stp B_l, B_h, [dstin, 16] + stp C_l, C_h, [dstin, 32] + stp D_l, D_h, [dstin, 48] + stp E_l, E_h, [dstend, -32] + stp F_l, F_h, [dstend, -16] + ret + + /* Align DST to 16 byte alignment so that we don't cross cache line + boundaries on both loads and stores. There are at least 96 bytes + to copy, so copy 16 bytes unaligned and then align. The loop + copies 64 bytes per iteration and prefetches one iteration ahead. */ + + .p2align 4 +L(copy_long): + +# ifdef USE_THUNDERX + + /* On thunderx, large memcpy's are helped by software prefetching. + This loop is identical to the one below it but with prefetching + instructions included. 
For loops that are less than 32768 bytes, + the prefetching does not help and slow the code down so we only + use the prefetching loop for the largest memcpys. */ + + cmp count, #32768 + b.lo L(copy_long_without_prefetch) + and tmp1, dstin, 15 + bic dst, dstin, 15 + ldp D_l, D_h, [src] + sub src, src, tmp1 + prfm pldl1strm, [src, 384] + add count, count, tmp1 /* Count is now 16 too large. */ + ldp A_l, A_h, [src, 16] + stp D_l, D_h, [dstin] + ldp B_l, B_h, [src, 32] + ldp C_l, C_h, [src, 48] + ldp D_l, D_h, [src, 64]! + subs count, count, 128 + 16 /* Test and readjust count. */ + +L(prefetch_loop64): + tbz src, #6, 1f + prfm pldl1strm, [src, 512] +1: + stp A_l, A_h, [dst, 16] + ldp A_l, A_h, [src, 16] + stp B_l, B_h, [dst, 32] + ldp B_l, B_h, [src, 32] + stp C_l, C_h, [dst, 48] + ldp C_l, C_h, [src, 48] + stp D_l, D_h, [dst, 64]! + ldp D_l, D_h, [src, 64]! + subs count, count, 64 + b.hi L(prefetch_loop64) + b L(last64) + +L(copy_long_without_prefetch): +# endif + + and tmp1, dstin, 15 + bic dst, dstin, 15 + ldp D_l, D_h, [src] + sub src, src, tmp1 + add count, count, tmp1 /* Count is now 16 too large. */ + ldp A_l, A_h, [src, 16] + stp D_l, D_h, [dstin] + ldp B_l, B_h, [src, 32] + ldp C_l, C_h, [src, 48] + ldp D_l, D_h, [src, 64]! + subs count, count, 128 + 16 /* Test and readjust count. */ + b.ls L(last64) +L(loop64): + stp A_l, A_h, [dst, 16] + ldp A_l, A_h, [src, 16] + stp B_l, B_h, [dst, 32] + ldp B_l, B_h, [src, 32] + stp C_l, C_h, [dst, 48] + ldp C_l, C_h, [src, 48] + stp D_l, D_h, [dst, 64]! + ldp D_l, D_h, [src, 64]! + subs count, count, 64 + b.hi L(loop64) + + /* Write the last full set of 64 bytes. The remainder is at most 64 + bytes, so it is safe to always copy 64 bytes from the end even if + there is just 1 byte left. 
*/ +L(last64): + ldp E_l, E_h, [srcend, -64] + stp A_l, A_h, [dst, 16] + ldp A_l, A_h, [srcend, -48] + stp B_l, B_h, [dst, 32] + ldp B_l, B_h, [srcend, -32] + stp C_l, C_h, [dst, 48] + ldp C_l, C_h, [srcend, -16] + stp D_l, D_h, [dst, 64] + stp E_l, E_h, [dstend, -64] + stp A_l, A_h, [dstend, -48] + stp B_l, B_h, [dstend, -32] + stp C_l, C_h, [dstend, -16] + ret + + .p2align 4 +L(move_long): + cbz tmp1, 3f + + add srcend, src, count + add dstend, dstin, count + + /* Align dstend to 16 byte alignment so that we don't cross cache line + boundaries on both loads and stores. There are at least 96 bytes + to copy, so copy 16 bytes unaligned and then align. The loop + copies 64 bytes per iteration and prefetches one iteration ahead. */ + + and tmp1, dstend, 15 + ldp D_l, D_h, [srcend, -16] + sub srcend, srcend, tmp1 + sub count, count, tmp1 + ldp A_l, A_h, [srcend, -16] + stp D_l, D_h, [dstend, -16] + ldp B_l, B_h, [srcend, -32] + ldp C_l, C_h, [srcend, -48] + ldp D_l, D_h, [srcend, -64]! + sub dstend, dstend, tmp1 + subs count, count, 128 + b.ls 2f + + nop +1: + stp A_l, A_h, [dstend, -16] + ldp A_l, A_h, [srcend, -16] + stp B_l, B_h, [dstend, -32] + ldp B_l, B_h, [srcend, -32] + stp C_l, C_h, [dstend, -48] + ldp C_l, C_h, [srcend, -48] + stp D_l, D_h, [dstend, -64]! + ldp D_l, D_h, [srcend, -64]! + subs count, count, 64 + b.hi 1b + + /* Write the last full set of 64 bytes. The remainder is at most 64 + bytes, so it is safe to always copy 64 bytes from the start even if + there is just 1 byte left. 
*/ +2: + ldp G_l, G_h, [src, 48] + stp A_l, A_h, [dstend, -16] + ldp A_l, A_h, [src, 32] + stp B_l, B_h, [dstend, -32] + ldp B_l, B_h, [src, 16] + stp C_l, C_h, [dstend, -48] + ldp C_l, C_h, [src] + stp D_l, D_h, [dstend, -64] + stp G_l, G_h, [dstin, 48] + stp A_l, A_h, [dstin, 32] + stp B_l, B_h, [dstin, 16] + stp C_l, C_h, [dstin] +3: ret + +END (MEMCPY) +libc_hidden_builtin_def (MEMCPY) + +#endif diff --git a/sysdeps/aarch64/multiarch/memmove.c b/sysdeps/aarch64/multiarch/memmove.c index e69de29..34c6b29 100644 --- a/sysdeps/aarch64/multiarch/memmove.c +++ b/sysdeps/aarch64/multiarch/memmove.c @@ -0,0 +1,39 @@ +/* Multiple versions of memmove. AARCH64 version. + Copyright (C) 2017 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <http://www.gnu.org/licenses/>. */ + +/* Define multiple versions only for the definition in libc. */ + +#if IS_IN (libc) +/* Redefine memmove so that the compiler won't complain about the type + mismatch with the IFUNC selector in strong_alias, below. 
*/ +# undef memmove +# define memmove __redirect_memmove +# include <string.h> +# include <init-arch.h> + +extern __typeof (__redirect_memmove) __libc_memmove; + +extern __typeof (__redirect_memmove) __memmove_generic attribute_hidden; +extern __typeof (__redirect_memmove) __memmove_thunderx attribute_hidden; + +libc_ifunc (__libc_memmove, + IS_THUNDERX (midr) ? __memmove_thunderx : __memmove_generic); + +# undef memmove +strong_alias (__libc_memmove, memmove); +#endif
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove 2017-05-09 21:45 ` Steve Ellcey @ 2017-05-18 21:48 ` Steve Ellcey 2017-05-19 7:41 ` Siddhesh Poyarekar 1 sibling, 0 replies; 19+ messages in thread From: Steve Ellcey @ 2017-05-18 21:48 UTC (permalink / raw) To: Siddhesh Poyarekar, Szabolcs Nagy, libc-alpha; +Cc: nd On Tue, 2017-05-09 at 14:45 -0700, Steve Ellcey wrote: > > 2017-05-09  Steve Ellcey  <sellcey@caviumnetworks.com> > > * sysdeps/aarch64/memcpy.S (MEMMOVE, MEMCPY): New macros. > (memmove): Use MEMMOVE for name. > (memcpy): Use MEMCPY for name.  Change internal labels > to external labels. > * sysdeps/aarch64/multiarch/Makefile: New file. > * sysdeps/aarch64/multiarch/ifunc-impl-list.c: Likewise. > * sysdeps/aarch64/multiarch/init-arch.h: Likewise. > * sysdeps/aarch64/multiarch/memcpy.c: Likewise. > * sysdeps/aarch64/multiarch/memcpy_generic.S: Likewise. > * sysdeps/aarch64/multiarch/memcpy_thunderx.S: Likewise. > * sysdeps/aarch64/multiarch/memmove.c: Likewise. Ping. Steve Ellcey sellcey@cavium.com
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove 2017-05-09 21:45 ` Steve Ellcey 2017-05-18 21:48 ` Steve Ellcey @ 2017-05-19 7:41 ` Siddhesh Poyarekar [not found] ` <DM5PR07MB34662F805C1EDE45882B82F6F5F90@DM5PR07MB3466.namprd07.prod.outlook.com> 1 sibling, 1 reply; 19+ messages in thread From: Siddhesh Poyarekar @ 2017-05-19 7:41 UTC (permalink / raw) To: sellcey, Szabolcs Nagy, libc-alpha; +Cc: nd On Wednesday 10 May 2017 03:15 AM, Steve Ellcey wrote: > That sounds reasonable to me. Here is a patch that contains a separate > memcpy_thunderx implementation. I still have some (minor) changes to > the generic memcpy.S file. One change is to use macros for the > function names so that the generic multiarch memcpy can include the > standard non-multiarch version. The other is to change a couple of > internal labels to external labels. This change isn't absolutely > necessary but it is helpful in the thunderx memcpy where the branches > are slightly different and I would like to keep the thunderx memcpy and > the generic memcpy as similar as possible so that when a change happens > in one or the other it is easy to compare the two versions. I don't > believe using different label types affects the generated code at all > and personally, I find named labels easier to read than the internal > numbered labels. Being able to compare the two memcpy's is also why I > kept the THUNDERX ifdef in memcpy_thunderx.S even though it is always > defined there, so that the intended differences are explicit when > comparing the two versions of memcpy. > > Tested on the top-of-tree sources with no regressions. The patch looks fine. Please coordinate with Szabolcs and Wilco on the way forward. You could either commit now and let Wilco rebase on top of your changes or wait till he is done with his analysis and then figure out the next step. 
Siddhesh PS: You don't really need the USE_THUNDERX macros in your memcpy anymore, but it's not a big deal if that is what you prefer to keep track of generic memcpy changes since you own that code.
[parent not found: <DM5PR07MB34662F805C1EDE45882B82F6F5F90@DM5PR07MB3466.namprd07.prod.outlook.com>]
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove [not found] ` <DM5PR07MB34662F805C1EDE45882B82F6F5F90@DM5PR07MB3466.namprd07.prod.outlook.com> @ 2017-05-24 17:04 ` Szabolcs Nagy 2017-05-25 6:42 ` Siddhesh Poyarekar 2017-05-25 16:22 ` Steve Ellcey 0 siblings, 2 replies; 19+ messages in thread From: Szabolcs Nagy @ 2017-05-24 17:04 UTC (permalink / raw) To: Ellcey, Steve, Siddhesh Poyarekar, libc-alpha, Wilco Dijkstra; +Cc: nd On 23/05/17 21:12, Ellcey, Steve wrote: > Wilco and Szabolcs, > > > Do you have any objection to me going ahead and checking in this patch? > you can commit the patch, however - we are still looking at posting an updated generic memcpy, it just takes longer to get it through than expected, when that happens the thunderx specific memcpy will be reconsidered and may get removed. (we try to do it in this release cycle) - i don't know if you plan to make more changes to the thunderx memcpy, if prefetching is the only change then it's likely that we can agree on a generic version that's good enough. if you do plan to make further changes, then keep in mind that we try to have same/similar generic memcpy across c runtimes and if your change is good for generic we might not be able to use the code outside of glibc (so newlib, bionic, freebsd,.. memcpy would diverge) - non-thunderx systems are affected: static linked code using memcpy will start to go through an indirection (iplt) instead of direct call. if there are complaints about it or other ifunc related issues come up, then again we will have to reconsider it. so the patch can go in with an understanding that it may go out. 
> > Steve Ellcey > > sellcey@cavium.com > > > > --------------------------------------------------------------------------------------------------------------- > *From:* Siddhesh Poyarekar <siddhesh@gotplt.org> > *Sent:* Friday, May 19, 2017 12:41 AM > *To:* Ellcey, Steve; Szabolcs Nagy; libc-alpha > *Cc:* nd@arm.com > *Subject:* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove > > On Wednesday 10 May 2017 03:15 AM, Steve Ellcey wrote: >> That sounds reasonable to me. Here is a patch that contains a separate >> memcpy_thunderx implementation. I still have some (minor) changes to >> the generic memcpy.S file. One change is to use macros for the >> function names so that the generic multiarch memcpy can include the >> standard non-multiarch version. The other is to change a couple of >> internal labels to external labels. This change isn't absolutely >> necessary but it is helpful in the thunderx memcpy where the branches >> are slightly different and I would like to keep the thunderx memcpy and >> the generic memcpy as similar as possible so that when a change happens >> in one or the other it is easy to compare the two versions. I don't >> believe using different label types affects the generated code at all >> and personally, I find named labels easier to read than the internal >> numbered labels. Being able to compare the two memcpy's is also why I >> kept the THUNDERX ifdef in memcpy_thunderx.S even though it is always >> defined there, so that the intended differences are explicit when >> comparing the two versions of memcpy. >> >> Tested on the top-of-tree sources with no regressions. > > The patch looks fine. Please coordinate with Szabolcs and Wilco on the > way forward. You could either commit now and let Wilco rebase on top of > your changes or wait till he is done with his analysis and then figure > out the next step. 
> > Siddhesh
>
> PS: You don't really need the USE_THUNDERX macros in your memcpy
> anymore, but it's not a big deal if that is what you prefer to keep
> track of generic memcpy changes, since you own that code.

^ permalink raw reply	[flat|nested] 19+ messages in thread
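The function-name-macro change Steve describes in the quoted patch notes (the generic source names its entry point via a macro so the multiarch build can re-include the same code under an internal symbol) can be sketched in C. This is a hedged illustration only: `MEMCPY_NAME`, `memcpy_plain`, and the suggested `__memcpy_generic` wrapper are stand-in names, not the actual macros in the glibc memcpy.S patch, and the byte loop stands in for the optimized aarch64 assembly.

```c
#include <stddef.h>

/* The generic implementation names its entry point via a macro, so a
 * hypothetical multiarch wrapper file could do:
 *     #define MEMCPY_NAME __memcpy_generic
 *     #include "memcpy_generic_body.c"
 * and get the same code under an internal symbol, while the plain
 * non-multiarch build gets the default public name. */
#ifndef MEMCPY_NAME
# define MEMCPY_NAME memcpy_plain   /* default, non-multiarch name */
#endif

void *MEMCPY_NAME(void *dst, const void *src, size_t n)
{
    /* simple byte-wise loop in place of the optimized assembly */
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n--)
        *d++ = *s++;
    return dst;
}
```

Because both builds compile the same body, a fix made to the generic copy automatically lands in every renamed variant that includes it, which is the maintainability point Steve is making.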
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove
  2017-05-24 17:04 ` Szabolcs Nagy
@ 2017-05-25  6:42   ` Siddhesh Poyarekar
  2017-05-25 16:28     ` Andrew Pinski
  1 sibling, 1 reply; 19+ messages in thread
From: Siddhesh Poyarekar @ 2017-05-25 6:42 UTC (permalink / raw)
  To: Szabolcs Nagy, Ellcey, Steve, libc-alpha, Wilco Dijkstra; +Cc: nd

On Wednesday 24 May 2017 10:34 PM, Szabolcs Nagy wrote:
> - i don't know if you plan to make more changes to the thunderx
> memcpy, if prefetching is the only change then it's likely that
> we can agree on a generic version that's good enough.  if you
> do plan to make further changes, then keep in mind that we try
> to have same/similar generic memcpy across c runtimes and if
> your change is good for generic we might not be able to use the
> code outside of glibc (so newlib, bionic, freebsd,.. memcpy
> would diverge)

Steve, if that is desirable then please consider contributing the code
to cortex-strings[1].

> - non-thunderx systems are affected: static linked code using
> memcpy will start to go through an indirection (iplt) instead
> of direct call. if there are complaints about it or other ifunc
> related issues come up, then again we will have to reconsider it.

They could use a library built with --disable-multiarch.  The only
place I can see this happening is on systems that currently need
bespoke images, e.g. raspberry pis or similar form factors.  Since
they're building custom images anyway, it shouldn't be too hard to add
a glibc built with --disable-multiarch for them.

Removing multiarch completely is not an option, since there's thunderx
*and* falkor (yes yes, coming soon, I promise!) with their own routines
and perhaps more in the future.

Siddhesh

[1] https://launchpad.net/cortex-strings

^ permalink raw reply	[flat|nested] 19+ messages in thread
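The indirection being weighed here can be illustrated with a portable stand-in. glibc's actual mechanism is an ELF IFUNC relocation resolved at load time (which is where the iplt hop for statically linked binaries comes from); the sketch below substitutes a constructor-initialized function pointer, and the CPU check and all function names are hypothetical.

```c
#include <stddef.h>

/* Two hypothetical variants with identical semantics. */
static void *memcpy_generic(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n--)
        *d++ = *s++;
    return dst;
}

static void *memcpy_thunderx(void *dst, const void *src, size_t n)
{
    /* the real variant adds prefetching; the semantics are the same */
    return memcpy_generic(dst, src, n);
}

/* Stand-in for the CPU identification an aarch64 ifunc resolver does. */
static int cpu_is_thunderx(void)
{
    return 0;   /* pretend we are on a non-thunderx system */
}

/* Selected once at startup; every later call pays one indirect jump,
 * which is the per-call overhead the thread is discussing. */
static void *(*memcpy_impl)(void *, const void *, size_t) = memcpy_generic;

__attribute__((constructor))
static void select_memcpy(void)
{
    memcpy_impl = cpu_is_thunderx() ? memcpy_thunderx : memcpy_generic;
}

void *my_memcpy(void *dst, const void *src, size_t n)
{
    return memcpy_impl(dst, src, n);
}
```

Building with --disable-multiarch, as Siddhesh suggests, corresponds to compiling only one variant and calling it directly, so the indirection disappears along with the per-CPU selection.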
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove
  2017-05-25  6:42 ` Siddhesh Poyarekar
@ 2017-05-25 16:28   ` Andrew Pinski
  2017-05-25 16:43     ` Ramana Radhakrishnan
  2017-05-25 17:49     ` Wilco Dijkstra
  0 siblings, 2 replies; 19+ messages in thread
From: Andrew Pinski @ 2017-05-25 16:28 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: Szabolcs Nagy, Ellcey, Steve, libc-alpha, Wilco Dijkstra, nd

On Wed, May 24, 2017 at 11:42 PM, Siddhesh Poyarekar
<siddhesh@gotplt.org> wrote:
> On Wednesday 24 May 2017 10:34 PM, Szabolcs Nagy wrote:
>> - i don't know if you plan to make more changes to the thunderx
>> memcpy, if prefetching is the only change then it's likely that
>> we can agree on a generic version that's good enough.  if you
>> do plan to make further changes, then keep in mind that we try
>> to have same/similar generic memcpy across c runtimes and if
>> your change is good for generic we might not be able to use the
>> code outside of glibc (so newlib, bionic, freebsd,.. memcpy
>> would diverge)
>
> Steve, if that is desirable then please consider contributing the code
> to cortex-strings[1].

One memcpy does not fit all micro-architectures.  Just look at x86,
where they have many different versions and even do selection based on
cache size (see the current discussion about the memcpy regression).

>> - non-thunderx systems are affected: static linked code using
>> memcpy will start to go through an indirection (iplt) instead
>> of direct call. if there are complaints about it or other ifunc
>> related issues come up, then again we will have to reconsider it.

Just to answer this: it is already true on x86 and PowerPC, so there
should be no difference on aarch64 compared to those two targets.

> They could use a library built with --disable-multiarch.  The only
> place I can see this happening is on systems that currently need
> bespoke images, e.g. raspberry pis or similar form factors.  Since
> they're building custom images anyway, it shouldn't be too hard to add
> a glibc built with --disable-multiarch for them.
>
> Removing multiarch completely is not an option since there's thunderx
> *and* falkor (yes yes, coming soon, I promise!) with their own routines
> and perhaps more in future.

A ThunderX2T99 version should be posted by the end of next week.

> Siddhesh
>
> [1] https://launchpad.net/cortex-strings

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove
  2017-05-25 16:28 ` Andrew Pinski
@ 2017-05-25 16:43   ` Ramana Radhakrishnan
  0 siblings, 0 replies; 19+ messages in thread
From: Ramana Radhakrishnan @ 2017-05-25 16:43 UTC (permalink / raw)
  To: Andrew Pinski
  Cc: Siddhesh Poyarekar, Szabolcs Nagy, Ellcey, Steve, libc-alpha,
      Wilco Dijkstra, nd

On Thu, May 25, 2017 at 5:28 PM, Andrew Pinski <pinskia@gmail.com> wrote:
> On Wed, May 24, 2017 at 11:42 PM, Siddhesh Poyarekar
> <siddhesh@gotplt.org> wrote:
>> On Wednesday 24 May 2017 10:34 PM, Szabolcs Nagy wrote:
>>> - i don't know if you plan to make more changes to the thunderx
>>> memcpy, if prefetching is the only change then it's likely that
>>> we can agree on a generic version that's good enough.  if you
>>> do plan to make further changes, then keep in mind that we try
>>> to have same/similar generic memcpy across c runtimes and if
>>> your change is good for generic we might not be able to use the
>>> code outside of glibc (so newlib, bionic, freebsd,.. memcpy
>>> would diverge)
>>
>> Steve, if that is desirable then please consider contributing the code
>> to cortex-strings[1].
>
> One memcpy does not fit all micro-arch.  Just look at x86, where they
> have many different versions and even do selection based on cache size
> (see the current discussion about the memcpy regression).

No, it doesn't, but that's not an excuse for putting in multiple copies
without the engineering work to see whether the generic memcpy can be
improved.  Otherwise it's just bloatware.

regards
Ramana

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove
  2017-05-25 16:28 ` Andrew Pinski
  2017-05-25 16:43 ` Ramana Radhakrishnan
@ 2017-05-25 17:49   ` Wilco Dijkstra
  2017-05-25 19:26     ` Siddhesh Poyarekar
  1 sibling, 1 reply; 19+ messages in thread
From: Wilco Dijkstra @ 2017-05-25 17:49 UTC (permalink / raw)
  To: Andrew Pinski, Siddhesh Poyarekar
  Cc: Szabolcs Nagy, Ellcey, Steve, libc-alpha, nd

Andrew Pinski <pinskia@gmail.com> wrote:
>
> One memcpy does not fit all micro-arch.  Just look at x86, where they
> have many different versions and even do selection based on cache size
> (see the current discussion about the memcpy regression).

Given the number of micro-architectures already existing, it would be a
really bad situation to end up with one memcpy per micro-architecture...

Micro-architectures will tend to converge rather than diverge as
performance levels increase.  So I believe it's generally best to use
the same instructions for memcpy as for compiled code, as that is what
CPUs will actually encounter and optimize for.  For the rare, very
large copies we could do something different if it helps (e.g.
prefetch, non-temporals, SIMD registers, etc.).

>>> - non-thunderx systems are affected: static linked code using
>>> memcpy will start to go through an indirection (iplt) instead
>>> of direct call. if there are complaints about it or other ifunc
>>> related issues come up, then again we will have to reconsider it.
>
> Just to answer this.  This is true on x86 and PowerPC already so there
> should be no difference on aarch64 than those two targets.

An ifunc has a measurable overhead unfortunately, and that would no
longer be trivially avoidable via static linking.  Most calls to memcpy
tend to be very small copies.  Maybe we should investigate statically
linking the small-copy part of memcpy with, say, -O3?

Cheers,
Wilco

^ permalink raw reply	[flat|nested] 19+ messages in thread
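One way to read Wilco's suggestion is an always-inlined fast path that handles the common small sizes at the call site (avoiding the ifunc/iplt indirection entirely) and only makes the out-of-line call for larger copies. This is a rough sketch under that interpretation, not code from the patch; the 16-byte threshold and the function name are illustrative.

```c
#include <stddef.h>
#include <string.h>

static inline void *memcpy_inline_small(void *dst, const void *src, size_t n)
{
    if (n <= 16) {
        unsigned char *d = dst;
        const unsigned char *s = src;
        if (n >= 8) {
            /* copy first and last 8 bytes; the overlap covers 8..16.
             * compilers lower fixed-size memcpy to plain load/store pairs,
             * so no library call happens on this path. */
            memcpy(d, s, 8);
            memcpy(d + n - 8, s + n - 8, 8);
        } else {
            for (size_t i = 0; i < n; i++)
                d[i] = s[i];
        }
        return dst;
    }
    /* large copy: fall back to the out-of-line (possibly ifunc) memcpy */
    return memcpy(dst, src, n);
}
```

Since, as Wilco notes, most memcpy calls are very small, this would remove the indirect-call overhead from the common case while leaving large copies free to dispatch to a CPU-specific routine.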
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove
  2017-05-25 17:49 ` Wilco Dijkstra
@ 2017-05-25 19:26   ` Siddhesh Poyarekar
  2017-05-25 21:04     ` Ramana Radhakrishnan
  0 siblings, 1 reply; 19+ messages in thread
From: Siddhesh Poyarekar @ 2017-05-25 19:26 UTC (permalink / raw)
  To: Wilco Dijkstra, Andrew Pinski
  Cc: Szabolcs Nagy, Ellcey, Steve, libc-alpha, nd

On Thursday 25 May 2017 11:19 PM, Wilco Dijkstra wrote:
> Given the number of micro architectures already existing, it would be a
> really bad situation to end up with one memcpy per micro architecture...

It's not just per micro-architecture...

> Micro architectures will tend to converge rather than diverge as
> performance level increases. So I believe it's generally best to use
> the same instructions for memcpy as for compiled code as that is what
> CPUs will actually encounter and optimize for. For the rare, very large
> copies we could do something different if it helps (eg. prefetch,
> non-temporals, SIMD registers etc).

... because, as you say, micro-architectures may well converge over time
to some extent, but you will still end up having multiple memcpy
implementations taking advantage of different features in the aarch64
architecture over time.  For example, SVE routines vs. non-SVE routines.
You'll need both, and looking at how x86 has evolved, there will be much
more to come.

> An ifunc has a measurable overhead unfortunately, and that would no
> longer be trivially avoidable via static linking. Most calls to memcpy
> tend to be very small copies. Maybe we should investigate statically
> linking the small copy part of memcpy with say -O3?

Sure, that might be something to look at as a data point, but again,
getting rid of multiarch is not an option for desktop/server
implementations, especially if micro-architecture-specific routines
give measurable gains over generic implementations in the general case,
i.e. dynamically linked programs that need to run out of the box and
optimally on multiple types of hardware.  Static binaries unfortunately
become the edge case here.

Siddhesh

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove
  2017-05-25 19:26 ` Siddhesh Poyarekar
@ 2017-05-25 21:04   ` Ramana Radhakrishnan
  2017-05-25 21:12     ` Florian Weimer
  2017-05-26  5:34     ` Siddhesh Poyarekar
  0 siblings, 2 replies; 19+ messages in thread
From: Ramana Radhakrishnan @ 2017-05-25 21:04 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: Wilco Dijkstra, Andrew Pinski, Szabolcs Nagy, Ellcey, Steve,
      libc-alpha, nd

On Thu, May 25, 2017 at 8:26 PM, Siddhesh Poyarekar
<siddhesh@gotplt.org> wrote:
> On Thursday 25 May 2017 11:19 PM, Wilco Dijkstra wrote:
>> Given the number of micro architectures already existing, it would be a
>> really bad situation to end up with one memcpy per micro architecture...
>
> It's not just per micro-architecture...
>
>> Micro architectures will tend to converge rather than diverge as
>> performance level increases. So I believe it's generally best to use
>> the same instructions for memcpy as for compiled code as that is what
>> CPUs will actually encounter and optimize for. For the rare, very large
>> copies we could do something different if it helps (eg. prefetch,
>> non-temporals, SIMD registers etc).
>
> ... because as you say, micro-architectures may well converge over time
> to some extent, but you will still end up having multiple memcpy
> implementations taking advantage of different features in the aarch64
> architecture over time.  For example, SVE routines vs non-SVE routines.
> You'll need both and looking at how x86 has evolved, there will be much
> more to come.

SVE in the ARM world is architectural, not micro-architectural, in the
context of this discussion :).

The difference in the ARM world compared to the x86 world is the number
of micro-architectures that target the same architectural baseline.
Pushing in a memcpy for every single micro-architecture out there will
make the library a maintenance nightmare!

And we also need to see some numbers comparing the relative performance
of the routines being put in against the generic memcpy, otherwise
things will not improve.  At least something like "this routine is X%
better than the generic memcpy".

regards
Ramana

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove
  2017-05-25 21:04 ` Ramana Radhakrishnan
@ 2017-05-25 21:12   ` Florian Weimer
  2017-05-26  5:42     ` Siddhesh Poyarekar
  0 siblings, 1 reply; 19+ messages in thread
From: Florian Weimer @ 2017-05-25 21:12 UTC (permalink / raw)
  To: Ramana Radhakrishnan
  Cc: Siddhesh Poyarekar, Wilco Dijkstra, Andrew Pinski, Szabolcs Nagy,
      Ellcey, Steve, libc-alpha, nd

On Thu, May 25, 2017 at 11:04 PM, Ramana Radhakrishnan
<ramana.gcc@googlemail.com> wrote:
> The difference in the ARM world compared to the x86 world is the number
> of micro-architectures that target the same architectural baseline.
> Pushing in a memcpy for every single micro-architecture out there will
> make the library a maintenance nightmare !

In this case, you should reconsider putting the string functions into
the vDSO.  This will push the implementation to the kernel, but it does
more than just shift the work: the kernel has more direct means to
provide hardware capabilities, and it can also use just-in-time code
generation (which we want to avoid in glibc).

Thanks,
Florian

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove
  2017-05-25 21:12 ` Florian Weimer
@ 2017-05-26  5:42   ` Siddhesh Poyarekar
  0 siblings, 0 replies; 19+ messages in thread
From: Siddhesh Poyarekar @ 2017-05-26 5:42 UTC (permalink / raw)
  To: Florian Weimer, Ramana Radhakrishnan
  Cc: Wilco Dijkstra, Andrew Pinski, Szabolcs Nagy, Ellcey, Steve,
      libc-alpha, nd

On Friday 26 May 2017 02:42 AM, Florian Weimer wrote:
> In this case, you should reconsider putting the string functions into
> the vDSO.  This will push the implementation to the kernel, but it
> does more than just shifting the work: the kernel has more direct
> means to provide hardware capabilities, and it also can use
> just-in-time code generation (which we want to avoid in glibc).

You will still need an indirection to access the vDSO function, so it
does not solve the problem Wilco was referring to.  A
micro-architecture explosion is currently just a theory (which both
Wilco and I think is invalid, since we will eventually come up with a
small enough set of functions that cater to a variety of
micro-architectures), so this would be worth worrying about only if we
find that we're exceeding the number of IFUNC implementations on x86.

Siddhesh

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove
  2017-05-25 21:04 ` Ramana Radhakrishnan
  2017-05-25 21:12 ` Florian Weimer
@ 2017-05-26  5:34   ` Siddhesh Poyarekar
  2017-05-26  5:38     ` Andrew Pinski
  1 sibling, 1 reply; 19+ messages in thread
From: Siddhesh Poyarekar @ 2017-05-26 5:34 UTC (permalink / raw)
  To: Ramana Radhakrishnan
  Cc: Wilco Dijkstra, Andrew Pinski, Szabolcs Nagy, Ellcey, Steve,
      libc-alpha, nd

On Friday 26 May 2017 02:34 AM, Ramana Radhakrishnan wrote:
> SVE in the ARM world is architectural and not micro-architectural in
> the context of this discussion :) .

Yes, I did not think of SVE as a micro-architecture detail.  I used SVE
as an example to show that multiarch =/=> micro-architectures.

> The difference in the ARM world compared to the x86 world is the
> number of micro-architectures that target the same architectural
> baseline.  Pushing in a memcpy for every single micro-architecture
> out there will make the library a maintenance nightmare !

Nor am I arguing that micro-architectures ==> multiarch.  My comments
have only been pointing out that the IFUNC cost for multiarch is here
to stay and there will likely never be consensus to drop it, given the
innovations happening in the ARM server space.

> And we also need to see some numbers which compare the relative
> performance of the routines being put in compared to the generic
> memcpy otherwise things will not improve.  Atleast something like
> this routine is X % better than the generic memcpy.

Yes, and I understand Steve had pointed out the benefits of his
implementation in the original post.  If it turns out that the
implementation is optimal for the general case, by all means merge it
with the generic one, but that's not a reason to drop multiarch.

Siddhesh

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove
  2017-05-26  5:34 ` Siddhesh Poyarekar
@ 2017-05-26  5:38   ` Andrew Pinski
  0 siblings, 0 replies; 19+ messages in thread
From: Andrew Pinski @ 2017-05-26 5:38 UTC (permalink / raw)
  To: Siddhesh Poyarekar
  Cc: Ramana Radhakrishnan, Wilco Dijkstra, Szabolcs Nagy, Ellcey,
      Steve, libc-alpha, nd

On Thu, May 25, 2017 at 10:34 PM, Siddhesh Poyarekar
<siddhesh@gotplt.org> wrote:
> On Friday 26 May 2017 02:34 AM, Ramana Radhakrishnan wrote:
>> SVE in the ARM world is architectural and not micro-architectural in
>> the context of this discussion :) .
>
> Yes, I did not think of SVE as a micro-architecture detail.  I used SVE
> as an example to show that multiarch =/=> micro-architectures.
>
>> The difference in the ARM world compared to the x86 world is the
>> number of micro-architectures that target the same architectural
>> baseline.  Pushing in a memcpy for every single micro-architecture
>> out there will make the library a maintenance nightmare !
>
> Nor am I arguing that micro-architectures ==> multiarch.  My comments
> have only been pointing out that the IFUNC cost for multiarch is here
> to stay and there will likely never be consensus to drop it given the
> innovations happening in the ARM server space.
>
>> And we also need to see some numbers which compare the relative
>> performance of the routines being put in compared to the generic
>> memcpy otherwise things will not improve.  Atleast something like
>> this routine is X % better than the generic memcpy.
>
> Yes, and I understand Steve had pointed out the benefits of his
> implementation in the original post.  If it turns out that the
> implementation is optimal for the general case, by all means merge it
> with the generic one but that's not a reason to drop multiarch.

One more comment about static linking: in the server world nobody
statically links any more.  It is not something people do at all, and
it is hard to do in a reasonable fashion any more because of how things
like DNS lookup work in glibc.

Thanks,
Andrew

> Siddhesh

^ permalink raw reply	[flat|nested] 19+ messages in thread
* Re: Ping: [Patch] aarch64: Thunderx specific memcpy and memmove
  2017-05-24 17:04 ` Szabolcs Nagy
  2017-05-25  6:42 ` Siddhesh Poyarekar
@ 2017-05-25 16:22   ` Steve Ellcey
  1 sibling, 0 replies; 19+ messages in thread
From: Steve Ellcey @ 2017-05-25 16:22 UTC (permalink / raw)
  To: Szabolcs Nagy, Ellcey, Steve, Siddhesh Poyarekar, libc-alpha,
      Wilco Dijkstra
  Cc: nd

On Wed, 2017-05-24 at 18:04 +0100, Szabolcs Nagy wrote:
> - i don't know if you plan to make more changes to the thunderx
> memcpy, if prefetching is the only change then it's likely that
> we can agree on a generic version that's good enough.  if you
> do plan to make further changes, then keep in mind that we try
> to have same/similar generic memcpy across c runtimes and if
> your change is good for generic we might not be able to use the
> code outside of glibc (so newlib, bionic, freebsd,.. memcpy
> would diverge)

I am not looking at any more changes to the thunderx memcpy, but
someone here is working on a thunderx2 memcpy, so I do expect a third
version of memcpy at some point.

> - non-thunderx systems are affected: static linked code using
> memcpy will start to go through an indirection (iplt) instead
> of direct call. if there are complaints about it or other ifunc
> related issues come up, then again we will have to reconsider it.
>
> so the patch can go in with an understanding that it may go out.

I understand that we may have to look at issues like this in the
future, though building glibc without multiarch enabled should provide
an alternative for those who don't want IFUNC functionality.

Steve Ellcey
sellcey@cavium.com

^ permalink raw reply	[flat|nested] 19+ messages in thread
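The prefetching that distinguishes the thunderx variant throughout this thread can be illustrated with a plain copy loop that issues software prefetch hints ahead of the loads. This is only an illustrative sketch: the real patch is hand-written aarch64 assembly, and the 128-byte prefetch distance and 16-byte chunk size below are made-up stand-ins for whatever tuning the actual memcpy_thunderx.S uses.

```c
#include <stddef.h>

void *copy_with_prefetch(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;
    size_t i = 0;

    /* main loop: copy 16 bytes per iteration, hinting the cache to
     * start fetching data 128 bytes ahead of the current position.
     * prefetching past the end of the buffer is harmless (a hint,
     * never a fault). */
    for (; i + 16 <= n; i += 16) {
        __builtin_prefetch(s + i + 128, 0, 0);  /* read, low temporal locality */
        for (size_t j = 0; j < 16; j++)
            d[i + j] = s[i + j];
    }

    /* tail bytes */
    for (; i < n; i++)
        d[i] = s[i];
    return dst;
}
```

Whether hints like these belong in a CPU-specific routine or in the generic memcpy is exactly the open question Szabolcs and Wilco raise: on cores with strong hardware prefetchers they are redundant, while on cores like ThunderX they were reported to help.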
end of thread, other threads:[~2017-05-26  5:42 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-01 18:27 Ping: [Patch] aarch64: Thunderx specific memcpy and memmove Steve Ellcey
2017-05-01 21:20 ` Wainer dos Santos Moschetta
2017-05-03 14:01 ` Szabolcs Nagy
2017-05-09  3:17   ` Siddhesh Poyarekar
2017-05-09 21:45     ` Steve Ellcey
2017-05-18 21:48       ` Steve Ellcey
2017-05-19  7:41         ` Siddhesh Poyarekar
     [not found] ` <DM5PR07MB34662F805C1EDE45882B82F6F5F90@DM5PR07MB3466.namprd07.prod.outlook.com>
2017-05-24 17:04   ` Szabolcs Nagy
2017-05-25  6:42     ` Siddhesh Poyarekar
2017-05-25 16:28       ` Andrew Pinski
2017-05-25 16:43         ` Ramana Radhakrishnan
2017-05-25 17:49         ` Wilco Dijkstra
2017-05-25 19:26           ` Siddhesh Poyarekar
2017-05-25 21:04             ` Ramana Radhakrishnan
2017-05-25 21:12               ` Florian Weimer
2017-05-26  5:42                 ` Siddhesh Poyarekar
2017-05-26  5:34             ` Siddhesh Poyarekar
2017-05-26  5:38               ` Andrew Pinski
2017-05-25 16:22     ` Steve Ellcey