Subject: Re: [PATCH][AArch64] Inline mempcpy again
To: Siddhesh Poyarekar , libc-alpha@sourceware.org
References: <57ddf8ec-af86-cddb-79b6-fa5713143bb5@linaro.org> <954cdead-8aa9-ab22-90b1-0a19a4536aee@gotplt.org>
From: Adhemerval Zanella
Date: Fri, 06 Jul 2018 12:48:00 -0000
Message-ID: <57fdf28d-c054-9817-1f64-175516846b0e@linaro.org>
In-Reply-To: <954cdead-8aa9-ab22-90b1-0a19a4536aee@gotplt.org>

On 06/07/2018 02:34, Siddhesh Poyarekar wrote:
> On 07/05/2018 06:33 PM, Adhemerval Zanella wrote:
>> If optimizing mempcpy is really required, I think a better option would
>> be to provide an optimized version based on the current memcpy/memmove.
>> I have created an implementation [1] which provides the expected
>> optimized mempcpy at the cost of only an extra 'mov' instruction on both
>> memcpy and memmove (so they can use the same memcpy/memmove code).
>>
>> [1] https://sourceware.org/git/?p=glibc.git;a=shortlog;h=refs/heads/azanella/aarch64-mempcpy
>
> I had proposed the exact same thing for __memcpy_chk [1] for aarch64,
> which was rejected under the pretext that this should be handled
> completely by gcc.  If that consensus has changed then I'd like to
> propose that patch again as well.
>
> However, I do understand that this is much better off being fixed in
> gcc, so we should probably try to understand the limitations of doing
> that first.  Wilco, does anything prevent gcc from doing this
> optimization for mempcpy or __memcpy_chk?
>
> Siddhesh

I tend to agree it should indeed be handled by the compiler, but checking
the bugzilla reports about the GCC changes, it seems the idea is not to
make it generic for all platforms, but rather dependent on the backend and
targeted at libc (which is why the PR70140 change seems to have been
reverted).

In any case, if the idea is indeed to optimize mempcpy and GCC won't get
the required support anytime soon, I would still prefer *not* to go back
to adding the code in the string*.h headers.  This patch was just one idea
showing that we can get similar performance directly in the assembly
routines (with the advantage that, even if the compiler does not transform
mempcpy into memcpy, it will still get some improvement).
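
For illustration, here is a minimal C sketch of that idea; the names below
are hypothetical and the real change lives in the AArch64 memcpy/memmove
assembly, not in C.  Both entry points share the same copy code, and the
only difference is the pointer each one returns, which at the assembly
level costs a single extra 'mov':

#include <stddef.h>
#include <string.h>

static void
copy_bytes (void *dest, const void *src, size_t n)
{
  /* Stand-in for the shared optimized copy loop.  */
  memcpy (dest, src, n);
}

void *
sketch_memcpy (void *dest, const void *src, size_t n)
{
  copy_bytes (dest, src, n);
  return dest;			/* memcpy returns the destination.  */
}

void *
sketch_mempcpy (void *dest, const void *src, size_t n)
{
  copy_bytes (dest, src, n);
  return (char *) dest + n;	/* mempcpy returns one past the last byte.  */
}

The branch linked above presumably achieves the equivalent directly in the
memcpy/memmove entry code.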
> 
> [1] http://sourceware-org.1504.n7.nabble.com/PATCH-0-2-Multiarch-hooks-for-memcpy-variants-td463236.html
> 
>>>
>>> --
>>> diff --git a/include/string.h b/include/string.h
>>> index 069efd0b87010e5fdb64c87ced7af1dc4f54f232..46b90b8f346149f075fad026e562dfb27b658969 100644
>>> --- a/include/string.h
>>> +++ b/include/string.h
>>> @@ -197,4 +197,23 @@ extern char *__strncat_chk (char *__restrict __dest,
>>>  			    size_t __len, size_t __destlen) __THROW;
>>>  #endif
>>>  
>>> +#if defined __USE_GNU && defined __OPTIMIZE__ \
>>> +    && defined __extern_always_inline && __GNUC_PREREQ (3,2) \
>>> +    && defined _INLINE_mempcpy
>>> +
>>> +#undef mempcpy
>>> +#undef __mempcpy
>>> +
>>> +#define mempcpy(dest, src, n) __mempcpy_inline (dest, src, n)
>>> +#define __mempcpy(dest, src, n) __mempcpy_inline (dest, src, n)
>>> +
>>> +__extern_always_inline void *
>>> +__mempcpy_inline (void *__restrict __dest,
>>> +		  const void *__restrict __src, size_t __n)
>>> +{
>>> +  return (char *) memcpy (__dest, __src, __n) + __n;
>>> +}
>>> +
>>> +#endif
>>> +
>>>  #endif
>>> diff --git a/sysdeps/aarch64/string_private.h b/sysdeps/aarch64/string_private.h
>>> index 09dedbf3db40cf06077a44af992b399a6b37b48d..8b8fdddcc17a3f69455e72efe9c3616d2d33abe2 100644
>>> --- a/sysdeps/aarch64/string_private.h
>>> +++ b/sysdeps/aarch64/string_private.h
>>> @@ -18,3 +18,6 @@
>>>  
>>>  /* AArch64 implementations support efficient unaligned access.  */
>>>  #define _STRING_ARCH_unaligned 1
>>> +
>>> +/* Inline mempcpy since GCC doesn't optimize it (PR70140).  */
>>> +#define _INLINE_mempcpy 1
>>>
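
As a hedged usage sketch of what the quoted include/string.h hunk does for
a caller (the function below is hypothetical and only meant to show the
expansion; in the actual patch the rewrite only happens inside glibc when
_INLINE_mempcpy and the other guards are in effect):

#define _GNU_SOURCE
#include <string.h>

/* With the quoted inline active, the mempcpy call below expands to
   '(char *) memcpy (out, in, len) + len', so it reaches the optimized
   memcpy and still yields the end-of-copy pointer.  */
void *
append_bytes (void *out, const void *in, size_t len)
{
  return mempcpy (out, in, len);
}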