From: Noah Goldstein
Date: Wed, 29 Jun 2022 16:09:23 -0700
Subject: Re: [PATCH v2 1/2] x86: Move mem{p}{mov|cpy}_{chk_}erms to its own file
To: "H.J. Lu"
Cc: GNU C Library, "Carlos O'Donell"
References: <20220628152757.17922-1-goldstein.w.n@gmail.com> <20220629221349.1242862-1-goldstein.w.n@gmail.com>

On Wed, Jun 29, 2022 at 3:20 PM H.J. Lu wrote:
>
> On Wed, Jun 29, 2022 at 3:13 PM Noah Goldstein wrote:
> >
> > The primary memmove_{impl}_unaligned_erms implementations don't
> > interact with this function. Putting them in the same file both
> > wastes space and unnecessarily bloats a hot code section.
> > ---
> >  sysdeps/x86_64/multiarch/Makefile             |  1 +
> >  sysdeps/x86_64/multiarch/memmove-erms.S       | 53 +++++++++++++++++++
> >  .../multiarch/memmove-vec-unaligned-erms.S    | 50 -----------------
> >  3 files changed, 54 insertions(+), 50 deletions(-)
> >  create mode 100644 sysdeps/x86_64/multiarch/memmove-erms.S
> >
> > diff --git a/sysdeps/x86_64/multiarch/Makefile b/sysdeps/x86_64/multiarch/Makefile
> > index 666ee4d5d6..62a4d96fb8 100644
> > --- a/sysdeps/x86_64/multiarch/Makefile
> > +++ b/sysdeps/x86_64/multiarch/Makefile
> > @@ -18,6 +18,7 @@ sysdep_routines += \
> >    memmove-avx-unaligned-erms-rtm \
> >    memmove-avx512-no-vzeroupper \
> >    memmove-avx512-unaligned-erms \
> > +  memmove-erms \
> >    memmove-evex-unaligned-erms \
> >    memmove-sse2-unaligned-erms \
> >    memmove-ssse3 \
> > diff --git a/sysdeps/x86_64/multiarch/memmove-erms.S b/sysdeps/x86_64/multiarch/memmove-erms.S
> > new file mode 100644
> > index 0000000000..d98d21644b
> > --- /dev/null
> > +++ b/sysdeps/x86_64/multiarch/memmove-erms.S
> > @@ -0,0 +1,53 @@
>
> Need copyright notice.

Fixed in V3.

>
> > +#include <sysdep.h>
> > +
> > +#if defined USE_MULTIARCH && IS_IN (libc)
> > +        .text
> > +ENTRY (__mempcpy_chk_erms)
> > +        cmp     %RDX_LP, %RCX_LP
> > +        jb      HIDDEN_JUMPTARGET (__chk_fail)
> > +END (__mempcpy_chk_erms)
> > +
> > +/* Only used to measure performance of REP MOVSB.  */
> > +ENTRY (__mempcpy_erms)
> > +        mov     %RDI_LP, %RAX_LP
> > +        /* Skip zero length.  */
> > +        test    %RDX_LP, %RDX_LP
> > +        jz      2f
> > +        add     %RDX_LP, %RAX_LP
> > +        jmp     L(start_movsb)
> > +END (__mempcpy_erms)
> > +
> > +ENTRY (__memmove_chk_erms)
> > +        cmp     %RDX_LP, %RCX_LP
> > +        jb      HIDDEN_JUMPTARGET (__chk_fail)
> > +END (__memmove_chk_erms)
> > +
> > +ENTRY (__memmove_erms)
> > +        movq    %rdi, %rax
> > +        /* Skip zero length.  */
> > +        test    %RDX_LP, %RDX_LP
> > +        jz      2f
> > +L(start_movsb):
> > +        mov     %RDX_LP, %RCX_LP
> > +        cmp     %RSI_LP, %RDI_LP
> > +        jb      1f
> > +        /* Source == destination is less common.  */
> > +        je      2f
> > +        lea     (%rsi,%rcx), %RDX_LP
> > +        cmp     %RDX_LP, %RDI_LP
> > +        jb      L(movsb_backward)
> > +1:
> > +        rep movsb
> > +2:
> > +        ret
> > +L(movsb_backward):
> > +        leaq    -1(%rdi,%rcx), %rdi
> > +        leaq    -1(%rsi,%rcx), %rsi
> > +        std
> > +        rep movsb
> > +        cld
> > +        ret
> > +END (__memmove_erms)
> > +strong_alias (__memmove_erms, __memcpy_erms)
> > +strong_alias (__memmove_chk_erms, __memcpy_chk_erms)
> > +#endif
> > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > index d1518b8bab..04747133b7 100644
> > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > @@ -239,56 +239,6 @@ L(start):
> >  #endif
> >  #if defined USE_MULTIARCH && IS_IN (libc)
> >  END (MEMMOVE_SYMBOL (__memmove, unaligned))
> > -# if VEC_SIZE == 16
> > -ENTRY (__mempcpy_chk_erms)
> > -        cmp     %RDX_LP, %RCX_LP
> > -        jb      HIDDEN_JUMPTARGET (__chk_fail)
> > -END (__mempcpy_chk_erms)
> > -
> > -/* Only used to measure performance of REP MOVSB.  */
> > -ENTRY (__mempcpy_erms)
> > -        mov     %RDI_LP, %RAX_LP
> > -        /* Skip zero length.  */
> > -        test    %RDX_LP, %RDX_LP
> > -        jz      2f
> > -        add     %RDX_LP, %RAX_LP
> > -        jmp     L(start_movsb)
> > -END (__mempcpy_erms)
> > -
> > -ENTRY (__memmove_chk_erms)
> > -        cmp     %RDX_LP, %RCX_LP
> > -        jb      HIDDEN_JUMPTARGET (__chk_fail)
> > -END (__memmove_chk_erms)
> > -
> > -ENTRY (__memmove_erms)
> > -        movq    %rdi, %rax
> > -        /* Skip zero length.  */
> > -        test    %RDX_LP, %RDX_LP
> > -        jz      2f
> > -L(start_movsb):
> > -        mov     %RDX_LP, %RCX_LP
> > -        cmp     %RSI_LP, %RDI_LP
> > -        jb      1f
> > -        /* Source == destination is less common.  */
> > -        je      2f
> > -        lea     (%rsi,%rcx), %RDX_LP
> > -        cmp     %RDX_LP, %RDI_LP
> > -        jb      L(movsb_backward)
> > -1:
> > -        rep movsb
> > -2:
> > -        ret
> > -L(movsb_backward):
> > -        leaq    -1(%rdi,%rcx), %rdi
> > -        leaq    -1(%rsi,%rcx), %rsi
> > -        std
> > -        rep movsb
> > -        cld
> > -        ret
> > -END (__memmove_erms)
> > -strong_alias (__memmove_erms, __memcpy_erms)
> > -strong_alias (__memmove_chk_erms, __memcpy_chk_erms)
> > -# endif
> >
> >  # ifdef SHARED
> >  ENTRY (MEMMOVE_CHK_SYMBOL (__mempcpy_chk, unaligned_erms))
> > --
> > 2.34.1
> >
>
> --
> H.J.
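
For readers skimming the thread, here is a minimal C sketch of the copy-direction
logic that __memmove_erms implements with REP MOVSB. It is illustrative only: the
function name is made up and the byte-wise loops stand in for `rep movsb`; it is
not the glibc source. __mempcpy_erms takes the same path but returns dst + len,
and the *_chk_erms entry points only bounds-check the destination buffer before
falling through.

#include <stddef.h>

/* Sketch of the overlap check and copy direction used by __memmove_erms.
   Plain byte loops stand in for the forward and backward `rep movsb`.  */
void *
memmove_erms_sketch (void *dst, const void *src, size_t len)
{
  unsigned char *d = dst;
  const unsigned char *s = src;

  /* Zero length and src == dst are no-ops.  */
  if (len == 0 || d == s)
    return dst;

  if (d < s || d >= s + len)
    {
      /* Destination does not overlap the tail of the source:
         copy forward, as the plain `rep movsb` path does.  */
      for (size_t i = 0; i < len; i++)
        d[i] = s[i];
    }
  else
    {
      /* Destination starts inside [src, src + len): copy backward,
         as the `std; rep movsb; cld` path does.  */
      for (size_t i = len; i > 0; i--)
        d[i - 1] = s[i - 1];
    }
  return dst;
}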