From mboxrd@z Thu Jan 1 00:00:00 1970
From: Noah Goldstein
Date: Wed, 29 Jun 2022 12:34:06 -0700
Subject: Re: [PATCH v1 1/2] x86: Move mem{p}{mov|cpy}_{chk_}erms to its own file
To: "H.J. Lu"
Cc: GNU C Library, "Carlos O'Donell"
References: <20220628152757.17922-1-goldstein.w.n@gmail.com>
List-Id: Libc-alpha mailing list
Content-Type: text/plain; charset="UTF-8"

On Wed, Jun 29, 2022 at 12:32 PM H.J. Lu wrote:
>
> On Tue, Jun 28, 2022 at 8:28 AM Noah Goldstein wrote:
> >
> > The primary memmove_{impl}_unaligned_erms implementations don't
> > interact with this function. Putting them in the same file both
> > wastes space and unnecessarily bloats a hot code section.
> > ---
> >  sysdeps/x86_64/multiarch/memmove-erms.S            | 53 +++++++++++++++++++
> >  .../multiarch/memmove-vec-unaligned-erms.S         | 50 -----------------
> >  2 files changed, 53 insertions(+), 50 deletions(-)
> >  create mode 100644 sysdeps/x86_64/multiarch/memmove-erms.S
> >
> > diff --git a/sysdeps/x86_64/multiarch/memmove-erms.S b/sysdeps/x86_64/multiarch/memmove-erms.S
> > new file mode 100644
> > index 0000000000..d98d21644b
> > --- /dev/null
> > +++ b/sysdeps/x86_64/multiarch/memmove-erms.S
> > @@ -0,0 +1,53 @@
> > +#include
> > +
> > +#if defined USE_MULTIARCH && IS_IN (libc)
> > +	.text
> > +ENTRY (__mempcpy_chk_erms)
> > +	cmp	%RDX_LP, %RCX_LP
> > +	jb	HIDDEN_JUMPTARGET (__chk_fail)
> > +END (__mempcpy_chk_erms)
> > +
> > +/* Only used to measure performance of REP MOVSB.  */
> > +ENTRY (__mempcpy_erms)
> > +	mov	%RDI_LP, %RAX_LP
> > +	/* Skip zero length.
> > +	*/
> > +	test	%RDX_LP, %RDX_LP
> > +	jz	2f
> > +	add	%RDX_LP, %RAX_LP
> > +	jmp	L(start_movsb)
> > +END (__mempcpy_erms)
> > +
> > +ENTRY (__memmove_chk_erms)
> > +	cmp	%RDX_LP, %RCX_LP
> > +	jb	HIDDEN_JUMPTARGET (__chk_fail)
> > +END (__memmove_chk_erms)
> > +
> > +ENTRY (__memmove_erms)
> > +	movq	%rdi, %rax
> > +	/* Skip zero length.  */
> > +	test	%RDX_LP, %RDX_LP
> > +	jz	2f
> > +L(start_movsb):
> > +	mov	%RDX_LP, %RCX_LP
> > +	cmp	%RSI_LP, %RDI_LP
> > +	jb	1f
> > +	/* Source == destination is less common.  */
> > +	je	2f
> > +	lea	(%rsi,%rcx), %RDX_LP
> > +	cmp	%RDX_LP, %RDI_LP
> > +	jb	L(movsb_backward)
> > +1:
> > +	rep movsb
> > +2:
> > +	ret
> > +L(movsb_backward):
> > +	leaq	-1(%rdi,%rcx), %rdi
> > +	leaq	-1(%rsi,%rcx), %rsi
> > +	std
> > +	rep movsb
> > +	cld
> > +	ret
> > +END (__memmove_erms)
> > +strong_alias (__memmove_erms, __memcpy_erms)
> > +strong_alias (__memmove_chk_erms, __memcpy_chk_erms)
> > +#endif
> > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > index d1518b8bab..04747133b7 100644
> > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > @@ -239,56 +239,6 @@ L(start):
> >  #endif
> >  #if defined USE_MULTIARCH && IS_IN (libc)
> >  END (MEMMOVE_SYMBOL (__memmove, unaligned))
> > -# if VEC_SIZE == 16
> > -ENTRY (__mempcpy_chk_erms)
> > -	cmp	%RDX_LP, %RCX_LP
> > -	jb	HIDDEN_JUMPTARGET (__chk_fail)
> > -END (__mempcpy_chk_erms)
> > -
> > -/* Only used to measure performance of REP MOVSB.  */
> > -ENTRY (__mempcpy_erms)
> > -	mov	%RDI_LP, %RAX_LP
> > -	/* Skip zero length.
> > -	*/
> > -	test	%RDX_LP, %RDX_LP
> > -	jz	2f
> > -	add	%RDX_LP, %RAX_LP
> > -	jmp	L(start_movsb)
> > -END (__mempcpy_erms)
> > -
> > -ENTRY (__memmove_chk_erms)
> > -	cmp	%RDX_LP, %RCX_LP
> > -	jb	HIDDEN_JUMPTARGET (__chk_fail)
> > -END (__memmove_chk_erms)
> > -
> > -ENTRY (__memmove_erms)
> > -	movq	%rdi, %rax
> > -	/* Skip zero length.  */
> > -	test	%RDX_LP, %RDX_LP
> > -	jz	2f
> > -L(start_movsb):
> > -	mov	%RDX_LP, %RCX_LP
> > -	cmp	%RSI_LP, %RDI_LP
> > -	jb	1f
> > -	/* Source == destination is less common.  */
> > -	je	2f
> > -	lea	(%rsi,%rcx), %RDX_LP
> > -	cmp	%RDX_LP, %RDI_LP
> > -	jb	L(movsb_backward)
> > -1:
> > -	rep movsb
> > -2:
> > -	ret
> > -L(movsb_backward):
> > -	leaq	-1(%rdi,%rcx), %rdi
> > -	leaq	-1(%rsi,%rcx), %rsi
> > -	std
> > -	rep movsb
> > -	cld
> > -	ret
> > -END (__memmove_erms)
> > -strong_alias (__memmove_erms, __memcpy_erms)
> > -strong_alias (__memmove_chk_erms, __memcpy_chk_erms)
> > -# endif
> >
> >  # ifdef SHARED
> >  ENTRY (MEMMOVE_CHK_SYMBOL (__mempcpy_chk, unaligned_erms))
> > --
> > 2.34.1
> >
>
> Please make a standalone patch.

The memmove ISA-raising change depends on this patch, which is why it is
part of the series. Should I submit this patch first and then resubmit
the memmove change? Same question for memset-erms / the memset
ISA-raising change?

> --
> H.J.
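[Editor's note: the following sketch is not part of the thread; it is a hedged C
illustration of the direction choice the quoted `__memmove_erms` assembly makes
before `rep movsb` (forward copy unless the destination starts inside the source
range, in which case it copies backward so overlapping bytes are not clobbered).
The function name `memmove_sketch` is hypothetical.]

```c
#include <stddef.h>

/* Hypothetical C analogue of __memmove_erms's control flow:
   - zero length or dst == src: return immediately (the "jz 2f" / "je 2f" paths);
   - dst < src, or dst past the end of src: ascending copy (forward rep movsb);
   - dst inside [src, src + len): descending copy (std; rep movsb; cld).  */
static void *memmove_sketch (void *dst, const void *src, size_t len)
{
  unsigned char *d = dst;
  const unsigned char *s = src;

  if (len == 0 || d == s)         /* Skip zero length / same buffer.  */
    return dst;

  if (d < s || d >= s + len)      /* No destructive overlap: copy forward.  */
    for (size_t i = 0; i < len; i++)
      d[i] = s[i];
  else                            /* dst overlaps the tail of src: copy backward.  */
    for (size_t i = len; i-- > 0; )
      d[i] = s[i];

  return dst;
}
```

For example, shifting a buffer right by two bytes in place
(`memmove_sketch (buf + 2, buf, 4)` on `"abcdef"`) takes the backward path
and yields `"ababcd"`, which a forward byte-by-byte copy would corrupt.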