From: "H.J. Lu"
Date: Wed, 15 Jun 2022 09:48:46 -0700
Subject: Re: [PATCH v3] x86: Cleanup bounds checking in large memcpy case
To: Noah Goldstein
Cc: GNU C Library, "Carlos O'Donell"
In-Reply-To: <20220615151246.613130-1-goldstein.w.n@gmail.com>

On Wed, Jun 15, 2022 at 8:12 AM Noah Goldstein wrote:
>
> 1. Fix incorrect lower-bound threshold in L(large_memcpy_2x).
>    Previously was using `__x86_rep_movsb_threshold` and should
>    have been using `__x86_shared_non_temporal_threshold`.
>
> 2. Avoid reloading __x86_shared_non_temporal_threshold before
>    the L(large_memcpy_4x) bounds check.
>
> 3. Document the second bounds check for L(large_memcpy_4x)
>    more clearly.
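Taken together, the three changes make the large-copy dispatch behave like the
rough C sketch below.  The function and enum names are invented for
illustration (they do not exist in glibc), only the comparisons mirror the
patch, and the page-aliasing shortcut into the 4x loop is omitted:

/* Rough model of the corrected bounds checks; pick_large_copy_path and the
   enum are made-up names, only the comparisons mirror the patch.  */
#include <stddef.h>
#include <stdio.h>

#define LOG_4X_MEMCPY_THRESH 4	/* same shift the assembly uses */

enum copy_path { MORE_8X_VEC, LARGE_MEMCPY_2X, LARGE_MEMCPY_4X };

static enum copy_path
pick_large_copy_path (size_t len, size_t non_temporal_threshold)
{
  /* Point 1: the lower bound is the non-temporal threshold, not
     __x86_rep_movsb_threshold.  */
  if (len < non_temporal_threshold)
    return MORE_8X_VEC;
  /* Point 2: the threshold is loaded once; the 4x bound is the same
     value shifted left by LOG_4X_MEMCPY_THRESH.  */
  if (len >= (non_temporal_threshold << LOG_4X_MEMCPY_THRESH))
    return LARGE_MEMCPY_4X;
  return LARGE_MEMCPY_2X;
}

int
main (void)
{
  /* 0xc0000 is the default threshold shown in manual/tunables.texi.  */
  printf ("%d\n", pick_large_copy_path (0x100000, 0xc0000));
  return 0;
}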
> ---
>  manual/tunables.texi                       |  2 +-
>  sysdeps/x86/dl-cacheinfo.h                 |  6 +++-
>  .../multiarch/memmove-vec-unaligned-erms.S | 29 ++++++++++++++-----
>  3 files changed, 27 insertions(+), 10 deletions(-)
>
> diff --git a/manual/tunables.texi b/manual/tunables.texi
> index 1482412078..49daf3eb4a 100644
> --- a/manual/tunables.texi
> +++ b/manual/tunables.texi
> @@ -47,7 +47,7 @@ glibc.malloc.mxfast: 0x0 (min: 0x0, max: 0xffffffffffffffff)
>  glibc.elision.skip_lock_busy: 3 (min: -2147483648, max: 2147483647)
>  glibc.malloc.top_pad: 0x0 (min: 0x0, max: 0xffffffffffffffff)
>  glibc.cpu.x86_rep_stosb_threshold: 0x800 (min: 0x1, max: 0xffffffffffffffff)
> -glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x0, max: 0xffffffffffffffff)
> +glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x0, max: 0x0fffffffffffffff)
>  glibc.cpu.x86_shstk:
>  glibc.cpu.hwcap_mask: 0x6 (min: 0x0, max: 0xffffffffffffffff)
>  glibc.malloc.mmap_max: 0 (min: -2147483648, max: 2147483647)
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index cc3b840f9c..f94ff2df43 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -931,8 +931,12 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>
>    TUNABLE_SET_WITH_BOUNDS (x86_data_cache_size, data, 0, SIZE_MAX);
>    TUNABLE_SET_WITH_BOUNDS (x86_shared_cache_size, shared, 0, SIZE_MAX);
> +  /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> +     'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> +     if that operation cannot overflow.  Note the '>> 4' also reflects the bound
> +     in the manual.  */
>    TUNABLE_SET_WITH_BOUNDS (x86_non_temporal_threshold, non_temporal_threshold,
> -                          0, SIZE_MAX);
> +                          0, SIZE_MAX >> 4);
>    TUNABLE_SET_WITH_BOUNDS (x86_rep_movsb_threshold, rep_movsb_threshold,
>                             minimum_rep_movsb_threshold, SIZE_MAX);
>    TUNABLE_SET_WITH_BOUNDS (x86_rep_stosb_threshold, rep_stosb_threshold, 1,

To help backport, please break this patch into 2 patches and make the
memmove-vec-unaligned-erms.S change a separate one.

Thanks.

> diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> index af51177d5d..d1518b8bab 100644
> --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> @@ -118,7 +118,13 @@
>  # define LARGE_LOAD_SIZE (VEC_SIZE * 4)
>  #endif
>
> -/* Amount to shift rdx by to compare for memcpy_large_4x.  */
> +/* Amount to shift __x86_shared_non_temporal_threshold by to get the
> +   bound for memcpy_large_4x.  This is essentially used to
> +   indicate that the copy is far beyond the scope of L3
> +   (assuming no user config x86_non_temporal_threshold) and to
> +   use a more aggressively unrolled loop.  NB: before
> +   increasing the value also update initialization of
> +   x86_non_temporal_threshold.  */
>  #ifndef LOG_4X_MEMCPY_THRESH
>  # define LOG_4X_MEMCPY_THRESH 4
>  #endif
> @@ -724,9 +730,14 @@ L(skip_short_movsb_check):
>         .p2align 4,, 10
>  #if (defined USE_MULTIARCH || VEC_SIZE == 16) && IS_IN (libc)
>  L(large_memcpy_2x_check):
> -       cmp     __x86_rep_movsb_threshold(%rip), %RDX_LP
> -       jb      L(more_8x_vec_check)
> +       /* Entry from L(large_memcpy_2x) has a redundant load of
> +          __x86_shared_non_temporal_threshold(%rip).  L(large_memcpy_2x)
> +          is only used for the non-erms memmove which is generally less
> +          common.  */
>  L(large_memcpy_2x):
> +       mov     __x86_shared_non_temporal_threshold(%rip), %R11_LP
> +       cmp     %R11_LP, %RDX_LP
> +       jb      L(more_8x_vec_check)
>         /* To reach this point it is impossible for dst > src and
>            overlap. Remaining to check is src > dst and overlap. rcx
>            already contains dst - src. Negate rcx to get src - dst. If
> @@ -774,18 +785,21 @@ L(large_memcpy_2x):
>         /* ecx contains -(dst - src). not ecx will return dst - src - 1
>            which works for testing aliasing.  */
>         notl    %ecx
> +       movq    %rdx, %r10
>         testl   $(PAGE_SIZE - VEC_SIZE * 8), %ecx
>         jz      L(large_memcpy_4x)
>
> -       movq    %rdx, %r10
> -       shrq    $LOG_4X_MEMCPY_THRESH, %r10
> -       cmp     __x86_shared_non_temporal_threshold(%rip), %r10
> +       /* r11 has __x86_shared_non_temporal_threshold.  Shift it left
> +          by LOG_4X_MEMCPY_THRESH to get the L(large_memcpy_4x) threshold.
> +        */
> +       shlq    $LOG_4X_MEMCPY_THRESH, %r11
> +       cmp     %r11, %rdx
>         jae     L(large_memcpy_4x)
>
>         /* edx will store remainder size for copying tail.  */
>         andl    $(PAGE_SIZE * 2 - 1), %edx
>         /* r10 stores outer loop counter.  */
> -       shrq    $((LOG_PAGE_SIZE + 1) - LOG_4X_MEMCPY_THRESH), %r10
> +       shrq    $(LOG_PAGE_SIZE + 1), %r10
>         /* Copy 4x VEC at a time from 2 pages.  */
>         .p2align 4
>  L(loop_large_memcpy_2x_outer):
> @@ -850,7 +864,6 @@ L(large_memcpy_2x_end):
>
>         .p2align 4
>  L(large_memcpy_4x):
> -       movq    %rdx, %r10
>         /* edx will store remainder size for copying tail.  */
>         andl    $(PAGE_SIZE * 4 - 1), %edx
>         /* r10 stores outer loop counter.  */
> --
> 2.34.1
>


--
H.J.
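A side note on the tunables change quoted above: capping the tunable at
SIZE_MAX >> 4 matters because the threshold is later shifted by
LOG_4X_MEMCPY_THRESH (4), and an uncapped value would wrap.  A small
standalone illustration (not glibc code; the variable names are made up):

/* Why the tunable cap is SIZE_MAX >> 4: shifting a value past that cap
   left by 4 wraps modulo 2^64, so the 4x bound could end up below the
   2x bound.  */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define LOG_4X_MEMCPY_THRESH 4

int
main (void)
{
  size_t capped = SIZE_MAX >> LOG_4X_MEMCPY_THRESH;  /* 0x0fffffffffffffff */
  size_t over = capped + 1;                          /* first value past the cap */

  /* No bits are lost here...  */
  printf ("capped << 4 = %#zx\n", capped << LOG_4X_MEMCPY_THRESH);
  /* ...but this wraps to 0, which would defeat the 4x bounds check.  */
  printf ("over   << 4 = %#zx\n", over << LOG_4X_MEMCPY_THRESH);
  return 0;
}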