From: Noah Goldstein <goldstein.w.n@gmail.com>
To: "H.J. Lu" <hjl.tools@gmail.com>
Cc: GNU C Library <libc-alpha@sourceware.org>,
	"Carlos O'Donell" <carlos@systemhalted.org>
Subject: Re: [PATCH v3] x86: Cleanup bounds checking in large memcpy case
Date: Wed, 15 Jun 2022 10:44:39 -0700
Message-ID: <CAFUsyfJ56Xezb6EjC4epP-zLwoir5jMyF1fq-Up393MNR22G0A@mail.gmail.com>
In-Reply-To: <CAMe9rOpaWR3v-zT6=EGJ5e3A-VHNE+mUtryB41K3f19PiV_+DA@mail.gmail.com>

On Wed, Jun 15, 2022 at 9:49 AM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Wed, Jun 15, 2022 at 8:12 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > 1. Fix incorrect lower-bound threshold in L(large_memcpy_2x).
> >    Previously it was using `__x86_rep_movsb_threshold` when it
> >    should have been using `__x86_shared_non_temporal_threshold`.
> >
> > 2. Avoid reloading __x86_shared_non_temporal_threshold before
> >    the L(large_memcpy_4x) bounds check.
> >
> > 3. Document the second bounds check for L(large_memcpy_4x)
> >    more clearly.
> > ---
> >  manual/tunables.texi                          |  2 +-
> >  sysdeps/x86/dl-cacheinfo.h                    |  6 +++-
> >  .../multiarch/memmove-vec-unaligned-erms.S    | 29 ++++++++++++++-----
> >  3 files changed, 27 insertions(+), 10 deletions(-)
> >
> > diff --git a/manual/tunables.texi b/manual/tunables.texi
> > index 1482412078..49daf3eb4a 100644
> > --- a/manual/tunables.texi
> > +++ b/manual/tunables.texi
> > @@ -47,7 +47,7 @@ glibc.malloc.mxfast: 0x0 (min: 0x0, max: 0xffffffffffffffff)
> >  glibc.elision.skip_lock_busy: 3 (min: -2147483648, max: 2147483647)
> >  glibc.malloc.top_pad: 0x0 (min: 0x0, max: 0xffffffffffffffff)
> >  glibc.cpu.x86_rep_stosb_threshold: 0x800 (min: 0x1, max: 0xffffffffffffffff)
> > -glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x0, max: 0xffffffffffffffff)
> > +glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x0, max: 0x0fffffffffffffff)
> >  glibc.cpu.x86_shstk:
> >  glibc.cpu.hwcap_mask: 0x6 (min: 0x0, max: 0xffffffffffffffff)
> >  glibc.malloc.mmap_max: 0 (min: -2147483648, max: 2147483647)
> > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > index cc3b840f9c..f94ff2df43 100644
> > --- a/sysdeps/x86/dl-cacheinfo.h
> > +++ b/sysdeps/x86/dl-cacheinfo.h
> > @@ -931,8 +931,12 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >
> >    TUNABLE_SET_WITH_BOUNDS (x86_data_cache_size, data, 0, SIZE_MAX);
> >    TUNABLE_SET_WITH_BOUNDS (x86_shared_cache_size, shared, 0, SIZE_MAX);
> > +  /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms left-shifts the value
> > +     of 'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is
> > +     best if that operation cannot overflow.  Note that the '>> 4' also
> > +     reflects the bound in the manual.  */
> >    TUNABLE_SET_WITH_BOUNDS (x86_non_temporal_threshold, non_temporal_threshold,
> > -                          0, SIZE_MAX);
> > +                          0, SIZE_MAX >> 4);
> >    TUNABLE_SET_WITH_BOUNDS (x86_rep_movsb_threshold, rep_movsb_threshold,
> >                            minimum_rep_movsb_threshold, SIZE_MAX);
> >    TUNABLE_SET_WITH_BOUNDS (x86_rep_stosb_threshold, rep_stosb_threshold, 1,
>
> To help backport, please break this patch into 2 patches and
> make the memmove-vec-unaligned-erms.S change a separate
> one.

Done in V4.

Note that a lower bound has been missing since 2.34; that might also
need to be backported.

I added it in the second patch. I can split that out into its own
patch too (since the upper bound is not a correctness issue) if it
does in fact need to be backported.
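
To make the overflow concern behind the new upper bound concrete, here
is a small stand-alone check (my own illustration, not code from
either patch):

    /* Stand-alone illustration only: with the tunable capped at
       SIZE_MAX >> 4, left-shifting the threshold by
       LOG_4X_MEMCPY_THRESH (4) cannot wrap a 64-bit size.  */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LOG_4X_MEMCPY_THRESH 4

    int
    main (void)
    {
      uint64_t bound = UINT64_MAX >> LOG_4X_MEMCPY_THRESH;
      /* Matches the new maximum shown in the manual hunk.  */
      assert (bound == 0x0fffffffffffffffULL);
      /* Any threshold within the bound survives the shift without
         wrapping.  */
      uint64_t threshold = bound;
      assert ((threshold << LOG_4X_MEMCPY_THRESH)
                  >> LOG_4X_MEMCPY_THRESH == threshold);
      printf ("max tunable value: %#llx\n",
              (unsigned long long) bound);
      return 0;
    }

In other words, the 0x0fffffffffffffff maximum in the manual hunk is
just UINT64_MAX >> LOG_4X_MEMCPY_THRESH.
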
>
> Thanks.
>
> > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > index af51177d5d..d1518b8bab 100644
> > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > @@ -118,7 +118,13 @@
> >  # define LARGE_LOAD_SIZE (VEC_SIZE * 4)
> >  #endif
> >
> > -/* Amount to shift rdx by to compare for memcpy_large_4x.  */
> > +/* Amount to shift __x86_shared_non_temporal_threshold by to get
> > +   the bound for memcpy_large_4x.  This is essentially used to
> > +   indicate that the copy is far beyond the scope of L3 (assuming
> > +   no user-configured x86_non_temporal_threshold) and that a more
> > +   aggressively unrolled loop should be used.  NB: before
> > +   increasing the value, also update the initialization of
> > +   x86_non_temporal_threshold.  */
> >  #ifndef LOG_4X_MEMCPY_THRESH
> >  # define LOG_4X_MEMCPY_THRESH 4
> >  #endif
> > @@ -724,9 +730,14 @@ L(skip_short_movsb_check):
> >         .p2align 4,, 10
> >  #if (defined USE_MULTIARCH || VEC_SIZE == 16) && IS_IN (libc)
> >  L(large_memcpy_2x_check):
> > -       cmp     __x86_rep_movsb_threshold(%rip), %RDX_LP
> > -       jb      L(more_8x_vec_check)
> > +       /* Entry from L(large_memcpy_2x) has a redundant load of
> > +          __x86_shared_non_temporal_threshold(%rip).  L(large_memcpy_2x)
> > +          is only used for the non-erms memmove, which is generally
> > +          less common.  */
> >  L(large_memcpy_2x):
> > +       mov     __x86_shared_non_temporal_threshold(%rip), %R11_LP
> > +       cmp     %R11_LP, %RDX_LP
> > +       jb      L(more_8x_vec_check)
> >         /* To reach this point it is impossible for dst > src and
> >            overlap. Remaining to check is src > dst and overlap. rcx
> >            already contains dst - src. Negate rcx to get src - dst. If
> > @@ -774,18 +785,21 @@ L(large_memcpy_2x):
> >         /* ecx contains -(dst - src). not ecx will return dst - src - 1
> >            which works for testing aliasing.  */
> >         notl    %ecx
> > +       movq    %rdx, %r10
> >         testl   $(PAGE_SIZE - VEC_SIZE * 8), %ecx
> >         jz      L(large_memcpy_4x)
> >
> > -       movq    %rdx, %r10
> > -       shrq    $LOG_4X_MEMCPY_THRESH, %r10
> > -       cmp     __x86_shared_non_temporal_threshold(%rip), %r10
> > +       /* r11 has __x86_shared_non_temporal_threshold.  Shift it
> > +          left by LOG_4X_MEMCPY_THRESH to get the L(large_memcpy_4x)
> > +          threshold.  */
> > +       shlq    $LOG_4X_MEMCPY_THRESH, %r11
> > +       cmp     %r11, %rdx
> >         jae     L(large_memcpy_4x)
> >
> >         /* edx will store remainder size for copying tail.  */
> >         andl    $(PAGE_SIZE * 2 - 1), %edx
> >         /* r10 stores outer loop counter.  */
> > -       shrq    $((LOG_PAGE_SIZE + 1) - LOG_4X_MEMCPY_THRESH), %r10
> > +       shrq    $(LOG_PAGE_SIZE + 1), %r10
> >         /* Copy 4x VEC at a time from 2 pages.  */
> >         .p2align 4
> >  L(loop_large_memcpy_2x_outer):
> > @@ -850,7 +864,6 @@ L(large_memcpy_2x_end):
> >
> >         .p2align 4
> >  L(large_memcpy_4x):
> > -       movq    %rdx, %r10
> >         /* edx will store remainder size for copying tail.  */
> >         andl    $(PAGE_SIZE * 4 - 1), %edx
> >         /* r10 stores outer loop counter.  */
> > --
> > 2.34.1
> >
>
>
> --
> H.J.
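
P.S. For anyone reviewing the bounds checks in the assembly above, the
dispatch after this patch is roughly equivalent to the following C.
This is a hand-written sketch for illustration only; the function,
variable, and enum names are mine, not glibc's.

    /* Rough C model of the large-memcpy size checks after this patch.  */
    #include <stddef.h>
    #include <stdio.h>

    #define LOG_4X_MEMCPY_THRESH 4

    /* Default of glibc.cpu.x86_non_temporal_threshold from the manual
       hunk above (0xc0000).  */
    static const size_t non_temporal_threshold = 0xc0000;

    enum memcpy_path { MORE_8X_VEC, LARGE_MEMCPY_2X, LARGE_MEMCPY_4X };

    static enum memcpy_path
    large_memcpy_path (size_t len, int src_dst_alias)
    {
      /* Fix #1: the lower bound is the non-temporal threshold, not
         __x86_rep_movsb_threshold.  */
      if (len < non_temporal_threshold)
        return MORE_8X_VEC;

      /* Fixes #2/#3: reuse the already-loaded threshold and shift it
         left by LOG_4X_MEMCPY_THRESH to get the 4x cutoff.  The 4x
         loop is also taken when src and dst would alias in the 2x
         loop's two-page access pattern.  */
      if (src_dst_alias
          || len >= (non_temporal_threshold << LOG_4X_MEMCPY_THRESH))
        return LARGE_MEMCPY_4X;

      return LARGE_MEMCPY_2X;
    }

    int
    main (void)
    {
      /* 16 MiB copy: beyond threshold << 4 (12 MiB), so 4x.  */
      printf ("%d\n", large_memcpy_path (16 * 1024 * 1024, 0));
      /* 1 MiB copy: above the threshold but below the 4x cutoff.  */
      printf ("%d\n", large_memcpy_path (1024 * 1024, 0));
      return 0;
    }

With the default threshold of 0xc0000, the 4x path kicks in around
12 MiB, which matches the "far beyond the scope of L3" intent described
in the comment above.
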
