public inbox for libc-alpha@sourceware.org
From: Noah Goldstein <goldstein.w.n@gmail.com>
To: "H.J. Lu" <hjl.tools@gmail.com>
Cc: GNU C Library <libc-alpha@sourceware.org>
Subject: Re: [PATCH v3] Reversing calculation of __x86_shared_non_temporal_threshold
Date: Wed, 19 Apr 2023 18:24:43 -0500
Message-ID: <CAFUsyf+y5+DdcFV8GGz9bPfNvGpFGLT_UcUqZ9DWtk-jdnDecQ@mail.gmail.com>
In-Reply-To: <CAMe9rOo5Zd5023k-oMa31w4Q-7NMYSDc4ZYqJty3ecd=b9t+SA@mail.gmail.com>

On Wed, Apr 19, 2023 at 5:43 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Wed, Apr 19, 2023 at 3:30 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Wed, Apr 19, 2023 at 5:26 PM H.J. Lu <hjl.tools@gmail.com> wrote:
> > >
> > > ---------- Forwarded message ---------
> > > From: Patrick McGehearty via Libc-alpha <libc-alpha@sourceware.org>
> > > Date: Fri, Sep 25, 2020 at 3:21 PM
> > > Subject: [PATCH v3] Reversing calculation of __x86_shared_non_temporal_threshold
> > > To: <libc-alpha@sourceware.org>
> > >
> > >
> > > The __x86_shared_non_temporal_threshold determines when memcpy on x86
> > > uses non-temporal stores to avoid pushing other data out of the
> > > last-level cache.
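
As a sketch of what this threshold controls (hypothetical helper names;
glibc's real large-copy path is hand-written assembly under
sysdeps/x86_64/multiarch, and SSE2 streaming stores here stand in for
whatever width the real code uses):

```c
#include <emmintrin.h>  /* SSE2: _mm_stream_si128, _mm_sfence */
#include <stddef.h>
#include <string.h>

/* Illustrative stand-in for __x86_shared_non_temporal_threshold. */
static size_t non_temporal_threshold = 3 * (1024 * 1024) / 4;

/* Non-temporal copy: streaming stores bypass the cache.  To keep the
   sketch short, assume dst is 16-byte aligned and n is a multiple
   of 16.  */
static void
copy_nontemporal (void *dst, const void *src, size_t n)
{
  __m128i *d = (__m128i *) dst;
  const __m128i *s = (const __m128i *) src;
  for (size_t i = 0; i < n / 16; i++)
    _mm_stream_si128 (d + i, _mm_loadu_si128 (s + i));
  _mm_sfence ();  /* order streaming stores before later accesses */
}

static void
large_copy (void *dst, const void *src, size_t n)
{
  if (n >= non_temporal_threshold)
    copy_nontemporal (dst, src, n);  /* avoid displacing cached data */
  else
    memcpy (dst, src, n);            /* regular cache-allocating copy */
}
```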
> > >
> > > This patch proposes to revert the calculation change made by H.J. Lu's
> > > patch of June 2, 2017.
> > >
> > > H.J. Lu's patch selected a threshold suitable for a single thread
> > > getting maximum performance. It was tuned using the single-threaded
> > > large memcpy micro benchmark on an 8-core processor. That change
> > > moved the threshold from 3/4 of one thread's share of the cache to
> > > 3/4 of the entire cache of a multi-threaded system before switching
> > > to non-temporal stores. Multi-threaded systems with more than a few
> > > threads are server-class and typically have many active threads. If
> > > one thread consumes 3/4 of the cache available to all threads, it
> > > will cause other active threads to have data removed from the cache.
> > > Two examples show the range of the effect. John McCalpin's Stream
> > > benchmark, which runs many threads in parallel, each fetching data
> > > sequentially, saw a 20% slowdown from the 2017 change on an internal
> > > system test with 128 threads. This regression was discovered when
> > > comparing OL8 performance to OL7.  An example that compares normal
> > > stores to non-temporal stores may be found at
> > > https://vgatherps.github.io/2018-09-02-nontemporal/.  A simple test
> > > there shows a 4x to 5x slowdown when non-temporal stores are not
> > > used. These performance losses are most likely to occur when the
> > > system load is heaviest and good performance is critical.
> > >
> > > The tunable x86_non_temporal_threshold can be used to override the
> > > default for the knowledgeable user who really wants maximum cache
> > > allocation to a single thread in a multi-threaded system.
> > > The manual entry for the tunable has been expanded to provide
> > > more information about its purpose.
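
For reference, tunables of this kind are set through the GLIBC_TUNABLES
environment variable, e.g.
`GLIBC_TUNABLES=glibc.tune.x86_non_temporal_threshold=16777216` to force
a 16 MiB threshold for one run; the value here is only an example.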
> > >
> > >         modified: sysdeps/x86/cacheinfo.c
> > >         modified: manual/tunables.texi
> > > ---
> > >  manual/tunables.texi    |  6 +++++-
> > >  sysdeps/x86/cacheinfo.c | 16 +++++++++++-----
> > >  2 files changed, 16 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/manual/tunables.texi b/manual/tunables.texi
> > > index b6bb54d..94d4fbd 100644
> > > --- a/manual/tunables.texi
> > > +++ b/manual/tunables.texi
> > > @@ -364,7 +364,11 @@ set shared cache size in bytes for use in memory
> > > and string routines.
> > >
> > >  @deftp Tunable glibc.tune.x86_non_temporal_threshold
> > >  The @code{glibc.tune.x86_non_temporal_threshold} tunable allows the user
> > > -to set threshold in bytes for non temporal store.
> > > +to set the threshold in bytes for non-temporal stores. Non-temporal
> > > +stores give a hint to the hardware to move data directly to memory
> > > +without displacing other data from the cache. This tunable is used by
> > > +some platforms to determine when to use non-temporal stores in
> > > +operations like memmove and memcpy.
> > >
> > >  This tunable is specific to i386 and x86-64.
> > >  @end deftp
> > > diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
> > > index b9444dd..42b468d 100644
> > > --- a/sysdeps/x86/cacheinfo.c
> > > +++ b/sysdeps/x86/cacheinfo.c
> > > @@ -778,14 +778,20 @@ intel_bug_no_cache_info:
> > >        __x86_shared_cache_size = shared;
> > >      }
> > >
> > > -  /* The large memcpy micro benchmark in glibc shows that 6 times of
> > > -     shared cache size is the approximate value above which non-temporal
> > > -     store becomes faster on a 8-core processor.  This is the 3/4 of the
> > > -     total shared cache size.  */
> > > +  /* The default setting for the non_temporal threshold is 3/4 of one
> > > +     thread's share of the chip's cache. For most Intel and AMD
> > > +     processors with an initial release date between 2017 and 2020, a
> > > +     thread's typical share of the cache is from 500 KBytes to 2 MBytes.
> > > +     Using the 3/4 threshold leaves 125 KBytes to 500 KBytes of the
> > > +     thread's data in cache after a maximum-size temporal copy, which
> > > +     keeps a reasonable portion of the thread's stack and other active
> > > +     data in cache. If the threshold is set higher than one thread's
> > > +     share of the cache, it runs a substantial risk of hurting the
> > > +     performance of other threads running on the chip. */
> > >    __x86_shared_non_temporal_threshold
> > >      = (cpu_features->non_temporal_threshold != 0
> > >         ? cpu_features->non_temporal_threshold
> > > -       : __x86_shared_cache_size * threads * 3 / 4);
> > > +       : __x86_shared_cache_size * 3 / 4);
> > >  }
> > >
> > >  #endif
> > > --
> > > 1.8.3.1
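
To make the magnitude of the change concrete, here is the arithmetic
under an assumed configuration (a 32 MiB shared L3 with 16 threads; the
numbers are illustrative, not from the patch). Per the removed comment
above, __x86_shared_cache_size * threads is the total shared cache, so
the variable holds one thread's share:

```c
#include <stdio.h>

int
main (void)
{
  long shared = (32L << 20) / 16;            /* 2 MiB per-thread share */
  long old_threshold = shared * 16 * 3 / 4;  /* 24 MiB: 3/4 of the whole L3 */
  long new_threshold = shared * 3 / 4;       /* 1.5 MiB: 3/4 of one share */
  printf ("old: %ld bytes, new: %ld bytes\n", old_threshold, new_threshold);
  return 0;
}
```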
> > >
> > >
> > >
> > > --
> > > H.J.
> >
> >
> > I am looking into re-tuning the NT store threshold which appears to be
> > too low in many cases.
> >
> > I've played around with some micro-benchmarks:
> > https://github.com/goldsteinn/memcpy-nt-benchmarks
> >
> > I am finding that, for the most part, ERMS stays competitive with
> > NT-stores even as core count increases, with heavy read workloads
> > running on other threads.
> > See: https://github.com/goldsteinn/memcpy-nt-benchmarks/blob/master/results-skx-pdf/skx-memcpy-4--read.pdf
> >
> > I saw: https://vgatherps.github.io/2018-09-02-nontemporal/ although
> > it's not clear how to reproduce the results in the blog. I also see it
> > was only comparing vs standard temporal stores, not ERMS.
> >
> > Does anyone know of benchmarks or an application that can highlight
> > the L3 clobbering issues brought up in this patch?
>
> You can try this:
>
> https://github.com/jeffhammond/STREAM

That's the same as a normal memcpy benchmark, no? It's just calling
something like `tuned_STREAM_Copy()` (memcpy) in a loop, maybe
interleaved with some other reads. Similar to what I was running to get:
https://github.com/goldsteinn/memcpy-nt-benchmarks/blob/master/results-skx-pdf/skx-memcpy-4--read.pdf
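
For reference, the memcpy-backed variant I mean is roughly a one-line
substitution into stream.c's stock tuned_STREAM_Copy (the tuned_*
kernels are compiled in with -DTUNED; string.h would also need to be
included):

```c
/* Hypothetical drop-in body for stream.c's tuned_STREAM_Copy: route
   the Copy kernel through glibc's memcpy so large copies hit the
   NT-store threshold path.  a and c are STREAM's global arrays;
   STREAM_TYPE and STREAM_ARRAY_SIZE are the macros stream.c is
   compiled with.  */
void
tuned_STREAM_Copy (void)
{
  memcpy (c, a, sizeof (STREAM_TYPE) * STREAM_ARRAY_SIZE);
}
```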

Either way on my ICL using the benchmark:

```
ERMS (L3)
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:          323410.5     0.001262     0.001245     0.001285
Scale:          26367.3     0.017114     0.015271     0.029576
Add:            29635.9     0.022948     0.020380     0.032384
Triad:          29401.0     0.021522     0.020543     0.024977

NT (L3)
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:          285375.1     0.001421     0.001411     0.001443
Scale:          26457.3     0.015358     0.015219     0.015730
Add:            29753.9     0.020656     0.020299     0.022881
Triad:          29594.0     0.020732     0.020409     0.022240


ERMS (L3 / 2)
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:          431049.0     0.000620     0.000467     0.001749
Scale:          27071.0     0.007996     0.007437     0.010018
Add:            31005.5     0.009864     0.009740     0.010432
Triad:          30359.7     0.010061     0.009947     0.010434

NT (L3 / 2)
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:          277315.2     0.000746     0.000726     0.000803
Scale:          27511.1     0.007540     0.007318     0.008739
Add:            30423.9     0.010116     0.009926     0.011031
Triad:          30430.5     0.009980     0.009924     0.010097
```
Seems to suggest ERMS is favorable.


>
> --
> H.J.

Thread overview: 14+ messages
2020-09-25 22:21 Patrick McGehearty
2020-09-25 22:26 ` H.J. Lu
2020-09-28 12:55   ` Florian Weimer
2020-09-27 13:54 ` Carlos O'Donell
2020-10-01 16:04   ` Patrick McGehearty
2020-10-01 21:02     ` Carlos O'Donell
     [not found] ` <CAMe9rOr3QUQKGgAnk+UBBq6hLXkU6i8XcNUMKkNRo1iAK=7ceA@mail.gmail.com>
2023-04-19 22:30   ` Noah Goldstein
2023-04-19 22:43     ` H.J. Lu
2023-04-19 23:24       ` Noah Goldstein [this message]
2023-04-20  0:12         ` H.J. Lu
2023-04-20  0:27           ` Noah Goldstein
2023-04-20 16:17             ` H.J. Lu
2023-04-20 20:23               ` Noah Goldstein
2023-04-20 23:50                 ` H.J. Lu
