public inbox for libc-alpha@sourceware.org
* [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
@ 2023-04-24  5:03 Noah Goldstein
  2023-04-24 18:09 ` H.J. Lu
                   ` (11 more replies)
  0 siblings, 12 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-04-24  5:03 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Currently, `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
'sizeof_L3 / 2'.
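
As a rough illustration (hypothetical machine: a 32 MB L3 shared by 16
cores per socket), the change moves the default roughly as follows:

#include <stdio.h>

int
main (void)
{
  /* Hypothetical example only: 32 MB L3, 16 cores per socket.  */
  long l3 = 32L * 1024 * 1024;
  long ncores_per_socket = 16;
  long old_threshold = (l3 / ncores_per_socket) * 3 / 4; /* ~1.5 MB */
  long new_threshold = l3 / 2;                           /* 16 MB */
  printf ("old: %ld bytes, new: %ld bytes\n", old_threshold, new_threshold);
  return 0;
}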

The original value (specifically, the division by `ncores_per_socket`)
was chosen to limit the amount of other threads' data a
`memcpy`/`memset` could evict.

Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in cases
where `rep movsb` is several times faster.

Furthermore, non-temporal stores are written directly to disk so using
them at a size much smaller than L3 can place soon-to-be-accessed data
much further away than it otherwise could be. As well, modern machines
are able to detect streaming patterns (especially if `rep movsb` is
used) and provide LRU hints to the memory subsystem. This in effect
caps the total amount of eviction at 1/cache_associativity, far below
meaningfully thrashing the entire cache.
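
A back-of-the-envelope sketch of that cap (hypothetical 32 MB, 16-way
set-associative L3; the function below is purely illustrative):

/* With streaming/LRU hints the copy displaces at most roughly one way
   per set, i.e. l3_bytes / ways (~2 MB for this hypothetical L3).  */
static long
max_streaming_eviction (long l3_bytes, int ways)
{
  return l3_bytes / ways;
}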

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is to `rep movsb`, which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% for copies
over 100MB (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, `rep movsb` is ~2x
faster up to `sizeof_L3`.

Benchmarks comparing non-temporal stores, rep movsb, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available as a PDF in the GitHub repo):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
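
For readers without the linked repo handy, a minimal self-contained
sketch of the kind of comparison involved (illustrative only, not the
benchmark code in that repo) could look like:

/* Illustrative micro-benchmark sketch; build e.g. with `gcc -O2'.  */
#include <emmintrin.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

static void
copy_rep_movsb (void *dst, const void *src, size_t n)
{
  __asm__ volatile ("rep movsb"
                    : "+D" (dst), "+S" (src), "+c" (n) : : "memory");
}

static void
copy_nt (void *dst, const void *src, size_t n)
{
  /* Assumes 16-byte aligned dst and n a multiple of 16.  */
  const __m128i *s = (const __m128i *) src;
  __m128i *d = (__m128i *) dst;
  for (size_t i = 0; i < n / 16; i++)
    _mm_stream_si128 (d + i, _mm_loadu_si128 (s + i));
  _mm_sfence ();
}

static double
time_copy (void (*fn) (void *, const void *, size_t),
           void *dst, const void *src, size_t n, int iters)
{
  struct timespec t0, t1;
  clock_gettime (CLOCK_MONOTONIC, &t0);
  for (int i = 0; i < iters; i++)
    fn (dst, src, n);
  clock_gettime (CLOCK_MONOTONIC, &t1);
  return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}

int
main (void)
{
  size_t n = 8 * 1024 * 1024;   /* 8 MiB, near the old threshold.  */
  void *src = aligned_alloc (64, n);
  void *dst = aligned_alloc (64, n);
  memset (src, 1, n);
  memset (dst, 0, n);
  printf ("rep movsb: %.3f s\n", time_copy (copy_rep_movsb, dst, src, n, 100));
  printf ("nt stores: %.3f s\n", time_copy (copy_nt, dst, src, n, 100));
  free (src);
  free (dst);
  return 0;
}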
---
 sysdeps/x86/dl-cacheinfo.h | 35 ++++++++++++++---------------------
 1 file changed, 14 insertions(+), 21 deletions(-)

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index ec88945b39..f25309dbc8 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -604,20 +604,11 @@ intel_bug_no_cache_info:
             = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
 	       & 0xff);
         }
-
-        /* Cap usage of highest cache level to the number of supported
-           threads.  */
-        if (shared > 0 && threads > 0)
-          shared /= threads;
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
-    {
-      if (threads_l2 > 0)
-        core /= threads_l2;
-      shared += core;
-    }
+    shared += core;
 
   *shared_ptr = shared;
   *threads_ptr = threads;
@@ -730,17 +721,19 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/2 of size
+     of chip's cache. For most Intel and AMD processors with an
+     initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
+     estimate the point where non-temporal stores begin outcompeting
+     other methods. As well the point where the fact that non-temporal
+     stores are forced back to disk would already occured to the
+     majority of the lines in the copy. Note, concerns about the
+     entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and
+     provides proper LRU hints so that the the maximum thrashing
+     capped at 1/assosiativity. */
+  unsigned long int non_temporal_threshold = shared / 2;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
-- 
2.34.1



* Re: [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-24  5:03 [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` Noah Goldstein
@ 2023-04-24 18:09 ` H.J. Lu
  2023-04-24 18:34   ` Noah Goldstein
  2023-04-24 22:30 ` [PATCH v2] " Noah Goldstein
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 76+ messages in thread
From: H.J. Lu @ 2023-04-24 18:09 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Sun, Apr 23, 2023 at 10:03 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> ncores_per_socket'. This patch updates that value to roughly
> 'sizeof_L3 / 2`
>
> The original value (specifically dividing the `ncores_per_socket`) was
> done to limit the amount of other threads' data a `memcpy`/`memset`
> could evict.
>
> Dividing by 'ncores_per_socket', however leads to exceedingly low
> non-temporal threshholds and leads to using non-temporal stores in
> cases where `rep movsb` is multiple times faster.
>
> Furthermore, non-temporal stores are written directly to disk so using

Why is "disk" here?

> it at a size much smaller than L3 can place soon to be accessed data
> much further away than it otherwise could be. As well, modern machines
> are able to detect streaming patterns (especially if `rep movsb` is
> used) and provide LRU hints to the memory subsystem. This in affect
> caps the total amount of eviction at 1/cache_assosiativity, far below
> meaningfully thrashing the entire cache.
>
> As best I can tell, the benchmarks that lead this small threshold
> where done comparing non-temporal stores versus standard cacheable
> stores. A better comparison (linked below) is to be `rep movsb` which,
> on the measure systems, is nearly 2x faster than non-temporal stores
> at the low-end of the previous threshold, and within 10% for over
> 100MB copies (well past even the current threshold). In cases with a
> low number of threads competing for bandwidth, `rep movsb` is ~2x
> faster up to `sizeof_L3`.
>

Should we limit it to processors with ERMS  (Enhanced REP MOVSB/STOSB)?
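
(For reference, a minimal user-space sketch of such a check; inside
glibc the existing CPU_FEATURE_USABLE_P (cpu_features, ERMS) macro
would be used instead.)

#include <cpuid.h>
#include <stdbool.h>

/* ERMS is CPUID.(EAX=7,ECX=0):EBX bit 9.  */
static bool
has_erms (void)
{
  unsigned int eax, ebx, ecx, edx;
  if (!__get_cpuid_count (7, 0, &eax, &ebx, &ecx, &edx))
    return false;
  return (ebx >> 9) & 1;
}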

> Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> stores where done using:
> https://github.com/goldsteinn/memcpy-nt-benchmarks
>
> Sheets results (also available in pdf on the github):
> https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> ---
>  sysdeps/x86/dl-cacheinfo.h | 35 ++++++++++++++---------------------
>  1 file changed, 14 insertions(+), 21 deletions(-)
>
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index ec88945b39..f25309dbc8 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -604,20 +604,11 @@ intel_bug_no_cache_info:
>              = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
>                & 0xff);
>          }
> -
> -        /* Cap usage of highest cache level to the number of supported
> -           threads.  */
> -        if (shared > 0 && threads > 0)
> -          shared /= threads;
>      }
>
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
> -    {
> -      if (threads_l2 > 0)
> -        core /= threads_l2;
> -      shared += core;
> -    }
> +    shared += core;
>
>    *shared_ptr = shared;
>    *threads_ptr = threads;
> @@ -730,17 +721,19 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip. */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/2 of size
> +     of chip's cache. For most Intel and AMD processors with an
> +     initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> +     estimate the point where non-temporal stores begin outcompeting
> +     other methods. As well the point where the fact that non-temporal
> +     stores are forced back to disk would already occured to the
> +     majority of the lines in the copy. Note, concerns about the
> +     entire L3 cache being evicted by the copy are mostly alleviated
> +     by the fact that modern HW detects streaming patterns and
> +     provides proper LRU hints so that the the maximum thrashing
> +     capped at 1/assosiativity. */
> +  unsigned long int non_temporal_threshold = shared / 2;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> --
> 2.34.1
>


-- 
H.J.


* Re: [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-24 18:09 ` H.J. Lu
@ 2023-04-24 18:34   ` Noah Goldstein
  2023-04-24 20:44     ` H.J. Lu
  0 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-04-24 18:34 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Mon, Apr 24, 2023 at 1:10 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Sun, Apr 23, 2023 at 10:03 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > ncores_per_socket'. This patch updates that value to roughly
> > 'sizeof_L3 / 2`
> >
> > The original value (specifically dividing the `ncores_per_socket`) was
> > done to limit the amount of other threads' data a `memcpy`/`memset`
> > could evict.
> >
> > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > non-temporal threshholds and leads to using non-temporal stores in
> > cases where `rep movsb` is multiple times faster.
> >
> > Furthermore, non-temporal stores are written directly to disk so using
>
> Why is "disk" here?
I mean main-memory. Will update in V2.
>
> > it at a size much smaller than L3 can place soon to be accessed data
> > much further away than it otherwise could be. As well, modern machines
> > are able to detect streaming patterns (especially if `rep movsb` is
> > used) and provide LRU hints to the memory subsystem. This in affect
> > caps the total amount of eviction at 1/cache_assosiativity, far below
> > meaningfully thrashing the entire cache.
> >
> > As best I can tell, the benchmarks that lead this small threshold
> > where done comparing non-temporal stores versus standard cacheable
> > stores. A better comparison (linked below) is to be `rep movsb` which,
> > on the measure systems, is nearly 2x faster than non-temporal stores
> > at the low-end of the previous threshold, and within 10% for over
> > 100MB copies (well past even the current threshold). In cases with a
> > low number of threads competing for bandwidth, `rep movsb` is ~2x
> > faster up to `sizeof_L3`.
> >
>
> Should we limit it to processors with ERMS  (Enhanced REP MOVSB/STOSB)?
>
I think that would probably make sense. We see a more meaningful
regression for larger sizes when using the standard store loop. I
think /nthreads is still too small.
How about
if ERMS: L3/2
else: L3 / (2 * sqrt(nthreads)) ?
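
A rough sketch of that idea (illustrative only; not what was adopted):

/* `l3_size' is the full L3 size in bytes, `nthreads' the number of
   threads sharing it.  */
static unsigned long int
proposed_nt_threshold (unsigned long int l3_size, unsigned int nthreads,
                       int has_erms)
{
  if (has_erms)
    return l3_size / 2;

  /* Integer approximation of sqrt (nthreads).  */
  unsigned int s = 1;
  while ((unsigned long int) (s + 1) * (s + 1) <= nthreads)
    s++;
  return l3_size / (2 * s);
}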


> > Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> > stores where done using:
> > https://github.com/goldsteinn/memcpy-nt-benchmarks
> >
> > Sheets results (also available in pdf on the github):
> > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > ---
> >  sysdeps/x86/dl-cacheinfo.h | 35 ++++++++++++++---------------------
> >  1 file changed, 14 insertions(+), 21 deletions(-)
> >
> > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > index ec88945b39..f25309dbc8 100644
> > --- a/sysdeps/x86/dl-cacheinfo.h
> > +++ b/sysdeps/x86/dl-cacheinfo.h
> > @@ -604,20 +604,11 @@ intel_bug_no_cache_info:
> >              = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> >                & 0xff);
> >          }
> > -
> > -        /* Cap usage of highest cache level to the number of supported
> > -           threads.  */
> > -        if (shared > 0 && threads > 0)
> > -          shared /= threads;
> >      }
> >
> >    /* Account for non-inclusive L2 and L3 caches.  */
> >    if (!inclusive_cache)
> > -    {
> > -      if (threads_l2 > 0)
> > -        core /= threads_l2;
> > -      shared += core;
> > -    }
> > +    shared += core;
> >
> >    *shared_ptr = shared;
> >    *threads_ptr = threads;
> > @@ -730,17 +721,19 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> >    cpu_features->level4_cache_size = level4_cache_size;
> >
> > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > -     thread's share of the chip's cache. For most Intel and AMD processors
> > -     with an initial release date between 2017 and 2020, a thread's typical
> > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > -     in cache after a maximum temporal copy, which will maintain
> > -     in cache a reasonable portion of the thread's stack and other
> > -     active data. If the threshold is set higher than one thread's
> > -     share of the cache, it has a substantial risk of negatively
> > -     impacting the performance of other threads running on the chip. */
> > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > +     of chip's cache. For most Intel and AMD processors with an
> > +     initial release date between 2017 and 2023, a thread's typical
> > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > +     estimate the point where non-temporal stores begin outcompeting
> > +     other methods. As well the point where the fact that non-temporal
> > +     stores are forced back to disk would already occured to the
> > +     majority of the lines in the copy. Note, concerns about the
> > +     entire L3 cache being evicted by the copy are mostly alleviated
> > +     by the fact that modern HW detects streaming patterns and
> > +     provides proper LRU hints so that the the maximum thrashing
> > +     capped at 1/assosiativity. */
> > +  unsigned long int non_temporal_threshold = shared / 2;
> >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > --
> > 2.34.1
> >
>
>
> --
> H.J.


* Re: [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-24 18:34   ` Noah Goldstein
@ 2023-04-24 20:44     ` H.J. Lu
  2023-04-24 22:30       ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: H.J. Lu @ 2023-04-24 20:44 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Mon, Apr 24, 2023 at 11:34 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Mon, Apr 24, 2023 at 1:10 PM H.J. Lu <hjl.tools@gmail.com> wrote:
> >
> > On Sun, Apr 23, 2023 at 10:03 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > ncores_per_socket'. This patch updates that value to roughly
> > > 'sizeof_L3 / 2`
> > >
> > > The original value (specifically dividing the `ncores_per_socket`) was
> > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > could evict.
> > >
> > > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > > non-temporal threshholds and leads to using non-temporal stores in
> > > cases where `rep movsb` is multiple times faster.
> > >
> > > Furthermore, non-temporal stores are written directly to disk so using
> >
> > Why is "disk" here?
> I mean main-memory. Will update in V2.
> >
> > > it at a size much smaller than L3 can place soon to be accessed data
> > > much further away than it otherwise could be. As well, modern machines
> > > are able to detect streaming patterns (especially if `rep movsb` is
> > > used) and provide LRU hints to the memory subsystem. This in affect
> > > caps the total amount of eviction at 1/cache_assosiativity, far below
> > > meaningfully thrashing the entire cache.
> > >
> > > As best I can tell, the benchmarks that lead this small threshold
> > > where done comparing non-temporal stores versus standard cacheable
> > > stores. A better comparison (linked below) is to be `rep movsb` which,
> > > on the measure systems, is nearly 2x faster than non-temporal stores
> > > at the low-end of the previous threshold, and within 10% for over
> > > 100MB copies (well past even the current threshold). In cases with a
> > > low number of threads competing for bandwidth, `rep movsb` is ~2x
> > > faster up to `sizeof_L3`.
> > >
> >
> > Should we limit it to processors with ERMS  (Enhanced REP MOVSB/STOSB)?
> >
> Think that would probably make sense. We see more meaningful regression
> for larger sizes when using standard store loop. Think /nthreads is
> still too small.
> How about
> if ERMS: L3/2
> else: L3 / (2 * sqrt(nthreads)) ?

I think we should leave the non-ERMS case unchanged.

>
>
> > > Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> > > stores where done using:
> > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > >
> > > Sheets results (also available in pdf on the github):
> > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > ---
> > >  sysdeps/x86/dl-cacheinfo.h | 35 ++++++++++++++---------------------
> > >  1 file changed, 14 insertions(+), 21 deletions(-)
> > >
> > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > index ec88945b39..f25309dbc8 100644
> > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > @@ -604,20 +604,11 @@ intel_bug_no_cache_info:
> > >              = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > >                & 0xff);
> > >          }
> > > -
> > > -        /* Cap usage of highest cache level to the number of supported
> > > -           threads.  */
> > > -        if (shared > 0 && threads > 0)
> > > -          shared /= threads;
> > >      }
> > >
> > >    /* Account for non-inclusive L2 and L3 caches.  */
> > >    if (!inclusive_cache)
> > > -    {
> > > -      if (threads_l2 > 0)
> > > -        core /= threads_l2;
> > > -      shared += core;
> > > -    }
> > > +    shared += core;
> > >
> > >    *shared_ptr = shared;
> > >    *threads_ptr = threads;
> > > @@ -730,17 +721,19 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > >    cpu_features->level4_cache_size = level4_cache_size;
> > >
> > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > -     in cache after a maximum temporal copy, which will maintain
> > > -     in cache a reasonable portion of the thread's stack and other
> > > -     active data. If the threshold is set higher than one thread's
> > > -     share of the cache, it has a substantial risk of negatively
> > > -     impacting the performance of other threads running on the chip. */
> > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > > +     of chip's cache. For most Intel and AMD processors with an
> > > +     initial release date between 2017 and 2023, a thread's typical
> > > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > > +     estimate the point where non-temporal stores begin outcompeting
> > > +     other methods. As well the point where the fact that non-temporal
> > > +     stores are forced back to disk would already occured to the
> > > +     majority of the lines in the copy. Note, concerns about the
> > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > +     by the fact that modern HW detects streaming patterns and
> > > +     provides proper LRU hints so that the the maximum thrashing
> > > +     capped at 1/assosiativity. */
> > > +  unsigned long int non_temporal_threshold = shared / 2;
> > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > --
> > > 2.34.1
> > >
> >
> >
> > --
> > H.J.



-- 
H.J.


* Re: [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-24 20:44     ` H.J. Lu
@ 2023-04-24 22:30       ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-04-24 22:30 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Mon, Apr 24, 2023 at 3:44 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Mon, Apr 24, 2023 at 11:34 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Mon, Apr 24, 2023 at 1:10 PM H.J. Lu <hjl.tools@gmail.com> wrote:
> > >
> > > On Sun, Apr 23, 2023 at 10:03 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > >
> > > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > > ncores_per_socket'. This patch updates that value to roughly
> > > > 'sizeof_L3 / 2`
> > > >
> > > > The original value (specifically dividing the `ncores_per_socket`) was
> > > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > > could evict.
> > > >
> > > > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > > > non-temporal threshholds and leads to using non-temporal stores in
> > > > cases where `rep movsb` is multiple times faster.
> > > >
> > > > Furthermore, non-temporal stores are written directly to disk so using
> > >
> > > Why is "disk" here?
> > I mean main-memory. Will update in V2.
> > >
> > > > it at a size much smaller than L3 can place soon to be accessed data
> > > > much further away than it otherwise could be. As well, modern machines
> > > > are able to detect streaming patterns (especially if `rep movsb` is
> > > > used) and provide LRU hints to the memory subsystem. This in affect
> > > > caps the total amount of eviction at 1/cache_assosiativity, far below
> > > > meaningfully thrashing the entire cache.
> > > >
> > > > As best I can tell, the benchmarks that lead this small threshold
> > > > where done comparing non-temporal stores versus standard cacheable
> > > > stores. A better comparison (linked below) is to be `rep movsb` which,
> > > > on the measure systems, is nearly 2x faster than non-temporal stores
> > > > at the low-end of the previous threshold, and within 10% for over
> > > > 100MB copies (well past even the current threshold). In cases with a
> > > > low number of threads competing for bandwidth, `rep movsb` is ~2x
> > > > faster up to `sizeof_L3`.
> > > >
> > >
> > > Should we limit it to processors with ERMS  (Enhanced REP MOVSB/STOSB)?
> > >
> > Think that would probably make sense. We see more meaningful regression
> > for larger sizes when using standard store loop. Think /nthreads is
> > still too small.
> > How about
> > if ERMS: L3/2
> > else: L3 / (2 * sqrt(nthreads)) ?
>
> I think we should leave the non-ERMS case unchanged.

Done
>
> >
> >
> > > > Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> > > > stores where done using:
> > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > >
> > > > Sheets results (also available in pdf on the github):
> > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > > ---
> > > >  sysdeps/x86/dl-cacheinfo.h | 35 ++++++++++++++---------------------
> > > >  1 file changed, 14 insertions(+), 21 deletions(-)
> > > >
> > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > index ec88945b39..f25309dbc8 100644
> > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > @@ -604,20 +604,11 @@ intel_bug_no_cache_info:
> > > >              = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > >                & 0xff);
> > > >          }
> > > > -
> > > > -        /* Cap usage of highest cache level to the number of supported
> > > > -           threads.  */
> > > > -        if (shared > 0 && threads > 0)
> > > > -          shared /= threads;
> > > >      }
> > > >
> > > >    /* Account for non-inclusive L2 and L3 caches.  */
> > > >    if (!inclusive_cache)
> > > > -    {
> > > > -      if (threads_l2 > 0)
> > > > -        core /= threads_l2;
> > > > -      shared += core;
> > > > -    }
> > > > +    shared += core;
> > > >
> > > >    *shared_ptr = shared;
> > > >    *threads_ptr = threads;
> > > > @@ -730,17 +721,19 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > > >    cpu_features->level4_cache_size = level4_cache_size;
> > > >
> > > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > -     in cache after a maximum temporal copy, which will maintain
> > > > -     in cache a reasonable portion of the thread's stack and other
> > > > -     active data. If the threshold is set higher than one thread's
> > > > -     share of the cache, it has a substantial risk of negatively
> > > > -     impacting the performance of other threads running on the chip. */
> > > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > > > +     of chip's cache. For most Intel and AMD processors with an
> > > > +     initial release date between 2017 and 2023, a thread's typical
> > > > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > > > +     estimate the point where non-temporal stores begin outcompeting
> > > > +     other methods. As well the point where the fact that non-temporal
> > > > +     stores are forced back to disk would already occured to the
> > > > +     majority of the lines in the copy. Note, concerns about the
> > > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > > +     by the fact that modern HW detects streaming patterns and
> > > > +     provides proper LRU hints so that the the maximum thrashing
> > > > +     capped at 1/assosiativity. */
> > > > +  unsigned long int non_temporal_threshold = shared / 2;
> > > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > > --
> > > > 2.34.1
> > > >
> > >
> > >
> > > --
> > > H.J.
>
>
>
> --
> H.J.


* [PATCH v2] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-24  5:03 [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` Noah Goldstein
  2023-04-24 18:09 ` H.J. Lu
@ 2023-04-24 22:30 ` Noah Goldstein
  2023-04-24 22:48   ` H.J. Lu
  2023-04-25  3:43 ` [PATCH v3] " Noah Goldstein
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-04-24 22:30 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Currently, `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
'sizeof_L3 / 2'.

The original value (specifically, the division by `ncores_per_socket`)
was chosen to limit the amount of other threads' data a
`memcpy`/`memset` could evict.

Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in cases
where `rep movsb` is several times faster.

Furthermore, non-temporal stores are written directly to disk so using
them at a size much smaller than L3 can place soon-to-be-accessed data
much further away than it otherwise could be. As well, modern machines
are able to detect streaming patterns (especially if `rep movsb` is
used) and provide LRU hints to the memory subsystem. This in effect
caps the total amount of eviction at 1/cache_associativity, far below
meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is to `rep movsb`, which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% for copies
over 100MB (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, `rep movsb` is ~2x
faster up to `sizeof_L3`.

Because there are still valid concerns about the performance of large
memcpy's using cacheable stores (both their direct performance and
their effect on the rest of the system), this patch also introduces a
new tunable, `__x86_shared_non_temporal_threshold_no_erms`, that
continues to use the old calculation and is used if no ERMS memcpy is
supported by the target.
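
Put differently (a hypothetical C-level restatement; the real check is
done in assembly in memmove-vec-unaligned-erms.S, and the helper below
is a stand-in, not a glibc function):

#include <stdbool.h>
#include <stddef.h>

static size_t
nt_threshold_for (bool have_erms, size_t threshold, size_t threshold_no_erms)
{
  /* With ERMS the larger `shared / 2' based threshold applies; without
     it the older, per-thread based value is kept.  */
  return have_erms ? threshold : threshold_no_erms;
}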

Benchmarks comparing non-temporal stores, rep movsb, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available as a PDF in the GitHub repo):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
---
 manual/tunables.texi                          | 16 +++-
 sysdeps/x86/cacheinfo.h                       |  8 +-
 sysdeps/x86/dl-cacheinfo.h                    | 85 +++++++++++++------
 sysdeps/x86/dl-diagnostics-cpu.c              |  2 +
 sysdeps/x86/dl-tunables.list                  |  3 +
 sysdeps/x86/include/cpu-features.h            |  4 +-
 .../multiarch/memmove-vec-unaligned-erms.S    | 12 ++-
 7 files changed, 98 insertions(+), 32 deletions(-)

diff --git a/manual/tunables.texi b/manual/tunables.texi
index 130f94b2bc..8320e724f0 100644
--- a/manual/tunables.texi
+++ b/manual/tunables.texi
@@ -52,6 +52,7 @@ glibc.elision.skip_lock_busy: 3 (min: 0, max: 2147483647)
 glibc.malloc.top_pad: 0x20000 (min: 0x0, max: 0xffffffffffffffff)
 glibc.cpu.x86_rep_stosb_threshold: 0x800 (min: 0x1, max: 0xffffffffffffffff)
 glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
+glibc.cpu.x86_non_temporal_threshold_no_erms: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
 glibc.cpu.x86_shstk:
 glibc.pthread.stack_cache_size: 0x2800000 (min: 0x0, max: 0xffffffffffffffff)
 glibc.cpu.hwcap_mask: 0x6 (min: 0x0, max: 0xffffffffffffffff)
@@ -486,7 +487,8 @@ thread stack originally backup by Huge Pages to default pages.
 @cindex shared_cache_size tunables
 @cindex tunables, shared_cache_size
 @cindex non_temporal_threshold tunables
-@cindex tunables, non_temporal_threshold
+@cindex non_temporal_threshold tunables_no_erms
+@cindex tunables, non_temporal_threshold, non_temporal_threshold_no_erms
 
 @deftp {Tunable namespace} glibc.cpu
 Behavior of @theglibc{} can be tuned to assume specific hardware capabilities
@@ -559,6 +561,18 @@ like memmove and memcpy.
 This tunable is specific to i386 and x86-64.
 @end deftp
 
+@deftp Tunable glibc.cpu.x86_non_temporal_threshold_no_erms
+The @code{glibc.cpu.x86_non_temporal_threshold_no_erms} is similiar to
+the above, but is used specifically when the ERMS feature is not
+available. ERMS function are often implemented with optimizations for
+large streaming workloads. This often makes it a better choice than
+non-temporal stores for a wider-range of values. When ERMS is not
+available, however, non-temporal stores become preferable at a much
+lower threshold.
+
+This tunable is specific to i386 and x86-64.
+@end deftp
+
 @deftp Tunable glibc.cpu.x86_rep_movsb_threshold
 The @code{glibc.cpu.x86_rep_movsb_threshold} tunable allows the user to
 set threshold in bytes to start using "rep movsb".  The value must be
diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
index ec1bc142c4..1083bd6018 100644
--- a/sysdeps/x86/cacheinfo.h
+++ b/sysdeps/x86/cacheinfo.h
@@ -35,9 +35,12 @@ long int __x86_data_cache_size attribute_hidden = 32 * 1024;
 long int __x86_shared_cache_size_half attribute_hidden = 1024 * 1024 / 2;
 long int __x86_shared_cache_size attribute_hidden = 1024 * 1024;
 
-/* Threshold to use non temporal store.  */
+/* Threshold to use non temporal store if ERMS is available.  */
 long int __x86_shared_non_temporal_threshold attribute_hidden;
 
+/* Threshold to use non temporal store if ERMS is not available.  */
+long int __x86_shared_non_temporal_threshold_no_erms attribute_hidden;
+
 /* Threshold to use Enhanced REP MOVSB.  */
 long int __x86_rep_movsb_threshold attribute_hidden = 2048;
 
@@ -77,6 +80,9 @@ init_cacheinfo (void)
   __x86_shared_non_temporal_threshold
     = cpu_features->non_temporal_threshold;
 
+  __x86_shared_non_temporal_threshold_no_erms
+      = cpu_features->non_temporal_threshold_no_erms;
+
   __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
   __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
   __x86_rep_movsb_stop_threshold =  cpu_features->rep_movsb_stop_threshold;
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index ec88945b39..94d5c6183a 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -407,7 +407,7 @@ handle_zhaoxin (int name)
 }
 
 static void
-get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
+get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
                 long int core)
 {
   unsigned int eax;
@@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
   unsigned int family = cpu_features->basic.family;
   unsigned int model = cpu_features->basic.model;
   long int shared = *shared_ptr;
+  long int shared_per_thread = *shared_per_thread_ptr;
   unsigned int threads = *threads_ptr;
   bool inclusive_cache = true;
   bool support_count_mask = true;
@@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
       /* Try L2 otherwise.  */
       level  = 2;
       shared = core;
+      shared_per_thread = core;
       threads_l2 = 0;
       threads_l3 = -1;
     }
@@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
         }
       else
         {
-intel_bug_no_cache_info:
-          /* Assume that all logical threads share the highest cache
-             level.  */
-          threads
-            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
-	       & 0xff);
-        }
-
-        /* Cap usage of highest cache level to the number of supported
-           threads.  */
-        if (shared > 0 && threads > 0)
-          shared /= threads;
+	intel_bug_no_cache_info:
+	  /* Assume that all logical threads share the highest cache
+	     level.  */
+	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
+		     & 0xff);
+
+	  /* Get per-thread size of highest level cache.  */
+	  if (shared_per_thread > 0 && threads > 0)
+	    shared_per_thread /= threads;
+	}
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
       if (threads_l2 > 0)
-        core /= threads_l2;
+	shared_per_thread += core / threads_l2;
       shared += core;
     }
 
   *shared_ptr = shared;
+  *shared_per_thread_ptr = shared_per_thread;
   *threads_ptr = threads;
 }
 
@@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   /* Find out what brand of processor.  */
   long int data = -1;
   long int shared = -1;
+  long int shared_per_thread = -1;
   long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
@@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
       core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
       shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
+      shared_per_thread = shared;
 
       level1_icache_size
 	= handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
@@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level4_cache_size
 	= handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_zhaoxin)
     {
       data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_amd)
     {
       data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       if (shared <= 0)
         /* No shared L3 cache.  All we have is the L2 cache.  */
 	shared = core;
+
+      if (shared_per_thread <= 0)
+	shared_per_thread = shared;
     }
 
   cpu_features->level1_icache_size = level1_icache_size;
@@ -730,17 +738,24 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/2 of size
+     of chip's cache. For most Intel and AMD processors with an
+     initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
+     estimate the point where non-temporal stores begin outcompeting
+     other methods. As well the point where the fact that non-temporal
+     stores are forced back to disk would already occured to the
+     majority of the lines in the copy. Note, concerns about the
+     entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and
+     provides proper LRU hints so that the the maximum thrashing
+     capped at 1/assosiativity. */
+  unsigned long int non_temporal_threshold = shared / 2;
+  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
+     a higher risk of actually thrashing the cache as they don't have a HW LRU
+     hint. As well, there performance in highly parallel situations is
+     noticeably worse.  */
+  unsigned long int non_temporal_threshold_no_erms = shared_per_thread * 3 / 4;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
@@ -754,6 +769,11 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   else if (non_temporal_threshold > maximum_non_temporal_threshold)
     non_temporal_threshold = maximum_non_temporal_threshold;
 
+  if (non_temporal_threshold_no_erms < minimum_non_temporal_threshold)
+    non_temporal_threshold_no_erms = minimum_non_temporal_threshold;
+  else if (non_temporal_threshold_no_erms > maximum_non_temporal_threshold)
+    non_temporal_threshold_no_erms = maximum_non_temporal_threshold;
+
   /* NB: The REP MOVSB threshold must be greater than VEC_SIZE * 8.  */
   unsigned int minimum_rep_movsb_threshold;
   /* NB: The default REP MOVSB threshold is 4096 * (VEC_SIZE / 16) for
@@ -802,6 +822,12 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       && tunable_size <= maximum_non_temporal_threshold)
     non_temporal_threshold = tunable_size;
 
+  tunable_size
+      = TUNABLE_GET (x86_non_temporal_threshold_no_erms, long int, NULL);
+  if (tunable_size > minimum_non_temporal_threshold
+      && tunable_size <= maximum_non_temporal_threshold)
+    non_temporal_threshold_no_erms = tunable_size;
+
   tunable_size = TUNABLE_GET (x86_rep_movsb_threshold, long int, NULL);
   if (tunable_size > minimum_rep_movsb_threshold)
     rep_movsb_threshold = tunable_size;
@@ -817,6 +843,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   TUNABLE_SET_WITH_BOUNDS (x86_non_temporal_threshold, non_temporal_threshold,
 			   minimum_non_temporal_threshold,
 			   maximum_non_temporal_threshold);
+  TUNABLE_SET_WITH_BOUNDS (
+      x86_non_temporal_threshold_no_erms, non_temporal_threshold_no_erms,
+      minimum_non_temporal_threshold, maximum_non_temporal_threshold);
   TUNABLE_SET_WITH_BOUNDS (x86_rep_movsb_threshold, rep_movsb_threshold,
 			   minimum_rep_movsb_threshold, SIZE_MAX);
   TUNABLE_SET_WITH_BOUNDS (x86_rep_stosb_threshold, rep_stosb_threshold, 1,
@@ -837,6 +866,8 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->data_cache_size = data;
   cpu_features->shared_cache_size = shared;
   cpu_features->non_temporal_threshold = non_temporal_threshold;
+  cpu_features->non_temporal_threshold_no_erms
+      = non_temporal_threshold_no_erms;
   cpu_features->rep_movsb_threshold = rep_movsb_threshold;
   cpu_features->rep_stosb_threshold = rep_stosb_threshold;
   cpu_features->rep_movsb_stop_threshold = rep_movsb_stop_threshold;
diff --git a/sysdeps/x86/dl-diagnostics-cpu.c b/sysdeps/x86/dl-diagnostics-cpu.c
index a1578e4665..5c09472a10 100644
--- a/sysdeps/x86/dl-diagnostics-cpu.c
+++ b/sysdeps/x86/dl-diagnostics-cpu.c
@@ -83,6 +83,8 @@ _dl_diagnostics_cpu (void)
                             cpu_features->shared_cache_size);
   print_cpu_features_value ("non_temporal_threshold",
                             cpu_features->non_temporal_threshold);
+  print_cpu_features_value ("non_temporal_threshold_no_erms",
+			    cpu_features->non_temporal_threshold_no_erms);
   print_cpu_features_value ("rep_movsb_threshold",
                             cpu_features->rep_movsb_threshold);
   print_cpu_features_value ("rep_movsb_stop_threshold",
diff --git a/sysdeps/x86/dl-tunables.list b/sysdeps/x86/dl-tunables.list
index feb7004036..aac6341716 100644
--- a/sysdeps/x86/dl-tunables.list
+++ b/sysdeps/x86/dl-tunables.list
@@ -30,6 +30,9 @@ glibc {
     x86_non_temporal_threshold {
       type: SIZE_T
     }
+    x86_non_temporal_threshold_no_erms {
+      type: SIZE_T
+    }
     x86_rep_movsb_threshold {
       type: SIZE_T
       # Since there is overhead to set up REP MOVSB operation, REP
diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
index 40b8129d6a..df6c561eac 100644
--- a/sysdeps/x86/include/cpu-features.h
+++ b/sysdeps/x86/include/cpu-features.h
@@ -913,8 +913,10 @@ struct cpu_features
   /* Shared cache size for use in memory and string routines, typically
      L2 or L3 size.  */
   unsigned long int shared_cache_size;
-  /* Threshold to use non temporal store.  */
+  /* Threshold to use non temporal store if ERMS is available.  */
   unsigned long int non_temporal_threshold;
+  /* Threshold to use non temporal store if ERMS is not available.  */
+  unsigned long int non_temporal_threshold_no_erms;
   /* Threshold to use "rep movsb".  */
   unsigned long int rep_movsb_threshold;
   /* Threshold to stop using "rep movsb".  */
diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
index d1b92785b0..856c3daf3b 100644
--- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
@@ -424,8 +424,16 @@ L(more_8x_vec):
 	jb	L(more_8x_vec_backward_check_nop)
 	/* Check if non-temporal move candidate.  */
 #if (defined USE_MULTIARCH || VEC_SIZE == 16) && IS_IN (libc)
-	/* Check non-temporal store threshold.  */
-	cmp	__x86_shared_non_temporal_threshold(%rip), %RDX_LP
+	/* Check non-temporal store threshold if ERMS is not available.
+	   NB: This path is only hit if we jumped here from L(more_2x_vec).
+	   If we went to L(movsb), then we enter at either the forward loop
+	   directly or go to the backward loop.
+
+	   WARNING: `__x86_shared_non_temporal_threshold_no_erms` should
+	   NEVER be used in a control flow that could come from
+	   L(movsb_more_2x_vec) without checking checkout
+	   `__x86_rep_movsb_threshold` first.  */
+	cmp	__x86_shared_non_temporal_threshold_no_erms(%rip), %RDX_LP
 	ja	L(large_memcpy_2x)
 #endif
 	/* To reach this point there cannot be overlap and dst > src. So
-- 
2.34.1



* Re: [PATCH v2] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-24 22:30 ` [PATCH v2] " Noah Goldstein
@ 2023-04-24 22:48   ` H.J. Lu
  2023-04-25  2:05     ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: H.J. Lu @ 2023-04-24 22:48 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Mon, Apr 24, 2023 at 3:30 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> ncores_per_socket'. This patch updates that value to roughly
> 'sizeof_L3 / 2`
>
> The original value (specifically dividing the `ncores_per_socket`) was
> done to limit the amount of other threads' data a `memcpy`/`memset`
> could evict.
>
> Dividing by 'ncores_per_socket', however leads to exceedingly low
> non-temporal threshholds and leads to using non-temporal stores in
> cases where `rep movsb` is multiple times faster.
>
> Furthermore, non-temporal stores are written directly to disk so using
> it at a size much smaller than L3 can place soon to be accessed data
> much further away than it otherwise could be. As well, modern machines
> are able to detect streaming patterns (especially if `rep movsb` is
> used) and provide LRU hints to the memory subsystem. This in affect
> caps the total amount of eviction at 1/cache_assosiativity, far below
> meaningfully thrashing the entire cache.
>
> As best I can tell, the benchmarks that lead this small threshold
> where done comparing non-temporal stores versus standard cacheable
> stores. A better comparison (linked below) is to be `rep movsb` which,
> on the measure systems, is nearly 2x faster than non-temporal stores
> at the low-end of the previous threshold, and within 10% for over
> 100MB copies (well past even the current threshold). In cases with a
> low number of threads competing for bandwidth, `rep movsb` is ~2x
> faster up to `sizeof_L3`.
>
> Because there are still valid concerns about performance of large
> memcpy's using cacheable stores (both direct performance and on the
> system), if `rep movsb` is not available this patch also introduces a
> new tunable: `__x86_shared_non_temporal_threshold_no_erms` that will
> continue to use the old calculation and be used if no ERMS memcpy is
> supported by the target.
>
> Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> stores where done using:
> https://github.com/goldsteinn/memcpy-nt-benchmarks
>
> Sheets results (also available in pdf on the github):
> https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> ---
>  manual/tunables.texi                          | 16 +++-
>  sysdeps/x86/cacheinfo.h                       |  8 +-
>  sysdeps/x86/dl-cacheinfo.h                    | 85 +++++++++++++------
>  sysdeps/x86/dl-diagnostics-cpu.c              |  2 +
>  sysdeps/x86/dl-tunables.list                  |  3 +
>  sysdeps/x86/include/cpu-features.h            |  4 +-
>  .../multiarch/memmove-vec-unaligned-erms.S    | 12 ++-
>  7 files changed, 98 insertions(+), 32 deletions(-)
>
> diff --git a/manual/tunables.texi b/manual/tunables.texi
> index 130f94b2bc..8320e724f0 100644
> --- a/manual/tunables.texi
> +++ b/manual/tunables.texi
> @@ -52,6 +52,7 @@ glibc.elision.skip_lock_busy: 3 (min: 0, max: 2147483647)
>  glibc.malloc.top_pad: 0x20000 (min: 0x0, max: 0xffffffffffffffff)
>  glibc.cpu.x86_rep_stosb_threshold: 0x800 (min: 0x1, max: 0xffffffffffffffff)
>  glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
> +glibc.cpu.x86_non_temporal_threshold_no_erms: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)

We don't need this.   We can use

if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))

to check for ERMS processors.
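
A minimal sketch of that approach (applied where dl_init_cacheinfo
computes the default; `shared' and `shared_per_thread' are the values
the v2 patch already computes):

  /* Sketch only: select the default based on ERMS instead of exporting
     a second tunable.  */
  unsigned long int non_temporal_threshold;
  if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
    non_temporal_threshold = shared / 2;
  else
    non_temporal_threshold = shared_per_thread * 3 / 4;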

>  glibc.cpu.x86_shstk:
>  glibc.pthread.stack_cache_size: 0x2800000 (min: 0x0, max: 0xffffffffffffffff)
>  glibc.cpu.hwcap_mask: 0x6 (min: 0x0, max: 0xffffffffffffffff)
> @@ -486,7 +487,8 @@ thread stack originally backup by Huge Pages to default pages.
>  @cindex shared_cache_size tunables
>  @cindex tunables, shared_cache_size
>  @cindex non_temporal_threshold tunables
> -@cindex tunables, non_temporal_threshold
> +@cindex non_temporal_threshold tunables_no_erms
> +@cindex tunables, non_temporal_threshold, non_temporal_threshold_no_erms
>
>  @deftp {Tunable namespace} glibc.cpu
>  Behavior of @theglibc{} can be tuned to assume specific hardware capabilities
> @@ -559,6 +561,18 @@ like memmove and memcpy.
>  This tunable is specific to i386 and x86-64.
>  @end deftp
>
> +@deftp Tunable glibc.cpu.x86_non_temporal_threshold_no_erms
> +The @code{glibc.cpu.x86_non_temporal_threshold_no_erms} is similiar to
> +the above, but is used specifically when the ERMS feature is not
> +available. ERMS function are often implemented with optimizations for
> +large streaming workloads. This often makes it a better choice than
> +non-temporal stores for a wider-range of values. When ERMS is not
> +available, however, non-temporal stores become preferable at a much
> +lower threshold.
> +
> +This tunable is specific to i386 and x86-64.
> +@end deftp
> +
>  @deftp Tunable glibc.cpu.x86_rep_movsb_threshold
>  The @code{glibc.cpu.x86_rep_movsb_threshold} tunable allows the user to
>  set threshold in bytes to start using "rep movsb".  The value must be
> diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> index ec1bc142c4..1083bd6018 100644
> --- a/sysdeps/x86/cacheinfo.h
> +++ b/sysdeps/x86/cacheinfo.h
> @@ -35,9 +35,12 @@ long int __x86_data_cache_size attribute_hidden = 32 * 1024;
>  long int __x86_shared_cache_size_half attribute_hidden = 1024 * 1024 / 2;
>  long int __x86_shared_cache_size attribute_hidden = 1024 * 1024;
>
> -/* Threshold to use non temporal store.  */
> +/* Threshold to use non temporal store if ERMS is available.  */
>  long int __x86_shared_non_temporal_threshold attribute_hidden;
>
> +/* Threshold to use non temporal store if ERMS is not available.  */
> +long int __x86_shared_non_temporal_threshold_no_erms attribute_hidden;
> +
>  /* Threshold to use Enhanced REP MOVSB.  */
>  long int __x86_rep_movsb_threshold attribute_hidden = 2048;
>
> @@ -77,6 +80,9 @@ init_cacheinfo (void)
>    __x86_shared_non_temporal_threshold
>      = cpu_features->non_temporal_threshold;
>
> +  __x86_shared_non_temporal_threshold_no_erms
> +      = cpu_features->non_temporal_threshold_no_erms;
> +
>    __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
>    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
>    __x86_rep_movsb_stop_threshold =  cpu_features->rep_movsb_stop_threshold;
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index ec88945b39..94d5c6183a 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
>  }
>
>  static void
> -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
>                  long int core)
>  {
>    unsigned int eax;
> @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>    unsigned int family = cpu_features->basic.family;
>    unsigned int model = cpu_features->basic.model;
>    long int shared = *shared_ptr;
> +  long int shared_per_thread = *shared_per_thread_ptr;
>    unsigned int threads = *threads_ptr;
>    bool inclusive_cache = true;
>    bool support_count_mask = true;
> @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>        /* Try L2 otherwise.  */
>        level  = 2;
>        shared = core;
> +      shared_per_thread = core;
>        threads_l2 = 0;
>        threads_l3 = -1;
>      }
> @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>          }
>        else
>          {
> -intel_bug_no_cache_info:
> -          /* Assume that all logical threads share the highest cache
> -             level.  */
> -          threads
> -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> -              & 0xff);
> -        }
> -
> -        /* Cap usage of highest cache level to the number of supported
> -           threads.  */
> -        if (shared > 0 && threads > 0)
> -          shared /= threads;
> +       intel_bug_no_cache_info:
> +         /* Assume that all logical threads share the highest cache
> +            level.  */
> +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> +                    & 0xff);
> +
> +         /* Get per-thread size of highest level cache.  */
> +         if (shared_per_thread > 0 && threads > 0)
> +           shared_per_thread /= threads;
> +       }
>      }
>
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
>      {
>        if (threads_l2 > 0)
> -        core /= threads_l2;
> +       shared_per_thread += core / threads_l2;
>        shared += core;
>      }
>
>    *shared_ptr = shared;
> +  *shared_per_thread_ptr = shared_per_thread;
>    *threads_ptr = threads;
>  }
>
> @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    /* Find out what brand of processor.  */
>    long int data = -1;
>    long int shared = -1;
> +  long int shared_per_thread = -1;
>    long int core = -1;
>    unsigned int threads = 0;
>    unsigned long int level1_icache_size = -1;
> @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
>        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
>        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> +      shared_per_thread = shared;
>
>        level1_icache_size
>         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level4_cache_size
>         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
>      {
>        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
>        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_amd)
>      {
>        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        if (shared <= 0)
>          /* No shared L3 cache.  All we have is the L2 cache.  */
>         shared = core;
> +
> +      if (shared_per_thread <= 0)
> +       shared_per_thread = shared;
>      }
>
>    cpu_features->level1_icache_size = level1_icache_size;
> @@ -730,17 +738,24 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip. */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/2 of the
> +     size of the chip's cache. For most Intel and AMD processors with
> +     an initial release date between 2017 and 2023, the L3 cache is
> +     typically 18-64MB. Using 1/2 of the L3 is meant to estimate the
> +     point where non-temporal stores begin outcompeting other methods,
> +     as well as the point where most of the lines in the copy would
> +     already have been forced out to main memory. Note, concerns about
> +     the entire L3 cache being evicted by the copy are mostly
> +     alleviated by the fact that modern HW detects streaming patterns
> +     and provides proper LRU hints so that the maximum thrashing is
> +     capped at 1/associativity. */
> +  unsigned long int non_temporal_threshold = shared / 2;
> +  /* If ERMS is not available, fall back to the per-thread share of the
> +     L3. Normal cacheable stores run a higher risk of actually thrashing
> +     the cache as they don't have a HW LRU hint. As well, their
> +     performance in highly parallel situations is noticeably worse.  */
> +  unsigned long int non_temporal_threshold_no_erms = shared_per_thread * 3 / 4;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> @@ -754,6 +769,11 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    else if (non_temporal_threshold > maximum_non_temporal_threshold)
>      non_temporal_threshold = maximum_non_temporal_threshold;
>
> +  if (non_temporal_threshold_no_erms < minimum_non_temporal_threshold)
> +    non_temporal_threshold_no_erms = minimum_non_temporal_threshold;
> +  else if (non_temporal_threshold_no_erms > maximum_non_temporal_threshold)
> +    non_temporal_threshold_no_erms = maximum_non_temporal_threshold;
> +
>    /* NB: The REP MOVSB threshold must be greater than VEC_SIZE * 8.  */
>    unsigned int minimum_rep_movsb_threshold;
>    /* NB: The default REP MOVSB threshold is 4096 * (VEC_SIZE / 16) for
> @@ -802,6 +822,12 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        && tunable_size <= maximum_non_temporal_threshold)
>      non_temporal_threshold = tunable_size;
>
> +  tunable_size
> +      = TUNABLE_GET (x86_non_temporal_threshold_no_erms, long int, NULL);
> +  if (tunable_size > minimum_non_temporal_threshold
> +      && tunable_size <= maximum_non_temporal_threshold)
> +    non_temporal_threshold_no_erms = tunable_size;
> +
>    tunable_size = TUNABLE_GET (x86_rep_movsb_threshold, long int, NULL);
>    if (tunable_size > minimum_rep_movsb_threshold)
>      rep_movsb_threshold = tunable_size;
> @@ -817,6 +843,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    TUNABLE_SET_WITH_BOUNDS (x86_non_temporal_threshold, non_temporal_threshold,
>                            minimum_non_temporal_threshold,
>                            maximum_non_temporal_threshold);
> +  TUNABLE_SET_WITH_BOUNDS (
> +      x86_non_temporal_threshold_no_erms, non_temporal_threshold_no_erms,
> +      minimum_non_temporal_threshold, maximum_non_temporal_threshold);
>    TUNABLE_SET_WITH_BOUNDS (x86_rep_movsb_threshold, rep_movsb_threshold,
>                            minimum_rep_movsb_threshold, SIZE_MAX);
>    TUNABLE_SET_WITH_BOUNDS (x86_rep_stosb_threshold, rep_stosb_threshold, 1,
> @@ -837,6 +866,8 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->data_cache_size = data;
>    cpu_features->shared_cache_size = shared;
>    cpu_features->non_temporal_threshold = non_temporal_threshold;
> +  cpu_features->non_temporal_threshold_no_erms
> +      = non_temporal_threshold_no_erms;
>    cpu_features->rep_movsb_threshold = rep_movsb_threshold;
>    cpu_features->rep_stosb_threshold = rep_stosb_threshold;
>    cpu_features->rep_movsb_stop_threshold = rep_movsb_stop_threshold;
> diff --git a/sysdeps/x86/dl-diagnostics-cpu.c b/sysdeps/x86/dl-diagnostics-cpu.c
> index a1578e4665..5c09472a10 100644
> --- a/sysdeps/x86/dl-diagnostics-cpu.c
> +++ b/sysdeps/x86/dl-diagnostics-cpu.c
> @@ -83,6 +83,8 @@ _dl_diagnostics_cpu (void)
>                              cpu_features->shared_cache_size);
>    print_cpu_features_value ("non_temporal_threshold",
>                              cpu_features->non_temporal_threshold);
> +  print_cpu_features_value ("non_temporal_threshold_no_erms",
> +                           cpu_features->non_temporal_threshold_no_erms);
>    print_cpu_features_value ("rep_movsb_threshold",
>                              cpu_features->rep_movsb_threshold);
>    print_cpu_features_value ("rep_movsb_stop_threshold",
> diff --git a/sysdeps/x86/dl-tunables.list b/sysdeps/x86/dl-tunables.list
> index feb7004036..aac6341716 100644
> --- a/sysdeps/x86/dl-tunables.list
> +++ b/sysdeps/x86/dl-tunables.list
> @@ -30,6 +30,9 @@ glibc {
>      x86_non_temporal_threshold {
>        type: SIZE_T
>      }
> +    x86_non_temporal_threshold_no_erms {
> +      type: SIZE_T
> +    }
>      x86_rep_movsb_threshold {
>        type: SIZE_T
>        # Since there is overhead to set up REP MOVSB operation, REP
> diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
> index 40b8129d6a..df6c561eac 100644
> --- a/sysdeps/x86/include/cpu-features.h
> +++ b/sysdeps/x86/include/cpu-features.h
> @@ -913,8 +913,10 @@ struct cpu_features
>    /* Shared cache size for use in memory and string routines, typically
>       L2 or L3 size.  */
>    unsigned long int shared_cache_size;
> -  /* Threshold to use non temporal store.  */
> +  /* Threshold to use non temporal store if ERMS is available.  */
>    unsigned long int non_temporal_threshold;
> +  /* Threshold to use non temporal store if ERMS is not available.  */
> +  unsigned long int non_temporal_threshold_no_erms;
>    /* Threshold to use "rep movsb".  */
>    unsigned long int rep_movsb_threshold;
>    /* Threshold to stop using "rep movsb".  */
> diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> index d1b92785b0..856c3daf3b 100644
> --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> @@ -424,8 +424,16 @@ L(more_8x_vec):
>         jb      L(more_8x_vec_backward_check_nop)
>         /* Check if non-temporal move candidate.  */
>  #if (defined USE_MULTIARCH || VEC_SIZE == 16) && IS_IN (libc)
> -       /* Check non-temporal store threshold.  */
> -       cmp     __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> +       /* Check the non-temporal store threshold used when ERMS is not
> +          available.
> +          NB: This path is only hit if we jumped here from L(more_2x_vec).
> +          If we went to L(movsb), then we either enter the forward loop
> +          directly or go to the backward loop.
> +
> +          WARNING: `__x86_shared_non_temporal_threshold_no_erms` should
> +          NEVER be used in a control flow that could come from
> +          L(movsb_more_2x_vec) without checking
> +          `__x86_rep_movsb_threshold` first.  */
> +       cmp     __x86_shared_non_temporal_threshold_no_erms(%rip), %RDX_LP
>         ja      L(large_memcpy_2x)
>  #endif
>         /* To reach this point there cannot be overlap and dst > src. So
> --
> 2.34.1
>


-- 
H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v2] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-24 22:48   ` H.J. Lu
@ 2023-04-25  2:05     ` Noah Goldstein
  2023-04-25  2:55       ` H.J. Lu
  0 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-04-25  2:05 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Mon, Apr 24, 2023 at 5:49 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Mon, Apr 24, 2023 at 3:30 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > diff --git a/manual/tunables.texi b/manual/tunables.texi
> > index 130f94b2bc..8320e724f0 100644
> > --- a/manual/tunables.texi
> > +++ b/manual/tunables.texi
> > @@ -52,6 +52,7 @@ glibc.elision.skip_lock_busy: 3 (min: 0, max: 2147483647)
> >  glibc.malloc.top_pad: 0x20000 (min: 0x0, max: 0xffffffffffffffff)
> >  glibc.cpu.x86_rep_stosb_threshold: 0x800 (min: 0x1, max: 0xffffffffffffffff)
> >  glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
> > +glibc.cpu.x86_non_temporal_threshold_no_erms: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
>
> We don't need this.   We can use
>
> if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
>
> to check for ERMS processors.
>

Ah makes sense. Does that work for FSRM as well?

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v2] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-25  2:05     ` Noah Goldstein
@ 2023-04-25  2:55       ` H.J. Lu
  2023-04-25  3:43         ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: H.J. Lu @ 2023-04-25  2:55 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Mon, Apr 24, 2023 at 7:05 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Mon, Apr 24, 2023 at 5:49 PM H.J. Lu <hjl.tools@gmail.com> wrote:
> >
> > On Mon, Apr 24, 2023 at 3:30 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > diff --git a/manual/tunables.texi b/manual/tunables.texi
> > > index 130f94b2bc..8320e724f0 100644
> > > --- a/manual/tunables.texi
> > > +++ b/manual/tunables.texi
> > > @@ -52,6 +52,7 @@ glibc.elision.skip_lock_busy: 3 (min: 0, max: 2147483647)
> > >  glibc.malloc.top_pad: 0x20000 (min: 0x0, max: 0xffffffffffffffff)
> > >  glibc.cpu.x86_rep_stosb_threshold: 0x800 (min: 0x1, max: 0xffffffffffffffff)
> > >  glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
> > > +glibc.cpu.x86_non_temporal_threshold_no_erms: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
> >
> > We don't need this.   We can use
> >
> > if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> >
> > to check for ERMS processors.
> >
>
> Ah makes sense. Does that work for FSRM as well?

All FSRM processors are also ERMS processors.  In any case, memcpy
checks ERMS, not FSRM.
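
Something along these lines should be enough (just a sketch, reusing
the `shared` and `shared_per_thread` values already computed in
dl_init_cacheinfo, not necessarily the exact code for the next
version):

  /* Sketch: select the default threshold based on ERMS instead of
     exporting a second tunable.  */
  unsigned long int non_temporal_threshold = shared / 2;
  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
    /* Without ERMS, keep the old per-thread based default.  */
    non_temporal_threshold = shared_per_thread * 3 / 4;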




-- 
H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-24  5:03 [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` Noah Goldstein
  2023-04-24 18:09 ` H.J. Lu
  2023-04-24 22:30 ` [PATCH v2] " Noah Goldstein
@ 2023-04-25  3:43 ` Noah Goldstein
  2023-04-25 17:42   ` H.J. Lu
  2023-04-25 21:45 ` [PATCH v4] " Noah Goldstein
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-04-25  3:43 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

The current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
'sizeof_L3 / 2'.

The original value (specifically dividing by `ncores_per_socket`) was
chosen to limit the amount of other threads' data a `memcpy`/`memset`
could evict.

Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in cases
where `rep movsb` is multiple times faster.

Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise would be. As well, modern
machines are able to detect streaming patterns (especially if
`rep movsb` is used) and provide LRU hints to the memory subsystem.
This in effect caps the total amount of eviction at
1/cache_associativity, far below meaningfully thrashing the entire
cache.

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is to `rep movsb` which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% for copies
over 100MB (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, `rep movsb` is ~2x
faster up to `sizeof_L3`.

Benchmarks comparing non-temporal stores, rep movsb, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available in pdf on the github):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
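
As a rough illustration of the change (the 32 MB L3 and 16 cores
below are assumed for the example, not taken from the benchmarked
systems):

    /* Illustrative only: old vs. new default threshold for an
       assumed 32 MB L3 shared by 16 cores.  */
    long int l3_size = 32L * 1024 * 1024;
    int ncores_per_socket = 16;
    long int old_threshold = l3_size / ncores_per_socket * 3 / 4; /* 1.5 MB */
    long int new_threshold = l3_size / 2;                         /* 16 MB  */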
---
 sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
 1 file changed, 43 insertions(+), 27 deletions(-)

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index ec88945b39..df75fbd868 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -407,7 +407,7 @@ handle_zhaoxin (int name)
 }
 
 static void
-get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
+get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
                 long int core)
 {
   unsigned int eax;
@@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
   unsigned int family = cpu_features->basic.family;
   unsigned int model = cpu_features->basic.model;
   long int shared = *shared_ptr;
+  long int shared_per_thread = *shared_per_thread_ptr;
   unsigned int threads = *threads_ptr;
   bool inclusive_cache = true;
   bool support_count_mask = true;
@@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
       /* Try L2 otherwise.  */
       level  = 2;
       shared = core;
+      shared_per_thread = core;
       threads_l2 = 0;
       threads_l3 = -1;
     }
@@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
         }
       else
         {
-intel_bug_no_cache_info:
-          /* Assume that all logical threads share the highest cache
-             level.  */
-          threads
-            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
-	       & 0xff);
-        }
-
-        /* Cap usage of highest cache level to the number of supported
-           threads.  */
-        if (shared > 0 && threads > 0)
-          shared /= threads;
+	intel_bug_no_cache_info:
+	  /* Assume that all logical threads share the highest cache
+	     level.  */
+	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
+		     & 0xff);
+
+	  /* Get per-thread size of highest level cache.  */
+	  if (shared_per_thread > 0 && threads > 0)
+	    shared_per_thread /= threads;
+	}
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
       if (threads_l2 > 0)
-        core /= threads_l2;
+	shared_per_thread += core / threads_l2;
       shared += core;
     }
 
   *shared_ptr = shared;
+  *shared_per_thread_ptr = shared_per_thread;
   *threads_ptr = threads;
 }
 
@@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   /* Find out what brand of processor.  */
   long int data = -1;
   long int shared = -1;
+  long int shared_per_thread = -1;
   long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
@@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
       core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
       shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
+      shared_per_thread = shared;
 
       level1_icache_size
 	= handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
@@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level4_cache_size
 	= handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_zhaoxin)
     {
       data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_amd)
     {
       data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       if (shared <= 0)
         /* No shared L3 cache.  All we have is the L2 cache.  */
 	shared = core;
+
+      if (shared_per_thread <= 0)
+	shared_per_thread = shared;
     }
 
   cpu_features->level1_icache_size = level1_icache_size;
@@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/2 of size
+     of chip's cache. For most Intel and AMD processors with an
+     initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
+     estimate the point where non-temporal stores begin outcompeting
+     other methods. As well the point where the fact that non-temporal
+     stores are forced back to disk would already occured to the
+     majority of the lines in the copy. Note, concerns about the
+     entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and
+     provides proper LRU hints so that the the maximum thrashing
+     capped at 1/assosiativity. */
+  unsigned long int non_temporal_threshold = shared / 2;
+  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
+     a higher risk of actually thrashing the cache as they don't have a HW LRU
+     hint. As well, there performance in highly parallel situations is
+     noticeably worse.  */
+  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+    non_temporal_threshold = shared_per_thread * 3 / 4;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v2] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-25  2:55       ` H.J. Lu
@ 2023-04-25  3:43         ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-04-25  3:43 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Mon, Apr 24, 2023 at 9:56 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Mon, Apr 24, 2023 at 7:05 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Mon, Apr 24, 2023 at 5:49 PM H.J. Lu <hjl.tools@gmail.com> wrote:
> > >
> > > On Mon, Apr 24, 2023 at 3:30 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > >
> > > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > > ncores_per_socket'. This patch updates that value to roughly
> > > > 'sizeof_L3 / 2`
> > > >
> > > > The original value (specifically dividing the `ncores_per_socket`) was
> > > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > > could evict.
> > > >
> > > > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > > > non-temporal threshholds and leads to using non-temporal stores in
> > > > cases where `rep movsb` is multiple times faster.
> > > >
> > > > Furthermore, non-temporal stores are written directly to disk so using
> > > > it at a size much smaller than L3 can place soon to be accessed data
> > > > much further away than it otherwise could be. As well, modern machines
> > > > are able to detect streaming patterns (especially if `rep movsb` is
> > > > used) and provide LRU hints to the memory subsystem. This in affect
> > > > caps the total amount of eviction at 1/cache_assosiativity, far below
> > > > meaningfully thrashing the entire cache.
> > > >
> > > > As best I can tell, the benchmarks that lead this small threshold
> > > > where done comparing non-temporal stores versus standard cacheable
> > > > stores. A better comparison (linked below) is to be `rep movsb` which,
> > > > on the measure systems, is nearly 2x faster than non-temporal stores
> > > > at the low-end of the previous threshold, and within 10% for over
> > > > 100MB copies (well past even the current threshold). In cases with a
> > > > low number of threads competing for bandwidth, `rep movsb` is ~2x
> > > > faster up to `sizeof_L3`.
> > > >
> > > > Because there are still valid concerns about performance of large
> > > > memcpy's using cacheable stores (both direct performance and on the
> > > > system), if `rep movsb` is not available this patch also introduces a
> > > > new tunable: `__x86_shared_non_temporal_threshold_no_erms` that will
> > > > continue to use the old calculation and be used if no ERMS memcpy is
> > > > supported by the target.
> > > >
> > > > Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> > > > stores where done using:
> > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > >
> > > > Sheets results (also available in pdf on the github):
> > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > > ---
> > > >  manual/tunables.texi                          | 16 +++-
> > > >  sysdeps/x86/cacheinfo.h                       |  8 +-
> > > >  sysdeps/x86/dl-cacheinfo.h                    | 85 +++++++++++++------
> > > >  sysdeps/x86/dl-diagnostics-cpu.c              |  2 +
> > > >  sysdeps/x86/dl-tunables.list                  |  3 +
> > > >  sysdeps/x86/include/cpu-features.h            |  4 +-
> > > >  .../multiarch/memmove-vec-unaligned-erms.S    | 12 ++-
> > > >  7 files changed, 98 insertions(+), 32 deletions(-)
> > > >
> > > > diff --git a/manual/tunables.texi b/manual/tunables.texi
> > > > index 130f94b2bc..8320e724f0 100644
> > > > --- a/manual/tunables.texi
> > > > +++ b/manual/tunables.texi
> > > > @@ -52,6 +52,7 @@ glibc.elision.skip_lock_busy: 3 (min: 0, max: 2147483647)
> > > >  glibc.malloc.top_pad: 0x20000 (min: 0x0, max: 0xffffffffffffffff)
> > > >  glibc.cpu.x86_rep_stosb_threshold: 0x800 (min: 0x1, max: 0xffffffffffffffff)
> > > >  glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
> > > > +glibc.cpu.x86_non_temporal_threshold_no_erms: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
> > >
> > > We don't need this.   We can use
> > >
> > > if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > >
> > > to check for ERMS processors.
> > >
> >
> > Ah makes sense. Does that work for FSRM as well?
>
> All FSRM processors are also ERMS processors.  In any case, memcpy
> checks ERMS, not FSRM.

Fixed.
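For reference, a minimal sketch of the ERMS-gated default this refers
to (matching the v3 revision; `shared` and `shared_per_thread` are the
values computed by get_common_cache_info):

    /* Default: half of the full L3.  */
    unsigned long int non_temporal_threshold = shared / 2;
    /* Without ERMS, fall back to the per-thread share of L3, since
       plain cacheable stores lack the HW streaming/LRU hints.  */
    if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
      non_temporal_threshold = shared_per_thread * 3 / 4;
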
>
> > > >  glibc.cpu.x86_shstk:
> > > >  glibc.pthread.stack_cache_size: 0x2800000 (min: 0x0, max: 0xffffffffffffffff)
> > > >  glibc.cpu.hwcap_mask: 0x6 (min: 0x0, max: 0xffffffffffffffff)
> > > > @@ -486,7 +487,8 @@ thread stack originally backup by Huge Pages to default pages.
> > > >  @cindex shared_cache_size tunables
> > > >  @cindex tunables, shared_cache_size
> > > >  @cindex non_temporal_threshold tunables
> > > > -@cindex tunables, non_temporal_threshold
> > > > +@cindex non_temporal_threshold tunables_no_erms
> > > > +@cindex tunables, non_temporal_threshold, non_temporal_threshold_no_erms
> > > >
> > > >  @deftp {Tunable namespace} glibc.cpu
> > > >  Behavior of @theglibc{} can be tuned to assume specific hardware capabilities
> > > > @@ -559,6 +561,18 @@ like memmove and memcpy.
> > > >  This tunable is specific to i386 and x86-64.
> > > >  @end deftp
> > > >
> > > > +@deftp Tunable glibc.cpu.x86_non_temporal_threshold_no_erms
> > > > +The @code{glibc.cpu.x86_non_temporal_threshold_no_erms} is similiar to
> > > > +the above, but is used specifically when the ERMS feature is not
> > > > +available. ERMS function are often implemented with optimizations for
> > > > +large streaming workloads. This often makes it a better choice than
> > > > +non-temporal stores for a wider-range of values. When ERMS is not
> > > > +available, however, non-temporal stores become preferable at a much
> > > > +lower threshold.
> > > > +
> > > > +This tunable is specific to i386 and x86-64.
> > > > +@end deftp
> > > > +
> > > >  @deftp Tunable glibc.cpu.x86_rep_movsb_threshold
> > > >  The @code{glibc.cpu.x86_rep_movsb_threshold} tunable allows the user to
> > > >  set threshold in bytes to start using "rep movsb".  The value must be
> > > > diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> > > > index ec1bc142c4..1083bd6018 100644
> > > > --- a/sysdeps/x86/cacheinfo.h
> > > > +++ b/sysdeps/x86/cacheinfo.h
> > > > @@ -35,9 +35,12 @@ long int __x86_data_cache_size attribute_hidden = 32 * 1024;
> > > >  long int __x86_shared_cache_size_half attribute_hidden = 1024 * 1024 / 2;
> > > >  long int __x86_shared_cache_size attribute_hidden = 1024 * 1024;
> > > >
> > > > -/* Threshold to use non temporal store.  */
> > > > +/* Threshold to use non temporal store if ERMS is available.  */
> > > >  long int __x86_shared_non_temporal_threshold attribute_hidden;
> > > >
> > > > +/* Threshold to use non temporal store if ERMS is not available.  */
> > > > +long int __x86_shared_non_temporal_threshold_no_erms attribute_hidden;
> > > > +
> > > >  /* Threshold to use Enhanced REP MOVSB.  */
> > > >  long int __x86_rep_movsb_threshold attribute_hidden = 2048;
> > > >
> > > > @@ -77,6 +80,9 @@ init_cacheinfo (void)
> > > >    __x86_shared_non_temporal_threshold
> > > >      = cpu_features->non_temporal_threshold;
> > > >
> > > > +  __x86_shared_non_temporal_threshold_no_erms
> > > > +      = cpu_features->non_temporal_threshold_no_erms;
> > > > +
> > > >    __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
> > > >    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
> > > >    __x86_rep_movsb_stop_threshold =  cpu_features->rep_movsb_stop_threshold;
> > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > index ec88945b39..94d5c6183a 100644
> > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > > >  }
> > > >
> > > >  static void
> > > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > > >                  long int core)
> > > >  {
> > > >    unsigned int eax;
> > > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >    unsigned int family = cpu_features->basic.family;
> > > >    unsigned int model = cpu_features->basic.model;
> > > >    long int shared = *shared_ptr;
> > > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > > >    unsigned int threads = *threads_ptr;
> > > >    bool inclusive_cache = true;
> > > >    bool support_count_mask = true;
> > > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >        /* Try L2 otherwise.  */
> > > >        level  = 2;
> > > >        shared = core;
> > > > +      shared_per_thread = core;
> > > >        threads_l2 = 0;
> > > >        threads_l3 = -1;
> > > >      }
> > > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >          }
> > > >        else
> > > >          {
> > > > -intel_bug_no_cache_info:
> > > > -          /* Assume that all logical threads share the highest cache
> > > > -             level.  */
> > > > -          threads
> > > > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > -              & 0xff);
> > > > -        }
> > > > -
> > > > -        /* Cap usage of highest cache level to the number of supported
> > > > -           threads.  */
> > > > -        if (shared > 0 && threads > 0)
> > > > -          shared /= threads;
> > > > +       intel_bug_no_cache_info:
> > > > +         /* Assume that all logical threads share the highest cache
> > > > +            level.  */
> > > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > +                    & 0xff);
> > > > +
> > > > +         /* Get per-thread size of highest level cache.  */
> > > > +         if (shared_per_thread > 0 && threads > 0)
> > > > +           shared_per_thread /= threads;
> > > > +       }
> > > >      }
> > > >
> > > >    /* Account for non-inclusive L2 and L3 caches.  */
> > > >    if (!inclusive_cache)
> > > >      {
> > > >        if (threads_l2 > 0)
> > > > -        core /= threads_l2;
> > > > +       shared_per_thread += core / threads_l2;
> > > >        shared += core;
> > > >      }
> > > >
> > > >    *shared_ptr = shared;
> > > > +  *shared_per_thread_ptr = shared_per_thread;
> > > >    *threads_ptr = threads;
> > > >  }
> > > >
> > > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    /* Find out what brand of processor.  */
> > > >    long int data = -1;
> > > >    long int shared = -1;
> > > > +  long int shared_per_thread = -1;
> > > >    long int core = -1;
> > > >    unsigned int threads = 0;
> > > >    unsigned long int level1_icache_size = -1;
> > > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size
> > > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        level4_cache_size
> > > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > > >
> > > > -      get_common_cache_info (&shared, &threads, core);
> > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > >      }
> > > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > > >      {
> > > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > > >
> > > > -      get_common_cache_info (&shared, &threads, core);
> > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > >      }
> > > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > > >      {
> > > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        if (shared <= 0)
> > > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > > >         shared = core;
> > > > +
> > > > +      if (shared_per_thread <= 0)
> > > > +       shared_per_thread = shared;
> > > >      }
> > > >
> > > >    cpu_features->level1_icache_size = level1_icache_size;
> > > > @@ -730,17 +738,24 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > > >    cpu_features->level4_cache_size = level4_cache_size;
> > > >
> > > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > -     in cache after a maximum temporal copy, which will maintain
> > > > -     in cache a reasonable portion of the thread's stack and other
> > > > -     active data. If the threshold is set higher than one thread's
> > > > -     share of the cache, it has a substantial risk of negatively
> > > > -     impacting the performance of other threads running on the chip. */
> > > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > > > +     of chip's cache. For most Intel and AMD processors with an
> > > > +     initial release date between 2017 and 2023, a thread's typical
> > > > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > > > +     estimate the point where non-temporal stores begin outcompeting
> > > > +     other methods. As well the point where the fact that non-temporal
> > > > +     stores are forced back to disk would already occured to the
> > > > +     majority of the lines in the copy. Note, concerns about the
> > > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > > +     by the fact that modern HW detects streaming patterns and
> > > > +     provides proper LRU hints so that the the maximum thrashing
> > > > +     capped at 1/assosiativity. */
> > > > +  unsigned long int non_temporal_threshold = shared / 2;
> > > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > > +     hint. As well, there performance in highly parallel situations is
> > > > +     noticeably worse.  */
> > > > +  unsigned long int non_temporal_threshold_no_erms = shared_per_thread * 3 / 4;
> > > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > > @@ -754,6 +769,11 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    else if (non_temporal_threshold > maximum_non_temporal_threshold)
> > > >      non_temporal_threshold = maximum_non_temporal_threshold;
> > > >
> > > > +  if (non_temporal_threshold_no_erms < minimum_non_temporal_threshold)
> > > > +    non_temporal_threshold_no_erms = minimum_non_temporal_threshold;
> > > > +  else if (non_temporal_threshold_no_erms > maximum_non_temporal_threshold)
> > > > +    non_temporal_threshold_no_erms = maximum_non_temporal_threshold;
> > > > +
> > > >    /* NB: The REP MOVSB threshold must be greater than VEC_SIZE * 8.  */
> > > >    unsigned int minimum_rep_movsb_threshold;
> > > >    /* NB: The default REP MOVSB threshold is 4096 * (VEC_SIZE / 16) for
> > > > @@ -802,6 +822,12 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        && tunable_size <= maximum_non_temporal_threshold)
> > > >      non_temporal_threshold = tunable_size;
> > > >
> > > > +  tunable_size
> > > > +      = TUNABLE_GET (x86_non_temporal_threshold_no_erms, long int, NULL);
> > > > +  if (tunable_size > minimum_non_temporal_threshold
> > > > +      && tunable_size <= maximum_non_temporal_threshold)
> > > > +    non_temporal_threshold_no_erms = tunable_size;
> > > > +
> > > >    tunable_size = TUNABLE_GET (x86_rep_movsb_threshold, long int, NULL);
> > > >    if (tunable_size > minimum_rep_movsb_threshold)
> > > >      rep_movsb_threshold = tunable_size;
> > > > @@ -817,6 +843,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    TUNABLE_SET_WITH_BOUNDS (x86_non_temporal_threshold, non_temporal_threshold,
> > > >                            minimum_non_temporal_threshold,
> > > >                            maximum_non_temporal_threshold);
> > > > +  TUNABLE_SET_WITH_BOUNDS (
> > > > +      x86_non_temporal_threshold_no_erms, non_temporal_threshold_no_erms,
> > > > +      minimum_non_temporal_threshold, maximum_non_temporal_threshold);
> > > >    TUNABLE_SET_WITH_BOUNDS (x86_rep_movsb_threshold, rep_movsb_threshold,
> > > >                            minimum_rep_movsb_threshold, SIZE_MAX);
> > > >    TUNABLE_SET_WITH_BOUNDS (x86_rep_stosb_threshold, rep_stosb_threshold, 1,
> > > > @@ -837,6 +866,8 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    cpu_features->data_cache_size = data;
> > > >    cpu_features->shared_cache_size = shared;
> > > >    cpu_features->non_temporal_threshold = non_temporal_threshold;
> > > > +  cpu_features->non_temporal_threshold_no_erms
> > > > +      = non_temporal_threshold_no_erms;
> > > >    cpu_features->rep_movsb_threshold = rep_movsb_threshold;
> > > >    cpu_features->rep_stosb_threshold = rep_stosb_threshold;
> > > >    cpu_features->rep_movsb_stop_threshold = rep_movsb_stop_threshold;
> > > > diff --git a/sysdeps/x86/dl-diagnostics-cpu.c b/sysdeps/x86/dl-diagnostics-cpu.c
> > > > index a1578e4665..5c09472a10 100644
> > > > --- a/sysdeps/x86/dl-diagnostics-cpu.c
> > > > +++ b/sysdeps/x86/dl-diagnostics-cpu.c
> > > > @@ -83,6 +83,8 @@ _dl_diagnostics_cpu (void)
> > > >                              cpu_features->shared_cache_size);
> > > >    print_cpu_features_value ("non_temporal_threshold",
> > > >                              cpu_features->non_temporal_threshold);
> > > > +  print_cpu_features_value ("non_temporal_threshold_no_erms",
> > > > +                           cpu_features->non_temporal_threshold_no_erms);
> > > >    print_cpu_features_value ("rep_movsb_threshold",
> > > >                              cpu_features->rep_movsb_threshold);
> > > >    print_cpu_features_value ("rep_movsb_stop_threshold",
> > > > diff --git a/sysdeps/x86/dl-tunables.list b/sysdeps/x86/dl-tunables.list
> > > > index feb7004036..aac6341716 100644
> > > > --- a/sysdeps/x86/dl-tunables.list
> > > > +++ b/sysdeps/x86/dl-tunables.list
> > > > @@ -30,6 +30,9 @@ glibc {
> > > >      x86_non_temporal_threshold {
> > > >        type: SIZE_T
> > > >      }
> > > > +    x86_non_temporal_threshold_no_erms {
> > > > +      type: SIZE_T
> > > > +    }
> > > >      x86_rep_movsb_threshold {
> > > >        type: SIZE_T
> > > >        # Since there is overhead to set up REP MOVSB operation, REP
> > > > diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
> > > > index 40b8129d6a..df6c561eac 100644
> > > > --- a/sysdeps/x86/include/cpu-features.h
> > > > +++ b/sysdeps/x86/include/cpu-features.h
> > > > @@ -913,8 +913,10 @@ struct cpu_features
> > > >    /* Shared cache size for use in memory and string routines, typically
> > > >       L2 or L3 size.  */
> > > >    unsigned long int shared_cache_size;
> > > > -  /* Threshold to use non temporal store.  */
> > > > +  /* Threshold to use non temporal store if ERMS is available.  */
> > > >    unsigned long int non_temporal_threshold;
> > > > +  /* Threshold to use non temporal store if ERMS is not available.  */
> > > > +  unsigned long int non_temporal_threshold_no_erms;
> > > >    /* Threshold to use "rep movsb".  */
> > > >    unsigned long int rep_movsb_threshold;
> > > >    /* Threshold to stop using "rep movsb".  */
> > > > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > > > index d1b92785b0..856c3daf3b 100644
> > > > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > > > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > > > @@ -424,8 +424,16 @@ L(more_8x_vec):
> > > >         jb      L(more_8x_vec_backward_check_nop)
> > > >         /* Check if non-temporal move candidate.  */
> > > >  #if (defined USE_MULTIARCH || VEC_SIZE == 16) && IS_IN (libc)
> > > > -       /* Check non-temporal store threshold.  */
> > > > -       cmp     __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> > > > +       /* Check non-temporal store threshold if ERMS is not available.
> > > > +          NB: This path is only hit if we jumped here from L(more_2x_vec).
> > > > +          If we went to L(movsb), then we enter at either the forward loop
> > > > +          directly or go to the backward loop.
> > > > +
> > > > +          WARNING: `__x86_shared_non_temporal_threshold_no_erms` should
> > > > +          NEVER be used in a control flow that could come from
> > > > +          L(movsb_more_2x_vec) without checking checkout
> > > > +          `__x86_rep_movsb_threshold` first.  */
> > > > +       cmp     __x86_shared_non_temporal_threshold_no_erms(%rip), %RDX_LP
> > > >         ja      L(large_memcpy_2x)
> > > >  #endif
> > > >         /* To reach this point there cannot be overlap and dst > src. So
> > > > --
> > > > 2.34.1
> > > >
> > >
> > >
> > > --
> > > H.J.
>
>
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-25  3:43 ` [PATCH v3] " Noah Goldstein
@ 2023-04-25 17:42   ` H.J. Lu
  2023-04-25 21:45     ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: H.J. Lu @ 2023-04-25 17:42 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Mon, Apr 24, 2023 at 8:43 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> ncores_per_socket'. This patch updates that value to roughly
> 'sizeof_L3 / 2`
>
> The original value (specifically dividing the `ncores_per_socket`) was
> done to limit the amount of other threads' data a `memcpy`/`memset`
> could evict.
>
> Dividing by 'ncores_per_socket', however leads to exceedingly low
> non-temporal threshholds and leads to using non-temporal stores in
> cases where `rep movsb` is multiple times faster.
>
> Furthermore, non-temporal stores are written directly to disk so using
> it at a size much smaller than L3 can place soon to be accessed data
> much further away than it otherwise could be. As well, modern machines
> are able to detect streaming patterns (especially if `rep movsb` is
> used) and provide LRU hints to the memory subsystem. This in affect
> caps the total amount of eviction at 1/cache_assosiativity, far below
> meaningfully thrashing the entire cache.
>
> As best I can tell, the benchmarks that lead this small threshold
> where done comparing non-temporal stores versus standard cacheable
> stores. A better comparison (linked below) is to be `rep movsb` which,
> on the measure systems, is nearly 2x faster than non-temporal stores
> at the low-end of the previous threshold, and within 10% for over
> 100MB copies (well past even the current threshold). In cases with a
> low number of threads competing for bandwidth, `rep movsb` is ~2x
> faster up to `sizeof_L3`.
>
> Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> stores where done using:
> https://github.com/goldsteinn/memcpy-nt-benchmarks
>
> Sheets results (also available in pdf on the github):
> https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> ---
>  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
>  1 file changed, 43 insertions(+), 27 deletions(-)
>
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index ec88945b39..df75fbd868 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
>  }
>
>  static void
> -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
>                  long int core)
>  {
>    unsigned int eax;
> @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>    unsigned int family = cpu_features->basic.family;
>    unsigned int model = cpu_features->basic.model;
>    long int shared = *shared_ptr;
> +  long int shared_per_thread = *shared_per_thread_ptr;
>    unsigned int threads = *threads_ptr;
>    bool inclusive_cache = true;
>    bool support_count_mask = true;
> @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>        /* Try L2 otherwise.  */
>        level  = 2;
>        shared = core;
> +      shared_per_thread = core;
>        threads_l2 = 0;
>        threads_l3 = -1;
>      }
> @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>          }
>        else
>          {
> -intel_bug_no_cache_info:
> -          /* Assume that all logical threads share the highest cache
> -             level.  */
> -          threads
> -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> -              & 0xff);
> -        }
> -
> -        /* Cap usage of highest cache level to the number of supported
> -           threads.  */
> -        if (shared > 0 && threads > 0)
> -          shared /= threads;
> +       intel_bug_no_cache_info:
> +         /* Assume that all logical threads share the highest cache
> +            level.  */
> +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> +                    & 0xff);
> +
> +         /* Get per-thread size of highest level cache.  */
> +         if (shared_per_thread > 0 && threads > 0)
> +           shared_per_thread /= threads;
> +       }
>      }
>
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
>      {
>        if (threads_l2 > 0)
> -        core /= threads_l2;
> +       shared_per_thread += core / threads_l2;
>        shared += core;
>      }
>
>    *shared_ptr = shared;
> +  *shared_per_thread_ptr = shared_per_thread;
>    *threads_ptr = threads;
>  }
>
> @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    /* Find out what brand of processor.  */
>    long int data = -1;
>    long int shared = -1;
> +  long int shared_per_thread = -1;
>    long int core = -1;
>    unsigned int threads = 0;
>    unsigned long int level1_icache_size = -1;
> @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
>        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
>        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> +      shared_per_thread = shared;
>
>        level1_icache_size
>         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level4_cache_size
>         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
>      {
>        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
>        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_amd)
>      {
>        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        if (shared <= 0)
>          /* No shared L3 cache.  All we have is the L2 cache.  */
>         shared = core;
> +
> +      if (shared_per_thread <= 0)
> +       shared_per_thread = shared;
>      }
>
>    cpu_features->level1_icache_size = level1_icache_size;
> @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip. */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/2 of size
> +     of chip's cache. For most Intel and AMD processors with an
> +     initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> +     estimate the point where non-temporal stores begin outcompeting
> +     other methods. As well the point where the fact that non-temporal
          I think we should just say REP MOVSB.
> +     stores are forced back to disk would already occured to the
                                                   ^^^^^ main memory.
> +     majority of the lines in the copy. Note, concerns about the
> +     entire L3 cache being evicted by the copy are mostly alleviated
> +     by the fact that modern HW detects streaming patterns and
> +     provides proper LRU hints so that the the maximum thrashing
                                                                 ^^^ Dup.
> +     capped at 1/assosiativity. */
associativity
> +  unsigned long int non_temporal_threshold = shared / 2;
> +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> +     hint. As well, there performance in highly parallel situations is
                               the
> +     noticeably worse.  */
> +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> +    non_temporal_threshold = shared_per_thread * 3 / 4;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> --
> 2.34.1
>


-- 
H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-25 17:42   ` H.J. Lu
@ 2023-04-25 21:45     ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-04-25 21:45 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Tue, Apr 25, 2023 at 12:43 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Mon, Apr 24, 2023 at 8:43 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > ncores_per_socket'. This patch updates that value to roughly
> > 'sizeof_L3 / 2`
> >
> > The original value (specifically dividing the `ncores_per_socket`) was
> > done to limit the amount of other threads' data a `memcpy`/`memset`
> > could evict.
> >
> > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > non-temporal threshholds and leads to using non-temporal stores in
> > cases where `rep movsb` is multiple times faster.
> >
> > Furthermore, non-temporal stores are written directly to disk so using
> > it at a size much smaller than L3 can place soon to be accessed data
> > much further away than it otherwise could be. As well, modern machines
> > are able to detect streaming patterns (especially if `rep movsb` is
> > used) and provide LRU hints to the memory subsystem. This in affect
> > caps the total amount of eviction at 1/cache_assosiativity, far below
> > meaningfully thrashing the entire cache.
> >
> > As best I can tell, the benchmarks that lead this small threshold
> > where done comparing non-temporal stores versus standard cacheable
> > stores. A better comparison (linked below) is to be `rep movsb` which,
> > on the measure systems, is nearly 2x faster than non-temporal stores
> > at the low-end of the previous threshold, and within 10% for over
> > 100MB copies (well past even the current threshold). In cases with a
> > low number of threads competing for bandwidth, `rep movsb` is ~2x
> > faster up to `sizeof_L3`.
> >
> > Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> > stores where done using:
> > https://github.com/goldsteinn/memcpy-nt-benchmarks
> >
> > Sheets results (also available in pdf on the github):
> > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > ---
> >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> >  1 file changed, 43 insertions(+), 27 deletions(-)
> >
> > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > index ec88945b39..df75fbd868 100644
> > --- a/sysdeps/x86/dl-cacheinfo.h
> > +++ b/sysdeps/x86/dl-cacheinfo.h
> > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> >  }
> >
> >  static void
> > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> >                  long int core)
> >  {
> >    unsigned int eax;
> > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >    unsigned int family = cpu_features->basic.family;
> >    unsigned int model = cpu_features->basic.model;
> >    long int shared = *shared_ptr;
> > +  long int shared_per_thread = *shared_per_thread_ptr;
> >    unsigned int threads = *threads_ptr;
> >    bool inclusive_cache = true;
> >    bool support_count_mask = true;
> > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >        /* Try L2 otherwise.  */
> >        level  = 2;
> >        shared = core;
> > +      shared_per_thread = core;
> >        threads_l2 = 0;
> >        threads_l3 = -1;
> >      }
> > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >          }
> >        else
> >          {
> > -intel_bug_no_cache_info:
> > -          /* Assume that all logical threads share the highest cache
> > -             level.  */
> > -          threads
> > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > -              & 0xff);
> > -        }
> > -
> > -        /* Cap usage of highest cache level to the number of supported
> > -           threads.  */
> > -        if (shared > 0 && threads > 0)
> > -          shared /= threads;
> > +       intel_bug_no_cache_info:
> > +         /* Assume that all logical threads share the highest cache
> > +            level.  */
> > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > +                    & 0xff);
> > +
> > +         /* Get per-thread size of highest level cache.  */
> > +         if (shared_per_thread > 0 && threads > 0)
> > +           shared_per_thread /= threads;
> > +       }
> >      }
> >
> >    /* Account for non-inclusive L2 and L3 caches.  */
> >    if (!inclusive_cache)
> >      {
> >        if (threads_l2 > 0)
> > -        core /= threads_l2;
> > +       shared_per_thread += core / threads_l2;
> >        shared += core;
> >      }
> >
> >    *shared_ptr = shared;
> > +  *shared_per_thread_ptr = shared_per_thread;
> >    *threads_ptr = threads;
> >  }
> >
> > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    /* Find out what brand of processor.  */
> >    long int data = -1;
> >    long int shared = -1;
> > +  long int shared_per_thread = -1;
> >    long int core = -1;
> >    unsigned int threads = 0;
> >    unsigned long int level1_icache_size = -1;
> > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size
> >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level4_cache_size
> >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> >      {
> >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_amd)
> >      {
> >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        if (shared <= 0)
> >          /* No shared L3 cache.  All we have is the L2 cache.  */
> >         shared = core;
> > +
> > +      if (shared_per_thread <= 0)
> > +       shared_per_thread = shared;
> >      }
> >
> >    cpu_features->level1_icache_size = level1_icache_size;
> > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> >    cpu_features->level4_cache_size = level4_cache_size;
> >
> > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > -     thread's share of the chip's cache. For most Intel and AMD processors
> > -     with an initial release date between 2017 and 2020, a thread's typical
> > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > -     in cache after a maximum temporal copy, which will maintain
> > -     in cache a reasonable portion of the thread's stack and other
> > -     active data. If the threshold is set higher than one thread's
> > -     share of the cache, it has a substantial risk of negatively
> > -     impacting the performance of other threads running on the chip. */
> > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > +     of chip's cache. For most Intel and AMD processors with an
> > +     initial release date between 2017 and 2023, a thread's typical
> > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > +     estimate the point where non-temporal stores begin outcompeting
> > +     other methods. As well the point where the fact that non-temporal
>           I think we should just say REP MOVSB.
done.
> > +     stores are forced back to disk would already occured to the
>                                                    ^^^^^ main memory.
done
> > +     majority of the lines in the copy. Note, concerns about the
> > +     entire L3 cache being evicted by the copy are mostly alleviated
> > +     by the fact that modern HW detects streaming patterns and
> > +     provides proper LRU hints so that the the maximum thrashing
>                                                                  ^^^ Dup.
done
> > +     capped at 1/assosiativity. */
> associativity
Done
> > +  unsigned long int non_temporal_threshold = shared / 2;
> > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > +     hint. As well, there performance in highly parallel situations is
>                                the
I think there?
> > +     noticeably worse.  */
> > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > --
> > 2.34.1
> >
>
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-24  5:03 [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` Noah Goldstein
                   ` (2 preceding siblings ...)
  2023-04-25  3:43 ` [PATCH v3] " Noah Goldstein
@ 2023-04-25 21:45 ` Noah Goldstein
  2023-04-26 15:59   ` H.J. Lu
  2023-05-09  3:13 ` [PATCH v5 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-04-25 21:45 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
'sizeof_L3 / 2'.

The original value (specifically dividing by `ncores_per_socket`) was
chosen to limit the amount of other threads' data a `memcpy`/`memset`
could evict.

Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in cases
where REP MOVSB is multiple times faster.

Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise would be. As well, modern
machines are able to detect streaming patterns (especially if
REP MOVSB is used) and provide LRU hints to the memory subsystem. This
in effect caps the total amount of eviction at 1/cache_associativity,
far below meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is to REP MOVSB which, on
the measured systems, is nearly 2x faster than non-temporal stores at
the low end of the previous threshold, and within 10% for copies over
100MB (well past even the current threshold). In cases with a low
number of threads competing for bandwidth, REP MOVSB is ~2x faster up
to `sizeof_L3`.

Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available in pdf on the github):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
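
As a back-of-the-envelope check of the eviction claim above (the
32 MB, 16-way L3 below is an assumed configuration, purely for
illustration):

    /* Illustrative only: with streaming detection and LRU hints, the
       worst-case eviction is roughly one cache way.  */
    long int l3_size = 32L * 1024 * 1024;
    int l3_assoc = 16;
    long int max_evicted = l3_size / l3_assoc;  /* ~2 MB of a 32 MB L3 */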
---
 sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
 1 file changed, 43 insertions(+), 27 deletions(-)

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index ec88945b39..4f1fd419f8 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -407,7 +407,7 @@ handle_zhaoxin (int name)
 }
 
 static void
-get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
+get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
                 long int core)
 {
   unsigned int eax;
@@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
   unsigned int family = cpu_features->basic.family;
   unsigned int model = cpu_features->basic.model;
   long int shared = *shared_ptr;
+  long int shared_per_thread = *shared_per_thread_ptr;
   unsigned int threads = *threads_ptr;
   bool inclusive_cache = true;
   bool support_count_mask = true;
@@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
       /* Try L2 otherwise.  */
       level  = 2;
       shared = core;
+      shared_per_thread = core;
       threads_l2 = 0;
       threads_l3 = -1;
     }
@@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
         }
       else
         {
-intel_bug_no_cache_info:
-          /* Assume that all logical threads share the highest cache
-             level.  */
-          threads
-            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
-	       & 0xff);
-        }
-
-        /* Cap usage of highest cache level to the number of supported
-           threads.  */
-        if (shared > 0 && threads > 0)
-          shared /= threads;
+	intel_bug_no_cache_info:
+	  /* Assume that all logical threads share the highest cache
+	     level.  */
+	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
+		     & 0xff);
+
+	  /* Get per-thread size of highest level cache.  */
+	  if (shared_per_thread > 0 && threads > 0)
+	    shared_per_thread /= threads;
+	}
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
       if (threads_l2 > 0)
-        core /= threads_l2;
+	shared_per_thread += core / threads_l2;
       shared += core;
     }
 
   *shared_ptr = shared;
+  *shared_per_thread_ptr = shared_per_thread;
   *threads_ptr = threads;
 }
 
@@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   /* Find out what brand of processor.  */
   long int data = -1;
   long int shared = -1;
+  long int shared_per_thread = -1;
   long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
@@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
       core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
       shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
+      shared_per_thread = shared;
 
       level1_icache_size
 	= handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
@@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level4_cache_size
 	= handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_zhaoxin)
     {
       data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_amd)
     {
       data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       if (shared <= 0)
         /* No shared L3 cache.  All we have is the L2 cache.  */
 	shared = core;
+
+      if (shared_per_thread <= 0)
+	shared_per_thread = shared;
     }
 
   cpu_features->level1_icache_size = level1_icache_size;
@@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/2 of the
+     size of the chip's cache. For most Intel and AMD processors with an
+     initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
+     estimate the point where non-temporal stores begin outcompeting
+     REP MOVSB, as well as the point where the write-back of the
+     non-temporal stores to main memory would already have occurred for
+     the majority of the lines in the copy. Note, concerns about the
+     entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and
+     provides proper LRU hints so that the maximum thrashing is
+     capped at 1/associativity. */
+  unsigned long int non_temporal_threshold = shared / 2;
+  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
+     a higher risk of actually thrashing the cache as they don't have a HW LRU
+     hint. As well, their performance in highly parallel situations is
+     noticeably worse.  */
+  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+    non_temporal_threshold = shared_per_thread * 3 / 4;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-25 21:45 ` [PATCH v4] " Noah Goldstein
@ 2023-04-26 15:59   ` H.J. Lu
  2023-04-26 17:15     ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: H.J. Lu @ 2023-04-26 15:59 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Tue, Apr 25, 2023 at 2:45 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> ncores_per_socket'. This patch updates that value to roughly
> 'sizeof_L3 / 2`
>
> The original value (specifically dividing the `ncores_per_socket`) was
> done to limit the amount of other threads' data a `memcpy`/`memset`
> could evict.
>
> Dividing by 'ncores_per_socket', however leads to exceedingly low
> non-temporal thresholds and leads to using non-temporal stores in
> cases where REP MOVSB is multiple times faster.
>
> Furthermore, non-temporal stores are written directly to main memory
> so using it at a size much smaller than L3 can place soon to be
> accessed data much further away than it otherwise could be. As well,
> modern machines are able to detect streaming patterns (especially if
> REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> in affect caps the total amount of eviction at 1/cache_associativity,
> far below meaningfully thrashing the entire cache.
>
> As best I can tell, the benchmarks that lead this small threshold
> where done comparing non-temporal stores versus standard cacheable
> stores. A better comparison (linked below) is to be REP MOVSB which,
> on the measure systems, is nearly 2x faster than non-temporal stores
> at the low-end of the previous threshold, and within 10% for over
> 100MB copies (well past even the current threshold). In cases with a
> low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> up to `sizeof_L3`.
>
> Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> stores where done using:
> https://github.com/goldsteinn/memcpy-nt-benchmarks
>
> Sheets results (also available in pdf on the github):
> https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> ---
>  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
>  1 file changed, 43 insertions(+), 27 deletions(-)
>
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index ec88945b39..4f1fd419f8 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
>  }
>
>  static void
> -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
>                  long int core)
>  {
>    unsigned int eax;
> @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>    unsigned int family = cpu_features->basic.family;
>    unsigned int model = cpu_features->basic.model;
>    long int shared = *shared_ptr;
> +  long int shared_per_thread = *shared_per_thread_ptr;
>    unsigned int threads = *threads_ptr;
>    bool inclusive_cache = true;
>    bool support_count_mask = true;
> @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>        /* Try L2 otherwise.  */
>        level  = 2;
>        shared = core;
> +      shared_per_thread = core;
>        threads_l2 = 0;
>        threads_l3 = -1;
>      }
> @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>          }
>        else
>          {
> -intel_bug_no_cache_info:
> -          /* Assume that all logical threads share the highest cache
> -             level.  */
> -          threads
> -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> -              & 0xff);
> -        }
> -
> -        /* Cap usage of highest cache level to the number of supported
> -           threads.  */
> -        if (shared > 0 && threads > 0)
> -          shared /= threads;
> +       intel_bug_no_cache_info:
> +         /* Assume that all logical threads share the highest cache
> +            level.  */
> +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> +                    & 0xff);
> +
> +         /* Get per-thread size of highest level cache.  */
> +         if (shared_per_thread > 0 && threads > 0)
> +           shared_per_thread /= threads;
> +       }
>      }
>
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
>      {
>        if (threads_l2 > 0)
> -        core /= threads_l2;
> +       shared_per_thread += core / threads_l2;
>        shared += core;
>      }
>
>    *shared_ptr = shared;
> +  *shared_per_thread_ptr = shared_per_thread;
>    *threads_ptr = threads;
>  }
>
> @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    /* Find out what brand of processor.  */
>    long int data = -1;
>    long int shared = -1;
> +  long int shared_per_thread = -1;
>    long int core = -1;
>    unsigned int threads = 0;
>    unsigned long int level1_icache_size = -1;
> @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
>        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
>        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> +      shared_per_thread = shared;
>
>        level1_icache_size
>         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level4_cache_size
>         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
>      {
>        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
>        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_amd)
>      {
>        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        if (shared <= 0)
>          /* No shared L3 cache.  All we have is the L2 cache.  */
>         shared = core;
> +
> +      if (shared_per_thread <= 0)
> +       shared_per_thread = shared;
>      }
>
>    cpu_features->level1_icache_size = level1_icache_size;
> @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip. */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/2 of size
> +     of the chip's cache. For most Intel and AMD processors with an
> +     initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> +     estimate the point where non-temporal stores begin outcompeting
> +     REP MOVSB. As well the point where the fact that non-temporal
> +     stores are forced back to main memory would already occurred to the
> +     majority of the lines in the copy. Note, concerns about the
> +     entire L3 cache being evicted by the copy are mostly alleviated
> +     by the fact that modern HW detects streaming patterns and
> +     provides proper LRU hints so that the maximum thrashing
> +     capped at 1/associativity. */
> +  unsigned long int non_temporal_threshold = shared / 2;
> +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> +     hint. As well, there performance in highly parallel situations is
> +     noticeably worse.  */
> +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> +    non_temporal_threshold = shared_per_thread * 3 / 4;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> --
> 2.34.1
>

LGTM.

Thanks.


-- 
H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-26 15:59   ` H.J. Lu
@ 2023-04-26 17:15     ` Noah Goldstein
  2023-05-04  3:28       ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-04-26 17:15 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Wed, Apr 26, 2023 at 10:59 AM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Tue, Apr 25, 2023 at 2:45 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > ncores_per_socket'. This patch updates that value to roughly
> > 'sizeof_L3 / 2`
> >
> > The original value (specifically dividing the `ncores_per_socket`) was
> > done to limit the amount of other threads' data a `memcpy`/`memset`
> > could evict.
> >
> > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > non-temporal thresholds and leads to using non-temporal stores in
> > cases where REP MOVSB is multiple times faster.
> >
> > Furthermore, non-temporal stores are written directly to main memory
> > so using it at a size much smaller than L3 can place soon to be
> > accessed data much further away than it otherwise could be. As well,
> > modern machines are able to detect streaming patterns (especially if
> > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > in affect caps the total amount of eviction at 1/cache_associativity,
> > far below meaningfully thrashing the entire cache.
> >
> > As best I can tell, the benchmarks that lead this small threshold
> > where done comparing non-temporal stores versus standard cacheable
> > stores. A better comparison (linked below) is to be REP MOVSB which,
> > on the measure systems, is nearly 2x faster than non-temporal stores
> > at the low-end of the previous threshold, and within 10% for over
> > 100MB copies (well past even the current threshold). In cases with a
> > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > up to `sizeof_L3`.
> >
> > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > stores where done using:
> > https://github.com/goldsteinn/memcpy-nt-benchmarks
> >
> > Sheets results (also available in pdf on the github):
> > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > ---
> >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> >  1 file changed, 43 insertions(+), 27 deletions(-)
> >
> > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > index ec88945b39..4f1fd419f8 100644
> > --- a/sysdeps/x86/dl-cacheinfo.h
> > +++ b/sysdeps/x86/dl-cacheinfo.h
> > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> >  }
> >
> >  static void
> > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> >                  long int core)
> >  {
> >    unsigned int eax;
> > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >    unsigned int family = cpu_features->basic.family;
> >    unsigned int model = cpu_features->basic.model;
> >    long int shared = *shared_ptr;
> > +  long int shared_per_thread = *shared_per_thread_ptr;
> >    unsigned int threads = *threads_ptr;
> >    bool inclusive_cache = true;
> >    bool support_count_mask = true;
> > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >        /* Try L2 otherwise.  */
> >        level  = 2;
> >        shared = core;
> > +      shared_per_thread = core;
> >        threads_l2 = 0;
> >        threads_l3 = -1;
> >      }
> > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >          }
> >        else
> >          {
> > -intel_bug_no_cache_info:
> > -          /* Assume that all logical threads share the highest cache
> > -             level.  */
> > -          threads
> > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > -              & 0xff);
> > -        }
> > -
> > -        /* Cap usage of highest cache level to the number of supported
> > -           threads.  */
> > -        if (shared > 0 && threads > 0)
> > -          shared /= threads;
> > +       intel_bug_no_cache_info:
> > +         /* Assume that all logical threads share the highest cache
> > +            level.  */
> > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > +                    & 0xff);
> > +
> > +         /* Get per-thread size of highest level cache.  */
> > +         if (shared_per_thread > 0 && threads > 0)
> > +           shared_per_thread /= threads;
> > +       }
> >      }
> >
> >    /* Account for non-inclusive L2 and L3 caches.  */
> >    if (!inclusive_cache)
> >      {
> >        if (threads_l2 > 0)
> > -        core /= threads_l2;
> > +       shared_per_thread += core / threads_l2;
> >        shared += core;
> >      }
> >
> >    *shared_ptr = shared;
> > +  *shared_per_thread_ptr = shared_per_thread;
> >    *threads_ptr = threads;
> >  }
> >
> > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    /* Find out what brand of processor.  */
> >    long int data = -1;
> >    long int shared = -1;
> > +  long int shared_per_thread = -1;
> >    long int core = -1;
> >    unsigned int threads = 0;
> >    unsigned long int level1_icache_size = -1;
> > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size
> >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level4_cache_size
> >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> >      {
> >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_amd)
> >      {
> >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        if (shared <= 0)
> >          /* No shared L3 cache.  All we have is the L2 cache.  */
> >         shared = core;
> > +
> > +      if (shared_per_thread <= 0)
> > +       shared_per_thread = shared;
> >      }
> >
> >    cpu_features->level1_icache_size = level1_icache_size;
> > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> >    cpu_features->level4_cache_size = level4_cache_size;
> >
> > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > -     thread's share of the chip's cache. For most Intel and AMD processors
> > -     with an initial release date between 2017 and 2020, a thread's typical
> > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > -     in cache after a maximum temporal copy, which will maintain
> > -     in cache a reasonable portion of the thread's stack and other
> > -     active data. If the threshold is set higher than one thread's
> > -     share of the cache, it has a substantial risk of negatively
> > -     impacting the performance of other threads running on the chip. */
> > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > +     of the chip's cache. For most Intel and AMD processors with an
> > +     initial release date between 2017 and 2023, a thread's typical
> > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > +     estimate the point where non-temporal stores begin outcompeting
> > +     REP MOVSB. As well the point where the fact that non-temporal
> > +     stores are forced back to main memory would already occurred to the
> > +     majority of the lines in the copy. Note, concerns about the
> > +     entire L3 cache being evicted by the copy are mostly alleviated
> > +     by the fact that modern HW detects streaming patterns and
> > +     provides proper LRU hints so that the maximum thrashing
> > +     capped at 1/associativity. */
> > +  unsigned long int non_temporal_threshold = shared / 2;
> > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > +     hint. As well, there performance in highly parallel situations is
> > +     noticeably worse.  */
> > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > --
> > 2.34.1
> >
>
> LGTM.
>
> Thanks.
>

Thanks.

I'm currently running some benchmarks on Broadwell and Carlos is reproducing
independently (on ICX I think), so will wait to push until all that
has come to fruition.
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-04-26 17:15     ` Noah Goldstein
@ 2023-05-04  3:28       ` Noah Goldstein
  2023-05-05 18:06         ` H.J. Lu
  0 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-05-04  3:28 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Wed, Apr 26, 2023 at 12:15 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Wed, Apr 26, 2023 at 10:59 AM H.J. Lu <hjl.tools@gmail.com> wrote:
> >
> > On Tue, Apr 25, 2023 at 2:45 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > ncores_per_socket'. This patch updates that value to roughly
> > > 'sizeof_L3 / 2`
> > >
> > > The original value (specifically dividing the `ncores_per_socket`) was
> > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > could evict.
> > >
> > > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > > non-temporal thresholds and leads to using non-temporal stores in
> > > cases where REP MOVSB is multiple times faster.
> > >
> > > Furthermore, non-temporal stores are written directly to main memory
> > > so using it at a size much smaller than L3 can place soon to be
> > > accessed data much further away than it otherwise could be. As well,
> > > modern machines are able to detect streaming patterns (especially if
> > > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > > in affect caps the total amount of eviction at 1/cache_associativity,
> > > far below meaningfully thrashing the entire cache.
> > >
> > > As best I can tell, the benchmarks that lead this small threshold
> > > where done comparing non-temporal stores versus standard cacheable
> > > stores. A better comparison (linked below) is to be REP MOVSB which,
> > > on the measure systems, is nearly 2x faster than non-temporal stores
> > > at the low-end of the previous threshold, and within 10% for over
> > > 100MB copies (well past even the current threshold). In cases with a
> > > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > > up to `sizeof_L3`.
> > >
> > > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > > stores where done using:
> > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > >
> > > Sheets results (also available in pdf on the github):
> > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > ---
> > >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> > >  1 file changed, 43 insertions(+), 27 deletions(-)
> > >
> > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > index ec88945b39..4f1fd419f8 100644
> > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > >  }
> > >
> > >  static void
> > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > >                  long int core)
> > >  {
> > >    unsigned int eax;
> > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > >    unsigned int family = cpu_features->basic.family;
> > >    unsigned int model = cpu_features->basic.model;
> > >    long int shared = *shared_ptr;
> > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > >    unsigned int threads = *threads_ptr;
> > >    bool inclusive_cache = true;
> > >    bool support_count_mask = true;
> > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > >        /* Try L2 otherwise.  */
> > >        level  = 2;
> > >        shared = core;
> > > +      shared_per_thread = core;
> > >        threads_l2 = 0;
> > >        threads_l3 = -1;
> > >      }
> > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > >          }
> > >        else
> > >          {
> > > -intel_bug_no_cache_info:
> > > -          /* Assume that all logical threads share the highest cache
> > > -             level.  */
> > > -          threads
> > > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > -              & 0xff);
> > > -        }
> > > -
> > > -        /* Cap usage of highest cache level to the number of supported
> > > -           threads.  */
> > > -        if (shared > 0 && threads > 0)
> > > -          shared /= threads;
> > > +       intel_bug_no_cache_info:
> > > +         /* Assume that all logical threads share the highest cache
> > > +            level.  */
> > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > +                    & 0xff);
> > > +
> > > +         /* Get per-thread size of highest level cache.  */
> > > +         if (shared_per_thread > 0 && threads > 0)
> > > +           shared_per_thread /= threads;
> > > +       }
> > >      }
> > >
> > >    /* Account for non-inclusive L2 and L3 caches.  */
> > >    if (!inclusive_cache)
> > >      {
> > >        if (threads_l2 > 0)
> > > -        core /= threads_l2;
> > > +       shared_per_thread += core / threads_l2;
> > >        shared += core;
> > >      }
> > >
> > >    *shared_ptr = shared;
> > > +  *shared_per_thread_ptr = shared_per_thread;
> > >    *threads_ptr = threads;
> > >  }
> > >
> > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >    /* Find out what brand of processor.  */
> > >    long int data = -1;
> > >    long int shared = -1;
> > > +  long int shared_per_thread = -1;
> > >    long int core = -1;
> > >    unsigned int threads = 0;
> > >    unsigned long int level1_icache_size = -1;
> > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > +      shared_per_thread = shared;
> > >
> > >        level1_icache_size
> > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        level4_cache_size
> > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > >
> > > -      get_common_cache_info (&shared, &threads, core);
> > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > >      }
> > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > >      {
> > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > +      shared_per_thread = shared;
> > >
> > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > >
> > > -      get_common_cache_info (&shared, &threads, core);
> > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > >      }
> > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > >      {
> > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > +      shared_per_thread = shared;
> > >
> > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        if (shared <= 0)
> > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > >         shared = core;
> > > +
> > > +      if (shared_per_thread <= 0)
> > > +       shared_per_thread = shared;
> > >      }
> > >
> > >    cpu_features->level1_icache_size = level1_icache_size;
> > > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > >    cpu_features->level4_cache_size = level4_cache_size;
> > >
> > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > -     in cache after a maximum temporal copy, which will maintain
> > > -     in cache a reasonable portion of the thread's stack and other
> > > -     active data. If the threshold is set higher than one thread's
> > > -     share of the cache, it has a substantial risk of negatively
> > > -     impacting the performance of other threads running on the chip. */
> > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > > +     of the chip's cache. For most Intel and AMD processors with an
> > > +     initial release date between 2017 and 2023, a thread's typical
> > > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > > +     estimate the point where non-temporal stores begin outcompeting
> > > +     REP MOVSB. As well the point where the fact that non-temporal
> > > +     stores are forced back to main memory would already occurred to the
> > > +     majority of the lines in the copy. Note, concerns about the
> > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > +     by the fact that modern HW detects streaming patterns and
> > > +     provides proper LRU hints so that the maximum thrashing
> > > +     capped at 1/associativity. */
> > > +  unsigned long int non_temporal_threshold = shared / 2;
> > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > +     hint. As well, there performance in highly parallel situations is
> > > +     noticeably worse.  */
> > > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > --
> > > 2.34.1
> > >
> >
> > LGTM.
> >
> > Thanks.
> >
>
> Thanks.
>
> I'm currently running some benchmarks on Broadwell and Carlos is reproducing
> independently (on ICX I think), so will wait to push until all that
> has come to fruition.
> >
> > --
> > H.J.

Carlos, I benchmarked on BWD:
https://docs.google.com/spreadsheets/d/1kfXonk4LAZXBySuPnfDenrTizZ52IN0vvLIt3k1ex9c/edit?usp=sharing
or
https://github.com/goldsteinn/memcpy-nt-benchmarks/blob/master/results-bwd-pdf/bwd-memcpy-0--standard.pdf

On BWD, unlike SKX/ICX, non-temporal stores perform better than REP MOVSB
and standard stores. Somewhat counter-intuitively, the results are most
pronounced in the single-threaded case.

At roughly the 4MB range, non-temporal stores become by far the best,
basically regardless of the number of threads.
The machine I tested on had 35MB of cache and 28 threads per socket, so
our current threshold is ~1MB, which is still too low. But the proposal
in this patch to use L3 / 2 is too high (~16MB in this case).

At the current threshold, in the multithreaded case, between ~[1MB, 4MB)
non-temporal stores are 60-110% SLOWER than ERMS. OTOH, between
[4MB, 16MB] non-temporal stores are about 10-30% faster.

I think we still have a net benefit from this patch, but maybe we want
to tune the exact percentage by CPU arch?
HJ, what do you think about that? SKX and newer -> L3 / 2; BWD and
older -> L3 / 8.
Then we can add additional cases for different machines as benchmarks
indicate.
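
As a rough sketch of the kind of per-arch divisor selection being
proposed here (the enum and helper names below are purely illustrative
and not from any posted patch; the divisors are the ones quoted above,
with a middle-ground fallback for unknown parts):

enum uarch_class
{
  UARCH_PRE_SKX,      /* e.g. Broadwell and older.  */
  UARCH_SKX_OR_NEWER, /* e.g. Skylake-X, Icelake.  */
  UARCH_UNKNOWN
};

static unsigned long int
nt_threshold_for (enum uarch_class uarch, unsigned long int l3_size)
{
  switch (uarch)
    {
    case UARCH_SKX_OR_NEWER:
      return l3_size / 2;  /* REP MOVSB stays competitive up to ~L3/2.  */
    case UARCH_PRE_SKX:
      return l3_size / 8;  /* NT stores win much earlier on these parts.  */
    default:
      return l3_size / 4;  /* Middle ground when the uarch is unknown.  */
    }
}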

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-05-04  3:28       ` Noah Goldstein
@ 2023-05-05 18:06         ` H.J. Lu
  2023-05-09  3:14           ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: H.J. Lu @ 2023-05-05 18:06 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Wed, May 3, 2023 at 8:28 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Wed, Apr 26, 2023 at 12:15 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Wed, Apr 26, 2023 at 10:59 AM H.J. Lu <hjl.tools@gmail.com> wrote:
> > >
> > > On Tue, Apr 25, 2023 at 2:45 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > >
> > > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > > ncores_per_socket'. This patch updates that value to roughly
> > > > 'sizeof_L3 / 2`
> > > >
> > > > The original value (specifically dividing the `ncores_per_socket`) was
> > > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > > could evict.
> > > >
> > > > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > > > non-temporal thresholds and leads to using non-temporal stores in
> > > > cases where REP MOVSB is multiple times faster.
> > > >
> > > > Furthermore, non-temporal stores are written directly to main memory
> > > > so using it at a size much smaller than L3 can place soon to be
> > > > accessed data much further away than it otherwise could be. As well,
> > > > modern machines are able to detect streaming patterns (especially if
> > > > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > > > in affect caps the total amount of eviction at 1/cache_associativity,
> > > > far below meaningfully thrashing the entire cache.
> > > >
> > > > As best I can tell, the benchmarks that lead this small threshold
> > > > where done comparing non-temporal stores versus standard cacheable
> > > > stores. A better comparison (linked below) is to be REP MOVSB which,
> > > > on the measure systems, is nearly 2x faster than non-temporal stores
> > > > at the low-end of the previous threshold, and within 10% for over
> > > > 100MB copies (well past even the current threshold). In cases with a
> > > > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > > > up to `sizeof_L3`.
> > > >
> > > > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > > > stores where done using:
> > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > >
> > > > Sheets results (also available in pdf on the github):
> > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > > ---
> > > >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> > > >  1 file changed, 43 insertions(+), 27 deletions(-)
> > > >
> > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > index ec88945b39..4f1fd419f8 100644
> > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > > >  }
> > > >
> > > >  static void
> > > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > > >                  long int core)
> > > >  {
> > > >    unsigned int eax;
> > > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >    unsigned int family = cpu_features->basic.family;
> > > >    unsigned int model = cpu_features->basic.model;
> > > >    long int shared = *shared_ptr;
> > > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > > >    unsigned int threads = *threads_ptr;
> > > >    bool inclusive_cache = true;
> > > >    bool support_count_mask = true;
> > > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >        /* Try L2 otherwise.  */
> > > >        level  = 2;
> > > >        shared = core;
> > > > +      shared_per_thread = core;
> > > >        threads_l2 = 0;
> > > >        threads_l3 = -1;
> > > >      }
> > > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >          }
> > > >        else
> > > >          {
> > > > -intel_bug_no_cache_info:
> > > > -          /* Assume that all logical threads share the highest cache
> > > > -             level.  */
> > > > -          threads
> > > > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > -              & 0xff);
> > > > -        }
> > > > -
> > > > -        /* Cap usage of highest cache level to the number of supported
> > > > -           threads.  */
> > > > -        if (shared > 0 && threads > 0)
> > > > -          shared /= threads;
> > > > +       intel_bug_no_cache_info:
> > > > +         /* Assume that all logical threads share the highest cache
> > > > +            level.  */
> > > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > +                    & 0xff);
> > > > +
> > > > +         /* Get per-thread size of highest level cache.  */
> > > > +         if (shared_per_thread > 0 && threads > 0)
> > > > +           shared_per_thread /= threads;
> > > > +       }
> > > >      }
> > > >
> > > >    /* Account for non-inclusive L2 and L3 caches.  */
> > > >    if (!inclusive_cache)
> > > >      {
> > > >        if (threads_l2 > 0)
> > > > -        core /= threads_l2;
> > > > +       shared_per_thread += core / threads_l2;
> > > >        shared += core;
> > > >      }
> > > >
> > > >    *shared_ptr = shared;
> > > > +  *shared_per_thread_ptr = shared_per_thread;
> > > >    *threads_ptr = threads;
> > > >  }
> > > >
> > > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    /* Find out what brand of processor.  */
> > > >    long int data = -1;
> > > >    long int shared = -1;
> > > > +  long int shared_per_thread = -1;
> > > >    long int core = -1;
> > > >    unsigned int threads = 0;
> > > >    unsigned long int level1_icache_size = -1;
> > > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size
> > > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        level4_cache_size
> > > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > > >
> > > > -      get_common_cache_info (&shared, &threads, core);
> > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > >      }
> > > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > > >      {
> > > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > > >
> > > > -      get_common_cache_info (&shared, &threads, core);
> > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > >      }
> > > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > > >      {
> > > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        if (shared <= 0)
> > > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > > >         shared = core;
> > > > +
> > > > +      if (shared_per_thread <= 0)
> > > > +       shared_per_thread = shared;
> > > >      }
> > > >
> > > >    cpu_features->level1_icache_size = level1_icache_size;
> > > > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > > >    cpu_features->level4_cache_size = level4_cache_size;
> > > >
> > > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > -     in cache after a maximum temporal copy, which will maintain
> > > > -     in cache a reasonable portion of the thread's stack and other
> > > > -     active data. If the threshold is set higher than one thread's
> > > > -     share of the cache, it has a substantial risk of negatively
> > > > -     impacting the performance of other threads running on the chip. */
> > > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > > > +     of the chip's cache. For most Intel and AMD processors with an
> > > > +     initial release date between 2017 and 2023, a thread's typical
> > > > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > > > +     estimate the point where non-temporal stores begin outcompeting
> > > > +     REP MOVSB. As well the point where the fact that non-temporal
> > > > +     stores are forced back to main memory would already occurred to the
> > > > +     majority of the lines in the copy. Note, concerns about the
> > > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > > +     by the fact that modern HW detects streaming patterns and
> > > > +     provides proper LRU hints so that the maximum thrashing
> > > > +     capped at 1/associativity. */
> > > > +  unsigned long int non_temporal_threshold = shared / 2;
> > > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > > +     hint. As well, there performance in highly parallel situations is
> > > > +     noticeably worse.  */
> > > > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > > > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> > > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > > --
> > > > 2.34.1
> > > >
> > >
> > > LGTM.
> > >
> > > Thanks.
> > >
> >
> > Thanks.
> >
> > I'm currently running some benchmarks on Broadwell and Carlos is reproducing
> > independently (on ICX I think), so will wait to push until all that
> > has come to fruition.
> > >
> > > --
> > > H.J.
>
> Carlos, I benchmarked on BWD:
> https://docs.google.com/spreadsheets/d/1kfXonk4LAZXBySuPnfDenrTizZ52IN0vvLIt3k1ex9c/edit?usp=sharing
> or
> https://github.com/goldsteinn/memcpy-nt-benchmarks/blob/master/results-bwd-pdf/bwd-memcpy-0--standard.pdf
>
> On BWD, unlike SKX/ICX, non-temporal stores perform better than REP_MOVSB
> and standard stores. Somewhat counter-intuitively the results are most
> pronounced
> in the single-threaded.
>
> At roughly the 4MB range non-tempora stores become by far the best
> basically regardless
> of the number of threads.
> The machine I tested on had 35MB of cache and 28 threads per socket so
> our current
> threshold is ~1MB which is still to low. But the proposal in this
> patch is do L3 / 2 is too
> high (~16MB in this case).
>
> At the current threshold, in the multithreaded case. Between ~[1MB,
> 4MB) non-temporal
> stores at 60-110% SLOWER than ERMS. OTOH, between [4MB, 16MB]
> non-temporal stores
> are about 10-30% faster.
>
> I think we still have a net benefit from this patch, but maybe we want
> to tune the exact
> percentage by CPU arch?
> HJ what do you think about that? SKX and newer -> L3 / 2. BWD and
> older -> L3 / 8.

This sounds good to me.

> Then we can add additional cases for different machines as benchmarks indicate.

Thanks.

-- 
H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v5 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-04-24  5:03 [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` Noah Goldstein
                   ` (3 preceding siblings ...)
  2023-04-25 21:45 ` [PATCH v4] " Noah Goldstein
@ 2023-05-09  3:13 ` Noah Goldstein
  2023-05-09  3:13   ` [PATCH v5 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
  2023-05-09  3:13   ` [PATCH v5 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
  2023-05-10  0:33 ` [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
                   ` (6 subsequent siblings)
  11 siblings, 2 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-09  3:13 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Currently, `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
'sizeof_L3 / 4'.

The original value (specifically dividing the `ncores_per_socket`) was
done to limit the amount of other threads' data a `memcpy`/`memset`
could evict.

Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in cases
where REP MOVSB is multiple times faster.

Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise needs to be. As well, modern
machines are able to detect streaming patterns (especially if REP MOVSB
is used) and provide LRU hints to the memory subsystem. This in effect
caps the total amount of eviction at 1/cache_associativity, far below
meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores against standard cacheable
stores. A better comparison (linked below) is against REP MOVSB which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% for copies
over 100MB (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, REP MOVSB is ~2x faster
up to `sizeof_L3`.

The divisor of `4` is a somewhat arbitrary value. From benchmarks, it
seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
such as Broadwell prefer something closer to `8`. This patch is meant
to be followed up by another one to make the divisor CPU-specific, but
in the meantime (and for easier backporting), this patch settles on
`4` as a middle ground.

Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available in pdf on the github):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
---
 sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
 1 file changed, 43 insertions(+), 27 deletions(-)

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index 877e73d700..c7e41029fa 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -407,7 +407,7 @@ handle_zhaoxin (int name)
 }
 
 static void
-get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
+get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
                 long int core)
 {
   unsigned int eax;
@@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
   unsigned int family = cpu_features->basic.family;
   unsigned int model = cpu_features->basic.model;
   long int shared = *shared_ptr;
+  long int shared_per_thread = *shared_per_thread_ptr;
   unsigned int threads = *threads_ptr;
   bool inclusive_cache = true;
   bool support_count_mask = true;
@@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
       /* Try L2 otherwise.  */
       level  = 2;
       shared = core;
+      shared_per_thread = core;
       threads_l2 = 0;
       threads_l3 = -1;
     }
@@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
         }
       else
         {
-intel_bug_no_cache_info:
-          /* Assume that all logical threads share the highest cache
-             level.  */
-          threads
-            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
-	       & 0xff);
-        }
-
-        /* Cap usage of highest cache level to the number of supported
-           threads.  */
-        if (shared > 0 && threads > 0)
-          shared /= threads;
+	intel_bug_no_cache_info:
+	  /* Assume that all logical threads share the highest cache
+	     level.  */
+	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
+		     & 0xff);
+
+	  /* Get per-thread size of highest level cache.  */
+	  if (shared_per_thread > 0 && threads > 0)
+	    shared_per_thread /= threads;
+	}
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
       if (threads_l2 > 0)
-        core /= threads_l2;
+	shared_per_thread += core / threads_l2;
       shared += core;
     }
 
   *shared_ptr = shared;
+  *shared_per_thread_ptr = shared_per_thread;
   *threads_ptr = threads;
 }
 
@@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   /* Find out what brand of processor.  */
   long int data = -1;
   long int shared = -1;
+  long int shared_per_thread = -1;
   long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
@@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
       core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
       shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
+      shared_per_thread = shared;
 
       level1_icache_size
 	= handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
@@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level4_cache_size
 	= handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_zhaoxin)
     {
       data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_amd)
     {
       data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       if (shared <= 0)
         /* No shared L3 cache.  All we have is the L2 cache.  */
 	shared = core;
+
+      if (shared_per_thread <= 0)
+	shared_per_thread = shared;
     }
 
   cpu_features->level1_icache_size = level1_icache_size;
@@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/4 of size
+     of the chip's cache. For most Intel and AMD processors with an
+     initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
+     estimate the point where non-temporal stores begin outcompeting
+     REP MOVSB. As well the point where the fact that non-temporal
+     stores are forced back to main memory would already occurred to the
+     majority of the lines in the copy. Note, concerns about the
+     entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and
+     provides proper LRU hints so that the maximum thrashing
+     capped at 1/associativity. */
+  unsigned long int non_temporal_threshold = shared / 4;
+  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
+     a higher risk of actually thrashing the cache as they don't have a HW LRU
+     hint. As well, there performance in highly parallel situations is
+     noticeably worse.  */
+  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+    non_temporal_threshold = shared_per_thread * 3 / 4;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v5 2/3] x86: Refactor Intel `init_cpu_features`
  2023-05-09  3:13 ` [PATCH v5 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
@ 2023-05-09  3:13   ` Noah Goldstein
  2023-05-09 21:58     ` H.J. Lu
  2023-05-09  3:13   ` [PATCH v5 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
  1 sibling, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-05-09  3:13 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

This patch should have no effect on existing functionality.

The current code, which has a single switch for model detection and
setting preferred features, is difficult to follow/extend. The cases
use magic numbers and many microarchitectures are missing. This makes
it difficult to reason about what is implemented so far and/or
how/where to add support for new features.

This patch splits the model detection and preference setting stages so
that CPU preferences can be set based on a complete list of available
microarchitectures, rather than based on model magic numbers.
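
Schematically, the split looks like the following heavily reduced
sketch (the enum values and model numbers are just a tiny subset of
the table the diff adds, and the program is illustrative only):

    #include <stdio.h>

    /* Stage 1: map a CPU model number to a named microarchitecture.
       Stage 2: set preferences by name instead of by magic numbers.  */
    enum { INTEL_ATOM_BONNELL, INTEL_BIGCORE_SKYLAKE, INTEL_UNKNOWN };

    static unsigned int
    get_microarch (unsigned int model)
    {
      switch (model)
        {
        case 0x1C:
        case 0x26:
          return INTEL_ATOM_BONNELL;
        case 0x4E:
        case 0x5E:
          return INTEL_BIGCORE_SKYLAKE;
        default:
          return INTEL_UNKNOWN;
        }
    }

    int
    main (void)
    {
      switch (get_microarch (0x5E))
        {
        case INTEL_BIGCORE_SKYLAKE:
          puts ("apply Skylake-class tuning");
          break;
        default:
          puts ("apply generic defaults");
        }
      return 0;
    }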
---
 sysdeps/x86/cpu-features.c | 401 +++++++++++++++++++++++++++++--------
 1 file changed, 316 insertions(+), 85 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 5bff8ec0b4..bec70c3c49 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -417,6 +417,217 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
 		     == index_arch_Fast_Copy_Backward)),
 		"Incorrect index_arch_Fast_Unaligned_Load");
 
+
+/* Intel Family-6 microarch list.  */
+enum
+{
+  /* Atom processors.  */
+  INTEL_ATOM_BONNELL,
+  INTEL_ATOM_SALTWELL,
+  INTEL_ATOM_SILVERMONT,
+  INTEL_ATOM_AIRMONT,
+  INTEL_ATOM_GOLDMONT,
+  INTEL_ATOM_GOLDMONT_PLUS,
+  INTEL_ATOM_SIERRAFOREST,
+  INTEL_ATOM_GRANDRIDGE,
+  INTEL_ATOM_TREMONT,
+
+  /* Bigcore processors.  */
+  INTEL_BIGCORE_MEROM,
+  INTEL_BIGCORE_PENRYN,
+  INTEL_BIGCORE_DUNNINGTON,
+  INTEL_BIGCORE_NEHALEM,
+  INTEL_BIGCORE_WESTMERE,
+  INTEL_BIGCORE_SANDYBRIDGE,
+  INTEL_BIGCORE_IVYBRIDGE,
+  INTEL_BIGCORE_HASWELL,
+  INTEL_BIGCORE_BROADWELL,
+  INTEL_BIGCORE_SKYLAKE,
+  INTEL_BIGCORE_AMBERLAKE,
+  INTEL_BIGCORE_COFFEELAKE,
+  INTEL_BIGCORE_WHISKEYLAKE,
+  INTEL_BIGCORE_KABYLAKE,
+  INTEL_BIGCORE_COMETLAKE,
+  INTEL_BIGCORE_SKYLAKE_AVX512,
+  INTEL_BIGCORE_CANNONLAKE,
+  INTEL_BIGCORE_CASCADELAKE,
+  INTEL_BIGCORE_COOPERLAKE,
+  INTEL_BIGCORE_ICELAKE,
+  INTEL_BIGCORE_TIGERLAKE,
+  INTEL_BIGCORE_ROCKETLAKE,
+  INTEL_BIGCORE_SAPPHIRERAPIDS,
+  INTEL_BIGCORE_RAPTORLAKE,
+  INTEL_BIGCORE_EMERALDRAPIDS,
+  INTEL_BIGCORE_METEORLAKE,
+  INTEL_BIGCORE_LUNARLAKE,
+  INTEL_BIGCORE_ARROWLAKE,
+  INTEL_BIGCORE_GRANITERAPIDS,
+
+  /* Mixed (bigcore + atom SOC).  */
+  INTEL_MIXED_LAKEFIELD,
+  INTEL_MIXED_ALDERLAKE,
+
+  /* KNL.  */
+  INTEL_KNIGHTS_MILL,
+  INTEL_KNIGHTS_LANDING,
+
+  /* Unknown.  */
+  INTEL_UNKNOWN,
+};
+
+static unsigned int
+intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
+{
+  switch (model)
+    {
+    case 0x1C:
+    case 0x26:
+      return INTEL_ATOM_BONNELL;
+    case 0x27:
+    case 0x35:
+    case 0x36:
+      return INTEL_ATOM_SALTWELL;
+    case 0x37:
+    case 0x4A:
+    case 0x4D:
+    case 0x5D:
+      return INTEL_ATOM_SILVERMONT;
+    case 0x4C:
+    case 0x5A:
+    case 0x75:
+      return INTEL_ATOM_AIRMONT;
+    case 0x5C:
+    case 0x5F:
+      return INTEL_ATOM_GOLDMONT;
+    case 0x7A:
+      return INTEL_ATOM_GOLDMONT_PLUS;
+    case 0xAF:
+      return INTEL_ATOM_SIERRAFOREST;
+    case 0xB6:
+      return INTEL_ATOM_GRANDRIDGE;
+    case 0x86:
+    case 0x96:
+    case 0x9C:
+      return INTEL_ATOM_TREMONT;
+    case 0x0F:
+    case 0x16:
+      return INTEL_BIGCORE_MEROM;
+    case 0x17:
+      return INTEL_BIGCORE_PENRYN;
+    case 0x1D:
+      return INTEL_BIGCORE_DUNNINGTON;
+    case 0x1A:
+    case 0x1E:
+    case 0x1F:
+    case 0x2E:
+      return INTEL_BIGCORE_NEHALEM;
+    case 0x25:
+    case 0x2C:
+    case 0x2F:
+      return INTEL_BIGCORE_WESTMERE;
+    case 0x2A:
+    case 0x2D:
+      return INTEL_BIGCORE_SANDYBRIDGE;
+    case 0x3A:
+    case 0x3E:
+      return INTEL_BIGCORE_IVYBRIDGE;
+    case 0x3C:
+    case 0x3F:
+    case 0x45:
+    case 0x46:
+      return INTEL_BIGCORE_HASWELL;
+    case 0x3D:
+    case 0x47:
+    case 0x4F:
+    case 0x56:
+      return INTEL_BIGCORE_BROADWELL;
+    case 0x4E:
+    case 0x5E:
+      return INTEL_BIGCORE_SKYLAKE;
+    case 0x8E:
+      switch (stepping)
+	{
+	case 0x09:
+	  return INTEL_BIGCORE_AMBERLAKE;
+	case 0x0A:
+	  return INTEL_BIGCORE_COFFEELAKE;
+	case 0x0B:
+	case 0x0C:
+	  return INTEL_BIGCORE_WHISKEYLAKE;
+	default:
+	  return INTEL_BIGCORE_KABYLAKE;
+	}
+    case 0x9E:
+      switch (stepping)
+	{
+	case 0x0A:
+	case 0x0B:
+	case 0x0C:
+	case 0x0D:
+	  return INTEL_BIGCORE_COFFEELAKE;
+	default:
+	  return INTEL_BIGCORE_KABYLAKE;
+	}
+    case 0xA5:
+    case 0xA6:
+      return INTEL_BIGCORE_COMETLAKE;
+    case 0x66:
+      return INTEL_BIGCORE_CANNONLAKE;
+    case 0x55:
+      switch (stepping)
+	{
+	case 0x06:
+	case 0x07:
+	  return INTEL_BIGCORE_CASCADELAKE;
+	case 0x0b:
+	  return INTEL_BIGCORE_COOPERLAKE;
+	default:
+	  return INTEL_BIGCORE_SKYLAKE_AVX512;
+	}
+    case 0x6A:
+    case 0x6C:
+    case 0x7D:
+    case 0x7E:
+    case 0x9D:
+      return INTEL_BIGCORE_ICELAKE;
+    case 0x8C:
+    case 0x8D:
+      return INTEL_BIGCORE_TIGERLAKE;
+    case 0xA7:
+      return INTEL_BIGCORE_ROCKETLAKE;
+    case 0x8F:
+      return INTEL_BIGCORE_SAPPHIRERAPIDS;
+    case 0xB7:
+    case 0xBA:
+    case 0xBF:
+      return INTEL_BIGCORE_RAPTORLAKE;
+    case 0xCF:
+      return INTEL_BIGCORE_EMERALDRAPIDS;
+    case 0xAA:
+    case 0xAC:
+      return INTEL_BIGCORE_METEORLAKE;
+    case 0xbd:
+      return INTEL_BIGCORE_LUNARLAKE;
+    case 0xc6:
+      return INTEL_BIGCORE_ARROWLAKE;
+    case 0xAD:
+    case 0xAE:
+      return INTEL_BIGCORE_GRANITERAPIDS;
+    case 0x8A:
+      return INTEL_MIXED_LAKEFIELD;
+    case 0x97:
+    case 0x9A:
+    case 0xBE:
+      return INTEL_MIXED_ALDERLAKE;
+    case 0x85:
+      return INTEL_KNIGHTS_MILL;
+    case 0x57:
+      return INTEL_KNIGHTS_LANDING;
+    default:
+      return INTEL_UNKNOWN;
+    }
+}
+
 static inline void
 init_cpu_features (struct cpu_features *cpu_features)
 {
@@ -453,129 +664,149 @@ init_cpu_features (struct cpu_features *cpu_features)
       if (family == 0x06)
 	{
 	  model += extended_model;
-	  switch (model)
+	  unsigned int microarch
+	      = intel_get_fam6_microarch (model, stepping);
+
+	  switch (microarch)
 	    {
-	    case 0x1c:
-	    case 0x26:
-	      /* BSF is slow on Atom.  */
+	      /* Atom / KNL tuning.  */
+	    case INTEL_ATOM_BONNELL:
+	      /* BSF is slow on Bonnell.  */
 	      cpu_features->preferred[index_arch_Slow_BSF]
-		|= bit_arch_Slow_BSF;
+		  |= bit_arch_Slow_BSF;
 	      break;
 
-	    case 0x57:
-	      /* Knights Landing.  Enable Silvermont optimizations.  */
-
-	    case 0x7a:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont Plus.  */
-
-	    case 0x5c:
-	    case 0x5f:
 	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont.  */
+		     on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
+	    case INTEL_ATOM_AIRMONT:
+	    case INTEL_ATOM_SILVERMONT:
+	    case INTEL_ATOM_GOLDMONT:
+	    case INTEL_ATOM_GOLDMONT_PLUS:
 
-	    case 0x4c:
-	    case 0x5a:
-	    case 0x75:
-	      /* Airmont is a die shrink of Silvermont.  */
+            /* Knights Landing.  Enable Silvermont optimizations.  */
+	    case INTEL_KNIGHTS_LANDING:
 
-	    case 0x37:
-	    case 0x4a:
-	    case 0x4d:
-	    case 0x5d:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Silvermont.  */
 	      cpu_features->preferred[index_arch_Fast_Unaligned_Load]
-		|= (bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
 	      break;
 
-	    case 0x86:
-	    case 0x96:
-	    case 0x9c:
+	    case INTEL_ATOM_TREMONT:
 	      /* Enable rep string instructions, unaligned load, unaligned
-	         copy, pminub and avoid SSE 4.2 on Tremont.  */
+		 copy, pminub and avoid SSE 4.2 on Tremont.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
+	      break;
+
+	      /* Untuned KNL microarch.  */
+	    case INTEL_KNIGHTS_MILL:
+	      /* Untuned atom microarch.  */
+	    case INTEL_ATOM_SIERRAFOREST:
+	    case INTEL_ATOM_GRANDRIDGE:
+	    case INTEL_ATOM_SALTWELL:
 	      break;
 
+	      /* Bigcore Tuning.  */
+	    case INTEL_UNKNOWN:
 	    default:
 	      /* Unknown family 0x06 processors.  Assuming this is one
 		 of Core i3/i5/i7 processors if AVX is available.  */
 	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
 		break;
-	      /* Fall through.  */
-
-	    case 0x1a:
-	    case 0x1e:
-	    case 0x1f:
-	    case 0x25:
-	    case 0x2c:
-	    case 0x2e:
-	    case 0x2f:
+	    case INTEL_BIGCORE_NEHALEM:
+	    case INTEL_BIGCORE_WESTMERE:
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop);
+		  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop);
+	      break;
+
+	      /* Untuned Bigcore microarch.  */
+	    case INTEL_BIGCORE_SANDYBRIDGE:
+	    case INTEL_BIGCORE_IVYBRIDGE:
+	    case INTEL_BIGCORE_HASWELL:
+	    case INTEL_BIGCORE_BROADWELL:
+	    case INTEL_BIGCORE_SKYLAKE:
+	    case INTEL_BIGCORE_AMBERLAKE:
+	    case INTEL_BIGCORE_COFFEELAKE:
+	    case INTEL_BIGCORE_WHISKEYLAKE:
+	    case INTEL_BIGCORE_KABYLAKE:
+	    case INTEL_BIGCORE_COMETLAKE:
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	    case INTEL_BIGCORE_CASCADELAKE:
+	    case INTEL_BIGCORE_COOPERLAKE:
+	    case INTEL_BIGCORE_CANNONLAKE:
+	    case INTEL_BIGCORE_ICELAKE:
+	    case INTEL_BIGCORE_TIGERLAKE:
+	    case INTEL_BIGCORE_ROCKETLAKE:
+	    case INTEL_BIGCORE_RAPTORLAKE:
+	    case INTEL_BIGCORE_METEORLAKE:
+	    case INTEL_BIGCORE_LUNARLAKE:
+	    case INTEL_BIGCORE_ARROWLAKE:
+	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
+	    case INTEL_BIGCORE_EMERALDRAPIDS:
+	    case INTEL_BIGCORE_GRANITERAPIDS:
+	      break;
+
+	    /* Untuned Mixed (bigcore + atom SOC).  */
+	    case INTEL_MIXED_LAKEFIELD:
+	    case INTEL_MIXED_ALDERLAKE:
 	      break;
 	    }
 
-	 /* Disable TSX on some processors to avoid TSX on kernels that
-	    weren't updated with the latest microcode package (which
-	    disables broken feature by default).  */
-	 switch (model)
+	      /* Disable TSX on some processors to avoid TSX on kernels that
+		 weren't updated with the latest microcode package (which
+		 disables broken feature by default).  */
+	  switch (microarch)
 	    {
-	    case 0x55:
-	      if (stepping <= 5)
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	      /* 0x55 && stepping <= 5 is SKYLAKE_AVX512. Cascadelake and
+	         Cooperlake also have model == 0x55 so double check the
+	         stepping to be safe.  */
+	      if (model == 0x55 && stepping <= 5)
 		goto disable_tsx;
 	      break;
-	    case 0x8e:
-	      /* NB: Although the errata documents that for model == 0x8e,
-		 only 0xb stepping or lower are impacted, the intention of
-		 the errata was to disable TSX on all client processors on
-		 all steppings.  Include 0xc stepping which is an Intel
-		 Core i7-8665U, a client mobile processor.  */
-	    case 0x9e:
-	      if (stepping > 0xc)
+
+	    case INTEL_BIGCORE_SKYLAKE:
+	    case INTEL_BIGCORE_AMBERLAKE:
+	    case INTEL_BIGCORE_COFFEELAKE:
+	    case INTEL_BIGCORE_WHISKEYLAKE:
+	    case INTEL_BIGCORE_KABYLAKE:
+		/* NB: Although the errata documents that for model == 0x8e
+		   (skylake client), only 0xb stepping or lower are impacted,
+		   the intention of the errata was to disable TSX on all client
+		   processors on all steppings.  Include 0xc stepping which is
+		   an Intel Core i7-8665U, a client mobile processor.  */
+		if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
 		break;
-	      /* Fall through.  */
-	    case 0x4e:
-	    case 0x5e:
-	      {
+
 		/* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
 		   processors listed in:
 
 https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
 		 */
-disable_tsx:
+	    disable_tsx:
 		CPU_FEATURE_UNSET (cpu_features, HLE);
 		CPU_FEATURE_UNSET (cpu_features, RTM);
 		CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
-	      }
-	      break;
-	    case 0x3f:
-	      /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
-	      if (stepping >= 4)
 		break;
-	      /* Fall through.  */
-	    case 0x3c:
-	    case 0x45:
-	    case 0x46:
-	      /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
-		 with stepping >= 4) to avoid TSX on kernels that weren't
-		 updated with the latest microcode package (which disables
-		 broken feature by default).  */
-	      CPU_FEATURE_UNSET (cpu_features, RTM);
-	      break;
+
+	    case INTEL_BIGCORE_HASWELL:
+		/* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
+		   TSX.  Haswell also include other model numbers that have
+		   working TSX.  */
+		if (model == 0x3f && stepping >= 4)
+		break;
+
+		CPU_FEATURE_UNSET (cpu_features, RTM);
+		break;
 	    }
 	}
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v5 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific
  2023-05-09  3:13 ` [PATCH v5 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
  2023-05-09  3:13   ` [PATCH v5 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
@ 2023-05-09  3:13   ` Noah Goldstein
  1 sibling, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-09  3:13 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Different systems prefer different divisors.

From benchmarks[1] so far the following divisors have been found:
    ICX     : 2
    SKX     : 2
    BWD     : 8

For Intel, we are generalizing that BWD and older prefer 8 as a
divisor, and SKL and newer prefer 2. These values can be further tuned
as more benchmarks are run.

[1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
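
As a reduced sketch (not the actual glibc code; the 35 MB L3 below is
just an assumed example value), the resulting default threshold is
simply the shared L3 size divided by the per-microarch divisor:

    #include <stdio.h>

    /* Each microarch supplies a divisor (8 for BWD and older, 2 for SKL
       and newer, 4 as the generic default); the default threshold is the
       shared L3 size divided by it.  */
    static unsigned long
    nt_threshold (unsigned long shared_l3, unsigned long divisor)
    {
      if (divisor == 0)
        divisor = 4;                    /* generic fallback */
      return shared_l3 / divisor;
    }

    int
    main (void)
    {
      unsigned long l3 = 35UL << 20;    /* assumed 35 MB shared L3 */
      printf ("BWD-style (div 8): %lu KB\n", nt_threshold (l3, 8) >> 10);
      printf ("SKX-style (div 2): %lu KB\n", nt_threshold (l3, 2) >> 10);
      return 0;
    }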
---
 sysdeps/x86/cpu-features.c         | 11 ++++++----
 sysdeps/x86/dl-cacheinfo.h         | 32 ++++++++++++++++++------------
 sysdeps/x86/include/cpu-features.h |  3 +++
 3 files changed, 29 insertions(+), 17 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index bec70c3c49..3c1a77906a 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -637,6 +637,7 @@ init_cpu_features (struct cpu_features *cpu_features)
   unsigned int stepping = 0;
   enum cpu_features_kind kind;
 
+  cpu_features->cachesize_non_temporal_divisor = 4;
 #if !HAS_CPUID
   if (__get_cpuid_max (0, 0) == 0)
     {
@@ -720,6 +721,8 @@ init_cpu_features (struct cpu_features *cpu_features)
 		break;
 	    case INTEL_BIGCORE_NEHALEM:
 	    case INTEL_BIGCORE_WESTMERE:
+	      /* Older CPUs prefer non-temporal stores at lower threshold.  */
+	      cpu_features->cachesize_non_temporal_divisor = 8;
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
@@ -728,11 +731,12 @@ init_cpu_features (struct cpu_features *cpu_features)
 		      | bit_arch_Prefer_PMINUB_for_stringop);
 	      break;
 
-	      /* Untuned Bigcore microarch.  */
 	    case INTEL_BIGCORE_SANDYBRIDGE:
 	    case INTEL_BIGCORE_IVYBRIDGE:
 	    case INTEL_BIGCORE_HASWELL:
 	    case INTEL_BIGCORE_BROADWELL:
+	      cpu_features->cachesize_non_temporal_divisor = 8;
+	      break;
 	    case INTEL_BIGCORE_SKYLAKE:
 	    case INTEL_BIGCORE_AMBERLAKE:
 	    case INTEL_BIGCORE_COFFEELAKE:
@@ -753,11 +757,10 @@ init_cpu_features (struct cpu_features *cpu_features)
 	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
 	    case INTEL_BIGCORE_EMERALDRAPIDS:
 	    case INTEL_BIGCORE_GRANITERAPIDS:
-	      break;
-
-	    /* Untuned Mixed (bigcore + atom SOC).  */
+	    /* Mixed (bigcore + atom SOC).  */
 	    case INTEL_MIXED_LAKEFIELD:
 	    case INTEL_MIXED_ALDERLAKE:
+	      cpu_features->cachesize_non_temporal_divisor = 2;
 	      break;
 	    }
 
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index c7e41029fa..6225c852f6 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -738,19 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 1/4 of size
-     of the chip's cache. For most Intel and AMD processors with an
-     initial release date between 2017 and 2023, a thread's typical
-     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
-     estimate the point where non-temporal stores begin outcompeting
-     REP MOVSB. As well the point where the fact that non-temporal
-     stores are forced back to main memory would already occurred to the
-     majority of the lines in the copy. Note, concerns about the
-     entire L3 cache being evicted by the copy are mostly alleviated
-     by the fact that modern HW detects streaming patterns and
-     provides proper LRU hints so that the maximum thrashing
-     capped at 1/associativity. */
-  unsigned long int non_temporal_threshold = shared / 4;
+  unsigned long int cachesize_non_temporal_divisor
+      = cpu_features->cachesize_non_temporal_divisor;
+  if (cachesize_non_temporal_divisor <= 0)
+    cachesize_non_temporal_divisor = 4;
+
+  /* The default setting for the non_temporal threshold is [1/2, 1/8] of the
+     size of the chip's cache (depending on `cachesize_non_temporal_divisor`,
+     which is microarch specific; the default is 1/4). For most Intel and AMD
+     processors with an initial release date between 2017 and 2023, a
+     thread's typical share of the cache is from 18-64MB. Using a reasonable
+     fraction of L3 is meant to estimate the point where non-temporal stores
+     begin outcompeting REP MOVSB, as well as the point where the fact that
+     non-temporal stores are forced back to main memory would have already
+     occurred for the majority of the lines in the copy. Note, concerns about
+     the entire L3 cache being evicted by the copy are mostly alleviated by
+     the fact that modern HW detects streaming patterns and provides proper
+     LRU hints so that the maximum thrashing is capped at 1/associativity. */
+  unsigned long int non_temporal_threshold
+      = shared / cachesize_non_temporal_divisor;
   /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
      a higher risk of actually thrashing the cache as they don't have a HW LRU
      hint. As well, there performance in highly parallel situations is
diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
index 40b8129d6a..f5b9dd54fe 100644
--- a/sysdeps/x86/include/cpu-features.h
+++ b/sysdeps/x86/include/cpu-features.h
@@ -915,6 +915,9 @@ struct cpu_features
   unsigned long int shared_cache_size;
   /* Threshold to use non temporal store.  */
   unsigned long int non_temporal_threshold;
+  /* When no user non_temporal_threshold is specified, we default to
+     cachesize / cachesize_non_temporal_divisor.  */
+  unsigned long int cachesize_non_temporal_divisor;
   /* Threshold to use "rep movsb".  */
   unsigned long int rep_movsb_threshold;
   /* Threshold to stop using "rep movsb".  */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
  2023-05-05 18:06         ` H.J. Lu
@ 2023-05-09  3:14           ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-09  3:14 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Fri, May 5, 2023 at 1:07 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Wed, May 3, 2023 at 8:28 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Wed, Apr 26, 2023 at 12:15 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > On Wed, Apr 26, 2023 at 10:59 AM H.J. Lu <hjl.tools@gmail.com> wrote:
> > > >
> > > > On Tue, Apr 25, 2023 at 2:45 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > > >
> > > > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > > > ncores_per_socket'. This patch updates that value to roughly
> > > > > 'sizeof_L3 / 2`
> > > > >
> > > > > The original value (specifically dividing the `ncores_per_socket`) was
> > > > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > > > could evict.
> > > > >
> > > > > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > > > > non-temporal thresholds and leads to using non-temporal stores in
> > > > > cases where REP MOVSB is multiple times faster.
> > > > >
> > > > > Furthermore, non-temporal stores are written directly to main memory
> > > > > so using it at a size much smaller than L3 can place soon to be
> > > > > accessed data much further away than it otherwise could be. As well,
> > > > > modern machines are able to detect streaming patterns (especially if
> > > > > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > > > > in affect caps the total amount of eviction at 1/cache_associativity,
> > > > > far below meaningfully thrashing the entire cache.
> > > > >
> > > > > As best I can tell, the benchmarks that lead this small threshold
> > > > > where done comparing non-temporal stores versus standard cacheable
> > > > > stores. A better comparison (linked below) is to be REP MOVSB which,
> > > > > on the measure systems, is nearly 2x faster than non-temporal stores
> > > > > at the low-end of the previous threshold, and within 10% for over
> > > > > 100MB copies (well past even the current threshold). In cases with a
> > > > > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > > > > up to `sizeof_L3`.
> > > > >
> > > > > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > > > > stores where done using:
> > > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > > >
> > > > > Sheets results (also available in pdf on the github):
> > > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > > > ---
> > > > >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> > > > >  1 file changed, 43 insertions(+), 27 deletions(-)
> > > > >
> > > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > > index ec88945b39..4f1fd419f8 100644
> > > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > > > >  }
> > > > >
> > > > >  static void
> > > > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > > > >                  long int core)
> > > > >  {
> > > > >    unsigned int eax;
> > > > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > >    unsigned int family = cpu_features->basic.family;
> > > > >    unsigned int model = cpu_features->basic.model;
> > > > >    long int shared = *shared_ptr;
> > > > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > > > >    unsigned int threads = *threads_ptr;
> > > > >    bool inclusive_cache = true;
> > > > >    bool support_count_mask = true;
> > > > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > >        /* Try L2 otherwise.  */
> > > > >        level  = 2;
> > > > >        shared = core;
> > > > > +      shared_per_thread = core;
> > > > >        threads_l2 = 0;
> > > > >        threads_l3 = -1;
> > > > >      }
> > > > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > >          }
> > > > >        else
> > > > >          {
> > > > > -intel_bug_no_cache_info:
> > > > > -          /* Assume that all logical threads share the highest cache
> > > > > -             level.  */
> > > > > -          threads
> > > > > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > > -              & 0xff);
> > > > > -        }
> > > > > -
> > > > > -        /* Cap usage of highest cache level to the number of supported
> > > > > -           threads.  */
> > > > > -        if (shared > 0 && threads > 0)
> > > > > -          shared /= threads;
> > > > > +       intel_bug_no_cache_info:
> > > > > +         /* Assume that all logical threads share the highest cache
> > > > > +            level.  */
> > > > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > > +                    & 0xff);
> > > > > +
> > > > > +         /* Get per-thread size of highest level cache.  */
> > > > > +         if (shared_per_thread > 0 && threads > 0)
> > > > > +           shared_per_thread /= threads;
> > > > > +       }
> > > > >      }
> > > > >
> > > > >    /* Account for non-inclusive L2 and L3 caches.  */
> > > > >    if (!inclusive_cache)
> > > > >      {
> > > > >        if (threads_l2 > 0)
> > > > > -        core /= threads_l2;
> > > > > +       shared_per_thread += core / threads_l2;
> > > > >        shared += core;
> > > > >      }
> > > > >
> > > > >    *shared_ptr = shared;
> > > > > +  *shared_per_thread_ptr = shared_per_thread;
> > > > >    *threads_ptr = threads;
> > > > >  }
> > > > >
> > > > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >    /* Find out what brand of processor.  */
> > > > >    long int data = -1;
> > > > >    long int shared = -1;
> > > > > +  long int shared_per_thread = -1;
> > > > >    long int core = -1;
> > > > >    unsigned int threads = 0;
> > > > >    unsigned long int level1_icache_size = -1;
> > > > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > > > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > > > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > > > +      shared_per_thread = shared;
> > > > >
> > > > >        level1_icache_size
> > > > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >        level4_cache_size
> > > > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > > > >
> > > > > -      get_common_cache_info (&shared, &threads, core);
> > > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > > >      }
> > > > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > > > >      {
> > > > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > > > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > > > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > > > +      shared_per_thread = shared;
> > > > >
> > > > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > > > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > > > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > > > >
> > > > > -      get_common_cache_info (&shared, &threads, core);
> > > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > > >      }
> > > > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > > > >      {
> > > > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > > > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > > > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > > > +      shared_per_thread = shared;
> > > > >
> > > > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > > > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >        if (shared <= 0)
> > > > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > > > >         shared = core;
> > > > > +
> > > > > +      if (shared_per_thread <= 0)
> > > > > +       shared_per_thread = shared;
> > > > >      }
> > > > >
> > > > >    cpu_features->level1_icache_size = level1_icache_size;
> > > > > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > > > >    cpu_features->level4_cache_size = level4_cache_size;
> > > > >
> > > > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > > -     in cache after a maximum temporal copy, which will maintain
> > > > > -     in cache a reasonable portion of the thread's stack and other
> > > > > -     active data. If the threshold is set higher than one thread's
> > > > > -     share of the cache, it has a substantial risk of negatively
> > > > > -     impacting the performance of other threads running on the chip. */
> > > > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > > > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > > > > +     of the chip's cache. For most Intel and AMD processors with an
> > > > > +     initial release date between 2017 and 2023, a thread's typical
> > > > > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > > > > +     estimate the point where non-temporal stores begin outcompeting
> > > > > +     REP MOVSB. As well the point where the fact that non-temporal
> > > > > +     stores are forced back to main memory would already occurred to the
> > > > > +     majority of the lines in the copy. Note, concerns about the
> > > > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > > > +     by the fact that modern HW detects streaming patterns and
> > > > > +     provides proper LRU hints so that the maximum thrashing
> > > > > +     capped at 1/associativity. */
> > > > > +  unsigned long int non_temporal_threshold = shared / 2;
> > > > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > > > +     hint. As well, there performance in highly parallel situations is
> > > > > +     noticeably worse.  */
> > > > > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > > > > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> > > > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > > > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > > > --
> > > > > 2.34.1
> > > > >
> > > >
> > > > LGTM.
> > > >
> > > > Thanks.
> > > >
> > >
> > > Thanks.
> > >
> > > I'm currently running some benchmarks on Broadwell and Carlos is reproducing
> > > independently (on ICX I think), so will wait to push until all that
> > > has come to fruition.
> > > >
> > > > --
> > > > H.J.
> >
> > Carlos, I benchmarked on BWD:
> > https://docs.google.com/spreadsheets/d/1kfXonk4LAZXBySuPnfDenrTizZ52IN0vvLIt3k1ex9c/edit?usp=sharing
> > or
> > https://github.com/goldsteinn/memcpy-nt-benchmarks/blob/master/results-bwd-pdf/bwd-memcpy-0--standard.pdf
> >
> > On BWD, unlike SKX/ICX, non-temporal stores perform better than REP_MOVSB
> > and standard stores. Somewhat counter-intuitively, the results are most
> > pronounced in the single-threaded case.
> >
> > At roughly the 4MB range non-temporal stores become by far the best,
> > basically regardless of the number of threads.
> > The machine I tested on had 35MB of cache and 28 threads per socket, so
> > our current threshold is ~1MB, which is still too low. But the proposal
> > in this patch to do L3 / 2 is too high (~16MB in this case).
> >
> > At the current threshold, in the multithreaded case, between ~[1MB,
> > 4MB) non-temporal stores are 60-110% SLOWER than ERMS. OTOH, between
> > [4MB, 16MB] non-temporal stores are about 10-30% faster.
> >
> > I think we still have a net benefit from this patch, but maybe we want
> > to tune the exact percentage by CPU arch?
> > HJ, what do you think about that? SKX and newer -> L3 / 2. BWD and
> > older -> L3 / 8.
>
> This sounds good to me.
>
> > Then we can add additional cases for different machines as benchmarks indicate.
>
> Thanks.
>

Posted v5 to do this (in new series as title changes).

> --
> H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v5 2/3] x86: Refactor Intel `init_cpu_features`
  2023-05-09  3:13   ` [PATCH v5 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
@ 2023-05-09 21:58     ` H.J. Lu
  2023-05-10  0:33       ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: H.J. Lu @ 2023-05-09 21:58 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Mon, May 8, 2023 at 8:13 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> This patch should have no affect on existing functionality.
>
> The current code, which has a single switch for model detection and
> setting prefered features, is difficult to follow/extend. The cases
> use magic numbers and many microarchitectures are missing. This makes
> it difficult to reason about what is implemented so far and/or
> how/where to add support for new features.
>
> This patch splits the model detection and preference setting stages so
> that CPU preferences can be set based on a complete list of available
> microarchitectures, rather than based on model magic numbers.
> ---
>  sysdeps/x86/cpu-features.c | 401 +++++++++++++++++++++++++++++--------
>  1 file changed, 316 insertions(+), 85 deletions(-)
>
> diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> index 5bff8ec0b4..bec70c3c49 100644
> --- a/sysdeps/x86/cpu-features.c
> +++ b/sysdeps/x86/cpu-features.c
> @@ -417,6 +417,217 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
>                      == index_arch_Fast_Copy_Backward)),
>                 "Incorrect index_arch_Fast_Unaligned_Load");
>
> +
> +/* Intel Family-6 microarch list.  */
> +enum
> +{
> +  /* Atom processors.  */
> +  INTEL_ATOM_BONNELL,
> +  INTEL_ATOM_SALTWELL,
> +  INTEL_ATOM_SILVERMONT,
> +  INTEL_ATOM_AIRMONT,
> +  INTEL_ATOM_GOLDMONT,
> +  INTEL_ATOM_GOLDMONT_PLUS,
> +  INTEL_ATOM_SIERRAFOREST,
> +  INTEL_ATOM_GRANDRIDGE,
> +  INTEL_ATOM_TREMONT,
> +
> +  /* Bigcore processors.  */
> +  INTEL_BIGCORE_MEROM,
> +  INTEL_BIGCORE_PENRYN,
> +  INTEL_BIGCORE_DUNNINGTON,
> +  INTEL_BIGCORE_NEHALEM,
> +  INTEL_BIGCORE_WESTMERE,
> +  INTEL_BIGCORE_SANDYBRIDGE,
> +  INTEL_BIGCORE_IVYBRIDGE,
> +  INTEL_BIGCORE_HASWELL,
> +  INTEL_BIGCORE_BROADWELL,
> +  INTEL_BIGCORE_SKYLAKE,
> +  INTEL_BIGCORE_AMBERLAKE,
> +  INTEL_BIGCORE_COFFEELAKE,
> +  INTEL_BIGCORE_WHISKEYLAKE,
> +  INTEL_BIGCORE_KABYLAKE,
> +  INTEL_BIGCORE_COMETLAKE,
> +  INTEL_BIGCORE_SKYLAKE_AVX512,
> +  INTEL_BIGCORE_CANNONLAKE,
> +  INTEL_BIGCORE_CASCADELAKE,
> +  INTEL_BIGCORE_COOPERLAKE,
> +  INTEL_BIGCORE_ICELAKE,
> +  INTEL_BIGCORE_TIGERLAKE,
> +  INTEL_BIGCORE_ROCKETLAKE,
> +  INTEL_BIGCORE_SAPPHIRERAPIDS,
> +  INTEL_BIGCORE_RAPTORLAKE,
> +  INTEL_BIGCORE_EMERALDRAPIDS,
> +  INTEL_BIGCORE_METEORLAKE,
> +  INTEL_BIGCORE_LUNARLAKE,
> +  INTEL_BIGCORE_ARROWLAKE,
> +  INTEL_BIGCORE_GRANITERAPIDS,
> +
> +  /* Mixed (bigcore + atom SOC).  */
> +  INTEL_MIXED_LAKEFIELD,
> +  INTEL_MIXED_ALDERLAKE,
> +
> +  /* KNL.  */
> +  INTEL_KNIGHTS_MILL,
> +  INTEL_KNIGHTS_LANDING,
> +
> +  /* Unknown.  */
> +  INTEL_UNKNOWN,
> +};
> +
> +static unsigned int
> +intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
> +{
> +  switch (model)
> +    {
> +    case 0x1C:
> +    case 0x26:
> +      return INTEL_ATOM_BONNELL;
> +    case 0x27:
> +    case 0x35:
> +    case 0x36:
> +      return INTEL_ATOM_SALTWELL;
> +    case 0x37:
> +    case 0x4A:
> +    case 0x4D:
> +    case 0x5D:
> +      return INTEL_ATOM_SILVERMONT;
> +    case 0x4C:
> +    case 0x5A:
> +    case 0x75:
> +      return INTEL_ATOM_AIRMONT;
> +    case 0x5C:
> +    case 0x5F:
> +      return INTEL_ATOM_GOLDMONT;
> +    case 0x7A:
> +      return INTEL_ATOM_GOLDMONT_PLUS;
> +    case 0xAF:
> +      return INTEL_ATOM_SIERRAFOREST;
> +    case 0xB6:
> +      return INTEL_ATOM_GRANDRIDGE;
> +    case 0x86:
> +    case 0x96:
> +    case 0x9C:
> +      return INTEL_ATOM_TREMONT;
> +    case 0x0F:
> +    case 0x16:
> +      return INTEL_BIGCORE_MEROM;
> +    case 0x17:
> +      return INTEL_BIGCORE_PENRYN;
> +    case 0x1D:
> +      return INTEL_BIGCORE_DUNNINGTON;
> +    case 0x1A:
> +    case 0x1E:
> +    case 0x1F:
> +    case 0x2E:
> +      return INTEL_BIGCORE_NEHALEM;
> +    case 0x25:
> +    case 0x2C:
> +    case 0x2F:
> +      return INTEL_BIGCORE_WESTMERE;
> +    case 0x2A:
> +    case 0x2D:
> +      return INTEL_BIGCORE_SANDYBRIDGE;
> +    case 0x3A:
> +    case 0x3E:
> +      return INTEL_BIGCORE_IVYBRIDGE;
> +    case 0x3C:
> +    case 0x3F:
> +    case 0x45:
> +    case 0x46:
> +      return INTEL_BIGCORE_HASWELL;
> +    case 0x3D:
> +    case 0x47:
> +    case 0x4F:
> +    case 0x56:
> +      return INTEL_BIGCORE_BROADWELL;
> +    case 0x4E:
> +    case 0x5E:
> +      return INTEL_BIGCORE_SKYLAKE;
> +    case 0x8E:
> +      switch (stepping)
> +       {
> +       case 0x09:
> +         return INTEL_BIGCORE_AMBERLAKE;
> +       case 0x0A:
> +         return INTEL_BIGCORE_COFFEELAKE;
> +       case 0x0B:
> +       case 0x0C:
> +         return INTEL_BIGCORE_WHISKEYLAKE;
> +       default:
> +         return INTEL_BIGCORE_KABYLAKE;
> +       }
> +    case 0x9E:
> +      switch (stepping)
> +       {
> +       case 0x0A:
> +       case 0x0B:
> +       case 0x0C:
> +       case 0x0D:
> +         return INTEL_BIGCORE_COFFEELAKE;
> +       default:
> +         return INTEL_BIGCORE_KABYLAKE;
> +       }
> +    case 0xA5:
> +    case 0xA6:
> +      return INTEL_BIGCORE_COMETLAKE;
> +    case 0x66:
> +      return INTEL_BIGCORE_CANNONLAKE;
> +    case 0x55:
> +      switch (stepping)
> +       {
> +       case 0x06:
> +       case 0x07:
> +         return INTEL_BIGCORE_CASCADELAKE;
> +       case 0x0b:
> +         return INTEL_BIGCORE_COOPERLAKE;
> +       default:
> +         return INTEL_BIGCORE_SKYLAKE_AVX512;
> +       }
> +    case 0x6A:
> +    case 0x6C:
> +    case 0x7D:
> +    case 0x7E:
> +    case 0x9D:
> +      return INTEL_BIGCORE_ICELAKE;
> +    case 0x8C:
> +    case 0x8D:
> +      return INTEL_BIGCORE_TIGERLAKE;
> +    case 0xA7:
> +      return INTEL_BIGCORE_ROCKETLAKE;
> +    case 0x8F:
> +      return INTEL_BIGCORE_SAPPHIRERAPIDS;
> +    case 0xB7:
> +    case 0xBA:
> +    case 0xBF:
> +      return INTEL_BIGCORE_RAPTORLAKE;
> +    case 0xCF:
> +      return INTEL_BIGCORE_EMERALDRAPIDS;
> +    case 0xAA:
> +    case 0xAC:
> +      return INTEL_BIGCORE_METEORLAKE;
> +    case 0xbd:
> +      return INTEL_BIGCORE_LUNARLAKE;
> +    case 0xc6:
> +      return INTEL_BIGCORE_ARROWLAKE;
> +    case 0xAD:
> +    case 0xAE:
> +      return INTEL_BIGCORE_GRANITERAPIDS;
> +    case 0x8A:
> +      return INTEL_MIXED_LAKEFIELD;
> +    case 0x97:
> +    case 0x9A:
> +    case 0xBE:
> +      return INTEL_MIXED_ALDERLAKE;
> +    case 0x85:
> +      return INTEL_KNIGHTS_MILL;
> +    case 0x57:
> +      return INTEL_KNIGHTS_LANDING;
> +    default:
> +      return INTEL_UNKNOWN;
> +    }
> +}
> +
>  static inline void
>  init_cpu_features (struct cpu_features *cpu_features)
>  {
> @@ -453,129 +664,149 @@ init_cpu_features (struct cpu_features *cpu_features)
>        if (family == 0x06)
>         {
>           model += extended_model;
> -         switch (model)
> +         unsigned int microarch
> +             = intel_get_fam6_microarch (model, stepping);
> +
> +         switch (microarch)
>             {
> -           case 0x1c:
> -           case 0x26:
> -             /* BSF is slow on Atom.  */
> +             /* Atom / KNL tuning.  */
> +           case INTEL_ATOM_BONNELL:

Since Saltwell is a shrink of Bonnell, INTEL_ATOM_SALTWELL
should be added here.

> +             /* BSF is slow on Bonnell.  */
>               cpu_features->preferred[index_arch_Slow_BSF]
> -               |= bit_arch_Slow_BSF;
> +                 |= bit_arch_Slow_BSF;
>               break;
>
> -           case 0x57:
> -             /* Knights Landing.  Enable Silvermont optimizations.  */
> -
> -           case 0x7a:
> -             /* Unaligned load versions are faster than SSSE3
> -                on Goldmont Plus.  */
> -
> -           case 0x5c:
> -           case 0x5f:
>               /* Unaligned load versions are faster than SSSE3
> -                on Goldmont.  */
> +                    on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
> +           case INTEL_ATOM_AIRMONT:
> +           case INTEL_ATOM_SILVERMONT:
> +           case INTEL_ATOM_GOLDMONT:
> +           case INTEL_ATOM_GOLDMONT_PLUS:
>
> -           case 0x4c:
> -           case 0x5a:
> -           case 0x75:
> -             /* Airmont is a die shrink of Silvermont.  */
> +            /* Knights Landing.  Enable Silvermont optimizations.  */
> +           case INTEL_KNIGHTS_LANDING:
>
> -           case 0x37:
> -           case 0x4a:
> -           case 0x4d:
> -           case 0x5d:
> -             /* Unaligned load versions are faster than SSSE3
> -                on Silvermont.  */
>               cpu_features->preferred[index_arch_Fast_Unaligned_Load]
> -               |= (bit_arch_Fast_Unaligned_Load
> -                   | bit_arch_Fast_Unaligned_Copy
> -                   | bit_arch_Prefer_PMINUB_for_stringop
> -                   | bit_arch_Slow_SSE4_2);
> +                 |= (bit_arch_Fast_Unaligned_Load
> +                     | bit_arch_Fast_Unaligned_Copy
> +                     | bit_arch_Prefer_PMINUB_for_stringop
> +                     | bit_arch_Slow_SSE4_2);
>               break;
>
> -           case 0x86:
> -           case 0x96:
> -           case 0x9c:
> +           case INTEL_ATOM_TREMONT:
>               /* Enable rep string instructions, unaligned load, unaligned
> -                copy, pminub and avoid SSE 4.2 on Tremont.  */
> +                copy, pminub and avoid SSE 4.2 on Tremont.  */
>               cpu_features->preferred[index_arch_Fast_Rep_String]
> -               |= (bit_arch_Fast_Rep_String
> -                   | bit_arch_Fast_Unaligned_Load
> -                   | bit_arch_Fast_Unaligned_Copy
> -                   | bit_arch_Prefer_PMINUB_for_stringop
> -                   | bit_arch_Slow_SSE4_2);
> +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> +                     | bit_arch_Fast_Unaligned_Copy
> +                     | bit_arch_Prefer_PMINUB_for_stringop
> +                     | bit_arch_Slow_SSE4_2);
> +             break;
> +
> +             /* Untuned KNL microarch.  */
> +           case INTEL_KNIGHTS_MILL:
> +             /* Untuned atom microarch.  */
> +           case INTEL_ATOM_SIERRAFOREST:
> +           case INTEL_ATOM_GRANDRIDGE:
> +           case INTEL_ATOM_SALTWELL:
>               break;

"break" should be removed to enable the optimizations
for processors with AVX.

>
> +             /* Bigcore Tuning.  */
> +           case INTEL_UNKNOWN:
>             default:
>               /* Unknown family 0x06 processors.  Assuming this is one
>                  of Core i3/i5/i7 processors if AVX is available.  */
>               if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
>                 break;
> -             /* Fall through.  */
> -
> -           case 0x1a:
> -           case 0x1e:
> -           case 0x1f:
> -           case 0x25:
> -           case 0x2c:
> -           case 0x2e:
> -           case 0x2f:
> +           case INTEL_BIGCORE_NEHALEM:
> +           case INTEL_BIGCORE_WESTMERE:
>               /* Rep string instructions, unaligned load, unaligned copy,
>                  and pminub are fast on Intel Core i3, i5 and i7.  */
>               cpu_features->preferred[index_arch_Fast_Rep_String]
> -               |= (bit_arch_Fast_Rep_String
> -                   | bit_arch_Fast_Unaligned_Load
> -                   | bit_arch_Fast_Unaligned_Copy
> -                   | bit_arch_Prefer_PMINUB_for_stringop);
> +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> +                     | bit_arch_Fast_Unaligned_Copy
> +                     | bit_arch_Prefer_PMINUB_for_stringop);
> +             break;
> +
> +             /* Untuned Bigcore microarch.  */
> +           case INTEL_BIGCORE_SANDYBRIDGE:
> +           case INTEL_BIGCORE_IVYBRIDGE:
> +           case INTEL_BIGCORE_HASWELL:
> +           case INTEL_BIGCORE_BROADWELL:
> +           case INTEL_BIGCORE_SKYLAKE:
> +           case INTEL_BIGCORE_AMBERLAKE:
> +           case INTEL_BIGCORE_COFFEELAKE:
> +           case INTEL_BIGCORE_WHISKEYLAKE:
> +           case INTEL_BIGCORE_KABYLAKE:
> +           case INTEL_BIGCORE_COMETLAKE:
> +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> +           case INTEL_BIGCORE_CASCADELAKE:
> +           case INTEL_BIGCORE_COOPERLAKE:
> +           case INTEL_BIGCORE_CANNONLAKE:
> +           case INTEL_BIGCORE_ICELAKE:
> +           case INTEL_BIGCORE_TIGERLAKE:
> +           case INTEL_BIGCORE_ROCKETLAKE:
> +           case INTEL_BIGCORE_RAPTORLAKE:
> +           case INTEL_BIGCORE_METEORLAKE:
> +           case INTEL_BIGCORE_LUNARLAKE:
> +           case INTEL_BIGCORE_ARROWLAKE:
> +           case INTEL_BIGCORE_SAPPHIRERAPIDS:
> +           case INTEL_BIGCORE_EMERALDRAPIDS:
> +           case INTEL_BIGCORE_GRANITERAPIDS:
> +             break;
> +
> +           /* Untuned Mixed (bigcore + atom SOC).  */
> +           case INTEL_MIXED_LAKEFIELD:
> +           case INTEL_MIXED_ALDERLAKE:

All these processors should be treated as default.

>               break;
>             }
>
> -        /* Disable TSX on some processors to avoid TSX on kernels that
> -           weren't updated with the latest microcode package (which
> -           disables broken feature by default).  */
> -        switch (model)
> +             /* Disable TSX on some processors to avoid TSX on kernels that
> +                weren't updated with the latest microcode package (which
> +                disables broken feature by default).  */
> +         switch (microarch)
>             {
> -           case 0x55:
> -             if (stepping <= 5)
> +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> +             /* 0x55 && stepping <= 5 is SKYLAKE_AVX512. Cascadelake and
> +                Cooperlake also have model == 0x55 so double check the
> +                stepping to be safe.  */
> +             if (model == 0x55 && stepping <= 5)

No need to check model == 0x55.

>                 goto disable_tsx;
>               break;
> -           case 0x8e:
> -             /* NB: Although the errata documents that for model == 0x8e,
> -                only 0xb stepping or lower are impacted, the intention of
> -                the errata was to disable TSX on all client processors on
> -                all steppings.  Include 0xc stepping which is an Intel
> -                Core i7-8665U, a client mobile processor.  */
> -           case 0x9e:
> -             if (stepping > 0xc)
> +
> +           case INTEL_BIGCORE_SKYLAKE:
> +           case INTEL_BIGCORE_AMBERLAKE:
> +           case INTEL_BIGCORE_COFFEELAKE:
> +           case INTEL_BIGCORE_WHISKEYLAKE:
> +           case INTEL_BIGCORE_KABYLAKE:
> +               /* NB: Although the errata documents that for model == 0x8e
> +                  (skylake client), only 0xb stepping or lower are impacted,
> +                  the intention of the errata was to disable TSX on all client
> +                  processors on all steppings.  Include 0xc stepping which is
> +                  an Intel Core i7-8665U, a client mobile processor.  */
> +               if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
>                 break;
> -             /* Fall through.  */
> -           case 0x4e:
> -           case 0x5e:
> -             {
> +
>                 /* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
>                    processors listed in:
>
>  https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
>                  */
> -disable_tsx:
> +           disable_tsx:
>                 CPU_FEATURE_UNSET (cpu_features, HLE);
>                 CPU_FEATURE_UNSET (cpu_features, RTM);
>                 CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
> -             }
> -             break;
> -           case 0x3f:
> -             /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
> -             if (stepping >= 4)
>                 break;
> -             /* Fall through.  */
> -           case 0x3c:
> -           case 0x45:
> -           case 0x46:
> -             /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
> -                with stepping >= 4) to avoid TSX on kernels that weren't
> -                updated with the latest microcode package (which disables
> -                broken feature by default).  */
> -             CPU_FEATURE_UNSET (cpu_features, RTM);
> -             break;
> +
> +           case INTEL_BIGCORE_HASWELL:
> +               /* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
> +                  TSX.  Haswell also include other model numbers that have
> +                  working TSX.  */
> +               if (model == 0x3f && stepping >= 4)
> +               break;
> +
> +               CPU_FEATURE_UNSET (cpu_features, RTM);
> +               break;
>             }
>         }
>
> --
> 2.34.1
>


-- 
H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v5 2/3] x86: Refactor Intel `init_cpu_features`
  2023-05-09 21:58     ` H.J. Lu
@ 2023-05-10  0:33       ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-10  0:33 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Tue, May 9, 2023 at 4:59 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Mon, May 8, 2023 at 8:13 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > This patch should have no affect on existing functionality.
> >
> > The current code, which has a single switch for model detection and
> > setting prefered features, is difficult to follow/extend. The cases
> > use magic numbers and many microarchitectures are missing. This makes
> > it difficult to reason about what is implemented so far and/or
> > how/where to add support for new features.
> >
> > This patch splits the model detection and preference setting stages so
> > that CPU preferences can be set based on a complete list of available
> > microarchitectures, rather than based on model magic numbers.
> > ---
> >  sysdeps/x86/cpu-features.c | 401 +++++++++++++++++++++++++++++--------
> >  1 file changed, 316 insertions(+), 85 deletions(-)
> >
> > diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> > index 5bff8ec0b4..bec70c3c49 100644
> > --- a/sysdeps/x86/cpu-features.c
> > +++ b/sysdeps/x86/cpu-features.c
> > @@ -417,6 +417,217 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
> >                      == index_arch_Fast_Copy_Backward)),
> >                 "Incorrect index_arch_Fast_Unaligned_Load");
> >
> > +
> > +/* Intel Family-6 microarch list.  */
> > +enum
> > +{
> > +  /* Atom processors.  */
> > +  INTEL_ATOM_BONNELL,
> > +  INTEL_ATOM_SALTWELL,
> > +  INTEL_ATOM_SILVERMONT,
> > +  INTEL_ATOM_AIRMONT,
> > +  INTEL_ATOM_GOLDMONT,
> > +  INTEL_ATOM_GOLDMONT_PLUS,
> > +  INTEL_ATOM_SIERRAFOREST,
> > +  INTEL_ATOM_GRANDRIDGE,
> > +  INTEL_ATOM_TREMONT,
> > +
> > +  /* Bigcore processors.  */
> > +  INTEL_BIGCORE_MEROM,
> > +  INTEL_BIGCORE_PENRYN,
> > +  INTEL_BIGCORE_DUNNINGTON,
> > +  INTEL_BIGCORE_NEHALEM,
> > +  INTEL_BIGCORE_WESTMERE,
> > +  INTEL_BIGCORE_SANDYBRIDGE,
> > +  INTEL_BIGCORE_IVYBRIDGE,
> > +  INTEL_BIGCORE_HASWELL,
> > +  INTEL_BIGCORE_BROADWELL,
> > +  INTEL_BIGCORE_SKYLAKE,
> > +  INTEL_BIGCORE_AMBERLAKE,
> > +  INTEL_BIGCORE_COFFEELAKE,
> > +  INTEL_BIGCORE_WHISKEYLAKE,
> > +  INTEL_BIGCORE_KABYLAKE,
> > +  INTEL_BIGCORE_COMETLAKE,
> > +  INTEL_BIGCORE_SKYLAKE_AVX512,
> > +  INTEL_BIGCORE_CANNONLAKE,
> > +  INTEL_BIGCORE_CASCADELAKE,
> > +  INTEL_BIGCORE_COOPERLAKE,
> > +  INTEL_BIGCORE_ICELAKE,
> > +  INTEL_BIGCORE_TIGERLAKE,
> > +  INTEL_BIGCORE_ROCKETLAKE,
> > +  INTEL_BIGCORE_SAPPHIRERAPIDS,
> > +  INTEL_BIGCORE_RAPTORLAKE,
> > +  INTEL_BIGCORE_EMERALDRAPIDS,
> > +  INTEL_BIGCORE_METEORLAKE,
> > +  INTEL_BIGCORE_LUNARLAKE,
> > +  INTEL_BIGCORE_ARROWLAKE,
> > +  INTEL_BIGCORE_GRANITERAPIDS,
> > +
> > +  /* Mixed (bigcore + atom SOC).  */
> > +  INTEL_MIXED_LAKEFIELD,
> > +  INTEL_MIXED_ALDERLAKE,
> > +
> > +  /* KNL.  */
> > +  INTEL_KNIGHTS_MILL,
> > +  INTEL_KNIGHTS_LANDING,
> > +
> > +  /* Unknown.  */
> > +  INTEL_UNKNOWN,
> > +};
> > +
> > +static unsigned int
> > +intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
> > +{
> > +  switch (model)
> > +    {
> > +    case 0x1C:
> > +    case 0x26:
> > +      return INTEL_ATOM_BONNELL;
> > +    case 0x27:
> > +    case 0x35:
> > +    case 0x36:
> > +      return INTEL_ATOM_SALTWELL;
> > +    case 0x37:
> > +    case 0x4A:
> > +    case 0x4D:
> > +    case 0x5D:
> > +      return INTEL_ATOM_SILVERMONT;
> > +    case 0x4C:
> > +    case 0x5A:
> > +    case 0x75:
> > +      return INTEL_ATOM_AIRMONT;
> > +    case 0x5C:
> > +    case 0x5F:
> > +      return INTEL_ATOM_GOLDMONT;
> > +    case 0x7A:
> > +      return INTEL_ATOM_GOLDMONT_PLUS;
> > +    case 0xAF:
> > +      return INTEL_ATOM_SIERRAFOREST;
> > +    case 0xB6:
> > +      return INTEL_ATOM_GRANDRIDGE;
> > +    case 0x86:
> > +    case 0x96:
> > +    case 0x9C:
> > +      return INTEL_ATOM_TREMONT;
> > +    case 0x0F:
> > +    case 0x16:
> > +      return INTEL_BIGCORE_MEROM;
> > +    case 0x17:
> > +      return INTEL_BIGCORE_PENRYN;
> > +    case 0x1D:
> > +      return INTEL_BIGCORE_DUNNINGTON;
> > +    case 0x1A:
> > +    case 0x1E:
> > +    case 0x1F:
> > +    case 0x2E:
> > +      return INTEL_BIGCORE_NEHALEM;
> > +    case 0x25:
> > +    case 0x2C:
> > +    case 0x2F:
> > +      return INTEL_BIGCORE_WESTMERE;
> > +    case 0x2A:
> > +    case 0x2D:
> > +      return INTEL_BIGCORE_SANDYBRIDGE;
> > +    case 0x3A:
> > +    case 0x3E:
> > +      return INTEL_BIGCORE_IVYBRIDGE;
> > +    case 0x3C:
> > +    case 0x3F:
> > +    case 0x45:
> > +    case 0x46:
> > +      return INTEL_BIGCORE_HASWELL;
> > +    case 0x3D:
> > +    case 0x47:
> > +    case 0x4F:
> > +    case 0x56:
> > +      return INTEL_BIGCORE_BROADWELL;
> > +    case 0x4E:
> > +    case 0x5E:
> > +      return INTEL_BIGCORE_SKYLAKE;
> > +    case 0x8E:
> > +      switch (stepping)
> > +       {
> > +       case 0x09:
> > +         return INTEL_BIGCORE_AMBERLAKE;
> > +       case 0x0A:
> > +         return INTEL_BIGCORE_COFFEELAKE;
> > +       case 0x0B:
> > +       case 0x0C:
> > +         return INTEL_BIGCORE_WHISKEYLAKE;
> > +       default:
> > +         return INTEL_BIGCORE_KABYLAKE;
> > +       }
> > +    case 0x9E:
> > +      switch (stepping)
> > +       {
> > +       case 0x0A:
> > +       case 0x0B:
> > +       case 0x0C:
> > +       case 0x0D:
> > +         return INTEL_BIGCORE_COFFEELAKE;
> > +       default:
> > +         return INTEL_BIGCORE_KABYLAKE;
> > +       }
> > +    case 0xA5:
> > +    case 0xA6:
> > +      return INTEL_BIGCORE_COMETLAKE;
> > +    case 0x66:
> > +      return INTEL_BIGCORE_CANNONLAKE;
> > +    case 0x55:
> > +      switch (stepping)
> > +       {
> > +       case 0x06:
> > +       case 0x07:
> > +         return INTEL_BIGCORE_CASCADELAKE;
> > +       case 0x0b:
> > +         return INTEL_BIGCORE_COOPERLAKE;
> > +       default:
> > +         return INTEL_BIGCORE_SKYLAKE_AVX512;
> > +       }
> > +    case 0x6A:
> > +    case 0x6C:
> > +    case 0x7D:
> > +    case 0x7E:
> > +    case 0x9D:
> > +      return INTEL_BIGCORE_ICELAKE;
> > +    case 0x8C:
> > +    case 0x8D:
> > +      return INTEL_BIGCORE_TIGERLAKE;
> > +    case 0xA7:
> > +      return INTEL_BIGCORE_ROCKETLAKE;
> > +    case 0x8F:
> > +      return INTEL_BIGCORE_SAPPHIRERAPIDS;
> > +    case 0xB7:
> > +    case 0xBA:
> > +    case 0xBF:
> > +      return INTEL_BIGCORE_RAPTORLAKE;
> > +    case 0xCF:
> > +      return INTEL_BIGCORE_EMERALDRAPIDS;
> > +    case 0xAA:
> > +    case 0xAC:
> > +      return INTEL_BIGCORE_METEORLAKE;
> > +    case 0xbd:
> > +      return INTEL_BIGCORE_LUNARLAKE;
> > +    case 0xc6:
> > +      return INTEL_BIGCORE_ARROWLAKE;
> > +    case 0xAD:
> > +    case 0xAE:
> > +      return INTEL_BIGCORE_GRANITERAPIDS;
> > +    case 0x8A:
> > +      return INTEL_MIXED_LAKEFIELD;
> > +    case 0x97:
> > +    case 0x9A:
> > +    case 0xBE:
> > +      return INTEL_MIXED_ALDERLAKE;
> > +    case 0x85:
> > +      return INTEL_KNIGHTS_MILL;
> > +    case 0x57:
> > +      return INTEL_KNIGHTS_LANDING;
> > +    default:
> > +      return INTEL_UNKNOWN;
> > +    }
> > +}
> > +
> >  static inline void
> >  init_cpu_features (struct cpu_features *cpu_features)
> >  {
> > @@ -453,129 +664,149 @@ init_cpu_features (struct cpu_features *cpu_features)
> >        if (family == 0x06)
> >         {
> >           model += extended_model;
> > -         switch (model)
> > +         unsigned int microarch
> > +             = intel_get_fam6_microarch (model, stepping);
> > +
> > +         switch (microarch)
> >             {
> > -           case 0x1c:
> > -           case 0x26:
> > -             /* BSF is slow on Atom.  */
> > +             /* Atom / KNL tuning.  */
> > +           case INTEL_ATOM_BONNELL:
>
> Since Saltwell is a shrink of Bonnell, INTEL_ATOM_SALTWELL
> should be added here.
>
Would rather leave this patch as a no-functionality change. Will do so
in a follow-up patch.

> > +             /* BSF is slow on Bonnell.  */
> >               cpu_features->preferred[index_arch_Slow_BSF]
> > -               |= bit_arch_Slow_BSF;
> > +                 |= bit_arch_Slow_BSF;
> >               break;
> >
> > -           case 0x57:
> > -             /* Knights Landing.  Enable Silvermont optimizations.  */
> > -
> > -           case 0x7a:
> > -             /* Unaligned load versions are faster than SSSE3
> > -                on Goldmont Plus.  */
> > -
> > -           case 0x5c:
> > -           case 0x5f:
> >               /* Unaligned load versions are faster than SSSE3
> > -                on Goldmont.  */
> > +                    on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
> > +           case INTEL_ATOM_AIRMONT:
> > +           case INTEL_ATOM_SILVERMONT:
> > +           case INTEL_ATOM_GOLDMONT:
> > +           case INTEL_ATOM_GOLDMONT_PLUS:
> >
> > -           case 0x4c:
> > -           case 0x5a:
> > -           case 0x75:
> > -             /* Airmont is a die shrink of Silvermont.  */
> > +            /* Knights Landing.  Enable Silvermont optimizations.  */
> > +           case INTEL_KNIGHTS_LANDING:
> >
> > -           case 0x37:
> > -           case 0x4a:
> > -           case 0x4d:
> > -           case 0x5d:
> > -             /* Unaligned load versions are faster than SSSE3
> > -                on Silvermont.  */
> >               cpu_features->preferred[index_arch_Fast_Unaligned_Load]
> > -               |= (bit_arch_Fast_Unaligned_Load
> > -                   | bit_arch_Fast_Unaligned_Copy
> > -                   | bit_arch_Prefer_PMINUB_for_stringop
> > -                   | bit_arch_Slow_SSE4_2);
> > +                 |= (bit_arch_Fast_Unaligned_Load
> > +                     | bit_arch_Fast_Unaligned_Copy
> > +                     | bit_arch_Prefer_PMINUB_for_stringop
> > +                     | bit_arch_Slow_SSE4_2);
> >               break;
> >
> > -           case 0x86:
> > -           case 0x96:
> > -           case 0x9c:
> > +           case INTEL_ATOM_TREMONT:
> >               /* Enable rep string instructions, unaligned load, unaligned
> > -                copy, pminub and avoid SSE 4.2 on Tremont.  */
> > +                copy, pminub and avoid SSE 4.2 on Tremont.  */
> >               cpu_features->preferred[index_arch_Fast_Rep_String]
> > -               |= (bit_arch_Fast_Rep_String
> > -                   | bit_arch_Fast_Unaligned_Load
> > -                   | bit_arch_Fast_Unaligned_Copy
> > -                   | bit_arch_Prefer_PMINUB_for_stringop
> > -                   | bit_arch_Slow_SSE4_2);
> > +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> > +                     | bit_arch_Fast_Unaligned_Copy
> > +                     | bit_arch_Prefer_PMINUB_for_stringop
> > +                     | bit_arch_Slow_SSE4_2);
> > +             break;
> > +
> > +             /* Untuned KNL microarch.  */
> > +           case INTEL_KNIGHTS_MILL:
> > +             /* Untuned atom microarch.  */
> > +           case INTEL_ATOM_SIERRAFOREST:
> > +           case INTEL_ATOM_GRANDRIDGE:
> > +           case INTEL_ATOM_SALTWELL:
> >               break;
>
> "break" should be removed to enable the optimizations
> for processors with AVX.
Done.
>
> >
> > +             /* Bigcore Tuning.  */
> > +           case INTEL_UNKNOWN:
> >             default:
> >               /* Unknown family 0x06 processors.  Assuming this is one
> >                  of Core i3/i5/i7 processors if AVX is available.  */
> >               if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
> >                 break;
> > -             /* Fall through.  */
> > -
> > -           case 0x1a:
> > -           case 0x1e:
> > -           case 0x1f:
> > -           case 0x25:
> > -           case 0x2c:
> > -           case 0x2e:
> > -           case 0x2f:
> > +           case INTEL_BIGCORE_NEHALEM:
> > +           case INTEL_BIGCORE_WESTMERE:
> >               /* Rep string instructions, unaligned load, unaligned copy,
> >                  and pminub are fast on Intel Core i3, i5 and i7.  */
> >               cpu_features->preferred[index_arch_Fast_Rep_String]
> > -               |= (bit_arch_Fast_Rep_String
> > -                   | bit_arch_Fast_Unaligned_Load
> > -                   | bit_arch_Fast_Unaligned_Copy
> > -                   | bit_arch_Prefer_PMINUB_for_stringop);
> > +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> > +                     | bit_arch_Fast_Unaligned_Copy
> > +                     | bit_arch_Prefer_PMINUB_for_stringop);
> > +             break;
> > +
> > +             /* Untuned Bigcore microarch.  */
> > +           case INTEL_BIGCORE_SANDYBRIDGE:
> > +           case INTEL_BIGCORE_IVYBRIDGE:
> > +           case INTEL_BIGCORE_HASWELL:
> > +           case INTEL_BIGCORE_BROADWELL:
> > +           case INTEL_BIGCORE_SKYLAKE:
> > +           case INTEL_BIGCORE_AMBERLAKE:
> > +           case INTEL_BIGCORE_COFFEELAKE:
> > +           case INTEL_BIGCORE_WHISKEYLAKE:
> > +           case INTEL_BIGCORE_KABYLAKE:
> > +           case INTEL_BIGCORE_COMETLAKE:
> > +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> > +           case INTEL_BIGCORE_CASCADELAKE:
> > +           case INTEL_BIGCORE_COOPERLAKE:
> > +           case INTEL_BIGCORE_CANNONLAKE:
> > +           case INTEL_BIGCORE_ICELAKE:
> > +           case INTEL_BIGCORE_TIGERLAKE:
> > +           case INTEL_BIGCORE_ROCKETLAKE:
> > +           case INTEL_BIGCORE_RAPTORLAKE:
> > +           case INTEL_BIGCORE_METEORLAKE:
> > +           case INTEL_BIGCORE_LUNARLAKE:
> > +           case INTEL_BIGCORE_ARROWLAKE:
> > +           case INTEL_BIGCORE_SAPPHIRERAPIDS:
> > +           case INTEL_BIGCORE_EMERALDRAPIDS:
> > +           case INTEL_BIGCORE_GRANITERAPIDS:
> > +             break;
> > +
> > +           /* Untuned Mixed (bigcore + atom SOC).  */
> > +           case INTEL_MIXED_LAKEFIELD:
> > +           case INTEL_MIXED_ALDERLAKE:
>
> All these processors should be treated as default.
Done.

>
> >               break;
> >             }
> >
> > -        /* Disable TSX on some processors to avoid TSX on kernels that
> > -           weren't updated with the latest microcode package (which
> > -           disables broken feature by default).  */
> > -        switch (model)
> > +             /* Disable TSX on some processors to avoid TSX on kernels that
> > +                weren't updated with the latest microcode package (which
> > +                disables broken feature by default).  */
> > +         switch (microarch)
> >             {
> > -           case 0x55:
> > -             if (stepping <= 5)
> > +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> > +             /* 0x55 && stepping <= 5 is SKYLAKE_AVX512. Cascadelake and
> > +                Cooperlake also have model == 0x55 so double check the
> > +                stepping to be safe.  */
> > +             if (model == 0x55 && stepping <= 5)
>
> No need to check model == 0x55.
Okay.
>
> >                 goto disable_tsx;
> >               break;
> > -           case 0x8e:
> > -             /* NB: Although the errata documents that for model == 0x8e,
> > -                only 0xb stepping or lower are impacted, the intention of
> > -                the errata was to disable TSX on all client processors on
> > -                all steppings.  Include 0xc stepping which is an Intel
> > -                Core i7-8665U, a client mobile processor.  */
> > -           case 0x9e:
> > -             if (stepping > 0xc)
> > +
> > +           case INTEL_BIGCORE_SKYLAKE:
> > +           case INTEL_BIGCORE_AMBERLAKE:
> > +           case INTEL_BIGCORE_COFFEELAKE:
> > +           case INTEL_BIGCORE_WHISKEYLAKE:
> > +           case INTEL_BIGCORE_KABYLAKE:
> > +               /* NB: Although the errata documents that for model == 0x8e
> > +                  (skylake client), only 0xb stepping or lower are impacted,
> > +                  the intention of the errata was to disable TSX on all client
> > +                  processors on all steppings.  Include 0xc stepping which is
> > +                  an Intel Core i7-8665U, a client mobile processor.  */
> > +               if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
> >                 break;
> > -             /* Fall through.  */
> > -           case 0x4e:
> > -           case 0x5e:
> > -             {
> > +
> >                 /* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
> >                    processors listed in:
> >
> >  https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
> >                  */
> > -disable_tsx:
> > +           disable_tsx:
> >                 CPU_FEATURE_UNSET (cpu_features, HLE);
> >                 CPU_FEATURE_UNSET (cpu_features, RTM);
> >                 CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
> > -             }
> > -             break;
> > -           case 0x3f:
> > -             /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
> > -             if (stepping >= 4)
> >                 break;
> > -             /* Fall through.  */
> > -           case 0x3c:
> > -           case 0x45:
> > -           case 0x46:
> > -             /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
> > -                with stepping >= 4) to avoid TSX on kernels that weren't
> > -                updated with the latest microcode package (which disables
> > -                broken feature by default).  */
> > -             CPU_FEATURE_UNSET (cpu_features, RTM);
> > -             break;
> > +
> > +           case INTEL_BIGCORE_HASWELL:
> > +               /* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
> > +                  TSX.  Haswell also include other model numbers that have
> > +                  working TSX.  */
> > +               if (model == 0x3f && stepping >= 4)
> > +               break;
> > +
> > +               CPU_FEATURE_UNSET (cpu_features, RTM);
> > +               break;
> >             }
> >         }
> >
> > --
> > 2.34.1
> >
>
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-04-24  5:03 [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` Noah Goldstein
                   ` (4 preceding siblings ...)
  2023-05-09  3:13 ` [PATCH v5 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
@ 2023-05-10  0:33 ` Noah Goldstein
  2023-05-10  0:33   ` [PATCH v6 2/4] x86: Refactor Intel `init_cpu_features` Noah Goldstein
                     ` (3 more replies)
  2023-05-10 22:12 ` [PATCH v7 2/4] x86: Refactor Intel `init_cpu_features` Noah Goldstein
                   ` (5 subsequent siblings)
  11 siblings, 4 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-10  0:33 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
`sizeof_L3 / 4`.

The original value (specifically dividing the `ncores_per_socket`) was
done to limit the amount of other threads' data a `memcpy`/`memset`
could evict.

Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in cases
where REP MOVSB is multiple times faster.

Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise would be. As well, modern
machines are able to detect streaming patterns (especially if REP MOVSB
is used) and provide LRU hints to the memory subsystem. This in effect
caps the total amount of eviction at 1/cache_associativity, far below
meaningfully thrashing the entire cache.
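
As a rough, illustrative example of that cap (numbers are made up for
illustration, not taken from the benchmarks below): on a 32 MB, 16-way
set-associative L3, a detected streaming copy can displace at most about

    32 MB / 16 ways = 2 MB

of other resident data, no matter how large the copy itself is.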

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is to REP MOVSB which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% for over
100MB copies (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, REP MOVSB is ~2x faster
up to `sizeof_L3`.

The divisor of `4` is a somewhat arbitrary value. From benchmarks it
seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
such as Broadwell prefer something closer to `8`. This patch is meant
to be followed up by another one to make the divisor cpu-specific, but
in the meantime (and for easier backporting), this patch settles on
`4` as a middle-ground.
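
For reference, a minimal sketch of the selection logic this patch ends
up with (simplified from the diff below; `shared` is the total L3 size
in bytes and `shared_per_thread` is one thread's share of it):

    unsigned long int non_temporal_threshold = shared / 4;
    /* If no ERMS, use the more conservative per-thread share of L3;
       cacheable stores lack the HW LRU hint.  */
    if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
      non_temporal_threshold = shared_per_thread * 3 / 4;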

Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available in pdf on the github):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
---
 sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
 1 file changed, 43 insertions(+), 27 deletions(-)

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index ec88945b39..4a1a5423ff 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -407,7 +407,7 @@ handle_zhaoxin (int name)
 }
 
 static void
-get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
+get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
                 long int core)
 {
   unsigned int eax;
@@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
   unsigned int family = cpu_features->basic.family;
   unsigned int model = cpu_features->basic.model;
   long int shared = *shared_ptr;
+  long int shared_per_thread = *shared_per_thread_ptr;
   unsigned int threads = *threads_ptr;
   bool inclusive_cache = true;
   bool support_count_mask = true;
@@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
       /* Try L2 otherwise.  */
       level  = 2;
       shared = core;
+      shared_per_thread = core;
       threads_l2 = 0;
       threads_l3 = -1;
     }
@@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
         }
       else
         {
-intel_bug_no_cache_info:
-          /* Assume that all logical threads share the highest cache
-             level.  */
-          threads
-            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
-	       & 0xff);
-        }
-
-        /* Cap usage of highest cache level to the number of supported
-           threads.  */
-        if (shared > 0 && threads > 0)
-          shared /= threads;
+	intel_bug_no_cache_info:
+	  /* Assume that all logical threads share the highest cache
+	     level.  */
+	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
+		     & 0xff);
+
+	  /* Get per-thread size of highest level cache.  */
+	  if (shared_per_thread > 0 && threads > 0)
+	    shared_per_thread /= threads;
+	}
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
       if (threads_l2 > 0)
-        core /= threads_l2;
+	shared_per_thread += core / threads_l2;
       shared += core;
     }
 
   *shared_ptr = shared;
+  *shared_per_thread_ptr = shared_per_thread;
   *threads_ptr = threads;
 }
 
@@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   /* Find out what brand of processor.  */
   long int data = -1;
   long int shared = -1;
+  long int shared_per_thread = -1;
   long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
@@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
       core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
       shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
+      shared_per_thread = shared;
 
       level1_icache_size
 	= handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
@@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level4_cache_size
 	= handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_zhaoxin)
     {
       data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_amd)
     {
       data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       if (shared <= 0)
         /* No shared L3 cache.  All we have is the L2 cache.  */
 	shared = core;
+
+      if (shared_per_thread <= 0)
+	shared_per_thread = shared;
     }
 
   cpu_features->level1_icache_size = level1_icache_size;
@@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/4 of size
+     of the chip's cache. For most Intel and AMD processors with an
+     initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
+     estimate the point where non-temporal stores begin outcompeting
+     REP MOVSB. As well the point where the fact that non-temporal
+     stores are forced back to main memory would already occurred to the
+     majority of the lines in the copy. Note, concerns about the
+     entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and
+     provides proper LRU hints so that the maximum thrashing
+     capped at 1/associativity. */
+  unsigned long int non_temporal_threshold = shared / 4;
+  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
+     a higher risk of actually thrashing the cache as they don't have a HW LRU
+     hint. As well, there performance in highly parallel situations is
+     noticeably worse.  */
+  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+    non_temporal_threshold = shared_per_thread * 3 / 4;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v6 2/4] x86: Refactor Intel `init_cpu_features`
  2023-05-10  0:33 ` [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
@ 2023-05-10  0:33   ` Noah Goldstein
  2023-05-10 22:13     ` H.J. Lu
  2023-05-10  0:33   ` [PATCH v6 3/4] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-05-10  0:33 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

This patch should have no effect on existing functionality.

The current code, which has a single switch for model detection and
setting preferred features, is difficult to follow/extend. The cases
use magic numbers and many microarchitectures are missing. This makes
it difficult to reason about what is implemented so far and/or
how/where to add support for new features.

This patch splits the model detection and preference setting stages so
that CPU preferences can be set based on a complete list of available
microarchitectures, rather than based on model magic numbers.
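
Condensed, the resulting shape is roughly the following (names are
taken from the patch below; the cases shown are only examples):

    /* Stage 1: map (model, stepping) to a named microarchitecture.  */
    unsigned int microarch = intel_get_fam6_microarch (model, stepping);

    /* Stage 2: set preferences per microarchitecture, no magic numbers.  */
    switch (microarch)
      {
      case INTEL_ATOM_TREMONT:
        /* ... Tremont-specific preferred bits ...  */
        break;
      case INTEL_BIGCORE_HASWELL:
        /* ... bigcore tuning ...  */
        break;
      default:
        break;
      }
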
---
 sysdeps/x86/cpu-features.c | 401 +++++++++++++++++++++++++++++--------
 1 file changed, 317 insertions(+), 84 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 5bff8ec0b4..9d433f8144 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -417,6 +417,217 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
 		     == index_arch_Fast_Copy_Backward)),
 		"Incorrect index_arch_Fast_Unaligned_Load");
 
+
+/* Intel Family-6 microarch list.  */
+enum
+{
+  /* Atom processors.  */
+  INTEL_ATOM_BONNELL,
+  INTEL_ATOM_SALTWELL,
+  INTEL_ATOM_SILVERMONT,
+  INTEL_ATOM_AIRMONT,
+  INTEL_ATOM_GOLDMONT,
+  INTEL_ATOM_GOLDMONT_PLUS,
+  INTEL_ATOM_SIERRAFOREST,
+  INTEL_ATOM_GRANDRIDGE,
+  INTEL_ATOM_TREMONT,
+
+  /* Bigcore processors.  */
+  INTEL_BIGCORE_MEROM,
+  INTEL_BIGCORE_PENRYN,
+  INTEL_BIGCORE_DUNNINGTON,
+  INTEL_BIGCORE_NEHALEM,
+  INTEL_BIGCORE_WESTMERE,
+  INTEL_BIGCORE_SANDYBRIDGE,
+  INTEL_BIGCORE_IVYBRIDGE,
+  INTEL_BIGCORE_HASWELL,
+  INTEL_BIGCORE_BROADWELL,
+  INTEL_BIGCORE_SKYLAKE,
+  INTEL_BIGCORE_AMBERLAKE,
+  INTEL_BIGCORE_COFFEELAKE,
+  INTEL_BIGCORE_WHISKEYLAKE,
+  INTEL_BIGCORE_KABYLAKE,
+  INTEL_BIGCORE_COMETLAKE,
+  INTEL_BIGCORE_SKYLAKE_AVX512,
+  INTEL_BIGCORE_CANNONLAKE,
+  INTEL_BIGCORE_CASCADELAKE,
+  INTEL_BIGCORE_COOPERLAKE,
+  INTEL_BIGCORE_ICELAKE,
+  INTEL_BIGCORE_TIGERLAKE,
+  INTEL_BIGCORE_ROCKETLAKE,
+  INTEL_BIGCORE_SAPPHIRERAPIDS,
+  INTEL_BIGCORE_RAPTORLAKE,
+  INTEL_BIGCORE_EMERALDRAPIDS,
+  INTEL_BIGCORE_METEORLAKE,
+  INTEL_BIGCORE_LUNARLAKE,
+  INTEL_BIGCORE_ARROWLAKE,
+  INTEL_BIGCORE_GRANITERAPIDS,
+
+  /* Mixed (bigcore + atom SOC).  */
+  INTEL_MIXED_LAKEFIELD,
+  INTEL_MIXED_ALDERLAKE,
+
+  /* KNL.  */
+  INTEL_KNIGHTS_MILL,
+  INTEL_KNIGHTS_LANDING,
+
+  /* Unknown.  */
+  INTEL_UNKNOWN,
+};
+
+static unsigned int
+intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
+{
+  switch (model)
+    {
+    case 0x1C:
+    case 0x26:
+      return INTEL_ATOM_BONNELL;
+    case 0x27:
+    case 0x35:
+    case 0x36:
+      return INTEL_ATOM_SALTWELL;
+    case 0x37:
+    case 0x4A:
+    case 0x4D:
+    case 0x5D:
+      return INTEL_ATOM_SILVERMONT;
+    case 0x4C:
+    case 0x5A:
+    case 0x75:
+      return INTEL_ATOM_AIRMONT;
+    case 0x5C:
+    case 0x5F:
+      return INTEL_ATOM_GOLDMONT;
+    case 0x7A:
+      return INTEL_ATOM_GOLDMONT_PLUS;
+    case 0xAF:
+      return INTEL_ATOM_SIERRAFOREST;
+    case 0xB6:
+      return INTEL_ATOM_GRANDRIDGE;
+    case 0x86:
+    case 0x96:
+    case 0x9C:
+      return INTEL_ATOM_TREMONT;
+    case 0x0F:
+    case 0x16:
+      return INTEL_BIGCORE_MEROM;
+    case 0x17:
+      return INTEL_BIGCORE_PENRYN;
+    case 0x1D:
+      return INTEL_BIGCORE_DUNNINGTON;
+    case 0x1A:
+    case 0x1E:
+    case 0x1F:
+    case 0x2E:
+      return INTEL_BIGCORE_NEHALEM;
+    case 0x25:
+    case 0x2C:
+    case 0x2F:
+      return INTEL_BIGCORE_WESTMERE;
+    case 0x2A:
+    case 0x2D:
+      return INTEL_BIGCORE_SANDYBRIDGE;
+    case 0x3A:
+    case 0x3E:
+      return INTEL_BIGCORE_IVYBRIDGE;
+    case 0x3C:
+    case 0x3F:
+    case 0x45:
+    case 0x46:
+      return INTEL_BIGCORE_HASWELL;
+    case 0x3D:
+    case 0x47:
+    case 0x4F:
+    case 0x56:
+      return INTEL_BIGCORE_BROADWELL;
+    case 0x4E:
+    case 0x5E:
+      return INTEL_BIGCORE_SKYLAKE;
+    case 0x8E:
+      switch (stepping)
+	{
+	case 0x09:
+	  return INTEL_BIGCORE_AMBERLAKE;
+	case 0x0A:
+	  return INTEL_BIGCORE_COFFEELAKE;
+	case 0x0B:
+	case 0x0C:
+	  return INTEL_BIGCORE_WHISKEYLAKE;
+	default:
+	  return INTEL_BIGCORE_KABYLAKE;
+	}
+    case 0x9E:
+      switch (stepping)
+	{
+	case 0x0A:
+	case 0x0B:
+	case 0x0C:
+	case 0x0D:
+	  return INTEL_BIGCORE_COFFEELAKE;
+	default:
+	  return INTEL_BIGCORE_KABYLAKE;
+	}
+    case 0xA5:
+    case 0xA6:
+      return INTEL_BIGCORE_COMETLAKE;
+    case 0x66:
+      return INTEL_BIGCORE_CANNONLAKE;
+    case 0x55:
+      switch (stepping)
+	{
+	case 0x06:
+	case 0x07:
+	  return INTEL_BIGCORE_CASCADELAKE;
+	case 0x0b:
+	  return INTEL_BIGCORE_COOPERLAKE;
+	default:
+	  return INTEL_BIGCORE_SKYLAKE_AVX512;
+	}
+    case 0x6A:
+    case 0x6C:
+    case 0x7D:
+    case 0x7E:
+    case 0x9D:
+      return INTEL_BIGCORE_ICELAKE;
+    case 0x8C:
+    case 0x8D:
+      return INTEL_BIGCORE_TIGERLAKE;
+    case 0xA7:
+      return INTEL_BIGCORE_ROCKETLAKE;
+    case 0x8F:
+      return INTEL_BIGCORE_SAPPHIRERAPIDS;
+    case 0xB7:
+    case 0xBA:
+    case 0xBF:
+      return INTEL_BIGCORE_RAPTORLAKE;
+    case 0xCF:
+      return INTEL_BIGCORE_EMERALDRAPIDS;
+    case 0xAA:
+    case 0xAC:
+      return INTEL_BIGCORE_METEORLAKE;
+    case 0xbd:
+      return INTEL_BIGCORE_LUNARLAKE;
+    case 0xc6:
+      return INTEL_BIGCORE_ARROWLAKE;
+    case 0xAD:
+    case 0xAE:
+      return INTEL_BIGCORE_GRANITERAPIDS;
+    case 0x8A:
+      return INTEL_MIXED_LAKEFIELD;
+    case 0x97:
+    case 0x9A:
+    case 0xBE:
+      return INTEL_MIXED_ALDERLAKE;
+    case 0x85:
+      return INTEL_KNIGHTS_MILL;
+    case 0x57:
+      return INTEL_KNIGHTS_LANDING;
+    default:
+      return INTEL_UNKNOWN;
+    }
+}
+
 static inline void
 init_cpu_features (struct cpu_features *cpu_features)
 {
@@ -453,129 +664,151 @@ init_cpu_features (struct cpu_features *cpu_features)
       if (family == 0x06)
 	{
 	  model += extended_model;
-	  switch (model)
+	  unsigned int microarch
+	      = intel_get_fam6_microarch (model, stepping);
+
+	  switch (microarch)
 	    {
-	    case 0x1c:
-	    case 0x26:
-	      /* BSF is slow on Atom.  */
+	      /* Atom / KNL tuning.  */
+	    case INTEL_ATOM_BONNELL:
+	      /* BSF is slow on Bonnell.  */
 	      cpu_features->preferred[index_arch_Slow_BSF]
-		|= bit_arch_Slow_BSF;
+		  |= bit_arch_Slow_BSF;
 	      break;
 
-	    case 0x57:
-	      /* Knights Landing.  Enable Silvermont optimizations.  */
-
-	    case 0x7a:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont Plus.  */
-
-	    case 0x5c:
-	    case 0x5f:
 	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont.  */
+		     on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
+	    case INTEL_ATOM_AIRMONT:
+	    case INTEL_ATOM_SILVERMONT:
+	    case INTEL_ATOM_GOLDMONT:
+	    case INTEL_ATOM_GOLDMONT_PLUS:
 
-	    case 0x4c:
-	    case 0x5a:
-	    case 0x75:
-	      /* Airmont is a die shrink of Silvermont.  */
+            /* Knights Landing.  Enable Silvermont optimizations.  */
+	    case INTEL_KNIGHTS_LANDING:
 
-	    case 0x37:
-	    case 0x4a:
-	    case 0x4d:
-	    case 0x5d:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Silvermont.  */
 	      cpu_features->preferred[index_arch_Fast_Unaligned_Load]
-		|= (bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
 	      break;
 
-	    case 0x86:
-	    case 0x96:
-	    case 0x9c:
+	    case INTEL_ATOM_TREMONT:
 	      /* Enable rep string instructions, unaligned load, unaligned
-	         copy, pminub and avoid SSE 4.2 on Tremont.  */
+		 copy, pminub and avoid SSE 4.2 on Tremont.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
 	      break;
 
+	      /* Default tuned KNL microarch.  */
+	    case INTEL_KNIGHTS_MILL:
+	      goto default_tuning;
+	      /* Default tuned atom microarch.  */
+	    case INTEL_ATOM_SIERRAFOREST:
+	    case INTEL_ATOM_GRANDRIDGE:
+	    case INTEL_ATOM_SALTWELL:
+	      goto default_tuning;
+
+	      /* Bigcore Tuning.  */
+	    case INTEL_UNKNOWN:
 	    default:
+	    default_tuning:
 	      /* Unknown family 0x06 processors.  Assuming this is one
 		 of Core i3/i5/i7 processors if AVX is available.  */
 	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
 		break;
-	      /* Fall through.  */
-
-	    case 0x1a:
-	    case 0x1e:
-	    case 0x1f:
-	    case 0x25:
-	    case 0x2c:
-	    case 0x2e:
-	    case 0x2f:
+	    case INTEL_BIGCORE_NEHALEM:
+	    case INTEL_BIGCORE_WESTMERE:
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop);
+		  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop);
 	      break;
+
+	      /* Default tuned Bigcore microarch.  */
+	    case INTEL_BIGCORE_SANDYBRIDGE:
+	    case INTEL_BIGCORE_IVYBRIDGE:
+	    case INTEL_BIGCORE_HASWELL:
+	    case INTEL_BIGCORE_BROADWELL:
+	    case INTEL_BIGCORE_SKYLAKE:
+	    case INTEL_BIGCORE_AMBERLAKE:
+	    case INTEL_BIGCORE_COFFEELAKE:
+	    case INTEL_BIGCORE_WHISKEYLAKE:
+	    case INTEL_BIGCORE_KABYLAKE:
+	    case INTEL_BIGCORE_COMETLAKE:
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	    case INTEL_BIGCORE_CASCADELAKE:
+	    case INTEL_BIGCORE_COOPERLAKE:
+	    case INTEL_BIGCORE_CANNONLAKE:
+	    case INTEL_BIGCORE_ICELAKE:
+	    case INTEL_BIGCORE_TIGERLAKE:
+	    case INTEL_BIGCORE_ROCKETLAKE:
+	    case INTEL_BIGCORE_RAPTORLAKE:
+	    case INTEL_BIGCORE_METEORLAKE:
+	    case INTEL_BIGCORE_LUNARLAKE:
+	    case INTEL_BIGCORE_ARROWLAKE:
+	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
+	    case INTEL_BIGCORE_EMERALDRAPIDS:
+	    case INTEL_BIGCORE_GRANITERAPIDS:
+	      goto default_tuning;
+
+	    /* Default tuned Mixed (bigcore + atom SOC).  */
+	    case INTEL_MIXED_LAKEFIELD:
+	    case INTEL_MIXED_ALDERLAKE:
+	      goto default_tuning;
 	    }
 
-	 /* Disable TSX on some processors to avoid TSX on kernels that
-	    weren't updated with the latest microcode package (which
-	    disables broken feature by default).  */
-	 switch (model)
+	      /* Disable TSX on some processors to avoid TSX on kernels that
+		 weren't updated with the latest microcode package (which
+		 disables broken feature by default).  */
+	  switch (microarch)
 	    {
-	    case 0x55:
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	      /* 0x55 && stepping <= 5 is SKYLAKE_AVX512. Cascadelake and
+	         Cooperlake also have model 0x55 but stepping 5/6 and 11
+	         respectively so double check the stepping to be safe. */
 	      if (stepping <= 5)
 		goto disable_tsx;
 	      break;
-	    case 0x8e:
-	      /* NB: Although the errata documents that for model == 0x8e,
-		 only 0xb stepping or lower are impacted, the intention of
-		 the errata was to disable TSX on all client processors on
-		 all steppings.  Include 0xc stepping which is an Intel
-		 Core i7-8665U, a client mobile processor.  */
-	    case 0x9e:
-	      if (stepping > 0xc)
+
+	    case INTEL_BIGCORE_SKYLAKE:
+	    case INTEL_BIGCORE_AMBERLAKE:
+	    case INTEL_BIGCORE_COFFEELAKE:
+	    case INTEL_BIGCORE_WHISKEYLAKE:
+	    case INTEL_BIGCORE_KABYLAKE:
+		/* NB: Although the errata documents that for model == 0x8e
+		   (skylake client), only 0xb stepping or lower are impacted,
+		   the intention of the errata was to disable TSX on all client
+		   processors on all steppings.  Include 0xc stepping which is
+		   an Intel Core i7-8665U, a client mobile processor.  */
+		if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
 		break;
-	      /* Fall through.  */
-	    case 0x4e:
-	    case 0x5e:
-	      {
+
 		/* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
 		   processors listed in:
 
 https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
 		 */
-disable_tsx:
+	    disable_tsx:
 		CPU_FEATURE_UNSET (cpu_features, HLE);
 		CPU_FEATURE_UNSET (cpu_features, RTM);
 		CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
-	      }
-	      break;
-	    case 0x3f:
-	      /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
-	      if (stepping >= 4)
 		break;
-	      /* Fall through.  */
-	    case 0x3c:
-	    case 0x45:
-	    case 0x46:
-	      /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
-		 with stepping >= 4) to avoid TSX on kernels that weren't
-		 updated with the latest microcode package (which disables
-		 broken feature by default).  */
-	      CPU_FEATURE_UNSET (cpu_features, RTM);
-	      break;
+
+	    case INTEL_BIGCORE_HASWELL:
+		/* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
+		   TSX.  Haswell also include other model numbers that have
+		   working TSX.  */
+		if (model == 0x3f && stepping >= 4)
+		break;
+
+		CPU_FEATURE_UNSET (cpu_features, RTM);
+		break;
 	    }
 	}
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v6 3/4] x86: Make the divisor in setting `non_temporal_threshold` cpu specific
  2023-05-10  0:33 ` [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
  2023-05-10  0:33   ` [PATCH v6 2/4] x86: Refactor Intel `init_cpu_features` Noah Goldstein
@ 2023-05-10  0:33   ` Noah Goldstein
  2023-05-10  0:33   ` [PATCH v6 4/4] x86: Tune 'Saltwell' microarch the same way as 'Bonnell' Noah Goldstein
  2023-05-10 15:55   ` [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` H.J. Lu
  3 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-10  0:33 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Different systems prefer different divisors.

From benchmarks[1] so far the following divisors have been found:
    ICX     : 2
    SKX     : 2
    BWD     : 8

For Intel, we are generalizing that BWD and older prefer 8 as a
divisor, and SKL and newer prefer 2. These numbers can be further
tuned as benchmarks are run.

[1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
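
A minimal sketch of how the new field is consumed (simplified from the
diff below; the fallback of 4 matches patch 1/4):

    /* Set per microarchitecture in init_cpu_features (), e.g. 2 for
       SKX/ICX-class cores, 8 for Broadwell and older.  */
    cpu_features->cachesize_non_temporal_divisor = 2;

    /* Read back in dl_init_cacheinfo ().  */
    unsigned long int divisor
        = cpu_features->cachesize_non_temporal_divisor;
    if (divisor <= 0)
      divisor = 4;
    unsigned long int non_temporal_threshold = shared / divisor;
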
---
 sysdeps/x86/cpu-features.c         | 16 +++++++++++++--
 sysdeps/x86/dl-cacheinfo.h         | 32 ++++++++++++++++++------------
 sysdeps/x86/include/cpu-features.h |  3 +++
 3 files changed, 36 insertions(+), 15 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 9d433f8144..4cc1cd9fed 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -637,6 +637,7 @@ init_cpu_features (struct cpu_features *cpu_features)
   unsigned int stepping = 0;
   enum cpu_features_kind kind;
 
+  cpu_features->cachesize_non_temporal_divisor = 4;
 #if !HAS_CPUID
   if (__get_cpuid_max (0, 0) == 0)
     {
@@ -720,8 +721,8 @@ init_cpu_features (struct cpu_features *cpu_features)
 		 of Core i3/i5/i7 processors if AVX is available.  */
 	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
 		break;
-	    case INTEL_BIGCORE_NEHALEM:
-	    case INTEL_BIGCORE_WESTMERE:
+
+	    enable_modern_features:
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
@@ -730,11 +731,20 @@ init_cpu_features (struct cpu_features *cpu_features)
 		      | bit_arch_Prefer_PMINUB_for_stringop);
 	      break;
 
+	    case INTEL_BIGCORE_NEHALEM:
+	    case INTEL_BIGCORE_WESTMERE:
+	      /* Older CPUs prefer non-temporal stores at lower threshold.  */
+	      cpu_features->cachesize_non_temporal_divisor = 8;
+	      goto enable_modern_features;
+
 	      /* Default tuned Bigcore microarch.  */
 	    case INTEL_BIGCORE_SANDYBRIDGE:
 	    case INTEL_BIGCORE_IVYBRIDGE:
 	    case INTEL_BIGCORE_HASWELL:
 	    case INTEL_BIGCORE_BROADWELL:
+	      cpu_features->cachesize_non_temporal_divisor = 8;
+	      goto default_tuning;
+
 	    case INTEL_BIGCORE_SKYLAKE:
 	    case INTEL_BIGCORE_AMBERLAKE:
 	    case INTEL_BIGCORE_COFFEELAKE:
@@ -755,11 +765,13 @@ init_cpu_features (struct cpu_features *cpu_features)
 	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
 	    case INTEL_BIGCORE_EMERALDRAPIDS:
 	    case INTEL_BIGCORE_GRANITERAPIDS:
+	      cpu_features->cachesize_non_temporal_divisor = 2;
 	      goto default_tuning;
 
 	    /* Default tuned Mixed (bigcore + atom SOC).  */
 	    case INTEL_MIXED_LAKEFIELD:
 	    case INTEL_MIXED_ALDERLAKE:
+	      cpu_features->cachesize_non_temporal_divisor = 2;
 	      goto default_tuning;
 	    }
 
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index 4a1a5423ff..864b00a521 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -738,19 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 1/4 of size
-     of the chip's cache. For most Intel and AMD processors with an
-     initial release date between 2017 and 2023, a thread's typical
-     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
-     estimate the point where non-temporal stores begin outcompeting
-     REP MOVSB. As well the point where the fact that non-temporal
-     stores are forced back to main memory would already occurred to the
-     majority of the lines in the copy. Note, concerns about the
-     entire L3 cache being evicted by the copy are mostly alleviated
-     by the fact that modern HW detects streaming patterns and
-     provides proper LRU hints so that the maximum thrashing
-     capped at 1/associativity. */
-  unsigned long int non_temporal_threshold = shared / 4;
+  unsigned long int cachesize_non_temporal_divisor
+      = cpu_features->cachesize_non_temporal_divisor;
+  if (cachesize_non_temporal_divisor <= 0)
+    cachesize_non_temporal_divisor = 4;
+
+  /* The default setting for the non_temporal threshold is [1/2, 1/8] of size
+     of the chip's cache (depending on `cachesize_non_temporal_divisor` which
+     is microarch specific. The defeault is 1/4). For most Intel and AMD
+     processors with an initial release date between 2017 and 2023, a thread's
+     typical share of the cache is from 18-64MB. Using a reasonable size
+     fraction of L3 is meant to estimate the point where non-temporal stores
+     begin outcompeting REP MOVSB. As well the point where the fact that
+     non-temporal stores are forced back to main memory would already occurred
+     to the majority of the lines in the copy. Note, concerns about the entire
+     L3 cache being evicted by the copy are mostly alleviated by the fact that
+     modern HW detects streaming patterns and provides proper LRU hints so that
+     the maximum thrashing capped at 1/associativity. */
+  unsigned long int non_temporal_threshold
+      = shared / cachesize_non_temporal_divisor;
   /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
      a higher risk of actually thrashing the cache as they don't have a HW LRU
      hint. As well, there performance in highly parallel situations is
diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
index 40b8129d6a..f5b9dd54fe 100644
--- a/sysdeps/x86/include/cpu-features.h
+++ b/sysdeps/x86/include/cpu-features.h
@@ -915,6 +915,9 @@ struct cpu_features
   unsigned long int shared_cache_size;
   /* Threshold to use non temporal store.  */
   unsigned long int non_temporal_threshold;
+  /* When no user non_temporal_threshold is specified. We default to
+     cachesize / cachesize_non_temporal_divisor.  */
+  unsigned long int cachesize_non_temporal_divisor;
   /* Threshold to use "rep movsb".  */
   unsigned long int rep_movsb_threshold;
   /* Threshold to stop using "rep movsb".  */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v6 4/4] x86: Tune 'Saltwell' microarch the same way as 'Bonnell'
  2023-05-10  0:33 ` [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
  2023-05-10  0:33   ` [PATCH v6 2/4] x86: Refactor Intel `init_cpu_features` Noah Goldstein
  2023-05-10  0:33   ` [PATCH v6 3/4] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
@ 2023-05-10  0:33   ` Noah Goldstein
  2023-05-10 22:04     ` H.J. Lu
  2023-05-10 15:55   ` [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` H.J. Lu
  3 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-05-10  0:33 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Saltwell is just a die shrink of Bonnell, so the same
micro-architectural optimization preferences apply.
---
 sysdeps/x86/cpu-features.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 4cc1cd9fed..517b1be34c 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -678,7 +678,9 @@ init_cpu_features (struct cpu_features *cpu_features)
 	      break;
 
 	      /* Unaligned load versions are faster than SSSE3
-		     on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
+		     on Saltwell, Airmont, Silvermont, Goldmont, and Goldmont
+             Plus.  */
+	    case INTEL_ATOM_SALTWELL:
 	    case INTEL_ATOM_AIRMONT:
 	    case INTEL_ATOM_SILVERMONT:
 	    case INTEL_ATOM_GOLDMONT:
@@ -710,7 +712,6 @@ init_cpu_features (struct cpu_features *cpu_features)
 	      /* Default tuned atom microarch.  */
 	    case INTEL_ATOM_SIERRAFOREST:
 	    case INTEL_ATOM_GRANDRIDGE:
-	    case INTEL_ATOM_SALTWELL:
 	      goto default_tuning;
 
 	      /* Bigcore Tuning.  */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-05-10  0:33 ` [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
                     ` (2 preceding siblings ...)
  2023-05-10  0:33   ` [PATCH v6 4/4] x86: Tune 'Saltwell' microarch the same way as 'Bonnell' Noah Goldstein
@ 2023-05-10 15:55   ` H.J. Lu
  2023-05-10 16:07     ` Noah Goldstein
  3 siblings, 1 reply; 76+ messages in thread
From: H.J. Lu @ 2023-05-10 15:55 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Tue, May 9, 2023 at 5:33 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> Current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
> ncores_per_socket'. This patch updates that value to roughly
> `sizeof_L3 / 4`.
>
> The original value (specifically dividing the `ncores_per_socket`) was
> done to limit the amount of other threads' data a `memcpy`/`memset`
> could evict.
>
> Dividing by 'ncores_per_socket', however, leads to exceedingly low
> non-temporal thresholds and to using non-temporal stores in cases
> where REP MOVSB is multiple times faster.
>
> Furthermore, non-temporal stores are written directly to main memory,
> so using them at a size much smaller than L3 can place soon-to-be-accessed
> data much further away than it otherwise would be. As well, modern
> machines are able to detect streaming patterns (especially if REP MOVSB
> is used) and provide LRU hints to the memory subsystem. This in effect
> caps the total amount of eviction at 1/cache_associativity, far below
> meaningfully thrashing the entire cache.
>
> As best I can tell, the benchmarks that led to this small threshold
> were done comparing non-temporal stores versus standard cacheable
> stores. A better comparison (linked below) is to REP MOVSB which,
> on the measured systems, is nearly 2x faster than non-temporal stores
> at the low end of the previous threshold, and within 10% for over
> 100MB copies (well past even the current threshold). In cases with a
> low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> up to `sizeof_L3`.
>
> The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> such as Broadwell prefer something closer to `8`. This patch is meant
> to be followed up by another one to make the divisor cpu-specific, but
> in the meantime (and for easier backporting), this patch settles on
> `4` as a middle-ground.
>
> Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> stores were done using:
> https://github.com/goldsteinn/memcpy-nt-benchmarks
>
> Sheets results (also available in pdf on the github):
> https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> ---
>  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
>  1 file changed, 43 insertions(+), 27 deletions(-)
>
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index ec88945b39..4a1a5423ff 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
>  }
>
>  static void
> -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
>                  long int core)
>  {
>    unsigned int eax;
> @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>    unsigned int family = cpu_features->basic.family;
>    unsigned int model = cpu_features->basic.model;
>    long int shared = *shared_ptr;
> +  long int shared_per_thread = *shared_per_thread_ptr;
>    unsigned int threads = *threads_ptr;
>    bool inclusive_cache = true;
>    bool support_count_mask = true;
> @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>        /* Try L2 otherwise.  */
>        level  = 2;
>        shared = core;
> +      shared_per_thread = core;
>        threads_l2 = 0;
>        threads_l3 = -1;
>      }
> @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>          }
>        else
>          {
> -intel_bug_no_cache_info:
> -          /* Assume that all logical threads share the highest cache
> -             level.  */
> -          threads
> -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> -              & 0xff);
> -        }
> -
> -        /* Cap usage of highest cache level to the number of supported
> -           threads.  */
> -        if (shared > 0 && threads > 0)
> -          shared /= threads;
> +       intel_bug_no_cache_info:
> +         /* Assume that all logical threads share the highest cache
> +            level.  */
> +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> +                    & 0xff);
> +
> +         /* Get per-thread size of highest level cache.  */
> +         if (shared_per_thread > 0 && threads > 0)
> +           shared_per_thread /= threads;
> +       }
>      }
>
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
>      {
>        if (threads_l2 > 0)
> -        core /= threads_l2;
> +       shared_per_thread += core / threads_l2;
>        shared += core;
>      }
>
>    *shared_ptr = shared;
> +  *shared_per_thread_ptr = shared_per_thread;
>    *threads_ptr = threads;
>  }
>
> @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    /* Find out what brand of processor.  */
>    long int data = -1;
>    long int shared = -1;
> +  long int shared_per_thread = -1;
>    long int core = -1;
>    unsigned int threads = 0;
>    unsigned long int level1_icache_size = -1;
> @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
>        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
>        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> +      shared_per_thread = shared;
>
>        level1_icache_size
>         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level4_cache_size
>         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
>      {
>        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
>        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_amd)
>      {
>        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        if (shared <= 0)
>          /* No shared L3 cache.  All we have is the L2 cache.  */
>         shared = core;
> +
> +      if (shared_per_thread <= 0)
> +       shared_per_thread = shared;
>      }
>
>    cpu_features->level1_icache_size = level1_icache_size;
> @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip. */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/4 of size
> +     of the chip's cache. For most Intel and AMD processors with an
> +     initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> +     estimate the point where non-temporal stores begin outcompeting
> +     REP MOVSB. As well the point where the fact that non-temporal
> +     stores are forced back to main memory would already occurred to the
> +     majority of the lines in the copy. Note, concerns about the
> +     entire L3 cache being evicted by the copy are mostly alleviated
> +     by the fact that modern HW detects streaming patterns and
> +     provides proper LRU hints so that the maximum thrashing
> +     capped at 1/associativity. */
> +  unsigned long int non_temporal_threshold = shared / 4;
> +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> +     hint. As well, there performance in highly parallel situations is
> +     noticeably worse.  */
> +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> +    non_temporal_threshold = shared_per_thread * 3 / 4;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> --
> 2.34.1
>

LGTM.

BTW, this is a standalone patch and can be committed separately.

Thanks.

-- 
H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-05-10 15:55   ` [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` H.J. Lu
@ 2023-05-10 16:07     ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-10 16:07 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Wed, May 10, 2023 at 10:56 AM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Tue, May 9, 2023 at 5:33 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > ncores_per_socket'. This patch updates that value to roughly
> > 'sizeof_L3 / 4`
> >
> > The original value (specifically dividing the `ncores_per_socket`) was
> > done to limit the amount of other threads' data a `memcpy`/`memset`
> > could evict.
> >
> > Dividing by 'ncores_per_socket', however, leads to exceedingly low
> > non-temporal thresholds and to using non-temporal stores in cases
> > where REP MOVSB is multiple times faster.
> >
> > Furthermore, non-temporal stores are written directly to main memory
> > so using them at a size much smaller than L3 can place soon to be
> > accessed data much further away than it otherwise could be. As well,
> > modern machines are able to detect streaming patterns (especially if
> > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > in effect caps the total amount of eviction at 1/cache_associativity,
> > far below meaningfully thrashing the entire cache.
> >
> > As best I can tell, the benchmarks that led to this small threshold
> > were done comparing non-temporal stores versus standard cacheable
> > stores. A better comparison (linked below) is to REP MOVSB which, on
> > the measured systems, is nearly 2x faster than non-temporal stores at
> > the low end of the previous threshold, and within 10% for copies over
> > 100MB (well past even the current threshold). In cases with a low
> > number of threads competing for bandwidth, REP MOVSB is ~2x faster up
> > to `sizeof_L3`.
> >
> > The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> > seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> > such as Broadwell prefer something closer to `8`. This patch is meant
> > to be followed up by another one to make the divisor cpu-specific, but
> > in the meantime (and for easier backporting), this patch settles on
> > `4` as a middle-ground.
> >
> > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > stores were done using:
> > https://github.com/goldsteinn/memcpy-nt-benchmarks
> >
> > Sheets results (also available in pdf on the github):
> > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > ---
> >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> >  1 file changed, 43 insertions(+), 27 deletions(-)
> >
> > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > index ec88945b39..4a1a5423ff 100644
> > --- a/sysdeps/x86/dl-cacheinfo.h
> > +++ b/sysdeps/x86/dl-cacheinfo.h
> > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> >  }
> >
> >  static void
> > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> >                  long int core)
> >  {
> >    unsigned int eax;
> > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >    unsigned int family = cpu_features->basic.family;
> >    unsigned int model = cpu_features->basic.model;
> >    long int shared = *shared_ptr;
> > +  long int shared_per_thread = *shared_per_thread_ptr;
> >    unsigned int threads = *threads_ptr;
> >    bool inclusive_cache = true;
> >    bool support_count_mask = true;
> > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >        /* Try L2 otherwise.  */
> >        level  = 2;
> >        shared = core;
> > +      shared_per_thread = core;
> >        threads_l2 = 0;
> >        threads_l3 = -1;
> >      }
> > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >          }
> >        else
> >          {
> > -intel_bug_no_cache_info:
> > -          /* Assume that all logical threads share the highest cache
> > -             level.  */
> > -          threads
> > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > -              & 0xff);
> > -        }
> > -
> > -        /* Cap usage of highest cache level to the number of supported
> > -           threads.  */
> > -        if (shared > 0 && threads > 0)
> > -          shared /= threads;
> > +       intel_bug_no_cache_info:
> > +         /* Assume that all logical threads share the highest cache
> > +            level.  */
> > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > +                    & 0xff);
> > +
> > +         /* Get per-thread size of highest level cache.  */
> > +         if (shared_per_thread > 0 && threads > 0)
> > +           shared_per_thread /= threads;
> > +       }
> >      }
> >
> >    /* Account for non-inclusive L2 and L3 caches.  */
> >    if (!inclusive_cache)
> >      {
> >        if (threads_l2 > 0)
> > -        core /= threads_l2;
> > +       shared_per_thread += core / threads_l2;
> >        shared += core;
> >      }
> >
> >    *shared_ptr = shared;
> > +  *shared_per_thread_ptr = shared_per_thread;
> >    *threads_ptr = threads;
> >  }
> >
> > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    /* Find out what brand of processor.  */
> >    long int data = -1;
> >    long int shared = -1;
> > +  long int shared_per_thread = -1;
> >    long int core = -1;
> >    unsigned int threads = 0;
> >    unsigned long int level1_icache_size = -1;
> > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size
> >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level4_cache_size
> >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> >      {
> >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_amd)
> >      {
> >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        if (shared <= 0)
> >          /* No shared L3 cache.  All we have is the L2 cache.  */
> >         shared = core;
> > +
> > +      if (shared_per_thread <= 0)
> > +       shared_per_thread = shared;
> >      }
> >
> >    cpu_features->level1_icache_size = level1_icache_size;
> > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> >    cpu_features->level4_cache_size = level4_cache_size;
> >
> > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > -     thread's share of the chip's cache. For most Intel and AMD processors
> > -     with an initial release date between 2017 and 2020, a thread's typical
> > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > -     in cache after a maximum temporal copy, which will maintain
> > -     in cache a reasonable portion of the thread's stack and other
> > -     active data. If the threshold is set higher than one thread's
> > -     share of the cache, it has a substantial risk of negatively
> > -     impacting the performance of other threads running on the chip. */
> > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > +  /* The default setting for the non_temporal threshold is 1/4 of size
> > +     of the chip's cache. For most Intel and AMD processors with an
> > +     initial release date between 2017 and 2023, a thread's typical
> > +     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> > +     estimate the point where non-temporal stores begin outcompeting
> > +     REP MOVSB. As well the point where the fact that non-temporal
> > +     stores are forced back to main memory would already occurred to the
> > +     majority of the lines in the copy. Note, concerns about the
> > +     entire L3 cache being evicted by the copy are mostly alleviated
> > +     by the fact that modern HW detects streaming patterns and
> > +     provides proper LRU hints so that the maximum thrashing
> > +     capped at 1/associativity. */
> > +  unsigned long int non_temporal_threshold = shared / 4;
> > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > +     hint. As well, there performance in highly parallel situations is
> > +     noticeably worse.  */
> > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > --
> > 2.34.1
> >
>
> LGTM.
>
> BTW, this is a standalone patch and can be committed separately.
>
Yeah, Carlos wanted to reproduce the results independently and review,
so I'm waiting on that.
> Thanks.
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v6 4/4] x86: Tune 'Saltwell' microarch the same was a 'Bonnell'
  2023-05-10  0:33   ` [PATCH v6 4/4] x86: Tune 'Saltwell' microarch the same was a 'Bonnell' Noah Goldstein
@ 2023-05-10 22:04     ` H.J. Lu
  2023-05-10 22:12       ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: H.J. Lu @ 2023-05-10 22:04 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Tue, May 9, 2023 at 5:34 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> Saltwell is just a die shrink of Bonnell, so the same
> micro-architectural optimization preferences apply.
> ---
>  sysdeps/x86/cpu-features.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> index 4cc1cd9fed..517b1be34c 100644
> --- a/sysdeps/x86/cpu-features.c
> +++ b/sysdeps/x86/cpu-features.c
> @@ -678,7 +678,9 @@ init_cpu_features (struct cpu_features *cpu_features)
>               break;
>
>               /* Unaligned load versions are faster than SSSE3
> -                    on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
> +                    on Saltwell, Airmont, Silvermont, Goldmont, and Goldmont
> +             Plus.  */
> +           case INTEL_ATOM_SALTWELL:

It should be grouped with Bonnell.

>             case INTEL_ATOM_AIRMONT:
>             case INTEL_ATOM_SILVERMONT:
>             case INTEL_ATOM_GOLDMONT:
> @@ -710,7 +712,6 @@ init_cpu_features (struct cpu_features *cpu_features)
>               /* Default tuned atom microarch.  */
>             case INTEL_ATOM_SIERRAFOREST:
>             case INTEL_ATOM_GRANDRIDGE:
> -           case INTEL_ATOM_SALTWELL:
>               goto default_tuning;
>
>               /* Bigcore Tuning.  */
> --
> 2.34.1
>


-- 
H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v7 2/4] x86: Refactor Intel `init_cpu_features`
  2023-04-24  5:03 [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` Noah Goldstein
                   ` (5 preceding siblings ...)
  2023-05-10  0:33 ` [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
@ 2023-05-10 22:12 ` Noah Goldstein
  2023-05-10 22:12   ` [PATCH v7 3/4] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
  2023-05-10 22:12   ` [PATCH v7 4/4] x86: Tune 'Saltwell' microarch the same was a 'Bonnell' Noah Goldstein
  2023-05-12  5:10 ` [PATCH v8 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
                   ` (4 subsequent siblings)
  11 siblings, 2 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-10 22:12 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

This patch should have no effect on existing functionality.

The current code, which has a single switch for model detection and
setting preferred features, is difficult to follow/extend. The cases
use magic numbers and many microarchitectures are missing. This makes
it difficult to reason about what is implemented so far and/or
how/where to add support for new features.

This patch splits the model detection and preference setting stages so
that CPU preferences can be set based on a complete list of available
microarchitectures, rather than based on model magic numbers.
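
To illustrate the shape of the change, here is a minimal standalone
sketch of the two-stage approach (the enum values, model numbers, and
tuning strings below are illustrative placeholders, not the actual
glibc code; the real implementation is in the diff that follows):

    #include <stdio.h>

    /* Hypothetical, trimmed-down microarch list.  */
    enum microarch
    {
      UARCH_BONNELL,
      UARCH_NEHALEM,
      UARCH_SKYLAKE,
      UARCH_UNKNOWN
    };

    /* Stage 1: translate a CPUID model into a named microarch.  */
    static enum microarch
    get_microarch (unsigned int model)
    {
      switch (model)
        {
        case 0x1C: return UARCH_BONNELL;
        case 0x1A: return UARCH_NEHALEM;
        case 0x4E:
        case 0x5E: return UARCH_SKYLAKE;
        default:   return UARCH_UNKNOWN;
        }
    }

    /* Stage 2: pick tuning preferences from the named microarch rather
       than from raw model magic numbers.  */
    static const char *
    tuning_for (enum microarch m)
    {
      switch (m)
        {
        case UARCH_BONNELL: return "slow BSF";
        case UARCH_NEHALEM: return "fast rep string";
        default:            return "default tuning";
        }
    }

    int
    main (void)
    {
      printf ("%s\n", tuning_for (get_microarch (0x5E)));
      return 0;
    }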
---
 sysdeps/x86/cpu-features.c | 401 +++++++++++++++++++++++++++++--------
 1 file changed, 317 insertions(+), 84 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 5bff8ec0b4..9d433f8144 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -417,6 +417,217 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
 		     == index_arch_Fast_Copy_Backward)),
 		"Incorrect index_arch_Fast_Unaligned_Load");
 
+
+/* Intel Family-6 microarch list.  */
+enum
+{
+  /* Atom processors.  */
+  INTEL_ATOM_BONNELL,
+  INTEL_ATOM_SALTWELL,
+  INTEL_ATOM_SILVERMONT,
+  INTEL_ATOM_AIRMONT,
+  INTEL_ATOM_GOLDMONT,
+  INTEL_ATOM_GOLDMONT_PLUS,
+  INTEL_ATOM_SIERRAFOREST,
+  INTEL_ATOM_GRANDRIDGE,
+  INTEL_ATOM_TREMONT,
+
+  /* Bigcore processors.  */
+  INTEL_BIGCORE_MEROM,
+  INTEL_BIGCORE_PENRYN,
+  INTEL_BIGCORE_DUNNINGTON,
+  INTEL_BIGCORE_NEHALEM,
+  INTEL_BIGCORE_WESTMERE,
+  INTEL_BIGCORE_SANDYBRIDGE,
+  INTEL_BIGCORE_IVYBRIDGE,
+  INTEL_BIGCORE_HASWELL,
+  INTEL_BIGCORE_BROADWELL,
+  INTEL_BIGCORE_SKYLAKE,
+  INTEL_BIGCORE_AMBERLAKE,
+  INTEL_BIGCORE_COFFEELAKE,
+  INTEL_BIGCORE_WHISKEYLAKE,
+  INTEL_BIGCORE_KABYLAKE,
+  INTEL_BIGCORE_COMETLAKE,
+  INTEL_BIGCORE_SKYLAKE_AVX512,
+  INTEL_BIGCORE_CANNONLAKE,
+  INTEL_BIGCORE_CASCADELAKE,
+  INTEL_BIGCORE_COOPERLAKE,
+  INTEL_BIGCORE_ICELAKE,
+  INTEL_BIGCORE_TIGERLAKE,
+  INTEL_BIGCORE_ROCKETLAKE,
+  INTEL_BIGCORE_SAPPHIRERAPIDS,
+  INTEL_BIGCORE_RAPTORLAKE,
+  INTEL_BIGCORE_EMERALDRAPIDS,
+  INTEL_BIGCORE_METEORLAKE,
+  INTEL_BIGCORE_LUNARLAKE,
+  INTEL_BIGCORE_ARROWLAKE,
+  INTEL_BIGCORE_GRANITERAPIDS,
+
+  /* Mixed (bigcore + atom SOC).  */
+  INTEL_MIXED_LAKEFIELD,
+  INTEL_MIXED_ALDERLAKE,
+
+  /* KNL.  */
+  INTEL_KNIGHTS_MILL,
+  INTEL_KNIGHTS_LANDING,
+
+  /* Unknown.  */
+  INTEL_UNKNOWN,
+};
+
+static unsigned int
+intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
+{
+  switch (model)
+    {
+    case 0x1C:
+    case 0x26:
+      return INTEL_ATOM_BONNELL;
+    case 0x27:
+    case 0x35:
+    case 0x36:
+      return INTEL_ATOM_SALTWELL;
+    case 0x37:
+    case 0x4A:
+    case 0x4D:
+    case 0x5D:
+      return INTEL_ATOM_SILVERMONT;
+    case 0x4C:
+    case 0x5A:
+    case 0x75:
+      return INTEL_ATOM_AIRMONT;
+    case 0x5C:
+    case 0x5F:
+      return INTEL_ATOM_GOLDMONT;
+    case 0x7A:
+      return INTEL_ATOM_GOLDMONT_PLUS;
+    case 0xAF:
+      return INTEL_ATOM_SIERRAFOREST;
+    case 0xB6:
+      return INTEL_ATOM_GRANDRIDGE;
+    case 0x86:
+    case 0x96:
+    case 0x9C:
+      return INTEL_ATOM_TREMONT;
+    case 0x0F:
+    case 0x16:
+      return INTEL_BIGCORE_MEROM;
+    case 0x17:
+      return INTEL_BIGCORE_PENRYN;
+    case 0x1D:
+      return INTEL_BIGCORE_DUNNINGTON;
+    case 0x1A:
+    case 0x1E:
+    case 0x1F:
+    case 0x2E:
+      return INTEL_BIGCORE_NEHALEM;
+    case 0x25:
+    case 0x2C:
+    case 0x2F:
+      return INTEL_BIGCORE_WESTMERE;
+    case 0x2A:
+    case 0x2D:
+      return INTEL_BIGCORE_SANDYBRIDGE;
+    case 0x3A:
+    case 0x3E:
+      return INTEL_BIGCORE_IVYBRIDGE;
+    case 0x3C:
+    case 0x3F:
+    case 0x45:
+    case 0x46:
+      return INTEL_BIGCORE_HASWELL;
+    case 0x3D:
+    case 0x47:
+    case 0x4F:
+    case 0x56:
+      return INTEL_BIGCORE_BROADWELL;
+    case 0x4E:
+    case 0x5E:
+      return INTEL_BIGCORE_SKYLAKE;
+    case 0x8E:
+      switch (stepping)
+	{
+	case 0x09:
+	  return INTEL_BIGCORE_AMBERLAKE;
+	case 0x0A:
+	  return INTEL_BIGCORE_COFFEELAKE;
+	case 0x0B:
+	case 0x0C:
+	  return INTEL_BIGCORE_WHISKEYLAKE;
+	default:
+	  return INTEL_BIGCORE_KABYLAKE;
+	}
+    case 0x9E:
+      switch (stepping)
+	{
+	case 0x0A:
+	case 0x0B:
+	case 0x0C:
+	case 0x0D:
+	  return INTEL_BIGCORE_COFFEELAKE;
+	default:
+	  return INTEL_BIGCORE_KABYLAKE;
+	}
+    case 0xA5:
+    case 0xA6:
+      return INTEL_BIGCORE_COMETLAKE;
+    case 0x66:
+      return INTEL_BIGCORE_CANNONLAKE;
+    case 0x55:
+      switch (stepping)
+	{
+	case 0x06:
+	case 0x07:
+	  return INTEL_BIGCORE_CASCADELAKE;
+	case 0x0b:
+	  return INTEL_BIGCORE_COOPERLAKE;
+	default:
+	  return INTEL_BIGCORE_SKYLAKE_AVX512;
+	}
+    case 0x6A:
+    case 0x6C:
+    case 0x7D:
+    case 0x7E:
+    case 0x9D:
+      return INTEL_BIGCORE_ICELAKE;
+    case 0x8C:
+    case 0x8D:
+      return INTEL_BIGCORE_TIGERLAKE;
+    case 0xA7:
+      return INTEL_BIGCORE_ROCKETLAKE;
+    case 0x8F:
+      return INTEL_BIGCORE_SAPPHIRERAPIDS;
+    case 0xB7:
+    case 0xBA:
+    case 0xBF:
+      return INTEL_BIGCORE_RAPTORLAKE;
+    case 0xCF:
+      return INTEL_BIGCORE_EMERALDRAPIDS;
+    case 0xAA:
+    case 0xAC:
+      return INTEL_BIGCORE_METEORLAKE;
+    case 0xbd:
+      return INTEL_BIGCORE_LUNARLAKE;
+    case 0xc6:
+      return INTEL_BIGCORE_ARROWLAKE;
+    case 0xAD:
+    case 0xAE:
+      return INTEL_BIGCORE_GRANITERAPIDS;
+    case 0x8A:
+      return INTEL_MIXED_LAKEFIELD;
+    case 0x97:
+    case 0x9A:
+    case 0xBE:
+      return INTEL_MIXED_ALDERLAKE;
+    case 0x85:
+      return INTEL_KNIGHTS_MILL;
+    case 0x57:
+      return INTEL_KNIGHTS_LANDING;
+    default:
+      return INTEL_UNKNOWN;
+    }
+}
+
 static inline void
 init_cpu_features (struct cpu_features *cpu_features)
 {
@@ -453,129 +664,151 @@ init_cpu_features (struct cpu_features *cpu_features)
       if (family == 0x06)
 	{
 	  model += extended_model;
-	  switch (model)
+	  unsigned int microarch
+	      = intel_get_fam6_microarch (model, stepping);
+
+	  switch (microarch)
 	    {
-	    case 0x1c:
-	    case 0x26:
-	      /* BSF is slow on Atom.  */
+	      /* Atom / KNL tuning.  */
+	    case INTEL_ATOM_BONNELL:
+	      /* BSF is slow on Bonnell.  */
 	      cpu_features->preferred[index_arch_Slow_BSF]
-		|= bit_arch_Slow_BSF;
+		  |= bit_arch_Slow_BSF;
 	      break;
 
-	    case 0x57:
-	      /* Knights Landing.  Enable Silvermont optimizations.  */
-
-	    case 0x7a:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont Plus.  */
-
-	    case 0x5c:
-	    case 0x5f:
 	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont.  */
+		     on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
+	    case INTEL_ATOM_AIRMONT:
+	    case INTEL_ATOM_SILVERMONT:
+	    case INTEL_ATOM_GOLDMONT:
+	    case INTEL_ATOM_GOLDMONT_PLUS:
 
-	    case 0x4c:
-	    case 0x5a:
-	    case 0x75:
-	      /* Airmont is a die shrink of Silvermont.  */
+            /* Knights Landing.  Enable Silvermont optimizations.  */
+	    case INTEL_KNIGHTS_LANDING:
 
-	    case 0x37:
-	    case 0x4a:
-	    case 0x4d:
-	    case 0x5d:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Silvermont.  */
 	      cpu_features->preferred[index_arch_Fast_Unaligned_Load]
-		|= (bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
 	      break;
 
-	    case 0x86:
-	    case 0x96:
-	    case 0x9c:
+	    case INTEL_ATOM_TREMONT:
 	      /* Enable rep string instructions, unaligned load, unaligned
-	         copy, pminub and avoid SSE 4.2 on Tremont.  */
+		 copy, pminub and avoid SSE 4.2 on Tremont.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
 	      break;
 
+	      /* Default tuned KNL microarch.  */
+	    case INTEL_KNIGHTS_MILL:
+	      goto default_tuning;
+	      /* Default tuned atom microarch.  */
+	    case INTEL_ATOM_SIERRAFOREST:
+	    case INTEL_ATOM_GRANDRIDGE:
+	    case INTEL_ATOM_SALTWELL:
+	      goto default_tuning;
+
+	      /* Bigcore Tuning.  */
+	    case INTEL_UNKNOWN:
 	    default:
+	    default_tuning:
 	      /* Unknown family 0x06 processors.  Assuming this is one
 		 of Core i3/i5/i7 processors if AVX is available.  */
 	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
 		break;
-	      /* Fall through.  */
-
-	    case 0x1a:
-	    case 0x1e:
-	    case 0x1f:
-	    case 0x25:
-	    case 0x2c:
-	    case 0x2e:
-	    case 0x2f:
+	    case INTEL_BIGCORE_NEHALEM:
+	    case INTEL_BIGCORE_WESTMERE:
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop);
+		  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop);
 	      break;
+
+	      /* Default tuned Bigcore microarch.  */
+	    case INTEL_BIGCORE_SANDYBRIDGE:
+	    case INTEL_BIGCORE_IVYBRIDGE:
+	    case INTEL_BIGCORE_HASWELL:
+	    case INTEL_BIGCORE_BROADWELL:
+	    case INTEL_BIGCORE_SKYLAKE:
+	    case INTEL_BIGCORE_AMBERLAKE:
+	    case INTEL_BIGCORE_COFFEELAKE:
+	    case INTEL_BIGCORE_WHISKEYLAKE:
+	    case INTEL_BIGCORE_KABYLAKE:
+	    case INTEL_BIGCORE_COMETLAKE:
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	    case INTEL_BIGCORE_CASCADELAKE:
+	    case INTEL_BIGCORE_COOPERLAKE:
+	    case INTEL_BIGCORE_CANNONLAKE:
+	    case INTEL_BIGCORE_ICELAKE:
+	    case INTEL_BIGCORE_TIGERLAKE:
+	    case INTEL_BIGCORE_ROCKETLAKE:
+	    case INTEL_BIGCORE_RAPTORLAKE:
+	    case INTEL_BIGCORE_METEORLAKE:
+	    case INTEL_BIGCORE_LUNARLAKE:
+	    case INTEL_BIGCORE_ARROWLAKE:
+	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
+	    case INTEL_BIGCORE_EMERALDRAPIDS:
+	    case INTEL_BIGCORE_GRANITERAPIDS:
+	      goto default_tuning;
+
+	    /* Default tuned Mixed (bigcore + atom SOC).  */
+	    case INTEL_MIXED_LAKEFIELD:
+	    case INTEL_MIXED_ALDERLAKE:
+	      goto default_tuning;
 	    }
 
-	 /* Disable TSX on some processors to avoid TSX on kernels that
-	    weren't updated with the latest microcode package (which
-	    disables broken feature by default).  */
-	 switch (model)
+	      /* Disable TSX on some processors to avoid TSX on kernels that
+		 weren't updated with the latest microcode package (which
+		 disables broken feature by default).  */
+	  switch (microarch)
 	    {
-	    case 0x55:
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	      /* 0x55 && stepping <= 5 is SKYLAKE_AVX512. Cascadelake and
+	         Cooperlake also have model 0x55 but stepping 5/6 and 11
+	         respectively so double check the stepping to be safe. */
 	      if (stepping <= 5)
 		goto disable_tsx;
 	      break;
-	    case 0x8e:
-	      /* NB: Although the errata documents that for model == 0x8e,
-		 only 0xb stepping or lower are impacted, the intention of
-		 the errata was to disable TSX on all client processors on
-		 all steppings.  Include 0xc stepping which is an Intel
-		 Core i7-8665U, a client mobile processor.  */
-	    case 0x9e:
-	      if (stepping > 0xc)
+
+	    case INTEL_BIGCORE_SKYLAKE:
+	    case INTEL_BIGCORE_AMBERLAKE:
+	    case INTEL_BIGCORE_COFFEELAKE:
+	    case INTEL_BIGCORE_WHISKEYLAKE:
+	    case INTEL_BIGCORE_KABYLAKE:
+		/* NB: Although the errata documents that for model == 0x8e
+		   (skylake client), only 0xb stepping or lower are impacted,
+		   the intention of the errata was to disable TSX on all client
+		   processors on all steppings.  Include 0xc stepping which is
+		   an Intel Core i7-8665U, a client mobile processor.  */
+		if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
 		break;
-	      /* Fall through.  */
-	    case 0x4e:
-	    case 0x5e:
-	      {
+
 		/* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
 		   processors listed in:
 
 https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
 		 */
-disable_tsx:
+	    disable_tsx:
 		CPU_FEATURE_UNSET (cpu_features, HLE);
 		CPU_FEATURE_UNSET (cpu_features, RTM);
 		CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
-	      }
-	      break;
-	    case 0x3f:
-	      /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
-	      if (stepping >= 4)
 		break;
-	      /* Fall through.  */
-	    case 0x3c:
-	    case 0x45:
-	    case 0x46:
-	      /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
-		 with stepping >= 4) to avoid TSX on kernels that weren't
-		 updated with the latest microcode package (which disables
-		 broken feature by default).  */
-	      CPU_FEATURE_UNSET (cpu_features, RTM);
-	      break;
+
+	    case INTEL_BIGCORE_HASWELL:
+		/* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
+		   TSX.  Haswell also include other model numbers that have
+		   working TSX.  */
+		if (model == 0x3f && stepping >= 4)
+		break;
+
+		CPU_FEATURE_UNSET (cpu_features, RTM);
+		break;
 	    }
 	}
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v7 3/4] x86: Make the divisor in setting `non_temporal_threshold` cpu specific
  2023-05-10 22:12 ` [PATCH v7 2/4] x86: Refactor Intel `init_cpu_features` Noah Goldstein
@ 2023-05-10 22:12   ` Noah Goldstein
  2023-05-10 22:12   ` [PATCH v7 4/4] x86: Tune 'Saltwell' microarch the same was a 'Bonnell' Noah Goldstein
  1 sibling, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-10 22:12 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Different systems prefer different divisors.

From benchmarks[1] so far the following divisors have been found:
    ICX     : 2
    SKX     : 2
    BWD     : 8

For Intel, we are generalizing that BWD and older prefer a divisor of
8, and SKL and newer prefer 2. This number can be further tuned as
benchmarks are run.

[1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
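
As a rough worked example (the 36 MB L3 size below is an assumed
illustration, not a measurement from [1]): with a divisor of 2 the
threshold is ~18 MB, with 4 it is ~9 MB, and with 8 it is ~4.5 MB.
A minimal standalone sketch of the arithmetic:

    #include <stdio.h>

    int
    main (void)
    {
      /* Illustrative only: assumed 36 MB shared L3; divisors as found
         so far (SKX/ICX: 2, default: 4, BWD: 8).  */
      unsigned long shared = 36UL * 1024 * 1024;
      unsigned long divisors[] = { 2, 4, 8 };
      for (int i = 0; i < 3; i++)
        printf ("divisor %lu -> non_temporal_threshold %lu KB\n",
                divisors[i], shared / divisors[i] / 1024);
      return 0;
    }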
---
 sysdeps/x86/cpu-features.c         | 16 +++++++++++++--
 sysdeps/x86/dl-cacheinfo.h         | 32 ++++++++++++++++++------------
 sysdeps/x86/include/cpu-features.h |  3 +++
 3 files changed, 36 insertions(+), 15 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 9d433f8144..4cc1cd9fed 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -637,6 +637,7 @@ init_cpu_features (struct cpu_features *cpu_features)
   unsigned int stepping = 0;
   enum cpu_features_kind kind;
 
+  cpu_features->cachesize_non_temporal_divisor = 4;
 #if !HAS_CPUID
   if (__get_cpuid_max (0, 0) == 0)
     {
@@ -720,8 +721,8 @@ init_cpu_features (struct cpu_features *cpu_features)
 		 of Core i3/i5/i7 processors if AVX is available.  */
 	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
 		break;
-	    case INTEL_BIGCORE_NEHALEM:
-	    case INTEL_BIGCORE_WESTMERE:
+
+	    enable_modern_features:
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
@@ -730,11 +731,20 @@ init_cpu_features (struct cpu_features *cpu_features)
 		      | bit_arch_Prefer_PMINUB_for_stringop);
 	      break;
 
+	    case INTEL_BIGCORE_NEHALEM:
+	    case INTEL_BIGCORE_WESTMERE:
+	      /* Older CPUs prefer non-temporal stores at lower threshold.  */
+	      cpu_features->cachesize_non_temporal_divisor = 8;
+	      goto enable_modern_features;
+
 	      /* Default tuned Bigcore microarch.  */
 	    case INTEL_BIGCORE_SANDYBRIDGE:
 	    case INTEL_BIGCORE_IVYBRIDGE:
 	    case INTEL_BIGCORE_HASWELL:
 	    case INTEL_BIGCORE_BROADWELL:
+	      cpu_features->cachesize_non_temporal_divisor = 8;
+	      goto default_tuning;
+
 	    case INTEL_BIGCORE_SKYLAKE:
 	    case INTEL_BIGCORE_AMBERLAKE:
 	    case INTEL_BIGCORE_COFFEELAKE:
@@ -755,11 +765,13 @@ init_cpu_features (struct cpu_features *cpu_features)
 	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
 	    case INTEL_BIGCORE_EMERALDRAPIDS:
 	    case INTEL_BIGCORE_GRANITERAPIDS:
+	      cpu_features->cachesize_non_temporal_divisor = 2;
 	      goto default_tuning;
 
 	    /* Default tuned Mixed (bigcore + atom SOC).  */
 	    case INTEL_MIXED_LAKEFIELD:
 	    case INTEL_MIXED_ALDERLAKE:
+	      cpu_features->cachesize_non_temporal_divisor = 2;
 	      goto default_tuning;
 	    }
 
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index 4a1a5423ff..864b00a521 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -738,19 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 1/4 of size
-     of the chip's cache. For most Intel and AMD processors with an
-     initial release date between 2017 and 2023, a thread's typical
-     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
-     estimate the point where non-temporal stores begin outcompeting
-     REP MOVSB. As well the point where the fact that non-temporal
-     stores are forced back to main memory would already occurred to the
-     majority of the lines in the copy. Note, concerns about the
-     entire L3 cache being evicted by the copy are mostly alleviated
-     by the fact that modern HW detects streaming patterns and
-     provides proper LRU hints so that the maximum thrashing
-     capped at 1/associativity. */
-  unsigned long int non_temporal_threshold = shared / 4;
+  unsigned long int cachesize_non_temporal_divisor
+      = cpu_features->cachesize_non_temporal_divisor;
+  if (cachesize_non_temporal_divisor <= 0)
+    cachesize_non_temporal_divisor = 4;
+
+  /* The default setting for the non_temporal threshold is [1/2, 1/8] of size
+     of the chip's cache (depending on `cachesize_non_temporal_divisor` which
+     is microarch specific. The defeault is 1/4). For most Intel and AMD
+     processors with an initial release date between 2017 and 2023, a thread's
+     typical share of the cache is from 18-64MB. Using a reasonable size
+     fraction of L3 is meant to estimate the point where non-temporal stores
+     begin outcompeting REP MOVSB. As well the point where the fact that
+     non-temporal stores are forced back to main memory would already occurred
+     to the majority of the lines in the copy. Note, concerns about the entire
+     L3 cache being evicted by the copy are mostly alleviated by the fact that
+     modern HW detects streaming patterns and provides proper LRU hints so that
+     the maximum thrashing capped at 1/associativity. */
+  unsigned long int non_temporal_threshold
+      = shared / cachesize_non_temporal_divisor;
   /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
      a higher risk of actually thrashing the cache as they don't have a HW LRU
      hint. As well, there performance in highly parallel situations is
diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
index 40b8129d6a..f5b9dd54fe 100644
--- a/sysdeps/x86/include/cpu-features.h
+++ b/sysdeps/x86/include/cpu-features.h
@@ -915,6 +915,9 @@ struct cpu_features
   unsigned long int shared_cache_size;
   /* Threshold to use non temporal store.  */
   unsigned long int non_temporal_threshold;
+  /* When no user non_temporal_threshold is specified. We default to
+     cachesize / cachesize_non_temporal_divisor.  */
+  unsigned long int cachesize_non_temporal_divisor;
   /* Threshold to use "rep movsb".  */
   unsigned long int rep_movsb_threshold;
   /* Threshold to stop using "rep movsb".  */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v7 4/4] x86: Tune 'Saltwell' microarch the same was a 'Bonnell'
  2023-05-10 22:12 ` [PATCH v7 2/4] x86: Refactor Intel `init_cpu_features` Noah Goldstein
  2023-05-10 22:12   ` [PATCH v7 3/4] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
@ 2023-05-10 22:12   ` Noah Goldstein
  2023-05-12  5:12     ` Noah Goldstein
  1 sibling, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-05-10 22:12 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Saltwell is just a die shrink of Bonnell, so the same
micro-architectural optimization preferences apply.
---
 sysdeps/x86/cpu-features.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 4cc1cd9fed..0fb02d2f39 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -672,7 +672,8 @@ init_cpu_features (struct cpu_features *cpu_features)
 	    {
 	      /* Atom / KNL tuning.  */
 	    case INTEL_ATOM_BONNELL:
-	      /* BSF is slow on Bonnell.  */
+	    case INTEL_ATOM_SALTWELL:
+	      /* BSF is slow on Bonnell/Saltwell.  */
 	      cpu_features->preferred[index_arch_Slow_BSF]
 		  |= bit_arch_Slow_BSF;
 	      break;
@@ -710,7 +711,6 @@ init_cpu_features (struct cpu_features *cpu_features)
 	      /* Default tuned atom microarch.  */
 	    case INTEL_ATOM_SIERRAFOREST:
 	    case INTEL_ATOM_GRANDRIDGE:
-	    case INTEL_ATOM_SALTWELL:
 	      goto default_tuning;
 
 	      /* Bigcore Tuning.  */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v6 4/4] x86: Tune 'Saltwell' microarch the same was a 'Bonnell'
  2023-05-10 22:04     ` H.J. Lu
@ 2023-05-10 22:12       ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-10 22:12 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Wed, May 10, 2023 at 5:04 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Tue, May 9, 2023 at 5:34 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > Saltwell is just a die shrink of Bonnell, so the same
> > micro-architectural optimization preferences apply.
> > ---
> >  sysdeps/x86/cpu-features.c | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> > index 4cc1cd9fed..517b1be34c 100644
> > --- a/sysdeps/x86/cpu-features.c
> > +++ b/sysdeps/x86/cpu-features.c
> > @@ -678,7 +678,9 @@ init_cpu_features (struct cpu_features *cpu_features)
> >               break;
> >
> >               /* Unaligned load versions are faster than SSSE3
> > -                    on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
> > +                    on Saltwell, Airmont, Silvermont, Goldmont, and Goldmont
> > +             Plus.  */
> > +           case INTEL_ATOM_SALTWELL:
>
> It should be grouped with Bonnell.
>
Done.
> >             case INTEL_ATOM_AIRMONT:
> >             case INTEL_ATOM_SILVERMONT:
> >             case INTEL_ATOM_GOLDMONT:
> > @@ -710,7 +712,6 @@ init_cpu_features (struct cpu_features *cpu_features)
> >               /* Default tuned atom microarch.  */
> >             case INTEL_ATOM_SIERRAFOREST:
> >             case INTEL_ATOM_GRANDRIDGE:
> > -           case INTEL_ATOM_SALTWELL:
> >               goto default_tuning;
> >
> >               /* Bigcore Tuning.  */
> > --
> > 2.34.1
> >
>
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v6 2/4] x86: Refactor Intel `init_cpu_features`
  2023-05-10  0:33   ` [PATCH v6 2/4] x86: Refactor Intel `init_cpu_features` Noah Goldstein
@ 2023-05-10 22:13     ` H.J. Lu
  2023-05-10 23:17       ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: H.J. Lu @ 2023-05-10 22:13 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Tue, May 9, 2023 at 5:34 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> This patch should have no effect on existing functionality.
>
> The current code, which has a single switch for model detection and
> setting preferred features, is difficult to follow/extend. The cases
> use magic numbers and many microarchitectures are missing. This makes
> it difficult to reason about what is implemented so far and/or
> how/where to add support for new features.
>
> This patch splits the model detection and preference setting stages so
> that CPU preferences can be set based on a complete list of available
> microarchitectures, rather than based on model magic numbers.
> ---
>  sysdeps/x86/cpu-features.c | 401 +++++++++++++++++++++++++++++--------
>  1 file changed, 317 insertions(+), 84 deletions(-)
>
> diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> index 5bff8ec0b4..9d433f8144 100644
> --- a/sysdeps/x86/cpu-features.c
> +++ b/sysdeps/x86/cpu-features.c
> @@ -417,6 +417,217 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
>                      == index_arch_Fast_Copy_Backward)),
>                 "Incorrect index_arch_Fast_Unaligned_Load");
>
> +
> +/* Intel Family-6 microarch list.  */
> +enum
> +{
> +  /* Atom processors.  */
> +  INTEL_ATOM_BONNELL,
> +  INTEL_ATOM_SALTWELL,
> +  INTEL_ATOM_SILVERMONT,
> +  INTEL_ATOM_AIRMONT,
> +  INTEL_ATOM_GOLDMONT,
> +  INTEL_ATOM_GOLDMONT_PLUS,
> +  INTEL_ATOM_SIERRAFOREST,
> +  INTEL_ATOM_GRANDRIDGE,
> +  INTEL_ATOM_TREMONT,
> +
> +  /* Bigcore processors.  */
> +  INTEL_BIGCORE_MEROM,
> +  INTEL_BIGCORE_PENRYN,
> +  INTEL_BIGCORE_DUNNINGTON,
> +  INTEL_BIGCORE_NEHALEM,
> +  INTEL_BIGCORE_WESTMERE,
> +  INTEL_BIGCORE_SANDYBRIDGE,
> +  INTEL_BIGCORE_IVYBRIDGE,
> +  INTEL_BIGCORE_HASWELL,
> +  INTEL_BIGCORE_BROADWELL,
> +  INTEL_BIGCORE_SKYLAKE,
> +  INTEL_BIGCORE_AMBERLAKE,
> +  INTEL_BIGCORE_COFFEELAKE,
> +  INTEL_BIGCORE_WHISKEYLAKE,
> +  INTEL_BIGCORE_KABYLAKE,
> +  INTEL_BIGCORE_COMETLAKE,
> +  INTEL_BIGCORE_SKYLAKE_AVX512,
> +  INTEL_BIGCORE_CANNONLAKE,
> +  INTEL_BIGCORE_CASCADELAKE,
> +  INTEL_BIGCORE_COOPERLAKE,
> +  INTEL_BIGCORE_ICELAKE,
> +  INTEL_BIGCORE_TIGERLAKE,
> +  INTEL_BIGCORE_ROCKETLAKE,
> +  INTEL_BIGCORE_SAPPHIRERAPIDS,
> +  INTEL_BIGCORE_RAPTORLAKE,
> +  INTEL_BIGCORE_EMERALDRAPIDS,
> +  INTEL_BIGCORE_METEORLAKE,
> +  INTEL_BIGCORE_LUNARLAKE,
> +  INTEL_BIGCORE_ARROWLAKE,
> +  INTEL_BIGCORE_GRANITERAPIDS,
> +
> +  /* Mixed (bigcore + atom SOC).  */
> +  INTEL_MIXED_LAKEFIELD,
> +  INTEL_MIXED_ALDERLAKE,
> +
> +  /* KNL.  */
> +  INTEL_KNIGHTS_MILL,
> +  INTEL_KNIGHTS_LANDING,
> +
> +  /* Unknown.  */
> +  INTEL_UNKNOWN,
> +};
> +
> +static unsigned int
> +intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
> +{
> +  switch (model)
> +    {
> +    case 0x1C:
> +    case 0x26:
> +      return INTEL_ATOM_BONNELL;
> +    case 0x27:
> +    case 0x35:
> +    case 0x36:
> +      return INTEL_ATOM_SALTWELL;
> +    case 0x37:
> +    case 0x4A:
> +    case 0x4D:
> +    case 0x5D:
> +      return INTEL_ATOM_SILVERMONT;
> +    case 0x4C:
> +    case 0x5A:
> +    case 0x75:
> +      return INTEL_ATOM_AIRMONT;
> +    case 0x5C:
> +    case 0x5F:
> +      return INTEL_ATOM_GOLDMONT;
> +    case 0x7A:
> +      return INTEL_ATOM_GOLDMONT_PLUS;
> +    case 0xAF:
> +      return INTEL_ATOM_SIERRAFOREST;
> +    case 0xB6:
> +      return INTEL_ATOM_GRANDRIDGE;
> +    case 0x86:
> +    case 0x96:
> +    case 0x9C:
> +      return INTEL_ATOM_TREMONT;
> +    case 0x0F:
> +    case 0x16:
> +      return INTEL_BIGCORE_MEROM;
> +    case 0x17:
> +      return INTEL_BIGCORE_PENRYN;
> +    case 0x1D:
> +      return INTEL_BIGCORE_DUNNINGTON;
> +    case 0x1A:
> +    case 0x1E:
> +    case 0x1F:
> +    case 0x2E:
> +      return INTEL_BIGCORE_NEHALEM;
> +    case 0x25:
> +    case 0x2C:
> +    case 0x2F:
> +      return INTEL_BIGCORE_WESTMERE;
> +    case 0x2A:
> +    case 0x2D:
> +      return INTEL_BIGCORE_SANDYBRIDGE;
> +    case 0x3A:
> +    case 0x3E:
> +      return INTEL_BIGCORE_IVYBRIDGE;
> +    case 0x3C:
> +    case 0x3F:
> +    case 0x45:
> +    case 0x46:
> +      return INTEL_BIGCORE_HASWELL;
> +    case 0x3D:
> +    case 0x47:
> +    case 0x4F:
> +    case 0x56:
> +      return INTEL_BIGCORE_BROADWELL;
> +    case 0x4E:
> +    case 0x5E:
> +      return INTEL_BIGCORE_SKYLAKE;
> +    case 0x8E:
> +      switch (stepping)
> +       {
> +       case 0x09:
> +         return INTEL_BIGCORE_AMBERLAKE;
> +       case 0x0A:
> +         return INTEL_BIGCORE_COFFEELAKE;
> +       case 0x0B:
> +       case 0x0C:
> +         return INTEL_BIGCORE_WHISKEYLAKE;
> +       default:
> +         return INTEL_BIGCORE_KABYLAKE;
> +       }
> +    case 0x9E:
> +      switch (stepping)
> +       {
> +       case 0x0A:
> +       case 0x0B:
> +       case 0x0C:
> +       case 0x0D:
> +         return INTEL_BIGCORE_COFFEELAKE;
> +       default:
> +         return INTEL_BIGCORE_KABYLAKE;
> +       }
> +    case 0xA5:
> +    case 0xA6:
> +      return INTEL_BIGCORE_COMETLAKE;

For our purposes, all these Skylake-derived CPUs can
be considered Skylake.

> +    case 0x66:
> +      return INTEL_BIGCORE_CANNONLAKE;
> +    case 0x55:
> +      switch (stepping)
> +       {
> +       case 0x06:
> +       case 0x07:
> +         return INTEL_BIGCORE_CASCADELAKE;
> +       case 0x0b:
> +         return INTEL_BIGCORE_COOPERLAKE;
> +       default:
> +         return INTEL_BIGCORE_SKYLAKE_AVX512;
> +       }

All of these can be considered Skylake server.

> +    case 0x6A:
> +    case 0x6C:
> +    case 0x7D:
> +    case 0x7E:
> +    case 0x9D:
> +      return INTEL_BIGCORE_ICELAKE;
> +    case 0x8C:
> +    case 0x8D:
> +      return INTEL_BIGCORE_TIGERLAKE;
> +    case 0xA7:
> +      return INTEL_BIGCORE_ROCKETLAKE;
> +    case 0x8F:
> +      return INTEL_BIGCORE_SAPPHIRERAPIDS;
> +    case 0xB7:
> +    case 0xBA:
> +    case 0xBF:
> +      return INTEL_BIGCORE_RAPTORLAKE;
> +    case 0xCF:
> +      return INTEL_BIGCORE_EMERALDRAPIDS;
> +    case 0xAA:
> +    case 0xAC:
> +      return INTEL_BIGCORE_METEORLAKE;
> +    case 0xbd:
> +      return INTEL_BIGCORE_LUNARLAKE;
> +    case 0xc6:
> +      return INTEL_BIGCORE_ARROWLAKE;
> +    case 0xAD:
> +    case 0xAE:
> +      return INTEL_BIGCORE_GRANITERAPIDS;
> +    case 0x8A:
> +      return INTEL_MIXED_LAKEFIELD;
> +    case 0x97:
> +    case 0x9A:
> +    case 0xBE:
> +      return INTEL_MIXED_ALDERLAKE;
> +    case 0x85:
> +      return INTEL_KNIGHTS_MILL;
> +    case 0x57:
> +      return INTEL_KNIGHTS_LANDING;
> +    default:
> +      return INTEL_UNKNOWN;
> +    }
> +}
> +
>  static inline void
>  init_cpu_features (struct cpu_features *cpu_features)
>  {
> @@ -453,129 +664,151 @@ init_cpu_features (struct cpu_features *cpu_features)
>        if (family == 0x06)
>         {
>           model += extended_model;
> -         switch (model)
> +         unsigned int microarch
> +             = intel_get_fam6_microarch (model, stepping);
> +
> +         switch (microarch)
>             {
> -           case 0x1c:
> -           case 0x26:
> -             /* BSF is slow on Atom.  */
> +             /* Atom / KNL tuning.  */
> +           case INTEL_ATOM_BONNELL:
> +             /* BSF is slow on Bonnell.  */
>               cpu_features->preferred[index_arch_Slow_BSF]
> -               |= bit_arch_Slow_BSF;
> +                 |= bit_arch_Slow_BSF;
>               break;
>
> -           case 0x57:
> -             /* Knights Landing.  Enable Silvermont optimizations.  */
> -
> -           case 0x7a:
> -             /* Unaligned load versions are faster than SSSE3
> -                on Goldmont Plus.  */
> -
> -           case 0x5c:
> -           case 0x5f:
>               /* Unaligned load versions are faster than SSSE3
> -                on Goldmont.  */
> +                    on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
> +           case INTEL_ATOM_AIRMONT:
> +           case INTEL_ATOM_SILVERMONT:
> +           case INTEL_ATOM_GOLDMONT:
> +           case INTEL_ATOM_GOLDMONT_PLUS:
>
> -           case 0x4c:
> -           case 0x5a:
> -           case 0x75:
> -             /* Airmont is a die shrink of Silvermont.  */
> +            /* Knights Landing.  Enable Silvermont optimizations.  */
> +           case INTEL_KNIGHTS_LANDING:
>
> -           case 0x37:
> -           case 0x4a:
> -           case 0x4d:
> -           case 0x5d:
> -             /* Unaligned load versions are faster than SSSE3
> -                on Silvermont.  */
>               cpu_features->preferred[index_arch_Fast_Unaligned_Load]
> -               |= (bit_arch_Fast_Unaligned_Load
> -                   | bit_arch_Fast_Unaligned_Copy
> -                   | bit_arch_Prefer_PMINUB_for_stringop
> -                   | bit_arch_Slow_SSE4_2);
> +                 |= (bit_arch_Fast_Unaligned_Load
> +                     | bit_arch_Fast_Unaligned_Copy
> +                     | bit_arch_Prefer_PMINUB_for_stringop
> +                     | bit_arch_Slow_SSE4_2);
>               break;
>
> -           case 0x86:
> -           case 0x96:
> -           case 0x9c:
> +           case INTEL_ATOM_TREMONT:
>               /* Enable rep string instructions, unaligned load, unaligned
> -                copy, pminub and avoid SSE 4.2 on Tremont.  */
> +                copy, pminub and avoid SSE 4.2 on Tremont.  */
>               cpu_features->preferred[index_arch_Fast_Rep_String]
> -               |= (bit_arch_Fast_Rep_String
> -                   | bit_arch_Fast_Unaligned_Load
> -                   | bit_arch_Fast_Unaligned_Copy
> -                   | bit_arch_Prefer_PMINUB_for_stringop
> -                   | bit_arch_Slow_SSE4_2);
> +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> +                     | bit_arch_Fast_Unaligned_Copy
> +                     | bit_arch_Prefer_PMINUB_for_stringop
> +                     | bit_arch_Slow_SSE4_2);
>               break;
>
> +             /* Default tuned KNL microarch.  */
> +           case INTEL_KNIGHTS_MILL:
> +             goto default_tuning;
> +             /* Default tuned atom microarch.  */
> +           case INTEL_ATOM_SIERRAFOREST:
> +           case INTEL_ATOM_GRANDRIDGE:
> +           case INTEL_ATOM_SALTWELL:

Move Saltwell to Bonnell.

> +             goto default_tuning;
> +
> +             /* Bigcore Tuning.  */
> +           case INTEL_UNKNOWN:
>             default:
> +           default_tuning:
>               /* Unknown family 0x06 processors.  Assuming this is one
>                  of Core i3/i5/i7 processors if AVX is available.  */
>               if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
>                 break;
> -             /* Fall through.  */
> -
> -           case 0x1a:
> -           case 0x1e:
> -           case 0x1f:
> -           case 0x25:
> -           case 0x2c:
> -           case 0x2e:
> -           case 0x2f:
> +           case INTEL_BIGCORE_NEHALEM:
> +           case INTEL_BIGCORE_WESTMERE:
>               /* Rep string instructions, unaligned load, unaligned copy,
>                  and pminub are fast on Intel Core i3, i5 and i7.  */
>               cpu_features->preferred[index_arch_Fast_Rep_String]
> -               |= (bit_arch_Fast_Rep_String
> -                   | bit_arch_Fast_Unaligned_Load
> -                   | bit_arch_Fast_Unaligned_Copy
> -                   | bit_arch_Prefer_PMINUB_for_stringop);
> +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> +                     | bit_arch_Fast_Unaligned_Copy
> +                     | bit_arch_Prefer_PMINUB_for_stringop);
>               break;
> +
> +             /* Default tuned Bigcore microarch.  */
> +           case INTEL_BIGCORE_SANDYBRIDGE:
> +           case INTEL_BIGCORE_IVYBRIDGE:
> +           case INTEL_BIGCORE_HASWELL:
> +           case INTEL_BIGCORE_BROADWELL:
> +           case INTEL_BIGCORE_SKYLAKE:
> +           case INTEL_BIGCORE_AMBERLAKE:
> +           case INTEL_BIGCORE_COFFEELAKE:
> +           case INTEL_BIGCORE_WHISKEYLAKE:
> +           case INTEL_BIGCORE_KABYLAKE:
> +           case INTEL_BIGCORE_COMETLAKE:
> +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> +           case INTEL_BIGCORE_CASCADELAKE:
> +           case INTEL_BIGCORE_COOPERLAKE:
> +           case INTEL_BIGCORE_CANNONLAKE:
> +           case INTEL_BIGCORE_ICELAKE:
> +           case INTEL_BIGCORE_TIGERLAKE:
> +           case INTEL_BIGCORE_ROCKETLAKE:
> +           case INTEL_BIGCORE_RAPTORLAKE:
> +           case INTEL_BIGCORE_METEORLAKE:
> +           case INTEL_BIGCORE_LUNARLAKE:
> +           case INTEL_BIGCORE_ARROWLAKE:
> +           case INTEL_BIGCORE_SAPPHIRERAPIDS:
> +           case INTEL_BIGCORE_EMERALDRAPIDS:
> +           case INTEL_BIGCORE_GRANITERAPIDS:
> +             goto default_tuning;
> +
> +           /* Default tuned Mixed (bigcore + atom SOC).  */
> +           case INTEL_MIXED_LAKEFIELD:
> +           case INTEL_MIXED_ALDERLAKE:
> +             goto default_tuning;
>             }
>
> -        /* Disable TSX on some processors to avoid TSX on kernels that
> -           weren't updated with the latest microcode package (which
> -           disables broken feature by default).  */
> -        switch (model)
> +             /* Disable TSX on some processors to avoid TSX on kernels that
> +                weren't updated with the latest microcode package (which
> +                disables broken feature by default).  */
> +         switch (microarch)
>             {
> -           case 0x55:
> +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> +             /* 0x55 && stepping <= 5 is SKYLAKE_AVX512. Cascadelake and
> +                Cooperlake also have model 0x55 but stepping 5/6 and 11
> +                respectively so double check the stepping to be safe. */
>               if (stepping <= 5)
>                 goto disable_tsx;
>               break;
> -           case 0x8e:
> -             /* NB: Although the errata documents that for model == 0x8e,
> -                only 0xb stepping or lower are impacted, the intention of
> -                the errata was to disable TSX on all client processors on
> -                all steppings.  Include 0xc stepping which is an Intel
> -                Core i7-8665U, a client mobile processor.  */
> -           case 0x9e:
> -             if (stepping > 0xc)
> +
> +           case INTEL_BIGCORE_SKYLAKE:
> +           case INTEL_BIGCORE_AMBERLAKE:
> +           case INTEL_BIGCORE_COFFEELAKE:
> +           case INTEL_BIGCORE_WHISKEYLAKE:
> +           case INTEL_BIGCORE_KABYLAKE:
> +               /* NB: Although the errata documents that for model == 0x8e
> +                  (skylake client), only 0xb stepping or lower are impacted,
> +                  the intention of the errata was to disable TSX on all client
> +                  processors on all steppings.  Include 0xc stepping which is
> +                  an Intel Core i7-8665U, a client mobile processor.  */
> +               if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
>                 break;
> -             /* Fall through.  */
> -           case 0x4e:
> -           case 0x5e:
> -             {
> +
>                 /* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
>                    processors listed in:
>
>  https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
>                  */
> -disable_tsx:
> +           disable_tsx:
>                 CPU_FEATURE_UNSET (cpu_features, HLE);
>                 CPU_FEATURE_UNSET (cpu_features, RTM);
>                 CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
> -             }
> -             break;
> -           case 0x3f:
> -             /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
> -             if (stepping >= 4)
>                 break;
> -             /* Fall through.  */
> -           case 0x3c:
> -           case 0x45:
> -           case 0x46:
> -             /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
> -                with stepping >= 4) to avoid TSX on kernels that weren't
> -                updated with the latest microcode package (which disables
> -                broken feature by default).  */
> -             CPU_FEATURE_UNSET (cpu_features, RTM);
> -             break;
> +
> +           case INTEL_BIGCORE_HASWELL:
> +               /* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
> +                  TSX.  Haswell also include other model numbers that have
> +                  working TSX.  */
> +               if (model == 0x3f && stepping >= 4)
> +               break;
> +
> +               CPU_FEATURE_UNSET (cpu_features, RTM);
> +               break;
>             }
>         }
>
> --
> 2.34.1
>


-- 
H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v6 2/4] x86: Refactor Intel `init_cpu_features`
  2023-05-10 22:13     ` H.J. Lu
@ 2023-05-10 23:17       ` Noah Goldstein
  2023-05-11 21:36         ` H.J. Lu
  0 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-05-10 23:17 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Wed, May 10, 2023 at 5:14 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Tue, May 9, 2023 at 5:34 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > This patch should have no affect on existing functionality.
> >
> > The current code, which has a single switch for model detection and
> > setting prefered features, is difficult to follow/extend. The cases
> > use magic numbers and many microarchitectures are missing. This makes
> > it difficult to reason about what is implemented so far and/or
> > how/where to add support for new features.
> >
> > This patch splits the model detection and preference setting stages so
> > that CPU preferences can be set based on a complete list of available
> > microarchitectures, rather than based on model magic numbers.
> > ---
> >  sysdeps/x86/cpu-features.c | 401 +++++++++++++++++++++++++++++--------
> >  1 file changed, 317 insertions(+), 84 deletions(-)
> >
> > diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> > index 5bff8ec0b4..9d433f8144 100644
> > --- a/sysdeps/x86/cpu-features.c
> > +++ b/sysdeps/x86/cpu-features.c
> > @@ -417,6 +417,217 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
> >                      == index_arch_Fast_Copy_Backward)),
> >                 "Incorrect index_arch_Fast_Unaligned_Load");
> >
> > +
> > +/* Intel Family-6 microarch list.  */
> > +enum
> > +{
> > +  /* Atom processors.  */
> > +  INTEL_ATOM_BONNELL,
> > +  INTEL_ATOM_SALTWELL,
> > +  INTEL_ATOM_SILVERMONT,
> > +  INTEL_ATOM_AIRMONT,
> > +  INTEL_ATOM_GOLDMONT,
> > +  INTEL_ATOM_GOLDMONT_PLUS,
> > +  INTEL_ATOM_SIERRAFOREST,
> > +  INTEL_ATOM_GRANDRIDGE,
> > +  INTEL_ATOM_TREMONT,
> > +
> > +  /* Bigcore processors.  */
> > +  INTEL_BIGCORE_MEROM,
> > +  INTEL_BIGCORE_PENRYN,
> > +  INTEL_BIGCORE_DUNNINGTON,
> > +  INTEL_BIGCORE_NEHALEM,
> > +  INTEL_BIGCORE_WESTMERE,
> > +  INTEL_BIGCORE_SANDYBRIDGE,
> > +  INTEL_BIGCORE_IVYBRIDGE,
> > +  INTEL_BIGCORE_HASWELL,
> > +  INTEL_BIGCORE_BROADWELL,
> > +  INTEL_BIGCORE_SKYLAKE,
> > +  INTEL_BIGCORE_AMBERLAKE,
> > +  INTEL_BIGCORE_COFFEELAKE,
> > +  INTEL_BIGCORE_WHISKEYLAKE,
> > +  INTEL_BIGCORE_KABYLAKE,
> > +  INTEL_BIGCORE_COMETLAKE,
> > +  INTEL_BIGCORE_SKYLAKE_AVX512,
> > +  INTEL_BIGCORE_CANNONLAKE,
> > +  INTEL_BIGCORE_CASCADELAKE,
> > +  INTEL_BIGCORE_COOPERLAKE,
> > +  INTEL_BIGCORE_ICELAKE,
> > +  INTEL_BIGCORE_TIGERLAKE,
> > +  INTEL_BIGCORE_ROCKETLAKE,
> > +  INTEL_BIGCORE_SAPPHIRERAPIDS,
> > +  INTEL_BIGCORE_RAPTORLAKE,
> > +  INTEL_BIGCORE_EMERALDRAPIDS,
> > +  INTEL_BIGCORE_METEORLAKE,
> > +  INTEL_BIGCORE_LUNARLAKE,
> > +  INTEL_BIGCORE_ARROWLAKE,
> > +  INTEL_BIGCORE_GRANITERAPIDS,
> > +
> > +  /* Mixed (bigcore + atom SOC).  */
> > +  INTEL_MIXED_LAKEFIELD,
> > +  INTEL_MIXED_ALDERLAKE,
> > +
> > +  /* KNL.  */
> > +  INTEL_KNIGHTS_MILL,
> > +  INTEL_KNIGHTS_LANDING,
> > +
> > +  /* Unknown.  */
> > +  INTEL_UNKNOWN,
> > +};
> > +
> > +static unsigned int
> > +intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
> > +{
> > +  switch (model)
> > +    {
> > +    case 0x1C:
> > +    case 0x26:
> > +      return INTEL_ATOM_BONNELL;
> > +    case 0x27:
> > +    case 0x35:
> > +    case 0x36:
> > +      return INTEL_ATOM_SALTWELL;
> > +    case 0x37:
> > +    case 0x4A:
> > +    case 0x4D:
> > +    case 0x5D:
> > +      return INTEL_ATOM_SILVERMONT;
> > +    case 0x4C:
> > +    case 0x5A:
> > +    case 0x75:
> > +      return INTEL_ATOM_AIRMONT;
> > +    case 0x5C:
> > +    case 0x5F:
> > +      return INTEL_ATOM_GOLDMONT;
> > +    case 0x7A:
> > +      return INTEL_ATOM_GOLDMONT_PLUS;
> > +    case 0xAF:
> > +      return INTEL_ATOM_SIERRAFOREST;
> > +    case 0xB6:
> > +      return INTEL_ATOM_GRANDRIDGE;
> > +    case 0x86:
> > +    case 0x96:
> > +    case 0x9C:
> > +      return INTEL_ATOM_TREMONT;
> > +    case 0x0F:
> > +    case 0x16:
> > +      return INTEL_BIGCORE_MEROM;
> > +    case 0x17:
> > +      return INTEL_BIGCORE_PENRYN;
> > +    case 0x1D:
> > +      return INTEL_BIGCORE_DUNNINGTON;
> > +    case 0x1A:
> > +    case 0x1E:
> > +    case 0x1F:
> > +    case 0x2E:
> > +      return INTEL_BIGCORE_NEHALEM;
> > +    case 0x25:
> > +    case 0x2C:
> > +    case 0x2F:
> > +      return INTEL_BIGCORE_WESTMERE;
> > +    case 0x2A:
> > +    case 0x2D:
> > +      return INTEL_BIGCORE_SANDYBRIDGE;
> > +    case 0x3A:
> > +    case 0x3E:
> > +      return INTEL_BIGCORE_IVYBRIDGE;
> > +    case 0x3C:
> > +    case 0x3F:
> > +    case 0x45:
> > +    case 0x46:
> > +      return INTEL_BIGCORE_HASWELL;
> > +    case 0x3D:
> > +    case 0x47:
> > +    case 0x4F:
> > +    case 0x56:
> > +      return INTEL_BIGCORE_BROADWELL;
> > +    case 0x4E:
> > +    case 0x5E:
> > +      return INTEL_BIGCORE_SKYLAKE;
> > +    case 0x8E:
> > +      switch (stepping)
> > +       {
> > +       case 0x09:
> > +         return INTEL_BIGCORE_AMBERLAKE;
> > +       case 0x0A:
> > +         return INTEL_BIGCORE_COFFEELAKE;
> > +       case 0x0B:
> > +       case 0x0C:
> > +         return INTEL_BIGCORE_WHISKEYLAKE;
> > +       default:
> > +         return INTEL_BIGCORE_KABYLAKE;
> > +       }
> > +    case 0x9E:
> > +      switch (stepping)
> > +       {
> > +       case 0x0A:
> > +       case 0x0B:
> > +       case 0x0C:
> > +       case 0x0D:
> > +         return INTEL_BIGCORE_COFFEELAKE;
> > +       default:
> > +         return INTEL_BIGCORE_KABYLAKE;
> > +       }
> > +    case 0xA5:
> > +    case 0xA6:
> > +      return INTEL_BIGCORE_COMETLAKE;
>
> For our purpose, all these Skylake derived CPUs can
> be considered Skylake.
>
> > +    case 0x66:
> > +      return INTEL_BIGCORE_CANNONLAKE;
> > +    case 0x55:
> > +      switch (stepping)
> > +       {
> > +       case 0x06:
> > +       case 0x07:
> > +         return INTEL_BIGCORE_CASCADELAKE;
> > +       case 0x0b:
> > +         return INTEL_BIGCORE_COOPERLAKE;
> > +       default:
> > +         return INTEL_BIGCORE_SKYLAKE_AVX512;
> > +       }
>
> All these can be considered as Skylake server.
Preference is to keep as is. I think it's clearer to have the
extra detail in this function so a reader is never left thinking
"why isn't this case handled?". As well, the cost of distinguishing
seems very low/none, whereas if there is ever a need to distinguish
in the future, having it already prepared seems somewhat valuable.
>
> > +    case 0x6A:
> > +    case 0x6C:
> > +    case 0x7D:
> > +    case 0x7E:
> > +    case 0x9D:
> > +      return INTEL_BIGCORE_ICELAKE;
> > +    case 0x8C:
> > +    case 0x8D:
> > +      return INTEL_BIGCORE_TIGERLAKE;
> > +    case 0xA7:
> > +      return INTEL_BIGCORE_ROCKETLAKE;
> > +    case 0x8F:
> > +      return INTEL_BIGCORE_SAPPHIRERAPIDS;
> > +    case 0xB7:
> > +    case 0xBA:
> > +    case 0xBF:
> > +      return INTEL_BIGCORE_RAPTORLAKE;
> > +    case 0xCF:
> > +      return INTEL_BIGCORE_EMERALDRAPIDS;
> > +    case 0xAA:
> > +    case 0xAC:
> > +      return INTEL_BIGCORE_METEORLAKE;
> > +    case 0xbd:
> > +      return INTEL_BIGCORE_LUNARLAKE;
> > +    case 0xc6:
> > +      return INTEL_BIGCORE_ARROWLAKE;
> > +    case 0xAD:
> > +    case 0xAE:
> > +      return INTEL_BIGCORE_GRANITERAPIDS;
> > +    case 0x8A:
> > +      return INTEL_MIXED_LAKEFIELD;
> > +    case 0x97:
> > +    case 0x9A:
> > +    case 0xBE:
> > +      return INTEL_MIXED_ALDERLAKE;
> > +    case 0x85:
> > +      return INTEL_KNIGHTS_MILL;
> > +    case 0x57:
> > +      return INTEL_KNIGHTS_LANDING;
> > +    default:
> > +      return INTEL_UNKNOWN;
> > +    }
> > +}
> > +
> >  static inline void
> >  init_cpu_features (struct cpu_features *cpu_features)
> >  {
> > @@ -453,129 +664,151 @@ init_cpu_features (struct cpu_features *cpu_features)
> >        if (family == 0x06)
> >         {
> >           model += extended_model;
> > -         switch (model)
> > +         unsigned int microarch
> > +             = intel_get_fam6_microarch (model, stepping);
> > +
> > +         switch (microarch)
> >             {
> > -           case 0x1c:
> > -           case 0x26:
> > -             /* BSF is slow on Atom.  */
> > +             /* Atom / KNL tuning.  */
> > +           case INTEL_ATOM_BONNELL:
> > +             /* BSF is slow on Bonnell.  */
> >               cpu_features->preferred[index_arch_Slow_BSF]
> > -               |= bit_arch_Slow_BSF;
> > +                 |= bit_arch_Slow_BSF;
> >               break;
> >
> > -           case 0x57:
> > -             /* Knights Landing.  Enable Silvermont optimizations.  */
> > -
> > -           case 0x7a:
> > -             /* Unaligned load versions are faster than SSSE3
> > -                on Goldmont Plus.  */
> > -
> > -           case 0x5c:
> > -           case 0x5f:
> >               /* Unaligned load versions are faster than SSSE3
> > -                on Goldmont.  */
> > +                    on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
> > +           case INTEL_ATOM_AIRMONT:
> > +           case INTEL_ATOM_SILVERMONT:
> > +           case INTEL_ATOM_GOLDMONT:
> > +           case INTEL_ATOM_GOLDMONT_PLUS:
> >
> > -           case 0x4c:
> > -           case 0x5a:
> > -           case 0x75:
> > -             /* Airmont is a die shrink of Silvermont.  */
> > +            /* Knights Landing.  Enable Silvermont optimizations.  */
> > +           case INTEL_KNIGHTS_LANDING:
> >
> > -           case 0x37:
> > -           case 0x4a:
> > -           case 0x4d:
> > -           case 0x5d:
> > -             /* Unaligned load versions are faster than SSSE3
> > -                on Silvermont.  */
> >               cpu_features->preferred[index_arch_Fast_Unaligned_Load]
> > -               |= (bit_arch_Fast_Unaligned_Load
> > -                   | bit_arch_Fast_Unaligned_Copy
> > -                   | bit_arch_Prefer_PMINUB_for_stringop
> > -                   | bit_arch_Slow_SSE4_2);
> > +                 |= (bit_arch_Fast_Unaligned_Load
> > +                     | bit_arch_Fast_Unaligned_Copy
> > +                     | bit_arch_Prefer_PMINUB_for_stringop
> > +                     | bit_arch_Slow_SSE4_2);
> >               break;
> >
> > -           case 0x86:
> > -           case 0x96:
> > -           case 0x9c:
> > +           case INTEL_ATOM_TREMONT:
> >               /* Enable rep string instructions, unaligned load, unaligned
> > -                copy, pminub and avoid SSE 4.2 on Tremont.  */
> > +                copy, pminub and avoid SSE 4.2 on Tremont.  */
> >               cpu_features->preferred[index_arch_Fast_Rep_String]
> > -               |= (bit_arch_Fast_Rep_String
> > -                   | bit_arch_Fast_Unaligned_Load
> > -                   | bit_arch_Fast_Unaligned_Copy
> > -                   | bit_arch_Prefer_PMINUB_for_stringop
> > -                   | bit_arch_Slow_SSE4_2);
> > +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> > +                     | bit_arch_Fast_Unaligned_Copy
> > +                     | bit_arch_Prefer_PMINUB_for_stringop
> > +                     | bit_arch_Slow_SSE4_2);
> >               break;
> >
> > +             /* Default tuned KNL microarch.  */
> > +           case INTEL_KNIGHTS_MILL:
> > +             goto default_tuning;
> > +             /* Default tuned atom microarch.  */
> > +           case INTEL_ATOM_SIERRAFOREST:
> > +           case INTEL_ATOM_GRANDRIDGE:
> > +           case INTEL_ATOM_SALTWELL:
>
> Move Salwell to Bonnell.

We were only matching models 0x1c and 0x26 for the BSF
optimization before. Would prefer to keep this patch a pure
refactor with no change to functionality. We already have
a follow-up patch to move saltwell->bonnell.
>
> > +             goto default_tuning;
> > +
> > +             /* Bigcore Tuning.  */
> > +           case INTEL_UNKNOWN:
> >             default:
> > +           default_tuning:
> >               /* Unknown family 0x06 processors.  Assuming this is one
> >                  of Core i3/i5/i7 processors if AVX is available.  */
> >               if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
> >                 break;
> > -             /* Fall through.  */
> > -
> > -           case 0x1a:
> > -           case 0x1e:
> > -           case 0x1f:
> > -           case 0x25:
> > -           case 0x2c:
> > -           case 0x2e:
> > -           case 0x2f:
> > +           case INTEL_BIGCORE_NEHALEM:
> > +           case INTEL_BIGCORE_WESTMERE:
> >               /* Rep string instructions, unaligned load, unaligned copy,
> >                  and pminub are fast on Intel Core i3, i5 and i7.  */
> >               cpu_features->preferred[index_arch_Fast_Rep_String]
> > -               |= (bit_arch_Fast_Rep_String
> > -                   | bit_arch_Fast_Unaligned_Load
> > -                   | bit_arch_Fast_Unaligned_Copy
> > -                   | bit_arch_Prefer_PMINUB_for_stringop);
> > +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> > +                     | bit_arch_Fast_Unaligned_Copy
> > +                     | bit_arch_Prefer_PMINUB_for_stringop);
> >               break;
> > +
> > +             /* Default tuned Bigcore microarch.  */
> > +           case INTEL_BIGCORE_SANDYBRIDGE:
> > +           case INTEL_BIGCORE_IVYBRIDGE:
> > +           case INTEL_BIGCORE_HASWELL:
> > +           case INTEL_BIGCORE_BROADWELL:
> > +           case INTEL_BIGCORE_SKYLAKE:
> > +           case INTEL_BIGCORE_AMBERLAKE:
> > +           case INTEL_BIGCORE_COFFEELAKE:
> > +           case INTEL_BIGCORE_WHISKEYLAKE:
> > +           case INTEL_BIGCORE_KABYLAKE:
> > +           case INTEL_BIGCORE_COMETLAKE:
> > +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> > +           case INTEL_BIGCORE_CASCADELAKE:
> > +           case INTEL_BIGCORE_COOPERLAKE:
> > +           case INTEL_BIGCORE_CANNONLAKE:
> > +           case INTEL_BIGCORE_ICELAKE:
> > +           case INTEL_BIGCORE_TIGERLAKE:
> > +           case INTEL_BIGCORE_ROCKETLAKE:
> > +           case INTEL_BIGCORE_RAPTORLAKE:
> > +           case INTEL_BIGCORE_METEORLAKE:
> > +           case INTEL_BIGCORE_LUNARLAKE:
> > +           case INTEL_BIGCORE_ARROWLAKE:
> > +           case INTEL_BIGCORE_SAPPHIRERAPIDS:
> > +           case INTEL_BIGCORE_EMERALDRAPIDS:
> > +           case INTEL_BIGCORE_GRANITERAPIDS:
> > +             goto default_tuning;
> > +
> > +           /* Default tuned Mixed (bigcore + atom SOC).  */
> > +           case INTEL_MIXED_LAKEFIELD:
> > +           case INTEL_MIXED_ALDERLAKE:
> > +             goto default_tuning;
> >             }
> >
> > -        /* Disable TSX on some processors to avoid TSX on kernels that
> > -           weren't updated with the latest microcode package (which
> > -           disables broken feature by default).  */
> > -        switch (model)
> > +             /* Disable TSX on some processors to avoid TSX on kernels that
> > +                weren't updated with the latest microcode package (which
> > +                disables broken feature by default).  */
> > +         switch (microarch)
> >             {
> > -           case 0x55:
> > +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> > +             /* 0x55 && stepping <= 5 is SKYLAKE_AVX512. Cascadelake and
> > +                Cooperlake also have model 0x55 but stepping 5/6 and 11
> > +                respectively so double check the stepping to be safe. */
> >               if (stepping <= 5)
> >                 goto disable_tsx;
> >               break;
> > -           case 0x8e:
> > -             /* NB: Although the errata documents that for model == 0x8e,
> > -                only 0xb stepping or lower are impacted, the intention of
> > -                the errata was to disable TSX on all client processors on
> > -                all steppings.  Include 0xc stepping which is an Intel
> > -                Core i7-8665U, a client mobile processor.  */
> > -           case 0x9e:
> > -             if (stepping > 0xc)
> > +
> > +           case INTEL_BIGCORE_SKYLAKE:
> > +           case INTEL_BIGCORE_AMBERLAKE:
> > +           case INTEL_BIGCORE_COFFEELAKE:
> > +           case INTEL_BIGCORE_WHISKEYLAKE:
> > +           case INTEL_BIGCORE_KABYLAKE:
> > +               /* NB: Although the errata documents that for model == 0x8e
> > +                  (skylake client), only 0xb stepping or lower are impacted,
> > +                  the intention of the errata was to disable TSX on all client
> > +                  processors on all steppings.  Include 0xc stepping which is
> > +                  an Intel Core i7-8665U, a client mobile processor.  */
> > +               if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
> >                 break;
> > -             /* Fall through.  */
> > -           case 0x4e:
> > -           case 0x5e:
> > -             {
> > +
> >                 /* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
> >                    processors listed in:
> >
> >  https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
> >                  */
> > -disable_tsx:
> > +           disable_tsx:
> >                 CPU_FEATURE_UNSET (cpu_features, HLE);
> >                 CPU_FEATURE_UNSET (cpu_features, RTM);
> >                 CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
> > -             }
> > -             break;
> > -           case 0x3f:
> > -             /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
> > -             if (stepping >= 4)
> >                 break;
> > -             /* Fall through.  */
> > -           case 0x3c:
> > -           case 0x45:
> > -           case 0x46:
> > -             /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
> > -                with stepping >= 4) to avoid TSX on kernels that weren't
> > -                updated with the latest microcode package (which disables
> > -                broken feature by default).  */
> > -             CPU_FEATURE_UNSET (cpu_features, RTM);
> > -             break;
> > +
> > +           case INTEL_BIGCORE_HASWELL:
> > +               /* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
> > +                  TSX.  Haswell also include other model numbers that have
> > +                  working TSX.  */
> > +               if (model == 0x3f && stepping >= 4)
> > +               break;
> > +
> > +               CPU_FEATURE_UNSET (cpu_features, RTM);
> > +               break;
> >             }
> >         }
> >
> > --
> > 2.34.1
> >
>
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v6 2/4] x86: Refactor Intel `init_cpu_features`
  2023-05-10 23:17       ` Noah Goldstein
@ 2023-05-11 21:36         ` H.J. Lu
  2023-05-12  5:11           ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: H.J. Lu @ 2023-05-11 21:36 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Wed, May 10, 2023 at 4:17 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Wed, May 10, 2023 at 5:14 PM H.J. Lu <hjl.tools@gmail.com> wrote:
> >
> > On Tue, May 9, 2023 at 5:34 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > This patch should have no affect on existing functionality.
> > >
> > > The current code, which has a single switch for model detection and
> > > setting prefered features, is difficult to follow/extend. The cases
> > > use magic numbers and many microarchitectures are missing. This makes
> > > it difficult to reason about what is implemented so far and/or
> > > how/where to add support for new features.
> > >
> > > This patch splits the model detection and preference setting stages so
> > > that CPU preferences can be set based on a complete list of available
> > > microarchitectures, rather than based on model magic numbers.
> > > ---
> > >  sysdeps/x86/cpu-features.c | 401 +++++++++++++++++++++++++++++--------
> > >  1 file changed, 317 insertions(+), 84 deletions(-)
> > >
> > > diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> > > index 5bff8ec0b4..9d433f8144 100644
> > > --- a/sysdeps/x86/cpu-features.c
> > > +++ b/sysdeps/x86/cpu-features.c
> > > @@ -417,6 +417,217 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
> > >                      == index_arch_Fast_Copy_Backward)),
> > >                 "Incorrect index_arch_Fast_Unaligned_Load");
> > >
> > > +
> > > +/* Intel Family-6 microarch list.  */
> > > +enum
> > > +{
> > > +  /* Atom processors.  */
> > > +  INTEL_ATOM_BONNELL,
> > > +  INTEL_ATOM_SALTWELL,
> > > +  INTEL_ATOM_SILVERMONT,
> > > +  INTEL_ATOM_AIRMONT,
> > > +  INTEL_ATOM_GOLDMONT,
> > > +  INTEL_ATOM_GOLDMONT_PLUS,
> > > +  INTEL_ATOM_SIERRAFOREST,
> > > +  INTEL_ATOM_GRANDRIDGE,
> > > +  INTEL_ATOM_TREMONT,
> > > +
> > > +  /* Bigcore processors.  */
> > > +  INTEL_BIGCORE_MEROM,
> > > +  INTEL_BIGCORE_PENRYN,
> > > +  INTEL_BIGCORE_DUNNINGTON,
> > > +  INTEL_BIGCORE_NEHALEM,
> > > +  INTEL_BIGCORE_WESTMERE,
> > > +  INTEL_BIGCORE_SANDYBRIDGE,
> > > +  INTEL_BIGCORE_IVYBRIDGE,
> > > +  INTEL_BIGCORE_HASWELL,
> > > +  INTEL_BIGCORE_BROADWELL,
> > > +  INTEL_BIGCORE_SKYLAKE,
> > > +  INTEL_BIGCORE_AMBERLAKE,
> > > +  INTEL_BIGCORE_COFFEELAKE,
> > > +  INTEL_BIGCORE_WHISKEYLAKE,
> > > +  INTEL_BIGCORE_KABYLAKE,
> > > +  INTEL_BIGCORE_COMETLAKE,
> > > +  INTEL_BIGCORE_SKYLAKE_AVX512,
> > > +  INTEL_BIGCORE_CANNONLAKE,
> > > +  INTEL_BIGCORE_CASCADELAKE,
> > > +  INTEL_BIGCORE_COOPERLAKE,
> > > +  INTEL_BIGCORE_ICELAKE,
> > > +  INTEL_BIGCORE_TIGERLAKE,
> > > +  INTEL_BIGCORE_ROCKETLAKE,
> > > +  INTEL_BIGCORE_SAPPHIRERAPIDS,
> > > +  INTEL_BIGCORE_RAPTORLAKE,
> > > +  INTEL_BIGCORE_EMERALDRAPIDS,
> > > +  INTEL_BIGCORE_METEORLAKE,
> > > +  INTEL_BIGCORE_LUNARLAKE,
> > > +  INTEL_BIGCORE_ARROWLAKE,
> > > +  INTEL_BIGCORE_GRANITERAPIDS,
> > > +
> > > +  /* Mixed (bigcore + atom SOC).  */
> > > +  INTEL_MIXED_LAKEFIELD,
> > > +  INTEL_MIXED_ALDERLAKE,
> > > +
> > > +  /* KNL.  */
> > > +  INTEL_KNIGHTS_MILL,
> > > +  INTEL_KNIGHTS_LANDING,
> > > +
> > > +  /* Unknown.  */
> > > +  INTEL_UNKNOWN,
> > > +};
> > > +
> > > +static unsigned int
> > > +intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
> > > +{
> > > +  switch (model)
> > > +    {
> > > +    case 0x1C:
> > > +    case 0x26:
> > > +      return INTEL_ATOM_BONNELL;
> > > +    case 0x27:
> > > +    case 0x35:
> > > +    case 0x36:
> > > +      return INTEL_ATOM_SALTWELL;
> > > +    case 0x37:
> > > +    case 0x4A:
> > > +    case 0x4D:
> > > +    case 0x5D:
> > > +      return INTEL_ATOM_SILVERMONT;
> > > +    case 0x4C:
> > > +    case 0x5A:
> > > +    case 0x75:
> > > +      return INTEL_ATOM_AIRMONT;
> > > +    case 0x5C:
> > > +    case 0x5F:
> > > +      return INTEL_ATOM_GOLDMONT;
> > > +    case 0x7A:
> > > +      return INTEL_ATOM_GOLDMONT_PLUS;
> > > +    case 0xAF:
> > > +      return INTEL_ATOM_SIERRAFOREST;
> > > +    case 0xB6:
> > > +      return INTEL_ATOM_GRANDRIDGE;
> > > +    case 0x86:
> > > +    case 0x96:
> > > +    case 0x9C:
> > > +      return INTEL_ATOM_TREMONT;
> > > +    case 0x0F:
> > > +    case 0x16:
> > > +      return INTEL_BIGCORE_MEROM;
> > > +    case 0x17:
> > > +      return INTEL_BIGCORE_PENRYN;
> > > +    case 0x1D:
> > > +      return INTEL_BIGCORE_DUNNINGTON;
> > > +    case 0x1A:
> > > +    case 0x1E:
> > > +    case 0x1F:
> > > +    case 0x2E:
> > > +      return INTEL_BIGCORE_NEHALEM;
> > > +    case 0x25:
> > > +    case 0x2C:
> > > +    case 0x2F:
> > > +      return INTEL_BIGCORE_WESTMERE;
> > > +    case 0x2A:
> > > +    case 0x2D:
> > > +      return INTEL_BIGCORE_SANDYBRIDGE;
> > > +    case 0x3A:
> > > +    case 0x3E:
> > > +      return INTEL_BIGCORE_IVYBRIDGE;
> > > +    case 0x3C:
> > > +    case 0x3F:
> > > +    case 0x45:
> > > +    case 0x46:
> > > +      return INTEL_BIGCORE_HASWELL;
> > > +    case 0x3D:
> > > +    case 0x47:
> > > +    case 0x4F:
> > > +    case 0x56:
> > > +      return INTEL_BIGCORE_BROADWELL;
> > > +    case 0x4E:
> > > +    case 0x5E:
> > > +      return INTEL_BIGCORE_SKYLAKE;
> > > +    case 0x8E:
> > > +      switch (stepping)
> > > +       {
> > > +       case 0x09:
> > > +         return INTEL_BIGCORE_AMBERLAKE;
> > > +       case 0x0A:
> > > +         return INTEL_BIGCORE_COFFEELAKE;
> > > +       case 0x0B:
> > > +       case 0x0C:
> > > +         return INTEL_BIGCORE_WHISKEYLAKE;
> > > +       default:
> > > +         return INTEL_BIGCORE_KABYLAKE;
> > > +       }
> > > +    case 0x9E:
> > > +      switch (stepping)
> > > +       {
> > > +       case 0x0A:
> > > +       case 0x0B:
> > > +       case 0x0C:
> > > +       case 0x0D:
> > > +         return INTEL_BIGCORE_COFFEELAKE;
> > > +       default:
> > > +         return INTEL_BIGCORE_KABYLAKE;
> > > +       }
> > > +    case 0xA5:
> > > +    case 0xA6:
> > > +      return INTEL_BIGCORE_COMETLAKE;
> >
> > For our purpose, all these Skylake derived CPUs can
> > be considered Skylake.
> >
> > > +    case 0x66:
> > > +      return INTEL_BIGCORE_CANNONLAKE;
> > > +    case 0x55:
> > > +      switch (stepping)
> > > +       {
> > > +       case 0x06:
> > > +       case 0x07:
> > > +         return INTEL_BIGCORE_CASCADELAKE;
> > > +       case 0x0b:
> > > +         return INTEL_BIGCORE_COOPERLAKE;
> > > +       default:
> > > +         return INTEL_BIGCORE_SKYLAKE_AVX512;
> > > +       }
> >
> > All these can be considered as Skylake server.
> Preference is to keep as is. Think its clearer to have
> extra detail in this function so a reader is never left thinking
> "why isn't this case handled". As well, the cost of distinguishing
> seems very low/none, whereas in the future if there is a need
> to distinguish having it already prepared seems somewhat
> valuable.

It serves only for documentation purposes in glibc.  We can
document them in comments or use "#if 0" to exclude them.
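
Roughly what that would look like (a hypothetical fragment, not code
from either patch): keep the single return value and leave the
finer-grained names in a comment or an "#if 0" block.

/* Hypothetical sketch of the suggested shape for model 0x55.  */
enum { EX_SKYLAKE_AVX512 };

static unsigned int
example_model_0x55 (unsigned int stepping)
{
  (void) stepping;
#if 0
  /* Stepping 6/7 is Cascadelake and stepping 11 is Cooperlake; for
     glibc's tuning purposes they all behave as Skylake server.  */
#endif
  return EX_SKYLAKE_AVX512;
}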

> >
> > > +    case 0x6A:
> > > +    case 0x6C:
> > > +    case 0x7D:
> > > +    case 0x7E:
> > > +    case 0x9D:
> > > +      return INTEL_BIGCORE_ICELAKE;
> > > +    case 0x8C:
> > > +    case 0x8D:
> > > +      return INTEL_BIGCORE_TIGERLAKE;
> > > +    case 0xA7:
> > > +      return INTEL_BIGCORE_ROCKETLAKE;
> > > +    case 0x8F:
> > > +      return INTEL_BIGCORE_SAPPHIRERAPIDS;
> > > +    case 0xB7:
> > > +    case 0xBA:
> > > +    case 0xBF:
> > > +      return INTEL_BIGCORE_RAPTORLAKE;
> > > +    case 0xCF:
> > > +      return INTEL_BIGCORE_EMERALDRAPIDS;
> > > +    case 0xAA:
> > > +    case 0xAC:
> > > +      return INTEL_BIGCORE_METEORLAKE;
> > > +    case 0xbd:
> > > +      return INTEL_BIGCORE_LUNARLAKE;
> > > +    case 0xc6:
> > > +      return INTEL_BIGCORE_ARROWLAKE;
> > > +    case 0xAD:
> > > +    case 0xAE:
> > > +      return INTEL_BIGCORE_GRANITERAPIDS;
> > > +    case 0x8A:
> > > +      return INTEL_MIXED_LAKEFIELD;
> > > +    case 0x97:
> > > +    case 0x9A:
> > > +    case 0xBE:
> > > +      return INTEL_MIXED_ALDERLAKE;
> > > +    case 0x85:
> > > +      return INTEL_KNIGHTS_MILL;
> > > +    case 0x57:
> > > +      return INTEL_KNIGHTS_LANDING;
> > > +    default:
> > > +      return INTEL_UNKNOWN;
> > > +    }
> > > +}
> > > +
> > >  static inline void
> > >  init_cpu_features (struct cpu_features *cpu_features)
> > >  {
> > > @@ -453,129 +664,151 @@ init_cpu_features (struct cpu_features *cpu_features)
> > >        if (family == 0x06)
> > >         {
> > >           model += extended_model;
> > > -         switch (model)
> > > +         unsigned int microarch
> > > +             = intel_get_fam6_microarch (model, stepping);
> > > +
> > > +         switch (microarch)
> > >             {
> > > -           case 0x1c:
> > > -           case 0x26:
> > > -             /* BSF is slow on Atom.  */
> > > +             /* Atom / KNL tuning.  */
> > > +           case INTEL_ATOM_BONNELL:
> > > +             /* BSF is slow on Bonnell.  */
> > >               cpu_features->preferred[index_arch_Slow_BSF]
> > > -               |= bit_arch_Slow_BSF;
> > > +                 |= bit_arch_Slow_BSF;
> > >               break;
> > >
> > > -           case 0x57:
> > > -             /* Knights Landing.  Enable Silvermont optimizations.  */
> > > -
> > > -           case 0x7a:
> > > -             /* Unaligned load versions are faster than SSSE3
> > > -                on Goldmont Plus.  */
> > > -
> > > -           case 0x5c:
> > > -           case 0x5f:
> > >               /* Unaligned load versions are faster than SSSE3
> > > -                on Goldmont.  */
> > > +                    on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
> > > +           case INTEL_ATOM_AIRMONT:
> > > +           case INTEL_ATOM_SILVERMONT:
> > > +           case INTEL_ATOM_GOLDMONT:
> > > +           case INTEL_ATOM_GOLDMONT_PLUS:
> > >
> > > -           case 0x4c:
> > > -           case 0x5a:
> > > -           case 0x75:
> > > -             /* Airmont is a die shrink of Silvermont.  */
> > > +            /* Knights Landing.  Enable Silvermont optimizations.  */
> > > +           case INTEL_KNIGHTS_LANDING:
> > >
> > > -           case 0x37:
> > > -           case 0x4a:
> > > -           case 0x4d:
> > > -           case 0x5d:
> > > -             /* Unaligned load versions are faster than SSSE3
> > > -                on Silvermont.  */
> > >               cpu_features->preferred[index_arch_Fast_Unaligned_Load]
> > > -               |= (bit_arch_Fast_Unaligned_Load
> > > -                   | bit_arch_Fast_Unaligned_Copy
> > > -                   | bit_arch_Prefer_PMINUB_for_stringop
> > > -                   | bit_arch_Slow_SSE4_2);
> > > +                 |= (bit_arch_Fast_Unaligned_Load
> > > +                     | bit_arch_Fast_Unaligned_Copy
> > > +                     | bit_arch_Prefer_PMINUB_for_stringop
> > > +                     | bit_arch_Slow_SSE4_2);
> > >               break;
> > >
> > > -           case 0x86:
> > > -           case 0x96:
> > > -           case 0x9c:
> > > +           case INTEL_ATOM_TREMONT:
> > >               /* Enable rep string instructions, unaligned load, unaligned
> > > -                copy, pminub and avoid SSE 4.2 on Tremont.  */
> > > +                copy, pminub and avoid SSE 4.2 on Tremont.  */
> > >               cpu_features->preferred[index_arch_Fast_Rep_String]
> > > -               |= (bit_arch_Fast_Rep_String
> > > -                   | bit_arch_Fast_Unaligned_Load
> > > -                   | bit_arch_Fast_Unaligned_Copy
> > > -                   | bit_arch_Prefer_PMINUB_for_stringop
> > > -                   | bit_arch_Slow_SSE4_2);
> > > +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> > > +                     | bit_arch_Fast_Unaligned_Copy
> > > +                     | bit_arch_Prefer_PMINUB_for_stringop
> > > +                     | bit_arch_Slow_SSE4_2);
> > >               break;
> > >
> > > +             /* Default tuned KNL microarch.  */
> > > +           case INTEL_KNIGHTS_MILL:
> > > +             goto default_tuning;
> > > +             /* Default tuned atom microarch.  */
> > > +           case INTEL_ATOM_SIERRAFOREST:
> > > +           case INTEL_ATOM_GRANDRIDGE:
> > > +           case INTEL_ATOM_SALTWELL:
> >
> > Move Salwell to Bonnell.
>
> We where only match models 0x1c and 0x26 for the BSF
> optimization before. Would prefer to keep this patch purely
> refactor with no change to functionality. We already have
> a follow up patch to move saltwell->bonnell.

There is no need for it.

> >
> > > +             goto default_tuning;
> > > +
> > > +             /* Bigcore Tuning.  */
> > > +           case INTEL_UNKNOWN:
> > >             default:
> > > +           default_tuning:
> > >               /* Unknown family 0x06 processors.  Assuming this is one
> > >                  of Core i3/i5/i7 processors if AVX is available.  */
> > >               if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
> > >                 break;
> > > -             /* Fall through.  */
> > > -
> > > -           case 0x1a:
> > > -           case 0x1e:
> > > -           case 0x1f:
> > > -           case 0x25:
> > > -           case 0x2c:
> > > -           case 0x2e:
> > > -           case 0x2f:
> > > +           case INTEL_BIGCORE_NEHALEM:
> > > +           case INTEL_BIGCORE_WESTMERE:
> > >               /* Rep string instructions, unaligned load, unaligned copy,
> > >                  and pminub are fast on Intel Core i3, i5 and i7.  */
> > >               cpu_features->preferred[index_arch_Fast_Rep_String]
> > > -               |= (bit_arch_Fast_Rep_String
> > > -                   | bit_arch_Fast_Unaligned_Load
> > > -                   | bit_arch_Fast_Unaligned_Copy
> > > -                   | bit_arch_Prefer_PMINUB_for_stringop);
> > > +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> > > +                     | bit_arch_Fast_Unaligned_Copy
> > > +                     | bit_arch_Prefer_PMINUB_for_stringop);
> > >               break;
> > > +
> > > +             /* Default tuned Bigcore microarch.  */
> > > +           case INTEL_BIGCORE_SANDYBRIDGE:
> > > +           case INTEL_BIGCORE_IVYBRIDGE:
> > > +           case INTEL_BIGCORE_HASWELL:
> > > +           case INTEL_BIGCORE_BROADWELL:
> > > +           case INTEL_BIGCORE_SKYLAKE:
> > > +           case INTEL_BIGCORE_AMBERLAKE:
> > > +           case INTEL_BIGCORE_COFFEELAKE:
> > > +           case INTEL_BIGCORE_WHISKEYLAKE:
> > > +           case INTEL_BIGCORE_KABYLAKE:
> > > +           case INTEL_BIGCORE_COMETLAKE:
> > > +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> > > +           case INTEL_BIGCORE_CASCADELAKE:
> > > +           case INTEL_BIGCORE_COOPERLAKE:
> > > +           case INTEL_BIGCORE_CANNONLAKE:
> > > +           case INTEL_BIGCORE_ICELAKE:
> > > +           case INTEL_BIGCORE_TIGERLAKE:
> > > +           case INTEL_BIGCORE_ROCKETLAKE:
> > > +           case INTEL_BIGCORE_RAPTORLAKE:
> > > +           case INTEL_BIGCORE_METEORLAKE:
> > > +           case INTEL_BIGCORE_LUNARLAKE:
> > > +           case INTEL_BIGCORE_ARROWLAKE:
> > > +           case INTEL_BIGCORE_SAPPHIRERAPIDS:
> > > +           case INTEL_BIGCORE_EMERALDRAPIDS:
> > > +           case INTEL_BIGCORE_GRANITERAPIDS:
> > > +             goto default_tuning;
> > > +
> > > +           /* Default tuned Mixed (bigcore + atom SOC).  */
> > > +           case INTEL_MIXED_LAKEFIELD:
> > > +           case INTEL_MIXED_ALDERLAKE:
> > > +             goto default_tuning;

No need for "goto default_tuning;".  The default case should
cover them.   If we want to document them, they can be put
in comments or "#if 0".

> > >             }
> > >
> > > -        /* Disable TSX on some processors to avoid TSX on kernels that
> > > -           weren't updated with the latest microcode package (which
> > > -           disables broken feature by default).  */
> > > -        switch (model)
> > > +             /* Disable TSX on some processors to avoid TSX on kernels that
> > > +                weren't updated with the latest microcode package (which
> > > +                disables broken feature by default).  */
> > > +         switch (microarch)
> > >             {
> > > -           case 0x55:
> > > +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> > > +             /* 0x55 && stepping <= 5 is SKYLAKE_AVX512. Cascadelake and
> > > +                Cooperlake also have model 0x55 but stepping 5/6 and 11
> > > +                respectively so double check the stepping to be safe. */
> > >               if (stepping <= 5)
> > >                 goto disable_tsx;
> > >               break;
> > > -           case 0x8e:
> > > -             /* NB: Although the errata documents that for model == 0x8e,
> > > -                only 0xb stepping or lower are impacted, the intention of
> > > -                the errata was to disable TSX on all client processors on
> > > -                all steppings.  Include 0xc stepping which is an Intel
> > > -                Core i7-8665U, a client mobile processor.  */
> > > -           case 0x9e:
> > > -             if (stepping > 0xc)
> > > +
> > > +           case INTEL_BIGCORE_SKYLAKE:
> > > +           case INTEL_BIGCORE_AMBERLAKE:
> > > +           case INTEL_BIGCORE_COFFEELAKE:
> > > +           case INTEL_BIGCORE_WHISKEYLAKE:
> > > +           case INTEL_BIGCORE_KABYLAKE:
> > > +               /* NB: Although the errata documents that for model == 0x8e
> > > +                  (skylake client), only 0xb stepping or lower are impacted,
> > > +                  the intention of the errata was to disable TSX on all client
> > > +                  processors on all steppings.  Include 0xc stepping which is
> > > +                  an Intel Core i7-8665U, a client mobile processor.  */
> > > +               if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
> > >                 break;
> > > -             /* Fall through.  */
> > > -           case 0x4e:
> > > -           case 0x5e:
> > > -             {
> > > +
> > >                 /* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
> > >                    processors listed in:
> > >
> > >  https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
> > >                  */
> > > -disable_tsx:
> > > +           disable_tsx:
> > >                 CPU_FEATURE_UNSET (cpu_features, HLE);
> > >                 CPU_FEATURE_UNSET (cpu_features, RTM);
> > >                 CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
> > > -             }
> > > -             break;
> > > -           case 0x3f:
> > > -             /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
> > > -             if (stepping >= 4)
> > >                 break;
> > > -             /* Fall through.  */
> > > -           case 0x3c:
> > > -           case 0x45:
> > > -           case 0x46:
> > > -             /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
> > > -                with stepping >= 4) to avoid TSX on kernels that weren't
> > > -                updated with the latest microcode package (which disables
> > > -                broken feature by default).  */
> > > -             CPU_FEATURE_UNSET (cpu_features, RTM);
> > > -             break;
> > > +
> > > +           case INTEL_BIGCORE_HASWELL:
> > > +               /* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
> > > +                  TSX.  Haswell also include other model numbers that have
> > > +                  working TSX.  */
> > > +               if (model == 0x3f && stepping >= 4)
> > > +               break;
> > > +
> > > +               CPU_FEATURE_UNSET (cpu_features, RTM);
> > > +               break;
> > >             }
> > >         }
> > >
> > > --
> > > 2.34.1
> > >
> >
> >
> > --
> > H.J.



-- 
H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v8 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-04-24  5:03 [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` Noah Goldstein
                   ` (6 preceding siblings ...)
  2023-05-10 22:12 ` [PATCH v7 2/4] x86: Refactor Intel `init_cpu_features` Noah Goldstein
@ 2023-05-12  5:10 ` Noah Goldstein
  2023-05-12  5:10   ` [PATCH v8 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
  2023-05-12 22:03 ` [PATCH v8 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-05-12  5:10 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

The current `non_temporal_threshold` is set to roughly `3/4 * sizeof_L3 /
ncores_per_socket`. This patch updates that value to roughly
`sizeof_L3 / 4`.

The original value (specifically dividing by `ncores_per_socket`) was
chosen to limit the amount of other threads' data a `memcpy`/`memset`
could evict.

Dividing by `ncores_per_socket`, however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in
cases where REP MOVSB is multiple times faster.

Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise could be. As well, modern
machines are able to detect streaming patterns (especially if REP MOVSB
is used) and provide LRU hints to the memory subsystem. This in effect
caps the total amount of eviction at 1/cache_associativity, far below
meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is to REP MOVSB which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% for over
100MB copies (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, REP MOVSB is ~2x faster
up to `sizeof_L3`.

The divisor of `4` is a somewhat arbitrary value. From benchmarks it
seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
such as Broadwell prefer something closer to `8`. This patch is meant
to be followed up by another one to make the divisor cpu-specific, but
in the meantime (and for easier backporting), this patch settles on
`4` as a middle-ground.
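
As a rough sketch of where that follow-up is headed (the helper and
enum names below are hypothetical; only the 2/4/8 divisors come from
the benchmarks described above):

/* Hypothetical sketch of a cpu-specific divisor; not part of this patch.  */
enum example_uarch { EX_BROADWELL, EX_SKYLAKE, EX_ICELAKE, EX_OTHER };

static unsigned long int
example_nt_threshold (enum example_uarch uarch, unsigned long int sizeof_l3)
{
  switch (uarch)
    {
    case EX_SKYLAKE:
    case EX_ICELAKE:
      return sizeof_l3 / 2;	/* Benchmarks suggest a divisor of 2.  */
    case EX_BROADWELL:
      return sizeof_l3 / 8;	/* Older cores prefer closer to 8.  */
    default:
      return sizeof_l3 / 4;	/* Middle-ground default used by this patch.  */
    }
}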

Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available in pdf on the github):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
---
 sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
 1 file changed, 43 insertions(+), 27 deletions(-)

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index ec88945b39..4a1a5423ff 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -407,7 +407,7 @@ handle_zhaoxin (int name)
 }
 
 static void
-get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
+get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
                 long int core)
 {
   unsigned int eax;
@@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
   unsigned int family = cpu_features->basic.family;
   unsigned int model = cpu_features->basic.model;
   long int shared = *shared_ptr;
+  long int shared_per_thread = *shared_per_thread_ptr;
   unsigned int threads = *threads_ptr;
   bool inclusive_cache = true;
   bool support_count_mask = true;
@@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
       /* Try L2 otherwise.  */
       level  = 2;
       shared = core;
+      shared_per_thread = core;
       threads_l2 = 0;
       threads_l3 = -1;
     }
@@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
         }
       else
         {
-intel_bug_no_cache_info:
-          /* Assume that all logical threads share the highest cache
-             level.  */
-          threads
-            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
-	       & 0xff);
-        }
-
-        /* Cap usage of highest cache level to the number of supported
-           threads.  */
-        if (shared > 0 && threads > 0)
-          shared /= threads;
+	intel_bug_no_cache_info:
+	  /* Assume that all logical threads share the highest cache
+	     level.  */
+	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
+		     & 0xff);
+
+	  /* Get per-thread size of highest level cache.  */
+	  if (shared_per_thread > 0 && threads > 0)
+	    shared_per_thread /= threads;
+	}
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
       if (threads_l2 > 0)
-        core /= threads_l2;
+	shared_per_thread += core / threads_l2;
       shared += core;
     }
 
   *shared_ptr = shared;
+  *shared_per_thread_ptr = shared_per_thread;
   *threads_ptr = threads;
 }
 
@@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   /* Find out what brand of processor.  */
   long int data = -1;
   long int shared = -1;
+  long int shared_per_thread = -1;
   long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
@@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
       core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
       shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
+      shared_per_thread = shared;
 
       level1_icache_size
 	= handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
@@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level4_cache_size
 	= handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_zhaoxin)
     {
       data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_amd)
     {
       data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       if (shared <= 0)
         /* No shared L3 cache.  All we have is the L2 cache.  */
 	shared = core;
+
+      if (shared_per_thread <= 0)
+	shared_per_thread = shared;
     }
 
   cpu_features->level1_icache_size = level1_icache_size;
@@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/4 of the
+     size of the chip's cache. For most Intel and AMD processors with
+     an initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using 1/4 of the L3 is meant
+     to estimate the point where non-temporal stores begin outcompeting
+     REP MOVSB, as well as the point past which the write-back to main
+     memory forced by non-temporal stores will have already occurred
+     for the majority of the lines in the copy. Note, concerns about
+     the entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and provides
+     proper LRU hints so that the maximum thrashing is capped at
+     1/associativity. */
+  unsigned long int non_temporal_threshold = shared / 4;
+  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
+     a higher risk of actually thrashing the cache as they don't have a HW LRU
+     hint. As well, their performance in highly parallel situations is
+     noticeably worse.  */
+  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+    non_temporal_threshold = shared_per_thread * 3 / 4;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v8 2/3] x86: Refactor Intel `init_cpu_features`
  2023-05-12  5:10 ` [PATCH v8 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
@ 2023-05-12  5:10   ` Noah Goldstein
  2023-05-12 22:17     ` H.J. Lu
  0 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-05-12  5:10 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

This patch should have no effect on existing functionality.

The current code, which has a single switch for model detection and
setting preferred features, is difficult to follow/extend. The cases
use magic numbers and many microarchitectures are missing. This makes
it difficult to reason about what is implemented so far and/or
how/where to add support for new features.

This patch splits the model detection and preference setting stages so
that CPU preferences can be set based on a complete list of available
microarchitectures, rather than based on model magic numbers.
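
For a quick sense of the shape of the change, a condensed, hypothetical
sketch (the real model tables and preference bits follow in the diff):
model/stepping is first mapped to a named microarch, and tuning is then
keyed off that name instead of raw model numbers.

/* Condensed sketch only; enum and function names here are illustrative.  */
enum example_uarch { EX_ATOM_BONNELL, EX_BIGCORE_NEHALEM, EX_UNKNOWN };

static enum example_uarch
example_get_microarch (unsigned int model, unsigned int stepping)
{
  /* Stepping only matters for a few models (e.g. 0x55, 0x8E, 0x9E).  */
  (void) stepping;
  switch (model)
    {
    case 0x1C:
    case 0x26:
      return EX_ATOM_BONNELL;
    case 0x1A:
    case 0x1E:
      return EX_BIGCORE_NEHALEM;
    default:
      return EX_UNKNOWN;
    }
}

/* Preference setting then becomes:
   switch (example_get_microarch (model, stepping)) { ... }  */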
---
 sysdeps/x86/cpu-features.c | 400 +++++++++++++++++++++++++++++--------
 1 file changed, 316 insertions(+), 84 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 5bff8ec0b4..264d309dd7 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -417,6 +417,218 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
 		     == index_arch_Fast_Copy_Backward)),
 		"Incorrect index_arch_Fast_Unaligned_Load");
 
+
+/* Intel Family-6 microarch list.  */
+enum
+{
+  /* Atom processors.  */
+  INTEL_ATOM_BONNELL,
+  INTEL_ATOM_SILVERMONT,
+  INTEL_ATOM_AIRMONT,
+  INTEL_ATOM_GOLDMONT,
+  INTEL_ATOM_GOLDMONT_PLUS,
+  INTEL_ATOM_SIERRAFOREST,
+  INTEL_ATOM_GRANDRIDGE,
+  INTEL_ATOM_TREMONT,
+
+  /* Bigcore processors.  */
+  INTEL_BIGCORE_MEROM,
+  INTEL_BIGCORE_PENRYN,
+  INTEL_BIGCORE_DUNNINGTON,
+  INTEL_BIGCORE_NEHALEM,
+  INTEL_BIGCORE_WESTMERE,
+  INTEL_BIGCORE_SANDYBRIDGE,
+  INTEL_BIGCORE_IVYBRIDGE,
+  INTEL_BIGCORE_HASWELL,
+  INTEL_BIGCORE_BROADWELL,
+  INTEL_BIGCORE_SKYLAKE,
+  INTEL_BIGCORE_AMBERLAKE,
+  INTEL_BIGCORE_COFFEELAKE,
+  INTEL_BIGCORE_WHISKEYLAKE,
+  INTEL_BIGCORE_KABYLAKE,
+  INTEL_BIGCORE_COMETLAKE,
+  INTEL_BIGCORE_SKYLAKE_AVX512,
+  INTEL_BIGCORE_CANNONLAKE,
+  INTEL_BIGCORE_ICELAKE,
+  INTEL_BIGCORE_TIGERLAKE,
+  INTEL_BIGCORE_ROCKETLAKE,
+  INTEL_BIGCORE_SAPPHIRERAPIDS,
+  INTEL_BIGCORE_RAPTORLAKE,
+  INTEL_BIGCORE_EMERALDRAPIDS,
+  INTEL_BIGCORE_METEORLAKE,
+  INTEL_BIGCORE_LUNARLAKE,
+  INTEL_BIGCORE_ARROWLAKE,
+  INTEL_BIGCORE_GRANITERAPIDS,
+
+  /* Mixed (bigcore + atom SOC).  */
+  INTEL_MIXED_LAKEFIELD,
+  INTEL_MIXED_ALDERLAKE,
+
+  /* KNL.  */
+  INTEL_KNIGHTS_MILL,
+  INTEL_KNIGHTS_LANDING,
+
+  /* Unknown.  */
+  INTEL_UNKNOWN,
+};
+
+static unsigned int
+intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
+{
+  switch (model)
+    {
+    case 0x1C:
+    case 0x26:
+      return INTEL_ATOM_BONNELL;
+    case 0x27:
+    case 0x35:
+    case 0x36:
+      /* Really Saltwell, but Saltwell is just a die shrink of Bonnell
+         (microarchitecturally identical).  */
+      return INTEL_ATOM_BONNELL;
+    case 0x37:
+    case 0x4A:
+    case 0x4D:
+    case 0x5D:
+      return INTEL_ATOM_SILVERMONT;
+    case 0x4C:
+    case 0x5A:
+    case 0x75:
+      return INTEL_ATOM_AIRMONT;
+    case 0x5C:
+    case 0x5F:
+      return INTEL_ATOM_GOLDMONT;
+    case 0x7A:
+      return INTEL_ATOM_GOLDMONT_PLUS;
+    case 0xAF:
+      return INTEL_ATOM_SIERRAFOREST;
+    case 0xB6:
+      return INTEL_ATOM_GRANDRIDGE;
+    case 0x86:
+    case 0x96:
+    case 0x9C:
+      return INTEL_ATOM_TREMONT;
+    case 0x0F:
+    case 0x16:
+      return INTEL_BIGCORE_MEROM;
+    case 0x17:
+      return INTEL_BIGCORE_PENRYN;
+    case 0x1D:
+      return INTEL_BIGCORE_DUNNINGTON;
+    case 0x1A:
+    case 0x1E:
+    case 0x1F:
+    case 0x2E:
+      return INTEL_BIGCORE_NEHALEM;
+    case 0x25:
+    case 0x2C:
+    case 0x2F:
+      return INTEL_BIGCORE_WESTMERE;
+    case 0x2A:
+    case 0x2D:
+      return INTEL_BIGCORE_SANDYBRIDGE;
+    case 0x3A:
+    case 0x3E:
+      return INTEL_BIGCORE_IVYBRIDGE;
+    case 0x3C:
+    case 0x3F:
+    case 0x45:
+    case 0x46:
+      return INTEL_BIGCORE_HASWELL;
+    case 0x3D:
+    case 0x47:
+    case 0x4F:
+    case 0x56:
+      return INTEL_BIGCORE_BROADWELL;
+    case 0x4E:
+    case 0x5E:
+      return INTEL_BIGCORE_SKYLAKE;
+    case 0x8E:
+      switch (stepping)
+	{
+	case 0x09:
+	  return INTEL_BIGCORE_AMBERLAKE;
+	case 0x0A:
+	  return INTEL_BIGCORE_COFFEELAKE;
+	case 0x0B:
+	case 0x0C:
+	  return INTEL_BIGCORE_WHISKEYLAKE;
+	default:
+	  return INTEL_BIGCORE_KABYLAKE;
+	}
+    case 0x9E:
+      switch (stepping)
+	{
+	case 0x0A:
+	case 0x0B:
+	case 0x0C:
+	case 0x0D:
+	  return INTEL_BIGCORE_COFFEELAKE;
+	default:
+	  return INTEL_BIGCORE_KABYLAKE;
+	}
+    case 0xA5:
+    case 0xA6:
+      return INTEL_BIGCORE_COMETLAKE;
+    case 0x66:
+      return INTEL_BIGCORE_CANNONLAKE;
+    case 0x55:
+    /*
+     Stepping = {6, 7}
+        -> Cascadelake
+     Stepping = {11}
+        -> Cooperlake
+     else
+        -> Skylake-avx512
+
+     These are all microarchitecturally identical, so use
+     Skylake-avx512 for all of them.
+     */
+      return INTEL_BIGCORE_SKYLAKE_AVX512;
+    case 0x6A:
+    case 0x6C:
+    case 0x7D:
+    case 0x7E:
+    case 0x9D:
+      return INTEL_BIGCORE_ICELAKE;
+    case 0x8C:
+    case 0x8D:
+      return INTEL_BIGCORE_TIGERLAKE;
+    case 0xA7:
+      return INTEL_BIGCORE_ROCKETLAKE;
+    case 0x8F:
+      return INTEL_BIGCORE_SAPPHIRERAPIDS;
+    case 0xB7:
+    case 0xBA:
+    case 0xBF:
+      return INTEL_BIGCORE_RAPTORLAKE;
+    case 0xCF:
+      return INTEL_BIGCORE_EMERALDRAPIDS;
+    case 0xAA:
+    case 0xAC:
+      return INTEL_BIGCORE_METEORLAKE;
+    case 0xbd:
+      return INTEL_BIGCORE_LUNARLAKE;
+    case 0xc6:
+      return INTEL_BIGCORE_ARROWLAKE;
+    case 0xAD:
+    case 0xAE:
+      return INTEL_BIGCORE_GRANITERAPIDS;
+    case 0x8A:
+      return INTEL_MIXED_LAKEFIELD;
+    case 0x97:
+    case 0x9A:
+    case 0xBE:
+      return INTEL_MIXED_ALDERLAKE;
+    case 0x85:
+      return INTEL_KNIGHTS_MILL;
+    case 0x57:
+      return INTEL_KNIGHTS_LANDING;
+    default:
+      return INTEL_UNKNOWN;
+    }
+}
+
 static inline void
 init_cpu_features (struct cpu_features *cpu_features)
 {
@@ -453,129 +665,149 @@ init_cpu_features (struct cpu_features *cpu_features)
       if (family == 0x06)
 	{
 	  model += extended_model;
-	  switch (model)
+	  unsigned int microarch
+	      = intel_get_fam6_microarch (model, stepping);
+
+	  switch (microarch)
 	    {
-	    case 0x1c:
-	    case 0x26:
-	      /* BSF is slow on Atom.  */
+	      /* Atom / KNL tuning.  */
+	    case INTEL_ATOM_BONNELL:
+	      /* BSF is slow on Bonnell.  */
 	      cpu_features->preferred[index_arch_Slow_BSF]
-		|= bit_arch_Slow_BSF;
+		  |= bit_arch_Slow_BSF;
 	      break;
 
-	    case 0x57:
-	      /* Knights Landing.  Enable Silvermont optimizations.  */
-
-	    case 0x7a:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont Plus.  */
-
-	    case 0x5c:
-	    case 0x5f:
 	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont.  */
+		     on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
+	    case INTEL_ATOM_AIRMONT:
+	    case INTEL_ATOM_SILVERMONT:
+	    case INTEL_ATOM_GOLDMONT:
+	    case INTEL_ATOM_GOLDMONT_PLUS:
 
-	    case 0x4c:
-	    case 0x5a:
-	    case 0x75:
-	      /* Airmont is a die shrink of Silvermont.  */
+          /* Knights Landing.  Enable Silvermont optimizations.  */
+	    case INTEL_KNIGHTS_LANDING:
 
-	    case 0x37:
-	    case 0x4a:
-	    case 0x4d:
-	    case 0x5d:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Silvermont.  */
 	      cpu_features->preferred[index_arch_Fast_Unaligned_Load]
-		|= (bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
 	      break;
 
-	    case 0x86:
-	    case 0x96:
-	    case 0x9c:
+	    case INTEL_ATOM_TREMONT:
 	      /* Enable rep string instructions, unaligned load, unaligned
-	         copy, pminub and avoid SSE 4.2 on Tremont.  */
+		 copy, pminub and avoid SSE 4.2 on Tremont.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
 	      break;
 
+	   /*
+	    Default tuned KNL microarch.
+	    case INTEL_KNIGHTS_MILL:
+        */
+
+	   /*
+	    Default tuned atom microarch.
+	    case INTEL_ATOM_SIERRAFOREST:
+	    case INTEL_ATOM_GRANDRIDGE:
+	   */
+
+	      /* Bigcore/Default Tuning.  */
 	    default:
 	      /* Unknown family 0x06 processors.  Assuming this is one
 		 of Core i3/i5/i7 processors if AVX is available.  */
 	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
 		break;
-	      /* Fall through.  */
-
-	    case 0x1a:
-	    case 0x1e:
-	    case 0x1f:
-	    case 0x25:
-	    case 0x2c:
-	    case 0x2e:
-	    case 0x2f:
+	    case INTEL_BIGCORE_NEHALEM:
+	    case INTEL_BIGCORE_WESTMERE:
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop);
+		  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop);
 	      break;
+
+	   /*
+	    Default tuned Bigcore microarch.
+	    case INTEL_BIGCORE_SANDYBRIDGE:
+	    case INTEL_BIGCORE_IVYBRIDGE:
+	    case INTEL_BIGCORE_HASWELL:
+	    case INTEL_BIGCORE_BROADWELL:
+	    case INTEL_BIGCORE_SKYLAKE:
+	    case INTEL_BIGCORE_AMBERLAKE:
+	    case INTEL_BIGCORE_COFFEELAKE:
+	    case INTEL_BIGCORE_WHISKEYLAKE:
+	    case INTEL_BIGCORE_KABYLAKE:
+	    case INTEL_BIGCORE_COMETLAKE:
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	    case INTEL_BIGCORE_CANNONLAKE:
+	    case INTEL_BIGCORE_ICELAKE:
+	    case INTEL_BIGCORE_TIGERLAKE:
+	    case INTEL_BIGCORE_ROCKETLAKE:
+	    case INTEL_BIGCORE_RAPTORLAKE:
+	    case INTEL_BIGCORE_METEORLAKE:
+	    case INTEL_BIGCORE_LUNARLAKE:
+	    case INTEL_BIGCORE_ARROWLAKE:
+	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
+	    case INTEL_BIGCORE_EMERALDRAPIDS:
+	    case INTEL_BIGCORE_GRANITERAPIDS:
+	    */
+
+	   /*
+	    Default tuned Mixed (bigcore + atom SOC).
+	    case INTEL_MIXED_LAKEFIELD:
+	    case INTEL_MIXED_ALDERLAKE:
+	    */
 	    }
 
-	 /* Disable TSX on some processors to avoid TSX on kernels that
-	    weren't updated with the latest microcode package (which
-	    disables broken feature by default).  */
-	 switch (model)
+	      /* Disable TSX on some processors to avoid TSX on kernels that
+		 weren't updated with the latest microcode package (which
+		 disables broken feature by default).  */
+	  switch (microarch)
 	    {
-	    case 0x55:
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	      /* 0x55 (Skylake-avx512) && stepping <= 5 disable TSX. */
 	      if (stepping <= 5)
 		goto disable_tsx;
 	      break;
-	    case 0x8e:
-	      /* NB: Although the errata documents that for model == 0x8e,
-		 only 0xb stepping or lower are impacted, the intention of
-		 the errata was to disable TSX on all client processors on
-		 all steppings.  Include 0xc stepping which is an Intel
-		 Core i7-8665U, a client mobile processor.  */
-	    case 0x9e:
-	      if (stepping > 0xc)
+
+	    case INTEL_BIGCORE_SKYLAKE:
+	    case INTEL_BIGCORE_AMBERLAKE:
+	    case INTEL_BIGCORE_COFFEELAKE:
+	    case INTEL_BIGCORE_WHISKEYLAKE:
+	    case INTEL_BIGCORE_KABYLAKE:
+		/* NB: Although the errata documents that for model == 0x8e
+		   (skylake client), only 0xb stepping or lower are impacted,
+		   the intention of the errata was to disable TSX on all client
+		   processors on all steppings.  Include 0xc stepping which is
+		   an Intel Core i7-8665U, a client mobile processor.  */
+		if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
 		break;
-	      /* Fall through.  */
-	    case 0x4e:
-	    case 0x5e:
-	      {
+
 		/* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
 		   processors listed in:
 
 https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
 		 */
-disable_tsx:
+	    disable_tsx:
 		CPU_FEATURE_UNSET (cpu_features, HLE);
 		CPU_FEATURE_UNSET (cpu_features, RTM);
 		CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
-	      }
-	      break;
-	    case 0x3f:
-	      /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
-	      if (stepping >= 4)
 		break;
-	      /* Fall through.  */
-	    case 0x3c:
-	    case 0x45:
-	    case 0x46:
-	      /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
-		 with stepping >= 4) to avoid TSX on kernels that weren't
-		 updated with the latest microcode package (which disables
-		 broken feature by default).  */
-	      CPU_FEATURE_UNSET (cpu_features, RTM);
-	      break;
+
+	    case INTEL_BIGCORE_HASWELL:
+		/* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
+		   TSX.  Haswell also include other model numbers that have
+		   working TSX.  */
+		if (model == 0x3f && stepping >= 4)
+		break;
+
+		CPU_FEATURE_UNSET (cpu_features, RTM);
+		break;
 	    }
 	}
 
-- 
2.34.1
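
Below is a compressed, self-contained sketch (not glibc code) of the
decode-then-dispatch structure this patch introduces: one stage maps raw
CPUID model numbers to named microarchitectures, a second stage picks tuning
from those names.  The enum identifiers and tuning strings are invented for
illustration; only the model numbers 0x1C/0x26 (Bonnell) and 0x4E/0x5E
(Skylake) are taken from the table above.

#include <stdio.h>

/* Two-stage sketch: decode_model () turns a raw model number into a named
   microarch, pick_tuning () dispatches on that name instead of on magic
   numbers.  */
enum example_uarch
{
  UARCH_OLD_ATOM,
  UARCH_BIG_CORE,
  UARCH_UNKNOWN
};

static enum example_uarch
decode_model (unsigned int model)
{
  switch (model)
    {
    case 0x1C:
    case 0x26:   /* Bonnell, per the table in the patch.  */
      return UARCH_OLD_ATOM;
    case 0x4E:
    case 0x5E:   /* Skylake client, per the table in the patch.  */
      return UARCH_BIG_CORE;
    default:
      return UARCH_UNKNOWN;
    }
}

static const char *
pick_tuning (enum example_uarch uarch)
{
  switch (uarch)
    {
    case UARCH_OLD_ATOM:
      return "avoid-bsf";   /* stands in for bit_arch_Slow_BSF */
    case UARCH_BIG_CORE:
    default:
      return "default";
    }
}

int
main (void)
{
  /* Prints "avoid-bsf": model 0x1C decodes to UARCH_OLD_ATOM.  */
  printf ("%s\n", pick_tuning (decode_model (0x1C)));
  return 0;
}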


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v6 2/4] x86: Refactor Intel `init_cpu_features`
  2023-05-11 21:36         ` H.J. Lu
@ 2023-05-12  5:11           ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-12  5:11 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Thu, May 11, 2023 at 4:37 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Wed, May 10, 2023 at 4:17 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Wed, May 10, 2023 at 5:14 PM H.J. Lu <hjl.tools@gmail.com> wrote:
> > >
> > > On Tue, May 9, 2023 at 5:34 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > >
> > > > This patch should have no effect on existing functionality.
> > > >
> > > > The current code, which has a single switch for model detection and
> > > > setting preferred features, is difficult to follow/extend. The cases
> > > > use magic numbers and many microarchitectures are missing. This makes
> > > > it difficult to reason about what is implemented so far and/or
> > > > how/where to add support for new features.
> > > >
> > > > This patch splits the model detection and preference setting stages so
> > > > that CPU preferences can be set based on a complete list of available
> > > > microarchitectures, rather than based on model magic numbers.
> > > > ---
> > > >  sysdeps/x86/cpu-features.c | 401 +++++++++++++++++++++++++++++--------
> > > >  1 file changed, 317 insertions(+), 84 deletions(-)
> > > >
> > > > diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> > > > index 5bff8ec0b4..9d433f8144 100644
> > > > --- a/sysdeps/x86/cpu-features.c
> > > > +++ b/sysdeps/x86/cpu-features.c
> > > > @@ -417,6 +417,217 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
> > > >                      == index_arch_Fast_Copy_Backward)),
> > > >                 "Incorrect index_arch_Fast_Unaligned_Load");
> > > >
> > > > +
> > > > +/* Intel Family-6 microarch list.  */
> > > > +enum
> > > > +{
> > > > +  /* Atom processors.  */
> > > > +  INTEL_ATOM_BONNELL,
> > > > +  INTEL_ATOM_SALTWELL,
> > > > +  INTEL_ATOM_SILVERMONT,
> > > > +  INTEL_ATOM_AIRMONT,
> > > > +  INTEL_ATOM_GOLDMONT,
> > > > +  INTEL_ATOM_GOLDMONT_PLUS,
> > > > +  INTEL_ATOM_SIERRAFOREST,
> > > > +  INTEL_ATOM_GRANDRIDGE,
> > > > +  INTEL_ATOM_TREMONT,
> > > > +
> > > > +  /* Bigcore processors.  */
> > > > +  INTEL_BIGCORE_MEROM,
> > > > +  INTEL_BIGCORE_PENRYN,
> > > > +  INTEL_BIGCORE_DUNNINGTON,
> > > > +  INTEL_BIGCORE_NEHALEM,
> > > > +  INTEL_BIGCORE_WESTMERE,
> > > > +  INTEL_BIGCORE_SANDYBRIDGE,
> > > > +  INTEL_BIGCORE_IVYBRIDGE,
> > > > +  INTEL_BIGCORE_HASWELL,
> > > > +  INTEL_BIGCORE_BROADWELL,
> > > > +  INTEL_BIGCORE_SKYLAKE,
> > > > +  INTEL_BIGCORE_AMBERLAKE,
> > > > +  INTEL_BIGCORE_COFFEELAKE,
> > > > +  INTEL_BIGCORE_WHISKEYLAKE,
> > > > +  INTEL_BIGCORE_KABYLAKE,
> > > > +  INTEL_BIGCORE_COMETLAKE,
> > > > +  INTEL_BIGCORE_SKYLAKE_AVX512,
> > > > +  INTEL_BIGCORE_CANNONLAKE,
> > > > +  INTEL_BIGCORE_CASCADELAKE,
> > > > +  INTEL_BIGCORE_COOPERLAKE,
> > > > +  INTEL_BIGCORE_ICELAKE,
> > > > +  INTEL_BIGCORE_TIGERLAKE,
> > > > +  INTEL_BIGCORE_ROCKETLAKE,
> > > > +  INTEL_BIGCORE_SAPPHIRERAPIDS,
> > > > +  INTEL_BIGCORE_RAPTORLAKE,
> > > > +  INTEL_BIGCORE_EMERALDRAPIDS,
> > > > +  INTEL_BIGCORE_METEORLAKE,
> > > > +  INTEL_BIGCORE_LUNARLAKE,
> > > > +  INTEL_BIGCORE_ARROWLAKE,
> > > > +  INTEL_BIGCORE_GRANITERAPIDS,
> > > > +
> > > > +  /* Mixed (bigcore + atom SOC).  */
> > > > +  INTEL_MIXED_LAKEFIELD,
> > > > +  INTEL_MIXED_ALDERLAKE,
> > > > +
> > > > +  /* KNL.  */
> > > > +  INTEL_KNIGHTS_MILL,
> > > > +  INTEL_KNIGHTS_LANDING,
> > > > +
> > > > +  /* Unknown.  */
> > > > +  INTEL_UNKNOWN,
> > > > +};
> > > > +
> > > > +static unsigned int
> > > > +intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
> > > > +{
> > > > +  switch (model)
> > > > +    {
> > > > +    case 0x1C:
> > > > +    case 0x26:
> > > > +      return INTEL_ATOM_BONNELL;
> > > > +    case 0x27:
> > > > +    case 0x35:
> > > > +    case 0x36:
> > > > +      return INTEL_ATOM_SALTWELL;
> > > > +    case 0x37:
> > > > +    case 0x4A:
> > > > +    case 0x4D:
> > > > +    case 0x5D:
> > > > +      return INTEL_ATOM_SILVERMONT;
> > > > +    case 0x4C:
> > > > +    case 0x5A:
> > > > +    case 0x75:
> > > > +      return INTEL_ATOM_AIRMONT;
> > > > +    case 0x5C:
> > > > +    case 0x5F:
> > > > +      return INTEL_ATOM_GOLDMONT;
> > > > +    case 0x7A:
> > > > +      return INTEL_ATOM_GOLDMONT_PLUS;
> > > > +    case 0xAF:
> > > > +      return INTEL_ATOM_SIERRAFOREST;
> > > > +    case 0xB6:
> > > > +      return INTEL_ATOM_GRANDRIDGE;
> > > > +    case 0x86:
> > > > +    case 0x96:
> > > > +    case 0x9C:
> > > > +      return INTEL_ATOM_TREMONT;
> > > > +    case 0x0F:
> > > > +    case 0x16:
> > > > +      return INTEL_BIGCORE_MEROM;
> > > > +    case 0x17:
> > > > +      return INTEL_BIGCORE_PENRYN;
> > > > +    case 0x1D:
> > > > +      return INTEL_BIGCORE_DUNNINGTON;
> > > > +    case 0x1A:
> > > > +    case 0x1E:
> > > > +    case 0x1F:
> > > > +    case 0x2E:
> > > > +      return INTEL_BIGCORE_NEHALEM;
> > > > +    case 0x25:
> > > > +    case 0x2C:
> > > > +    case 0x2F:
> > > > +      return INTEL_BIGCORE_WESTMERE;
> > > > +    case 0x2A:
> > > > +    case 0x2D:
> > > > +      return INTEL_BIGCORE_SANDYBRIDGE;
> > > > +    case 0x3A:
> > > > +    case 0x3E:
> > > > +      return INTEL_BIGCORE_IVYBRIDGE;
> > > > +    case 0x3C:
> > > > +    case 0x3F:
> > > > +    case 0x45:
> > > > +    case 0x46:
> > > > +      return INTEL_BIGCORE_HASWELL;
> > > > +    case 0x3D:
> > > > +    case 0x47:
> > > > +    case 0x4F:
> > > > +    case 0x56:
> > > > +      return INTEL_BIGCORE_BROADWELL;
> > > > +    case 0x4E:
> > > > +    case 0x5E:
> > > > +      return INTEL_BIGCORE_SKYLAKE;
> > > > +    case 0x8E:
> > > > +      switch (stepping)
> > > > +       {
> > > > +       case 0x09:
> > > > +         return INTEL_BIGCORE_AMBERLAKE;
> > > > +       case 0x0A:
> > > > +         return INTEL_BIGCORE_COFFEELAKE;
> > > > +       case 0x0B:
> > > > +       case 0x0C:
> > > > +         return INTEL_BIGCORE_WHISKEYLAKE;
> > > > +       default:
> > > > +         return INTEL_BIGCORE_KABYLAKE;
> > > > +       }
> > > > +    case 0x9E:
> > > > +      switch (stepping)
> > > > +       {
> > > > +       case 0x0A:
> > > > +       case 0x0B:
> > > > +       case 0x0C:
> > > > +       case 0x0D:
> > > > +         return INTEL_BIGCORE_COFFEELAKE;
> > > > +       default:
> > > > +         return INTEL_BIGCORE_KABYLAKE;
> > > > +       }
> > > > +    case 0xA5:
> > > > +    case 0xA6:
> > > > +      return INTEL_BIGCORE_COMETLAKE;
> > >
> > > For our purpose, all these Skylake derived CPUs can
> > > be considered Skylake.
> > >
> > > > +    case 0x66:
> > > > +      return INTEL_BIGCORE_CANNONLAKE;
> > > > +    case 0x55:
> > > > +      switch (stepping)
> > > > +       {
> > > > +       case 0x06:
> > > > +       case 0x07:
> > > > +         return INTEL_BIGCORE_CASCADELAKE;
> > > > +       case 0x0b:
> > > > +         return INTEL_BIGCORE_COOPERLAKE;
> > > > +       default:
> > > > +         return INTEL_BIGCORE_SKYLAKE_AVX512;
> > > > +       }
> > >
> > > All these can be considered as Skylake server.
> > Preference is to keep as is. I think it's clearer to have
> > extra detail in this function so a reader is never left thinking
> > "why isn't this case handled". As well, the cost of distinguishing
> > seems very low/none, whereas in the future, if there is a need
> > to distinguish, having it already prepared seems somewhat
> > valuable.
>
> It serves only for documentation purposes in glibc.  We can
> document them in comments or use "#if 0" to exclude them.
>

Okay. Moved to comment (just b.c `if 0` is ugly imo).
> > >
> > > > +    case 0x6A:
> > > > +    case 0x6C:
> > > > +    case 0x7D:
> > > > +    case 0x7E:
> > > > +    case 0x9D:
> > > > +      return INTEL_BIGCORE_ICELAKE;
> > > > +    case 0x8C:
> > > > +    case 0x8D:
> > > > +      return INTEL_BIGCORE_TIGERLAKE;
> > > > +    case 0xA7:
> > > > +      return INTEL_BIGCORE_ROCKETLAKE;
> > > > +    case 0x8F:
> > > > +      return INTEL_BIGCORE_SAPPHIRERAPIDS;
> > > > +    case 0xB7:
> > > > +    case 0xBA:
> > > > +    case 0xBF:
> > > > +      return INTEL_BIGCORE_RAPTORLAKE;
> > > > +    case 0xCF:
> > > > +      return INTEL_BIGCORE_EMERALDRAPIDS;
> > > > +    case 0xAA:
> > > > +    case 0xAC:
> > > > +      return INTEL_BIGCORE_METEORLAKE;
> > > > +    case 0xbd:
> > > > +      return INTEL_BIGCORE_LUNARLAKE;
> > > > +    case 0xc6:
> > > > +      return INTEL_BIGCORE_ARROWLAKE;
> > > > +    case 0xAD:
> > > > +    case 0xAE:
> > > > +      return INTEL_BIGCORE_GRANITERAPIDS;
> > > > +    case 0x8A:
> > > > +      return INTEL_MIXED_LAKEFIELD;
> > > > +    case 0x97:
> > > > +    case 0x9A:
> > > > +    case 0xBE:
> > > > +      return INTEL_MIXED_ALDERLAKE;
> > > > +    case 0x85:
> > > > +      return INTEL_KNIGHTS_MILL;
> > > > +    case 0x57:
> > > > +      return INTEL_KNIGHTS_LANDING;
> > > > +    default:
> > > > +      return INTEL_UNKNOWN;
> > > > +    }
> > > > +}
> > > > +
> > > >  static inline void
> > > >  init_cpu_features (struct cpu_features *cpu_features)
> > > >  {
> > > > @@ -453,129 +664,151 @@ init_cpu_features (struct cpu_features *cpu_features)
> > > >        if (family == 0x06)
> > > >         {
> > > >           model += extended_model;
> > > > -         switch (model)
> > > > +         unsigned int microarch
> > > > +             = intel_get_fam6_microarch (model, stepping);
> > > > +
> > > > +         switch (microarch)
> > > >             {
> > > > -           case 0x1c:
> > > > -           case 0x26:
> > > > -             /* BSF is slow on Atom.  */
> > > > +             /* Atom / KNL tuning.  */
> > > > +           case INTEL_ATOM_BONNELL:
> > > > +             /* BSF is slow on Bonnell.  */
> > > >               cpu_features->preferred[index_arch_Slow_BSF]
> > > > -               |= bit_arch_Slow_BSF;
> > > > +                 |= bit_arch_Slow_BSF;
> > > >               break;
> > > >
> > > > -           case 0x57:
> > > > -             /* Knights Landing.  Enable Silvermont optimizations.  */
> > > > -
> > > > -           case 0x7a:
> > > > -             /* Unaligned load versions are faster than SSSE3
> > > > -                on Goldmont Plus.  */
> > > > -
> > > > -           case 0x5c:
> > > > -           case 0x5f:
> > > >               /* Unaligned load versions are faster than SSSE3
> > > > -                on Goldmont.  */
> > > > +                    on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
> > > > +           case INTEL_ATOM_AIRMONT:
> > > > +           case INTEL_ATOM_SILVERMONT:
> > > > +           case INTEL_ATOM_GOLDMONT:
> > > > +           case INTEL_ATOM_GOLDMONT_PLUS:
> > > >
> > > > -           case 0x4c:
> > > > -           case 0x5a:
> > > > -           case 0x75:
> > > > -             /* Airmont is a die shrink of Silvermont.  */
> > > > +            /* Knights Landing.  Enable Silvermont optimizations.  */
> > > > +           case INTEL_KNIGHTS_LANDING:
> > > >
> > > > -           case 0x37:
> > > > -           case 0x4a:
> > > > -           case 0x4d:
> > > > -           case 0x5d:
> > > > -             /* Unaligned load versions are faster than SSSE3
> > > > -                on Silvermont.  */
> > > >               cpu_features->preferred[index_arch_Fast_Unaligned_Load]
> > > > -               |= (bit_arch_Fast_Unaligned_Load
> > > > -                   | bit_arch_Fast_Unaligned_Copy
> > > > -                   | bit_arch_Prefer_PMINUB_for_stringop
> > > > -                   | bit_arch_Slow_SSE4_2);
> > > > +                 |= (bit_arch_Fast_Unaligned_Load
> > > > +                     | bit_arch_Fast_Unaligned_Copy
> > > > +                     | bit_arch_Prefer_PMINUB_for_stringop
> > > > +                     | bit_arch_Slow_SSE4_2);
> > > >               break;
> > > >
> > > > -           case 0x86:
> > > > -           case 0x96:
> > > > -           case 0x9c:
> > > > +           case INTEL_ATOM_TREMONT:
> > > >               /* Enable rep string instructions, unaligned load, unaligned
> > > > -                copy, pminub and avoid SSE 4.2 on Tremont.  */
> > > > +                copy, pminub and avoid SSE 4.2 on Tremont.  */
> > > >               cpu_features->preferred[index_arch_Fast_Rep_String]
> > > > -               |= (bit_arch_Fast_Rep_String
> > > > -                   | bit_arch_Fast_Unaligned_Load
> > > > -                   | bit_arch_Fast_Unaligned_Copy
> > > > -                   | bit_arch_Prefer_PMINUB_for_stringop
> > > > -                   | bit_arch_Slow_SSE4_2);
> > > > +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> > > > +                     | bit_arch_Fast_Unaligned_Copy
> > > > +                     | bit_arch_Prefer_PMINUB_for_stringop
> > > > +                     | bit_arch_Slow_SSE4_2);
> > > >               break;
> > > >
> > > > +             /* Default tuned KNL microarch.  */
> > > > +           case INTEL_KNIGHTS_MILL:
> > > > +             goto default_tuning;
> > > > +             /* Default tuned atom microarch.  */
> > > > +           case INTEL_ATOM_SIERRAFOREST:
> > > > +           case INTEL_ATOM_GRANDRIDGE:
> > > > +           case INTEL_ATOM_SALTWELL:
> > >
> > > Move Salwell to Bonnell.
> >
> > We were only matching models 0x1c and 0x26 for the BSF
> > optimization before. I would prefer to keep this patch purely
> > a refactor with no change to functionality. We already have
> > a follow-up patch to move saltwell->bonnell.
>
> There is no need for it.
>
Okay. Done in V8.
> > >
> > > > +             goto default_tuning;
> > > > +
> > > > +             /* Bigcore Tuning.  */
> > > > +           case INTEL_UNKNOWN:
> > > >             default:
> > > > +           default_tuning:
> > > >               /* Unknown family 0x06 processors.  Assuming this is one
> > > >                  of Core i3/i5/i7 processors if AVX is available.  */
> > > >               if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
> > > >                 break;
> > > > -             /* Fall through.  */
> > > > -
> > > > -           case 0x1a:
> > > > -           case 0x1e:
> > > > -           case 0x1f:
> > > > -           case 0x25:
> > > > -           case 0x2c:
> > > > -           case 0x2e:
> > > > -           case 0x2f:
> > > > +           case INTEL_BIGCORE_NEHALEM:
> > > > +           case INTEL_BIGCORE_WESTMERE:
> > > >               /* Rep string instructions, unaligned load, unaligned copy,
> > > >                  and pminub are fast on Intel Core i3, i5 and i7.  */
> > > >               cpu_features->preferred[index_arch_Fast_Rep_String]
> > > > -               |= (bit_arch_Fast_Rep_String
> > > > -                   | bit_arch_Fast_Unaligned_Load
> > > > -                   | bit_arch_Fast_Unaligned_Copy
> > > > -                   | bit_arch_Prefer_PMINUB_for_stringop);
> > > > +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> > > > +                     | bit_arch_Fast_Unaligned_Copy
> > > > +                     | bit_arch_Prefer_PMINUB_for_stringop);
> > > >               break;
> > > > +
> > > > +             /* Default tuned Bigcore microarch.  */
> > > > +           case INTEL_BIGCORE_SANDYBRIDGE:
> > > > +           case INTEL_BIGCORE_IVYBRIDGE:
> > > > +           case INTEL_BIGCORE_HASWELL:
> > > > +           case INTEL_BIGCORE_BROADWELL:
> > > > +           case INTEL_BIGCORE_SKYLAKE:
> > > > +           case INTEL_BIGCORE_AMBERLAKE:
> > > > +           case INTEL_BIGCORE_COFFEELAKE:
> > > > +           case INTEL_BIGCORE_WHISKEYLAKE:
> > > > +           case INTEL_BIGCORE_KABYLAKE:
> > > > +           case INTEL_BIGCORE_COMETLAKE:
> > > > +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> > > > +           case INTEL_BIGCORE_CASCADELAKE:
> > > > +           case INTEL_BIGCORE_COOPERLAKE:
> > > > +           case INTEL_BIGCORE_CANNONLAKE:
> > > > +           case INTEL_BIGCORE_ICELAKE:
> > > > +           case INTEL_BIGCORE_TIGERLAKE:
> > > > +           case INTEL_BIGCORE_ROCKETLAKE:
> > > > +           case INTEL_BIGCORE_RAPTORLAKE:
> > > > +           case INTEL_BIGCORE_METEORLAKE:
> > > > +           case INTEL_BIGCORE_LUNARLAKE:
> > > > +           case INTEL_BIGCORE_ARROWLAKE:
> > > > +           case INTEL_BIGCORE_SAPPHIRERAPIDS:
> > > > +           case INTEL_BIGCORE_EMERALDRAPIDS:
> > > > +           case INTEL_BIGCORE_GRANITERAPIDS:
> > > > +             goto default_tuning;
> > > > +
> > > > +           /* Default tuned Mixed (bigcore + atom SOC).  */
> > > > +           case INTEL_MIXED_LAKEFIELD:
> > > > +           case INTEL_MIXED_ALDERLAKE:
> > > > +             goto default_tuning;
>
> No need for "goto default_tuning;".  The default case should
> cover them.   If we want to document them, they can be put
> in comments or "#if 0".
>

Done.
> > > >             }
> > > >
> > > > -        /* Disable TSX on some processors to avoid TSX on kernels that
> > > > -           weren't updated with the latest microcode package (which
> > > > -           disables broken feature by default).  */
> > > > -        switch (model)
> > > > +             /* Disable TSX on some processors to avoid TSX on kernels that
> > > > +                weren't updated with the latest microcode package (which
> > > > +                disables broken feature by default).  */
> > > > +         switch (microarch)
> > > >             {
> > > > -           case 0x55:
> > > > +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> > > > +             /* 0x55 && stepping <= 5 is SKYLAKE_AVX512. Cascadelake and
> > > > +                Cooperlake also have model 0x55 but stepping 5/6 and 11
> > > > +                respectively so double check the stepping to be safe. */
> > > >               if (stepping <= 5)
> > > >                 goto disable_tsx;
> > > >               break;
> > > > -           case 0x8e:
> > > > -             /* NB: Although the errata documents that for model == 0x8e,
> > > > -                only 0xb stepping or lower are impacted, the intention of
> > > > -                the errata was to disable TSX on all client processors on
> > > > -                all steppings.  Include 0xc stepping which is an Intel
> > > > -                Core i7-8665U, a client mobile processor.  */
> > > > -           case 0x9e:
> > > > -             if (stepping > 0xc)
> > > > +
> > > > +           case INTEL_BIGCORE_SKYLAKE:
> > > > +           case INTEL_BIGCORE_AMBERLAKE:
> > > > +           case INTEL_BIGCORE_COFFEELAKE:
> > > > +           case INTEL_BIGCORE_WHISKEYLAKE:
> > > > +           case INTEL_BIGCORE_KABYLAKE:
> > > > +               /* NB: Although the errata documents that for model == 0x8e
> > > > +                  (skylake client), only 0xb stepping or lower are impacted,
> > > > +                  the intention of the errata was to disable TSX on all client
> > > > +                  processors on all steppings.  Include 0xc stepping which is
> > > > +                  an Intel Core i7-8665U, a client mobile processor.  */
> > > > +               if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
> > > >                 break;
> > > > -             /* Fall through.  */
> > > > -           case 0x4e:
> > > > -           case 0x5e:
> > > > -             {
> > > > +
> > > >                 /* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
> > > >                    processors listed in:
> > > >
> > > >  https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
> > > >                  */
> > > > -disable_tsx:
> > > > +           disable_tsx:
> > > >                 CPU_FEATURE_UNSET (cpu_features, HLE);
> > > >                 CPU_FEATURE_UNSET (cpu_features, RTM);
> > > >                 CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
> > > > -             }
> > > > -             break;
> > > > -           case 0x3f:
> > > > -             /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
> > > > -             if (stepping >= 4)
> > > >                 break;
> > > > -             /* Fall through.  */
> > > > -           case 0x3c:
> > > > -           case 0x45:
> > > > -           case 0x46:
> > > > -             /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
> > > > -                with stepping >= 4) to avoid TSX on kernels that weren't
> > > > -                updated with the latest microcode package (which disables
> > > > -                broken feature by default).  */
> > > > -             CPU_FEATURE_UNSET (cpu_features, RTM);
> > > > -             break;
> > > > +
> > > > +           case INTEL_BIGCORE_HASWELL:
> > > > +               /* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
> > > > +                  TSX.  Haswell also include other model numbers that have
> > > > +                  working TSX.  */
> > > > +               if (model == 0x3f && stepping >= 4)
> > > > +               break;
> > > > +
> > > > +               CPU_FEATURE_UNSET (cpu_features, RTM);
> > > > +               break;
> > > >             }
> > > >         }
> > > >
> > > > --
> > > > 2.34.1
> > > >
> > >
> > >
> > > --
> > > H.J.
>
>
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v7 4/4] x86: Tune 'Saltwell' microarch the same way as 'Bonnell'
  2023-05-10 22:12   ` [PATCH v7 4/4] x86: Tune 'Saltwell' microarch the same way as 'Bonnell' Noah Goldstein
@ 2023-05-12  5:12     ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-12  5:12 UTC (permalink / raw)
  To: libc-alpha; +Cc: hjl.tools, carlos

On Wed, May 10, 2023 at 5:12 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> Saltwell is just a die shrink of Bonnell, so the same
> micro-architectural optimization preferences apply.
> ---
>  sysdeps/x86/cpu-features.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> index 4cc1cd9fed..0fb02d2f39 100644
> --- a/sysdeps/x86/cpu-features.c
> +++ b/sysdeps/x86/cpu-features.c
> @@ -672,7 +672,8 @@ init_cpu_features (struct cpu_features *cpu_features)
>             {
>               /* Atom / KNL tuning.  */
>             case INTEL_ATOM_BONNELL:
> -             /* BSF is slow on Bonnell.  */
> +           case INTEL_ATOM_SALTWELL:
> +             /* BSF is slow on Bonnell/Saltwell.  */
>               cpu_features->preferred[index_arch_Slow_BSF]
>                   |= bit_arch_Slow_BSF;
>               break;
> @@ -710,7 +711,6 @@ init_cpu_features (struct cpu_features *cpu_features)
>               /* Default tuned atom microarch.  */
>             case INTEL_ATOM_SIERRAFOREST:
>             case INTEL_ATOM_GRANDRIDGE:
> -           case INTEL_ATOM_SALTWELL:
>               goto default_tuning;
>
>               /* Bigcore Tuning.  */
> --
> 2.34.1
>

Abandoning this patch. It's no longer needed as we now just use the same
definition for Bonnell and Saltwell.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v8 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific
  2023-04-24  5:03 [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` Noah Goldstein
                   ` (7 preceding siblings ...)
  2023-05-12  5:10 ` [PATCH v8 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
@ 2023-05-12 22:03 ` Noah Goldstein
  2023-05-13  5:19 ` [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-12 22:03 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Different systems prefer different divisors.

From benchmarks[1] so far the following divisors have been found:
    ICX     : 2
    SKX     : 2
    BWD     : 8

For Intel, we are generalizing that BWD and older prefer 8 as a
divisor, and SKL and newer prefer 2. This number can be further tuned
as benchmarks are run.

[1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
---
 sysdeps/x86/cpu-features.c         | 27 +++++++++++++++++--------
 sysdeps/x86/dl-cacheinfo.h         | 32 ++++++++++++++++++------------
 sysdeps/x86/include/cpu-features.h |  3 +++
 3 files changed, 41 insertions(+), 21 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 264d309dd7..3ec7e6f2df 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -638,6 +638,7 @@ init_cpu_features (struct cpu_features *cpu_features)
   unsigned int stepping = 0;
   enum cpu_features_kind kind;
 
+  cpu_features->cachesize_non_temporal_divisor = 4;
 #if !HAS_CPUID
   if (__get_cpuid_max (0, 0) == 0)
     {
@@ -717,12 +718,13 @@ init_cpu_features (struct cpu_features *cpu_features)
 
 	      /* Bigcore/Default Tuning.  */
 	    default:
+	    default_tuning:
 	      /* Unknown family 0x06 processors.  Assuming this is one
 		 of Core i3/i5/i7 processors if AVX is available.  */
 	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
 		break;
-	    case INTEL_BIGCORE_NEHALEM:
-	    case INTEL_BIGCORE_WESTMERE:
+
+	    enable_modern_features:
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
@@ -731,12 +733,20 @@ init_cpu_features (struct cpu_features *cpu_features)
 		      | bit_arch_Prefer_PMINUB_for_stringop);
 	      break;
 
-	   /*
-	    Default tuned Bigcore microarch.
+	    case INTEL_BIGCORE_NEHALEM:
+	    case INTEL_BIGCORE_WESTMERE:
+	      /* Older CPUs prefer non-temporal stores at lower threshold.  */
+	      cpu_features->cachesize_non_temporal_divisor = 8;
+	      goto enable_modern_features;
+
+	      /* Default tuned Bigcore microarch.  */
 	    case INTEL_BIGCORE_SANDYBRIDGE:
 	    case INTEL_BIGCORE_IVYBRIDGE:
 	    case INTEL_BIGCORE_HASWELL:
 	    case INTEL_BIGCORE_BROADWELL:
+	      cpu_features->cachesize_non_temporal_divisor = 8;
+	      goto default_tuning;
+
 	    case INTEL_BIGCORE_SKYLAKE:
 	    case INTEL_BIGCORE_AMBERLAKE:
 	    case INTEL_BIGCORE_COFFEELAKE:
@@ -755,13 +765,14 @@ init_cpu_features (struct cpu_features *cpu_features)
 	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
 	    case INTEL_BIGCORE_EMERALDRAPIDS:
 	    case INTEL_BIGCORE_GRANITERAPIDS:
-	    */
+	      cpu_features->cachesize_non_temporal_divisor = 2;
+	      goto default_tuning;
 
-	   /*
-	    Default tuned Mixed (bigcore + atom SOC).
+	      /* Default tuned Mixed (bigcore + atom SOC). */
 	    case INTEL_MIXED_LAKEFIELD:
 	    case INTEL_MIXED_ALDERLAKE:
-	    */
+	      cpu_features->cachesize_non_temporal_divisor = 2;
+	      goto default_tuning;
 	    }
 
 	      /* Disable TSX on some processors to avoid TSX on kernels that
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index 4a1a5423ff..864b00a521 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -738,19 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 1/4 of size
-     of the chip's cache. For most Intel and AMD processors with an
-     initial release date between 2017 and 2023, a thread's typical
-     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
-     estimate the point where non-temporal stores begin outcompeting
-     REP MOVSB. As well, it is the point where the fact that non-temporal
-     stores are forced back to main memory would have already occurred for
-     the majority of the lines in the copy. Note, concerns about the
-     entire L3 cache being evicted by the copy are mostly alleviated
-     by the fact that modern HW detects streaming patterns and
-     provides proper LRU hints so that the maximum thrashing is
-     capped at 1/associativity. */
-  unsigned long int non_temporal_threshold = shared / 4;
+  unsigned long int cachesize_non_temporal_divisor
+      = cpu_features->cachesize_non_temporal_divisor;
+  if (cachesize_non_temporal_divisor <= 0)
+    cachesize_non_temporal_divisor = 4;
+
+  /* The default setting for the non_temporal threshold is [1/2, 1/8] of the
+     size of the chip's cache (depending on `cachesize_non_temporal_divisor`,
+     which is microarch specific. The default is 1/4). For most Intel and AMD
+     processors with an initial release date between 2017 and 2023, a thread's
+     typical share of the cache is from 18-64MB. Using a reasonable size
+     fraction of L3 is meant to estimate the point where non-temporal stores
+     begin outcompeting REP MOVSB. As well, it is the point where the fact that
+     non-temporal stores are forced back to main memory would have already
+     occurred for the majority of the lines in the copy. Note, concerns about
+     the entire L3 cache being evicted by the copy are mostly alleviated by the
+     fact that modern HW detects streaming patterns and provides proper LRU
+     hints so that the maximum thrashing is capped at 1/associativity. */
+  unsigned long int non_temporal_threshold
+      = shared / cachesize_non_temporal_divisor;
   /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
      a higher risk of actually thrashing the cache as they don't have a HW LRU
      hint. As well, their performance in highly parallel situations is
diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
index 40b8129d6a..f5b9dd54fe 100644
--- a/sysdeps/x86/include/cpu-features.h
+++ b/sysdeps/x86/include/cpu-features.h
@@ -915,6 +915,9 @@ struct cpu_features
   unsigned long int shared_cache_size;
   /* Threshold to use non temporal store.  */
   unsigned long int non_temporal_threshold;
+  /* When no user non_temporal_threshold is specified, we default to
+     cachesize / cachesize_non_temporal_divisor.  */
+  unsigned long int cachesize_non_temporal_divisor;
   /* Threshold to use "rep movsb".  */
   unsigned long int rep_movsb_threshold;
   /* Threshold to stop using "rep movsb".  */
-- 
2.34.1
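
To make the divisor's effect on the default threshold concrete, here is a
small standalone example (not glibc code); the 32 MB L3 size is an assumption
chosen only for the arithmetic:

#include <stdio.h>

int
main (void)
{
  unsigned long int shared = 32UL * 1024 * 1024;  /* assumed 32 MB L3 */
  unsigned long int divisors[] = { 2, 4, 8 };     /* SKL and newer / default / BWD and older */

  /* Prints 16 MiB, 8 MiB and 4 MiB respectively.  */
  for (int i = 0; i < 3; i++)
    printf ("divisor %lu -> default threshold %lu MiB\n", divisors[i],
            shared / divisors[i] / (1024 * 1024));
  return 0;
}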


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v8 2/3] x86: Refactor Intel `init_cpu_features`
  2023-05-12  5:10   ` [PATCH v8 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
@ 2023-05-12 22:17     ` H.J. Lu
  2023-05-13  5:18       ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: H.J. Lu @ 2023-05-12 22:17 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Thu, May 11, 2023 at 10:11 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> This patch should have no effect on existing functionality.
>
> The current code, which has a single switch for model detection and
> setting preferred features, is difficult to follow/extend. The cases
> use magic numbers and many microarchitectures are missing. This makes
> it difficult to reason about what is implemented so far and/or
> how/where to add support for new features.
>
> This patch splits the model detection and preference setting stages so
> that CPU preferences can be set based on a complete list of available
> microarchitectures, rather than based on model magic numbers.
> ---
>  sysdeps/x86/cpu-features.c | 400 +++++++++++++++++++++++++++++--------
>  1 file changed, 316 insertions(+), 84 deletions(-)
>
> diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> index 5bff8ec0b4..264d309dd7 100644
> --- a/sysdeps/x86/cpu-features.c
> +++ b/sysdeps/x86/cpu-features.c
> @@ -417,6 +417,218 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
>                      == index_arch_Fast_Copy_Backward)),
>                 "Incorrect index_arch_Fast_Unaligned_Load");
>
> +
> +/* Intel Family-6 microarch list.  */
> +enum
> +{
> +  /* Atom processors.  */
> +  INTEL_ATOM_BONNELL,
> +  INTEL_ATOM_SILVERMONT,
> +  INTEL_ATOM_AIRMONT,
> +  INTEL_ATOM_GOLDMONT,
> +  INTEL_ATOM_GOLDMONT_PLUS,
> +  INTEL_ATOM_SIERRAFOREST,
> +  INTEL_ATOM_GRANDRIDGE,
> +  INTEL_ATOM_TREMONT,
> +
> +  /* Bigcore processors.  */
> +  INTEL_BIGCORE_MEROM,
> +  INTEL_BIGCORE_PENRYN,
> +  INTEL_BIGCORE_DUNNINGTON,
> +  INTEL_BIGCORE_NEHALEM,
> +  INTEL_BIGCORE_WESTMERE,
> +  INTEL_BIGCORE_SANDYBRIDGE,
> +  INTEL_BIGCORE_IVYBRIDGE,
> +  INTEL_BIGCORE_HASWELL,
> +  INTEL_BIGCORE_BROADWELL,
> +  INTEL_BIGCORE_SKYLAKE,
> +  INTEL_BIGCORE_AMBERLAKE,
> +  INTEL_BIGCORE_COFFEELAKE,
> +  INTEL_BIGCORE_WHISKEYLAKE,
> +  INTEL_BIGCORE_KABYLAKE,
> +  INTEL_BIGCORE_COMETLAKE,
> +  INTEL_BIGCORE_SKYLAKE_AVX512,
> +  INTEL_BIGCORE_CANNONLAKE,
> +  INTEL_BIGCORE_ICELAKE,
> +  INTEL_BIGCORE_TIGERLAKE,
> +  INTEL_BIGCORE_ROCKETLAKE,
> +  INTEL_BIGCORE_SAPPHIRERAPIDS,
> +  INTEL_BIGCORE_RAPTORLAKE,
> +  INTEL_BIGCORE_EMERALDRAPIDS,
> +  INTEL_BIGCORE_METEORLAKE,
> +  INTEL_BIGCORE_LUNARLAKE,
> +  INTEL_BIGCORE_ARROWLAKE,
> +  INTEL_BIGCORE_GRANITERAPIDS,
> +
> +  /* Mixed (bigcore + atom SOC).  */
> +  INTEL_MIXED_LAKEFIELD,
> +  INTEL_MIXED_ALDERLAKE,
> +
> +  /* KNL.  */
> +  INTEL_KNIGHTS_MILL,
> +  INTEL_KNIGHTS_LANDING,
> +
> +  /* Unknown.  */
> +  INTEL_UNKNOWN,
> +};
> +
> +static unsigned int
> +intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
> +{
> +  switch (model)
> +    {
> +    case 0x1C:
> +    case 0x26:
> +      return INTEL_ATOM_BONNELL;
> +    case 0x27:
> +    case 0x35:
> +    case 0x36:
> +      /* Really Saltwell, but Saltwell is just a die shrink of Bonnell
> +         (microarchitecturally identical).  */
> +      return INTEL_ATOM_BONNELL;
> +    case 0x37:
> +    case 0x4A:
> +    case 0x4D:
> +    case 0x5D:
> +      return INTEL_ATOM_SILVERMONT;
> +    case 0x4C:
> +    case 0x5A:
> +    case 0x75:
> +      return INTEL_ATOM_AIRMONT;
> +    case 0x5C:
> +    case 0x5F:
> +      return INTEL_ATOM_GOLDMONT;
> +    case 0x7A:
> +      return INTEL_ATOM_GOLDMONT_PLUS;
> +    case 0xAF:
> +      return INTEL_ATOM_SIERRAFOREST;
> +    case 0xB6:
> +      return INTEL_ATOM_GRANDRIDGE;
> +    case 0x86:
> +    case 0x96:
> +    case 0x9C:
> +      return INTEL_ATOM_TREMONT;
> +    case 0x0F:
> +    case 0x16:
> +      return INTEL_BIGCORE_MEROM;
> +    case 0x17:
> +      return INTEL_BIGCORE_PENRYN;
> +    case 0x1D:
> +      return INTEL_BIGCORE_DUNNINGTON;
> +    case 0x1A:
> +    case 0x1E:
> +    case 0x1F:
> +    case 0x2E:
> +      return INTEL_BIGCORE_NEHALEM;
> +    case 0x25:
> +    case 0x2C:
> +    case 0x2F:
> +      return INTEL_BIGCORE_WESTMERE;
> +    case 0x2A:
> +    case 0x2D:
> +      return INTEL_BIGCORE_SANDYBRIDGE;
> +    case 0x3A:
> +    case 0x3E:
> +      return INTEL_BIGCORE_IVYBRIDGE;
> +    case 0x3C:
> +    case 0x3F:
> +    case 0x45:
> +    case 0x46:
> +      return INTEL_BIGCORE_HASWELL;
> +    case 0x3D:
> +    case 0x47:
> +    case 0x4F:
> +    case 0x56:
> +      return INTEL_BIGCORE_BROADWELL;
> +    case 0x4E:
> +    case 0x5E:
> +      return INTEL_BIGCORE_SKYLAKE;
> +    case 0x8E:
> +      switch (stepping)
> +       {
> +       case 0x09:
> +         return INTEL_BIGCORE_AMBERLAKE;
> +       case 0x0A:
> +         return INTEL_BIGCORE_COFFEELAKE;
> +       case 0x0B:
> +       case 0x0C:
> +         return INTEL_BIGCORE_WHISKEYLAKE;
> +       default:
> +         return INTEL_BIGCORE_KABYLAKE;
> +       }
> +    case 0x9E:
> +      switch (stepping)
> +       {
> +       case 0x0A:
> +       case 0x0B:
> +       case 0x0C:
> +       case 0x0D:
> +         return INTEL_BIGCORE_COFFEELAKE;
> +       default:
> +         return INTEL_BIGCORE_KABYLAKE;
> +       }

All these stepping checks can be put in comments.

> +    case 0xA5:
> +    case 0xA6:
> +      return INTEL_BIGCORE_COMETLAKE;
> +    case 0x66:
> +      return INTEL_BIGCORE_CANNONLAKE;
> +    case 0x55:
> +    /*
> +     Stepping = {6, 7}
> +        -> Cascadelake
> +     Stepping = {11}
> +        -> Cooperlake
> +     else
> +        -> Skylake-avx512
> +
> +     These are all microarchitecturally identical, so use
> +     Skylake-avx512 for all of them.
> +     */
> +      return INTEL_BIGCORE_SKYLAKE_AVX512;
> +    case 0x6A:
> +    case 0x6C:
> +    case 0x7D:
> +    case 0x7E:
> +    case 0x9D:
> +      return INTEL_BIGCORE_ICELAKE;
> +    case 0x8C:
> +    case 0x8D:
> +      return INTEL_BIGCORE_TIGERLAKE;
> +    case 0xA7:
> +      return INTEL_BIGCORE_ROCKETLAKE;
> +    case 0x8F:
> +      return INTEL_BIGCORE_SAPPHIRERAPIDS;
> +    case 0xB7:
> +    case 0xBA:
> +    case 0xBF:
> +      return INTEL_BIGCORE_RAPTORLAKE;
> +    case 0xCF:
> +      return INTEL_BIGCORE_EMERALDRAPIDS;
> +    case 0xAA:
> +    case 0xAC:
> +      return INTEL_BIGCORE_METEORLAKE;
> +    case 0xbd:
> +      return INTEL_BIGCORE_LUNARLAKE;
> +    case 0xc6:
> +      return INTEL_BIGCORE_ARROWLAKE;
> +    case 0xAD:
> +    case 0xAE:
> +      return INTEL_BIGCORE_GRANITERAPIDS;
> +    case 0x8A:
> +      return INTEL_MIXED_LAKEFIELD;
> +    case 0x97:
> +    case 0x9A:
> +    case 0xBE:
> +      return INTEL_MIXED_ALDERLAKE;
> +    case 0x85:
> +      return INTEL_KNIGHTS_MILL;
> +    case 0x57:
> +      return INTEL_KNIGHTS_LANDING;
> +    default:
> +      return INTEL_UNKNOWN;
> +    }
> +}
> +
>  static inline void
>  init_cpu_features (struct cpu_features *cpu_features)
>  {
> @@ -453,129 +665,149 @@ init_cpu_features (struct cpu_features *cpu_features)
>        if (family == 0x06)
>         {
>           model += extended_model;
> -         switch (model)
> +         unsigned int microarch
> +             = intel_get_fam6_microarch (model, stepping);
> +
> +         switch (microarch)
>             {
> -           case 0x1c:
> -           case 0x26:
> -             /* BSF is slow on Atom.  */
> +             /* Atom / KNL tuning.  */
> +           case INTEL_ATOM_BONNELL:
> +             /* BSF is slow on Bonnell.  */
>               cpu_features->preferred[index_arch_Slow_BSF]
> -               |= bit_arch_Slow_BSF;
> +                 |= bit_arch_Slow_BSF;
>               break;
>
> -           case 0x57:
> -             /* Knights Landing.  Enable Silvermont optimizations.  */
> -
> -           case 0x7a:
> -             /* Unaligned load versions are faster than SSSE3
> -                on Goldmont Plus.  */
> -
> -           case 0x5c:
> -           case 0x5f:
>               /* Unaligned load versions are faster than SSSE3
> -                on Goldmont.  */
> +                    on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
> +           case INTEL_ATOM_AIRMONT:
> +           case INTEL_ATOM_SILVERMONT:
> +           case INTEL_ATOM_GOLDMONT:
> +           case INTEL_ATOM_GOLDMONT_PLUS:
>
> -           case 0x4c:
> -           case 0x5a:
> -           case 0x75:
> -             /* Airmont is a die shrink of Silvermont.  */
> +          /* Knights Landing.  Enable Silvermont optimizations.  */
> +           case INTEL_KNIGHTS_LANDING:
>
> -           case 0x37:
> -           case 0x4a:
> -           case 0x4d:
> -           case 0x5d:
> -             /* Unaligned load versions are faster than SSSE3
> -                on Silvermont.  */
>               cpu_features->preferred[index_arch_Fast_Unaligned_Load]
> -               |= (bit_arch_Fast_Unaligned_Load
> -                   | bit_arch_Fast_Unaligned_Copy
> -                   | bit_arch_Prefer_PMINUB_for_stringop
> -                   | bit_arch_Slow_SSE4_2);
> +                 |= (bit_arch_Fast_Unaligned_Load
> +                     | bit_arch_Fast_Unaligned_Copy
> +                     | bit_arch_Prefer_PMINUB_for_stringop
> +                     | bit_arch_Slow_SSE4_2);
>               break;
>
> -           case 0x86:
> -           case 0x96:
> -           case 0x9c:
> +           case INTEL_ATOM_TREMONT:
>               /* Enable rep string instructions, unaligned load, unaligned
> -                copy, pminub and avoid SSE 4.2 on Tremont.  */
> +                copy, pminub and avoid SSE 4.2 on Tremont.  */
>               cpu_features->preferred[index_arch_Fast_Rep_String]
> -               |= (bit_arch_Fast_Rep_String
> -                   | bit_arch_Fast_Unaligned_Load
> -                   | bit_arch_Fast_Unaligned_Copy
> -                   | bit_arch_Prefer_PMINUB_for_stringop
> -                   | bit_arch_Slow_SSE4_2);
> +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> +                     | bit_arch_Fast_Unaligned_Copy
> +                     | bit_arch_Prefer_PMINUB_for_stringop
> +                     | bit_arch_Slow_SSE4_2);
>               break;
>
> +          /*
> +           Default tuned KNL microarch.
                                       KNM
> +           case INTEL_KNIGHTS_MILL:
> +        */
> +
> +          /*
> +           Default tuned atom microarch.
> +           case INTEL_ATOM_SIERRAFOREST:
> +           case INTEL_ATOM_GRANDRIDGE:
> +          */
> +
> +             /* Bigcore/Default Tuning.  */
>             default:
>               /* Unknown family 0x06 processors.  Assuming this is one
>                  of Core i3/i5/i7 processors if AVX is available.  */
>               if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
>                 break;
> -             /* Fall through.  */
> -
> -           case 0x1a:
> -           case 0x1e:
> -           case 0x1f:
> -           case 0x25:
> -           case 0x2c:
> -           case 0x2e:
> -           case 0x2f:
> +           case INTEL_BIGCORE_NEHALEM:
> +           case INTEL_BIGCORE_WESTMERE:
>
>               /* Rep string instructions, unaligned load, unaligned copy,
>                  and pminub are fast on Intel Core i3, i5 and i7.  */
>               cpu_features->preferred[index_arch_Fast_Rep_String]
> -               |= (bit_arch_Fast_Rep_String
> -                   | bit_arch_Fast_Unaligned_Load
> -                   | bit_arch_Fast_Unaligned_Copy
> -                   | bit_arch_Prefer_PMINUB_for_stringop);
> +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> +                     | bit_arch_Fast_Unaligned_Copy
> +                     | bit_arch_Prefer_PMINUB_for_stringop);
>               break;
> +
> +          /*
> +           Default tuned Bigcore microarch.
> +           case INTEL_BIGCORE_SANDYBRIDGE:
> +           case INTEL_BIGCORE_IVYBRIDGE:
> +           case INTEL_BIGCORE_HASWELL:
> +           case INTEL_BIGCORE_BROADWELL:
> +           case INTEL_BIGCORE_SKYLAKE:
> +           case INTEL_BIGCORE_AMBERLAKE:
> +           case INTEL_BIGCORE_COFFEELAKE:
> +           case INTEL_BIGCORE_WHISKEYLAKE:
> +           case INTEL_BIGCORE_KABYLAKE:
> +           case INTEL_BIGCORE_COMETLAKE:
> +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> +           case INTEL_BIGCORE_CANNONLAKE:
> +           case INTEL_BIGCORE_ICELAKE:
> +           case INTEL_BIGCORE_TIGERLAKE:
> +           case INTEL_BIGCORE_ROCKETLAKE:
> +           case INTEL_BIGCORE_RAPTORLAKE:
> +           case INTEL_BIGCORE_METEORLAKE:
> +           case INTEL_BIGCORE_LUNARLAKE:
> +           case INTEL_BIGCORE_ARROWLAKE:
> +           case INTEL_BIGCORE_SAPPHIRERAPIDS:
> +           case INTEL_BIGCORE_EMERALDRAPIDS:
> +           case INTEL_BIGCORE_GRANITERAPIDS:
> +           */
> +
> +          /*
> +           Default tuned Mixed (bigcore + atom SOC).
> +           case INTEL_MIXED_LAKEFIELD:
> +           case INTEL_MIXED_ALDERLAKE:
> +           */
>             }
>
> -        /* Disable TSX on some processors to avoid TSX on kernels that
> -           weren't updated with the latest microcode package (which
> -           disables broken feature by default).  */
> -        switch (model)
> +             /* Disable TSX on some processors to avoid TSX on kernels that
> +                weren't updated with the latest microcode package (which
> +                disables broken feature by default).  */
> +         switch (microarch)
>             {
> -           case 0x55:
> +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> +             /* 0x55 (Skylake-avx512) && stepping <= 5 disable TSX. */
>               if (stepping <= 5)
>                 goto disable_tsx;
>               break;
> -           case 0x8e:
> -             /* NB: Although the errata documents that for model == 0x8e,
> -                only 0xb stepping or lower are impacted, the intention of
> -                the errata was to disable TSX on all client processors on
> -                all steppings.  Include 0xc stepping which is an Intel
> -                Core i7-8665U, a client mobile processor.  */
> -           case 0x9e:
> -             if (stepping > 0xc)
> +
> +           case INTEL_BIGCORE_SKYLAKE:
> +           case INTEL_BIGCORE_AMBERLAKE:
> +           case INTEL_BIGCORE_COFFEELAKE:
> +           case INTEL_BIGCORE_WHISKEYLAKE:
> +           case INTEL_BIGCORE_KABYLAKE:
> +               /* NB: Although the errata documents that for model == 0x8e
> +                  (skylake client), only 0xb stepping or lower are impacted,
> +                  the intention of the errata was to disable TSX on all client
> +                  processors on all steppings.  Include 0xc stepping which is
> +                  an Intel Core i7-8665U, a client mobile processor.  */
> +               if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
>                 break;
> -             /* Fall through.  */
> -           case 0x4e:
> -           case 0x5e:
> -             {
> +
>                 /* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
>                    processors listed in:
>
>  https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
>                  */
> -disable_tsx:
> +           disable_tsx:
>                 CPU_FEATURE_UNSET (cpu_features, HLE);
>                 CPU_FEATURE_UNSET (cpu_features, RTM);
>                 CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
> -             }
> -             break;
> -           case 0x3f:
> -             /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
> -             if (stepping >= 4)
>                 break;
> -             /* Fall through.  */
> -           case 0x3c:
> -           case 0x45:
> -           case 0x46:
> -             /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
> -                with stepping >= 4) to avoid TSX on kernels that weren't
> -                updated with the latest microcode package (which disables
> -                broken feature by default).  */
> -             CPU_FEATURE_UNSET (cpu_features, RTM);
> -             break;
> +
> +           case INTEL_BIGCORE_HASWELL:
> +               /* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
> +                  TSX.  Haswell also include other model numbers that have
> +                  working TSX.  */
> +               if (model == 0x3f && stepping >= 4)
> +               break;
> +
> +               CPU_FEATURE_UNSET (cpu_features, RTM);
> +               break;
>             }
>         }
>
> --
> 2.34.1
>


-- 
H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v8 2/3] x86: Refactor Intel `init_cpu_features`
  2023-05-12 22:17     ` H.J. Lu
@ 2023-05-13  5:18       ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-13  5:18 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha, carlos

On Fri, May 12, 2023 at 5:18 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Thu, May 11, 2023 at 10:11 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > This patch should have no affect on existing functionality.
> >
> > The current code, which has a single switch for model detection and
> > setting prefered features, is difficult to follow/extend. The cases
> > use magic numbers and many microarchitectures are missing. This makes
> > it difficult to reason about what is implemented so far and/or
> > how/where to add support for new features.
> >
> > This patch splits the model detection and preference setting stages so
> > that CPU preferences can be set based on a complete list of available
> > microarchitectures, rather than based on model magic numbers.
> > ---
> >  sysdeps/x86/cpu-features.c | 400 +++++++++++++++++++++++++++++--------
> >  1 file changed, 316 insertions(+), 84 deletions(-)
> >
> > diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> > index 5bff8ec0b4..264d309dd7 100644
> > --- a/sysdeps/x86/cpu-features.c
> > +++ b/sysdeps/x86/cpu-features.c
> > @@ -417,6 +417,218 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
> >                      == index_arch_Fast_Copy_Backward)),
> >                 "Incorrect index_arch_Fast_Unaligned_Load");
> >
> > +
> > +/* Intel Family-6 microarch list.  */
> > +enum
> > +{
> > +  /* Atom processors.  */
> > +  INTEL_ATOM_BONNELL,
> > +  INTEL_ATOM_SILVERMONT,
> > +  INTEL_ATOM_AIRMONT,
> > +  INTEL_ATOM_GOLDMONT,
> > +  INTEL_ATOM_GOLDMONT_PLUS,
> > +  INTEL_ATOM_SIERRAFOREST,
> > +  INTEL_ATOM_GRANDRIDGE,
> > +  INTEL_ATOM_TREMONT,
> > +
> > +  /* Bigcore processors.  */
> > +  INTEL_BIGCORE_MEROM,
> > +  INTEL_BIGCORE_PENRYN,
> > +  INTEL_BIGCORE_DUNNINGTON,
> > +  INTEL_BIGCORE_NEHALEM,
> > +  INTEL_BIGCORE_WESTMERE,
> > +  INTEL_BIGCORE_SANDYBRIDGE,
> > +  INTEL_BIGCORE_IVYBRIDGE,
> > +  INTEL_BIGCORE_HASWELL,
> > +  INTEL_BIGCORE_BROADWELL,
> > +  INTEL_BIGCORE_SKYLAKE,
> > +  INTEL_BIGCORE_AMBERLAKE,
> > +  INTEL_BIGCORE_COFFEELAKE,
> > +  INTEL_BIGCORE_WHISKEYLAKE,
> > +  INTEL_BIGCORE_KABYLAKE,
> > +  INTEL_BIGCORE_COMETLAKE,
> > +  INTEL_BIGCORE_SKYLAKE_AVX512,
> > +  INTEL_BIGCORE_CANNONLAKE,
> > +  INTEL_BIGCORE_ICELAKE,
> > +  INTEL_BIGCORE_TIGERLAKE,
> > +  INTEL_BIGCORE_ROCKETLAKE,
> > +  INTEL_BIGCORE_SAPPHIRERAPIDS,
> > +  INTEL_BIGCORE_RAPTORLAKE,
> > +  INTEL_BIGCORE_EMERALDRAPIDS,
> > +  INTEL_BIGCORE_METEORLAKE,
> > +  INTEL_BIGCORE_LUNARLAKE,
> > +  INTEL_BIGCORE_ARROWLAKE,
> > +  INTEL_BIGCORE_GRANITERAPIDS,
> > +
> > +  /* Mixed (bigcore + atom SOC).  */
> > +  INTEL_MIXED_LAKEFIELD,
> > +  INTEL_MIXED_ALDERLAKE,
> > +
> > +  /* KNL.  */
> > +  INTEL_KNIGHTS_MILL,
> > +  INTEL_KNIGHTS_LANDING,
> > +
> > +  /* Unknown.  */
> > +  INTEL_UNKNOWN,
> > +};
> > +
> > +static unsigned int
> > +intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
> > +{
> > +  switch (model)
> > +    {
> > +    case 0x1C:
> > +    case 0x26:
> > +      return INTEL_ATOM_BONNELL;
> > +    case 0x27:
> > +    case 0x35:
> > +    case 0x36:
> > +      /* Really Saltwell, but Saltwell is just a die shrink of Bonnell
> > +         (microarchitecturally identical).  */
> > +      return INTEL_ATOM_BONNELL;
> > +    case 0x37:
> > +    case 0x4A:
> > +    case 0x4D:
> > +    case 0x5D:
> > +      return INTEL_ATOM_SILVERMONT;
> > +    case 0x4C:
> > +    case 0x5A:
> > +    case 0x75:
> > +      return INTEL_ATOM_AIRMONT;
> > +    case 0x5C:
> > +    case 0x5F:
> > +      return INTEL_ATOM_GOLDMONT;
> > +    case 0x7A:
> > +      return INTEL_ATOM_GOLDMONT_PLUS;
> > +    case 0xAF:
> > +      return INTEL_ATOM_SIERRAFOREST;
> > +    case 0xB6:
> > +      return INTEL_ATOM_GRANDRIDGE;
> > +    case 0x86:
> > +    case 0x96:
> > +    case 0x9C:
> > +      return INTEL_ATOM_TREMONT;
> > +    case 0x0F:
> > +    case 0x16:
> > +      return INTEL_BIGCORE_MEROM;
> > +    case 0x17:
> > +      return INTEL_BIGCORE_PENRYN;
> > +    case 0x1D:
> > +      return INTEL_BIGCORE_DUNNINGTON;
> > +    case 0x1A:
> > +    case 0x1E:
> > +    case 0x1F:
> > +    case 0x2E:
> > +      return INTEL_BIGCORE_NEHALEM;
> > +    case 0x25:
> > +    case 0x2C:
> > +    case 0x2F:
> > +      return INTEL_BIGCORE_WESTMERE;
> > +    case 0x2A:
> > +    case 0x2D:
> > +      return INTEL_BIGCORE_SANDYBRIDGE;
> > +    case 0x3A:
> > +    case 0x3E:
> > +      return INTEL_BIGCORE_IVYBRIDGE;
> > +    case 0x3C:
> > +    case 0x3F:
> > +    case 0x45:
> > +    case 0x46:
> > +      return INTEL_BIGCORE_HASWELL;
> > +    case 0x3D:
> > +    case 0x47:
> > +    case 0x4F:
> > +    case 0x56:
> > +      return INTEL_BIGCORE_BROADWELL;
> > +    case 0x4E:
> > +    case 0x5E:
> > +      return INTEL_BIGCORE_SKYLAKE;
> > +    case 0x8E:
> > +      switch (stepping)
> > +       {
> > +       case 0x09:
> > +         return INTEL_BIGCORE_AMBERLAKE;
> > +       case 0x0A:
> > +         return INTEL_BIGCORE_COFFEELAKE;
> > +       case 0x0B:
> > +       case 0x0C:
> > +         return INTEL_BIGCORE_WHISKEYLAKE;
> > +       default:
> > +         return INTEL_BIGCORE_KABYLAKE;
> > +       }
> > +    case 0x9E:
> > +      switch (stepping)
> > +       {
> > +       case 0x0A:
> > +       case 0x0B:
> > +       case 0x0C:
> > +       case 0x0D:
> > +         return INTEL_BIGCORE_COFFEELAKE;
> > +       default:
> > +         return INTEL_BIGCORE_KABYLAKE;
> > +       }
>
> All these stepping checks can be put in comments.
Okay. Should we drop cometlake / cannonlake / kabylake identifiers as well
and only use skylake/skylake-avx512?
>
> > +    case 0xA5:
> > +    case 0xA6:
> > +      return INTEL_BIGCORE_COMETLAKE;
> > +    case 0x66:
> > +      return INTEL_BIGCORE_CANNONLAKE;
> > +    case 0x55:
> > +    /*
> > +     Stepping = {6, 7}
> > +        -> Cascadelake
> > +     Stepping = {11}
> > +        -> Cooperlake
> > +     else
> > +        -> Skylake-avx512
> > +
> > +     These are all microarchitecturally indentical, so use
> > +     Skylake-avx512 for all of them.
> > +     */
> > +      return INTEL_BIGCORE_SKYLAKE_AVX512;
> > +    case 0x6A:
> > +    case 0x6C:
> > +    case 0x7D:
> > +    case 0x7E:
> > +    case 0x9D:
> > +      return INTEL_BIGCORE_ICELAKE;
> > +    case 0x8C:
> > +    case 0x8D:
> > +      return INTEL_BIGCORE_TIGERLAKE;
> > +    case 0xA7:
> > +      return INTEL_BIGCORE_ROCKETLAKE;
> > +    case 0x8F:
> > +      return INTEL_BIGCORE_SAPPHIRERAPIDS;
> > +    case 0xB7:
> > +    case 0xBA:
> > +    case 0xBF:
> > +      return INTEL_BIGCORE_RAPTORLAKE;
> > +    case 0xCF:
> > +      return INTEL_BIGCORE_EMERALDRAPIDS;
> > +    case 0xAA:
> > +    case 0xAC:
> > +      return INTEL_BIGCORE_METEORLAKE;
> > +    case 0xbd:
> > +      return INTEL_BIGCORE_LUNARLAKE;
> > +    case 0xc6:
> > +      return INTEL_BIGCORE_ARROWLAKE;
> > +    case 0xAD:
> > +    case 0xAE:
> > +      return INTEL_BIGCORE_GRANITERAPIDS;
> > +    case 0x8A:
> > +      return INTEL_MIXED_LAKEFIELD;
> > +    case 0x97:
> > +    case 0x9A:
> > +    case 0xBE:
> > +      return INTEL_MIXED_ALDERLAKE;
> > +    case 0x85:
> > +      return INTEL_KNIGHTS_MILL;
> > +    case 0x57:
> > +      return INTEL_KNIGHTS_LANDING;
> > +    default:
> > +      return INTEL_UNKNOWN;
> > +    }
> > +}
> > +
> >  static inline void
> >  init_cpu_features (struct cpu_features *cpu_features)
> >  {
> > @@ -453,129 +665,149 @@ init_cpu_features (struct cpu_features *cpu_features)
> >        if (family == 0x06)
> >         {
> >           model += extended_model;
> > -         switch (model)
> > +         unsigned int microarch
> > +             = intel_get_fam6_microarch (model, stepping);
> > +
> > +         switch (microarch)
> >             {
> > -           case 0x1c:
> > -           case 0x26:
> > -             /* BSF is slow on Atom.  */
> > +             /* Atom / KNL tuning.  */
> > +           case INTEL_ATOM_BONNELL:
> > +             /* BSF is slow on Bonnell.  */
> >               cpu_features->preferred[index_arch_Slow_BSF]
> > -               |= bit_arch_Slow_BSF;
> > +                 |= bit_arch_Slow_BSF;
> >               break;
> >
> > -           case 0x57:
> > -             /* Knights Landing.  Enable Silvermont optimizations.  */
> > -
> > -           case 0x7a:
> > -             /* Unaligned load versions are faster than SSSE3
> > -                on Goldmont Plus.  */
> > -
> > -           case 0x5c:
> > -           case 0x5f:
> >               /* Unaligned load versions are faster than SSSE3
> > -                on Goldmont.  */
> > +                    on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
> > +           case INTEL_ATOM_AIRMONT:
> > +           case INTEL_ATOM_SILVERMONT:
> > +           case INTEL_ATOM_GOLDMONT:
> > +           case INTEL_ATOM_GOLDMONT_PLUS:
> >
> > -           case 0x4c:
> > -           case 0x5a:
> > -           case 0x75:
> > -             /* Airmont is a die shrink of Silvermont.  */
> > +          /* Knights Landing.  Enable Silvermont optimizations.  */
> > +           case INTEL_KNIGHTS_LANDING:
> >
> > -           case 0x37:
> > -           case 0x4a:
> > -           case 0x4d:
> > -           case 0x5d:
> > -             /* Unaligned load versions are faster than SSSE3
> > -                on Silvermont.  */
> >               cpu_features->preferred[index_arch_Fast_Unaligned_Load]
> > -               |= (bit_arch_Fast_Unaligned_Load
> > -                   | bit_arch_Fast_Unaligned_Copy
> > -                   | bit_arch_Prefer_PMINUB_for_stringop
> > -                   | bit_arch_Slow_SSE4_2);
> > +                 |= (bit_arch_Fast_Unaligned_Load
> > +                     | bit_arch_Fast_Unaligned_Copy
> > +                     | bit_arch_Prefer_PMINUB_for_stringop
> > +                     | bit_arch_Slow_SSE4_2);
> >               break;
> >
> > -           case 0x86:
> > -           case 0x96:
> > -           case 0x9c:
> > +           case INTEL_ATOM_TREMONT:
> >               /* Enable rep string instructions, unaligned load, unaligned
> > -                copy, pminub and avoid SSE 4.2 on Tremont.  */
> > +                copy, pminub and avoid SSE 4.2 on Tremont.  */
> >               cpu_features->preferred[index_arch_Fast_Rep_String]
> > -               |= (bit_arch_Fast_Rep_String
> > -                   | bit_arch_Fast_Unaligned_Load
> > -                   | bit_arch_Fast_Unaligned_Copy
> > -                   | bit_arch_Prefer_PMINUB_for_stringop
> > -                   | bit_arch_Slow_SSE4_2);
> > +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> > +                     | bit_arch_Fast_Unaligned_Copy
> > +                     | bit_arch_Prefer_PMINUB_for_stringop
> > +                     | bit_arch_Slow_SSE4_2);
> >               break;
> >
> > +          /*
> > +           Default tuned KNL microarch.
>                                        KNM
Err, trying to refer to the whole "Knights_..." lineup. Changed to "Knights".
> > +           case INTEL_KNIGHTS_MILL:
> > +        */
> > +
> > +          /*
> > +           Default tuned atom microarch.
> > +           case INTEL_ATOM_SIERRAFOREST:
> > +           case INTEL_ATOM_GRANDRIDGE:
> > +          */
> > +
> > +             /* Bigcore/Default Tuning.  */
> >             default:
> >               /* Unknown family 0x06 processors.  Assuming this is one
> >                  of Core i3/i5/i7 processors if AVX is available.  */
> >               if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
> >                 break;
> > -             /* Fall through.  */
> > -
> > -           case 0x1a:
> > -           case 0x1e:
> > -           case 0x1f:
> > -           case 0x25:
> > -           case 0x2c:
> > -           case 0x2e:
> > -           case 0x2f:
> > +           case INTEL_BIGCORE_NEHALEM:
> > +           case INTEL_BIGCORE_WESTMERE:
> >
> >               /* Rep string instructions, unaligned load, unaligned copy,
> >                  and pminub are fast on Intel Core i3, i5 and i7.  */
> >               cpu_features->preferred[index_arch_Fast_Rep_String]
> > -               |= (bit_arch_Fast_Rep_String
> > -                   | bit_arch_Fast_Unaligned_Load
> > -                   | bit_arch_Fast_Unaligned_Copy
> > -                   | bit_arch_Prefer_PMINUB_for_stringop);
> > +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> > +                     | bit_arch_Fast_Unaligned_Copy
> > +                     | bit_arch_Prefer_PMINUB_for_stringop);
> >               break;
> > +
> > +          /*
> > +           Default tuned Bigcore microarch.
> > +           case INTEL_BIGCORE_SANDYBRIDGE:
> > +           case INTEL_BIGCORE_IVYBRIDGE:
> > +           case INTEL_BIGCORE_HASWELL:
> > +           case INTEL_BIGCORE_BROADWELL:
> > +           case INTEL_BIGCORE_SKYLAKE:
> > +           case INTEL_BIGCORE_AMBERLAKE:
> > +           case INTEL_BIGCORE_COFFEELAKE:
> > +           case INTEL_BIGCORE_WHISKEYLAKE:
> > +           case INTEL_BIGCORE_KABYLAKE:
> > +           case INTEL_BIGCORE_COMETLAKE:
> > +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> > +           case INTEL_BIGCORE_CANNONLAKE:
> > +           case INTEL_BIGCORE_ICELAKE:
> > +           case INTEL_BIGCORE_TIGERLAKE:
> > +           case INTEL_BIGCORE_ROCKETLAKE:
> > +           case INTEL_BIGCORE_RAPTORLAKE:
> > +           case INTEL_BIGCORE_METEORLAKE:
> > +           case INTEL_BIGCORE_LUNARLAKE:
> > +           case INTEL_BIGCORE_ARROWLAKE:
> > +           case INTEL_BIGCORE_SAPPHIRERAPIDS:
> > +           case INTEL_BIGCORE_EMERALDRAPIDS:
> > +           case INTEL_BIGCORE_GRANITERAPIDS:
> > +           */
> > +
> > +          /*
> > +           Default tuned Mixed (bigcore + atom SOC).
> > +           case INTEL_MIXED_LAKEFIELD:
> > +           case INTEL_MIXED_ALDERLAKE:
> > +           */
> >             }
> >
> > -        /* Disable TSX on some processors to avoid TSX on kernels that
> > -           weren't updated with the latest microcode package (which
> > -           disables broken feature by default).  */
> > -        switch (model)
> > +             /* Disable TSX on some processors to avoid TSX on kernels that
> > +                weren't updated with the latest microcode package (which
> > +                disables broken feature by default).  */
> > +         switch (microarch)
> >             {
> > -           case 0x55:
> > +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> > +             /* 0x55 (Skylake-avx512) && stepping <= 5 disable TSX. */
> >               if (stepping <= 5)
> >                 goto disable_tsx;
> >               break;
> > -           case 0x8e:
> > -             /* NB: Although the errata documents that for model == 0x8e,
> > -                only 0xb stepping or lower are impacted, the intention of
> > -                the errata was to disable TSX on all client processors on
> > -                all steppings.  Include 0xc stepping which is an Intel
> > -                Core i7-8665U, a client mobile processor.  */
> > -           case 0x9e:
> > -             if (stepping > 0xc)
> > +
> > +           case INTEL_BIGCORE_SKYLAKE:
> > +           case INTEL_BIGCORE_AMBERLAKE:
> > +           case INTEL_BIGCORE_COFFEELAKE:
> > +           case INTEL_BIGCORE_WHISKEYLAKE:
> > +           case INTEL_BIGCORE_KABYLAKE:
> > +               /* NB: Although the errata documents that for model == 0x8e
> > +                  (skylake client), only 0xb stepping or lower are impacted,
> > +                  the intention of the errata was to disable TSX on all client
> > +                  processors on all steppings.  Include 0xc stepping which is
> > +                  an Intel Core i7-8665U, a client mobile processor.  */
> > +               if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
> >                 break;
> > -             /* Fall through.  */
> > -           case 0x4e:
> > -           case 0x5e:
> > -             {
> > +
> >                 /* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
> >                    processors listed in:
> >
> >  https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
> >                  */
> > -disable_tsx:
> > +           disable_tsx:
> >                 CPU_FEATURE_UNSET (cpu_features, HLE);
> >                 CPU_FEATURE_UNSET (cpu_features, RTM);
> >                 CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
> > -             }
> > -             break;
> > -           case 0x3f:
> > -             /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
> > -             if (stepping >= 4)
> >                 break;
> > -             /* Fall through.  */
> > -           case 0x3c:
> > -           case 0x45:
> > -           case 0x46:
> > -             /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
> > -                with stepping >= 4) to avoid TSX on kernels that weren't
> > -                updated with the latest microcode package (which disables
> > -                broken feature by default).  */
> > -             CPU_FEATURE_UNSET (cpu_features, RTM);
> > -             break;
> > +
> > +           case INTEL_BIGCORE_HASWELL:
> > +               /* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
> > +                  TSX.  Haswell also include other model numbers that have
> > +                  working TSX.  */
> > +               if (model == 0x3f && stepping >= 4)
> > +               break;
> > +
> > +               CPU_FEATURE_UNSET (cpu_features, RTM);
> > +               break;
> >             }
> >         }
> >
> > --
> > 2.34.1
> >
>
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-04-24  5:03 [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` Noah Goldstein
                   ` (8 preceding siblings ...)
  2023-05-12 22:03 ` [PATCH v8 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
@ 2023-05-13  5:19 ` Noah Goldstein
  2023-05-13  5:19   ` [PATCH v9 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
                     ` (3 more replies)
  2023-05-27 18:46 ` [PATCH v10 " Noah Goldstein
  2023-06-07 18:18 ` [PATCH v11 " Noah Goldstein
  11 siblings, 4 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-13  5:19 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
'sizeof_L3 / 4'.

The original value (specifically dividing the `ncores_per_socket`) was
done to limit the amount of other threads' data a `memcpy`/`memset`
could evict.

Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in cases
where REP MOVSB is multiple times faster.

Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise could be. As well, modern
machines are able to detect streaming patterns (especially if REP MOVSB
is used) and provide LRU hints to the memory subsystem. This in effect
caps the total amount of eviction at 1/cache_associativity, far below
meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is to REP MOVSB which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% for over
100MB copies (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, REP MOVSB is ~2x faster
up to `sizeof_L3`.

The divisor of `4` is a somewhat arbitrary value. From benchmarks it
seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
such as Broadwell prefer something closer to `8`. This patch is meant
to be followed up by another one to make the divisor cpu-specific, but
in the meantime (and for easier backporting), this patch settles on
`4` as a middle-ground.
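
As a rough sketch (illustrative only; it loosely mirrors the variables
in the diff below, with `has_erms` standing in for the ERMS feature
check), the new default can be thought of as:

static unsigned long int
nt_threshold_sketch (long int shared, long int shared_per_thread,
                     int has_erms)
{
  /* New default: a fixed fraction of the whole L3 (divisor of 4 here).  */
  unsigned long int threshold = shared / 4;
  /* Without ERMS, fall back to a per-thread share of L3, since plain
     cacheable stores lack the HW streaming/LRU hints.  */
  if (!has_erms)
    threshold = shared_per_thread * 3 / 4;
  return threshold;
}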

Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available in pdf on the github):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
---
 sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
 1 file changed, 43 insertions(+), 27 deletions(-)

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index ec88945b39..4a1a5423ff 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -407,7 +407,7 @@ handle_zhaoxin (int name)
 }
 
 static void
-get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
+get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
                 long int core)
 {
   unsigned int eax;
@@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
   unsigned int family = cpu_features->basic.family;
   unsigned int model = cpu_features->basic.model;
   long int shared = *shared_ptr;
+  long int shared_per_thread = *shared_per_thread_ptr;
   unsigned int threads = *threads_ptr;
   bool inclusive_cache = true;
   bool support_count_mask = true;
@@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
       /* Try L2 otherwise.  */
       level  = 2;
       shared = core;
+      shared_per_thread = core;
       threads_l2 = 0;
       threads_l3 = -1;
     }
@@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
         }
       else
         {
-intel_bug_no_cache_info:
-          /* Assume that all logical threads share the highest cache
-             level.  */
-          threads
-            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
-	       & 0xff);
-        }
-
-        /* Cap usage of highest cache level to the number of supported
-           threads.  */
-        if (shared > 0 && threads > 0)
-          shared /= threads;
+	intel_bug_no_cache_info:
+	  /* Assume that all logical threads share the highest cache
+	     level.  */
+	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
+		     & 0xff);
+
+	  /* Get per-thread size of highest level cache.  */
+	  if (shared_per_thread > 0 && threads > 0)
+	    shared_per_thread /= threads;
+	}
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
       if (threads_l2 > 0)
-        core /= threads_l2;
+	shared_per_thread += core / threads_l2;
       shared += core;
     }
 
   *shared_ptr = shared;
+  *shared_per_thread_ptr = shared_per_thread;
   *threads_ptr = threads;
 }
 
@@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   /* Find out what brand of processor.  */
   long int data = -1;
   long int shared = -1;
+  long int shared_per_thread = -1;
   long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
@@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
       core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
       shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
+      shared_per_thread = shared;
 
       level1_icache_size
 	= handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
@@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level4_cache_size
 	= handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_zhaoxin)
     {
       data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_amd)
     {
       data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       if (shared <= 0)
         /* No shared L3 cache.  All we have is the L2 cache.  */
 	shared = core;
+
+      if (shared_per_thread <= 0)
+	shared_per_thread = shared;
     }
 
   cpu_features->level1_icache_size = level1_icache_size;
@@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/4 of the size
+     of the chip's cache. For most Intel and AMD processors with an
+     initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
+     estimate the point where non-temporal stores begin outcompeting
+     REP MOVSB, as well as the point where the write-back to main memory
+     forced by non-temporal stores would have already occurred for the
+     majority of the lines in the copy. Note, concerns about the
+     entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and
+     provides proper LRU hints so that the maximum thrashing is
+     capped at 1/associativity. */
+  unsigned long int non_temporal_threshold = shared / 4;
+  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
+     a higher risk of actually thrashing the cache as they don't have a HW LRU
+     hint. As well, their performance in highly parallel situations is
+     noticeably worse.  */
+  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+    non_temporal_threshold = shared_per_thread * 3 / 4;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v9 2/3] x86: Refactor Intel `init_cpu_features`
  2023-05-13  5:19 ` [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
@ 2023-05-13  5:19   ` Noah Goldstein
  2023-05-15 20:57     ` H.J. Lu
  2023-05-26  3:34     ` DJ Delorie
  2023-05-13  5:19   ` [PATCH v9 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
                     ` (2 subsequent siblings)
  3 siblings, 2 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-13  5:19 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

This patch should have no effect on existing functionality.

The current code, which has a single switch for model detection and
setting preferred features, is difficult to follow/extend. The cases
use magic numbers and many microarchitectures are missing. This makes
it difficult to reason about what is implemented so far and/or
how/where to add support for new features.

This patch splits the model detection and preference setting stages so
that CPU preferences can be set based on a complete list of available
microarchitectures, rather than based on model magic numbers.
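
As a minimal sketch of the two-stage shape this introduces (the enum
values and model numbers here are trimmed stand-ins for the full lists
in the diff below):

/* Stage 1: translate the raw model number into a named microarch once.  */
enum { EXAMPLE_ATOM_BONNELL, EXAMPLE_BIGCORE_NEHALEM, EXAMPLE_UNKNOWN };

static unsigned int
example_get_microarch (unsigned int model)
{
  switch (model)
    {
    case 0x1C:
    case 0x26:
      return EXAMPLE_ATOM_BONNELL;
    case 0x1A:
    case 0x1E:
      return EXAMPLE_BIGCORE_NEHALEM;
    default:
      return EXAMPLE_UNKNOWN;
    }
}

/* Stage 2: set preferences from the readable name, not magic numbers.  */
static void
example_set_preferences (unsigned int model)
{
  switch (example_get_microarch (model))
    {
    case EXAMPLE_ATOM_BONNELL:
      /* e.g. mark BSF as slow here.  */
      break;
    default:
      break;
    }
}
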
---
 sysdeps/x86/cpu-features.c | 391 +++++++++++++++++++++++++++++--------
 1 file changed, 307 insertions(+), 84 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 5bff8ec0b4..29b8c8c133 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -417,6 +417,215 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
 		     == index_arch_Fast_Copy_Backward)),
 		"Incorrect index_arch_Fast_Unaligned_Load");
 
+
+/* Intel Family-6 microarch list.  */
+enum
+{
+  /* Atom processors.  */
+  INTEL_ATOM_BONNELL,
+  INTEL_ATOM_SILVERMONT,
+  INTEL_ATOM_AIRMONT,
+  INTEL_ATOM_GOLDMONT,
+  INTEL_ATOM_GOLDMONT_PLUS,
+  INTEL_ATOM_SIERRAFOREST,
+  INTEL_ATOM_GRANDRIDGE,
+  INTEL_ATOM_TREMONT,
+
+  /* Bigcore processors.  */
+  INTEL_BIGCORE_MEROM,
+  INTEL_BIGCORE_PENRYN,
+  INTEL_BIGCORE_DUNNINGTON,
+  INTEL_BIGCORE_NEHALEM,
+  INTEL_BIGCORE_WESTMERE,
+  INTEL_BIGCORE_SANDYBRIDGE,
+  INTEL_BIGCORE_IVYBRIDGE,
+  INTEL_BIGCORE_HASWELL,
+  INTEL_BIGCORE_BROADWELL,
+  INTEL_BIGCORE_SKYLAKE,
+  INTEL_BIGCORE_KABYLAKE,
+  INTEL_BIGCORE_COMETLAKE,
+  INTEL_BIGCORE_SKYLAKE_AVX512,
+  INTEL_BIGCORE_CANNONLAKE,
+  INTEL_BIGCORE_ICELAKE,
+  INTEL_BIGCORE_TIGERLAKE,
+  INTEL_BIGCORE_ROCKETLAKE,
+  INTEL_BIGCORE_SAPPHIRERAPIDS,
+  INTEL_BIGCORE_RAPTORLAKE,
+  INTEL_BIGCORE_EMERALDRAPIDS,
+  INTEL_BIGCORE_METEORLAKE,
+  INTEL_BIGCORE_LUNARLAKE,
+  INTEL_BIGCORE_ARROWLAKE,
+  INTEL_BIGCORE_GRANITERAPIDS,
+
+  /* Mixed (bigcore + atom SOC).  */
+  INTEL_MIXED_LAKEFIELD,
+  INTEL_MIXED_ALDERLAKE,
+
+  /* KNL.  */
+  INTEL_KNIGHTS_MILL,
+  INTEL_KNIGHTS_LANDING,
+
+  /* Unknown.  */
+  INTEL_UNKNOWN,
+};
+
+static unsigned int
+intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
+{
+  switch (model)
+    {
+    case 0x1C:
+    case 0x26:
+      return INTEL_ATOM_BONNELL;
+    case 0x27:
+    case 0x35:
+    case 0x36:
+      /* Really Saltwell, but Saltwell is just a die shrink of Bonnell
+         (microarchitecturally identical).  */
+      return INTEL_ATOM_BONNELL;
+    case 0x37:
+    case 0x4A:
+    case 0x4D:
+    case 0x5D:
+      return INTEL_ATOM_SILVERMONT;
+    case 0x4C:
+    case 0x5A:
+    case 0x75:
+      return INTEL_ATOM_AIRMONT;
+    case 0x5C:
+    case 0x5F:
+      return INTEL_ATOM_GOLDMONT;
+    case 0x7A:
+      return INTEL_ATOM_GOLDMONT_PLUS;
+    case 0xAF:
+      return INTEL_ATOM_SIERRAFOREST;
+    case 0xB6:
+      return INTEL_ATOM_GRANDRIDGE;
+    case 0x86:
+    case 0x96:
+    case 0x9C:
+      return INTEL_ATOM_TREMONT;
+    case 0x0F:
+    case 0x16:
+      return INTEL_BIGCORE_MEROM;
+    case 0x17:
+      return INTEL_BIGCORE_PENRYN;
+    case 0x1D:
+      return INTEL_BIGCORE_DUNNINGTON;
+    case 0x1A:
+    case 0x1E:
+    case 0x1F:
+    case 0x2E:
+      return INTEL_BIGCORE_NEHALEM;
+    case 0x25:
+    case 0x2C:
+    case 0x2F:
+      return INTEL_BIGCORE_WESTMERE;
+    case 0x2A:
+    case 0x2D:
+      return INTEL_BIGCORE_SANDYBRIDGE;
+    case 0x3A:
+    case 0x3E:
+      return INTEL_BIGCORE_IVYBRIDGE;
+    case 0x3C:
+    case 0x3F:
+    case 0x45:
+    case 0x46:
+      return INTEL_BIGCORE_HASWELL;
+    case 0x3D:
+    case 0x47:
+    case 0x4F:
+    case 0x56:
+      return INTEL_BIGCORE_BROADWELL;
+    case 0x4E:
+    case 0x5E:
+      return INTEL_BIGCORE_SKYLAKE;
+    case 0x8E:
+    /*
+     Stepping = {9}
+        -> Amberlake
+     Stepping = {10}
+        -> Coffeelake
+     Stepping = {11, 12}
+        -> Whiskeylake
+     else
+        -> Kabylake
+
+     All of these are derivatives of Kabylake (Skylake client).
+     */
+	  return INTEL_BIGCORE_KABYLAKE;
+    case 0x9E:
+    /*
+     Stepping = {10, 11, 12, 13}
+        -> Coffeelake
+     else
+        -> Kabylake
+
+     Coffeelake is a derivative of Kabylake (Skylake client).
+     */
+	  return INTEL_BIGCORE_KABYLAKE;
+    case 0xA5:
+    case 0xA6:
+      return INTEL_BIGCORE_COMETLAKE;
+    case 0x66:
+      return INTEL_BIGCORE_CANNONLAKE;
+    case 0x55:
+    /*
+     Stepping = {6, 7}
+        -> Cascadelake
+     Stepping = {11}
+        -> Cooperlake
+     else
+        -> Skylake-avx512
+
+     These are all microarchitecturally identical, so use
+     Skylake-avx512 for all of them.
+     */
+      return INTEL_BIGCORE_SKYLAKE_AVX512;
+    case 0x6A:
+    case 0x6C:
+    case 0x7D:
+    case 0x7E:
+    case 0x9D:
+      return INTEL_BIGCORE_ICELAKE;
+    case 0x8C:
+    case 0x8D:
+      return INTEL_BIGCORE_TIGERLAKE;
+    case 0xA7:
+      return INTEL_BIGCORE_ROCKETLAKE;
+    case 0x8F:
+      return INTEL_BIGCORE_SAPPHIRERAPIDS;
+    case 0xB7:
+    case 0xBA:
+    case 0xBF:
+      return INTEL_BIGCORE_RAPTORLAKE;
+    case 0xCF:
+      return INTEL_BIGCORE_EMERALDRAPIDS;
+    case 0xAA:
+    case 0xAC:
+      return INTEL_BIGCORE_METEORLAKE;
+    case 0xbd:
+      return INTEL_BIGCORE_LUNARLAKE;
+    case 0xc6:
+      return INTEL_BIGCORE_ARROWLAKE;
+    case 0xAD:
+    case 0xAE:
+      return INTEL_BIGCORE_GRANITERAPIDS;
+    case 0x8A:
+      return INTEL_MIXED_LAKEFIELD;
+    case 0x97:
+    case 0x9A:
+    case 0xBE:
+      return INTEL_MIXED_ALDERLAKE;
+    case 0x85:
+      return INTEL_KNIGHTS_MILL;
+    case 0x57:
+      return INTEL_KNIGHTS_LANDING;
+    default:
+      return INTEL_UNKNOWN;
+    }
+}
+
 static inline void
 init_cpu_features (struct cpu_features *cpu_features)
 {
@@ -453,129 +662,143 @@ init_cpu_features (struct cpu_features *cpu_features)
       if (family == 0x06)
 	{
 	  model += extended_model;
-	  switch (model)
+	  unsigned int microarch
+	      = intel_get_fam6_microarch (model, stepping);
+
+	  switch (microarch)
 	    {
-	    case 0x1c:
-	    case 0x26:
-	      /* BSF is slow on Atom.  */
+	      /* Atom / KNL tuning.  */
+	    case INTEL_ATOM_BONNELL:
+	      /* BSF is slow on Bonnell.  */
 	      cpu_features->preferred[index_arch_Slow_BSF]
-		|= bit_arch_Slow_BSF;
+		  |= bit_arch_Slow_BSF;
 	      break;
 
-	    case 0x57:
-	      /* Knights Landing.  Enable Silvermont optimizations.  */
-
-	    case 0x7a:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont Plus.  */
-
-	    case 0x5c:
-	    case 0x5f:
 	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont.  */
+		     on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
+	    case INTEL_ATOM_AIRMONT:
+	    case INTEL_ATOM_SILVERMONT:
+	    case INTEL_ATOM_GOLDMONT:
+	    case INTEL_ATOM_GOLDMONT_PLUS:
 
-	    case 0x4c:
-	    case 0x5a:
-	    case 0x75:
-	      /* Airmont is a die shrink of Silvermont.  */
+          /* Knights Landing.  Enable Silvermont optimizations.  */
+	    case INTEL_KNIGHTS_LANDING:
 
-	    case 0x37:
-	    case 0x4a:
-	    case 0x4d:
-	    case 0x5d:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Silvermont.  */
 	      cpu_features->preferred[index_arch_Fast_Unaligned_Load]
-		|= (bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
 	      break;
 
-	    case 0x86:
-	    case 0x96:
-	    case 0x9c:
+	    case INTEL_ATOM_TREMONT:
 	      /* Enable rep string instructions, unaligned load, unaligned
-	         copy, pminub and avoid SSE 4.2 on Tremont.  */
+		 copy, pminub and avoid SSE 4.2 on Tremont.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
 	      break;
 
+	   /*
+	    Default tuned Knights microarch.
+	    case INTEL_KNIGHTS_MILL:
+        */
+
+	   /*
+	    Default tuned atom microarch.
+	    case INTEL_ATOM_SIERRAFOREST:
+	    case INTEL_ATOM_GRANDRIDGE:
+	   */
+
+	      /* Bigcore/Default Tuning.  */
 	    default:
 	      /* Unknown family 0x06 processors.  Assuming this is one
 		 of Core i3/i5/i7 processors if AVX is available.  */
 	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
 		break;
-	      /* Fall through.  */
-
-	    case 0x1a:
-	    case 0x1e:
-	    case 0x1f:
-	    case 0x25:
-	    case 0x2c:
-	    case 0x2e:
-	    case 0x2f:
+	    case INTEL_BIGCORE_NEHALEM:
+	    case INTEL_BIGCORE_WESTMERE:
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop);
+		  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop);
 	      break;
+
+	   /*
+	    Default tuned Bigcore microarch.
+	    case INTEL_BIGCORE_SANDYBRIDGE:
+	    case INTEL_BIGCORE_IVYBRIDGE:
+	    case INTEL_BIGCORE_HASWELL:
+	    case INTEL_BIGCORE_BROADWELL:
+	    case INTEL_BIGCORE_SKYLAKE:
+	    case INTEL_BIGCORE_KABYLAKE:
+	    case INTEL_BIGCORE_COMETLAKE:
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	    case INTEL_BIGCORE_CANNONLAKE:
+	    case INTEL_BIGCORE_ICELAKE:
+	    case INTEL_BIGCORE_TIGERLAKE:
+	    case INTEL_BIGCORE_ROCKETLAKE:
+	    case INTEL_BIGCORE_RAPTORLAKE:
+	    case INTEL_BIGCORE_METEORLAKE:
+	    case INTEL_BIGCORE_LUNARLAKE:
+	    case INTEL_BIGCORE_ARROWLAKE:
+	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
+	    case INTEL_BIGCORE_EMERALDRAPIDS:
+	    case INTEL_BIGCORE_GRANITERAPIDS:
+	    */
+
+	   /*
+	    Default tuned Mixed (bigcore + atom SOC).
+	    case INTEL_MIXED_LAKEFIELD:
+	    case INTEL_MIXED_ALDERLAKE:
+	    */
 	    }
 
-	 /* Disable TSX on some processors to avoid TSX on kernels that
-	    weren't updated with the latest microcode package (which
-	    disables broken feature by default).  */
-	 switch (model)
+	      /* Disable TSX on some processors to avoid TSX on kernels that
+		 weren't updated with the latest microcode package (which
+		 disables broken feature by default).  */
+	  switch (microarch)
 	    {
-	    case 0x55:
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	      /* 0x55 (Skylake-avx512) && stepping <= 5 disable TSX. */
 	      if (stepping <= 5)
 		goto disable_tsx;
 	      break;
-	    case 0x8e:
-	      /* NB: Although the errata documents that for model == 0x8e,
-		 only 0xb stepping or lower are impacted, the intention of
-		 the errata was to disable TSX on all client processors on
-		 all steppings.  Include 0xc stepping which is an Intel
-		 Core i7-8665U, a client mobile processor.  */
-	    case 0x9e:
-	      if (stepping > 0xc)
+
+	    case INTEL_BIGCORE_SKYLAKE:
+	    case INTEL_BIGCORE_KABYLAKE:
+		/* NB: Although the errata documents that for model == 0x8e
+		   (skylake client), only 0xb stepping or lower are impacted,
+		   the intention of the errata was to disable TSX on all client
+		   processors on all steppings.  Include 0xc stepping which is
+		   an Intel Core i7-8665U, a client mobile processor.  */
+		if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
 		break;
-	      /* Fall through.  */
-	    case 0x4e:
-	    case 0x5e:
-	      {
+
 		/* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
 		   processors listed in:
 
 https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
 		 */
-disable_tsx:
+	    disable_tsx:
 		CPU_FEATURE_UNSET (cpu_features, HLE);
 		CPU_FEATURE_UNSET (cpu_features, RTM);
 		CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
-	      }
-	      break;
-	    case 0x3f:
-	      /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
-	      if (stepping >= 4)
 		break;
-	      /* Fall through.  */
-	    case 0x3c:
-	    case 0x45:
-	    case 0x46:
-	      /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
-		 with stepping >= 4) to avoid TSX on kernels that weren't
-		 updated with the latest microcode package (which disables
-		 broken feature by default).  */
-	      CPU_FEATURE_UNSET (cpu_features, RTM);
-	      break;
+
+	    case INTEL_BIGCORE_HASWELL:
+		/* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
+		   TSX.  Haswell also include other model numbers that have
+		   working TSX.  */
+		if (model == 0x3f && stepping >= 4)
+		break;
+
+		CPU_FEATURE_UNSET (cpu_features, RTM);
+		break;
 	    }
 	}
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v9 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific
  2023-05-13  5:19 ` [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
  2023-05-13  5:19   ` [PATCH v9 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
@ 2023-05-13  5:19   ` Noah Goldstein
  2023-05-26  3:34     ` DJ Delorie
  2023-05-15 18:29   ` [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
  2023-05-26  3:34   ` DJ Delorie
  3 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-05-13  5:19 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Different systems prefer different divisors.

From benchmarks[1] so far the following divisors have been found:
    ICX     : 2
    SKX     : 2
    BWD     : 8

For Intel, we are generalizing that BWD and older prefer 8 as a
divisor, and SKL and newer prefer 2. This number can be further tuned
as benchmarks are run.

[1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
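
As a rough sketch (illustrative names only; the real field is
`cachesize_non_temporal_divisor` in `struct cpu_features`, see the diff
below), the threshold then becomes:

static unsigned long int
nt_threshold_from_divisor (long int shared, unsigned long int divisor)
{
  /* divisor is 2 for SKL and newer, 8 for BWD and older, and falls
     back to 4 when nothing more specific is known.  */
  if (divisor == 0)
    divisor = 4;
  return shared / divisor;
}
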
---
 sysdeps/x86/cpu-features.c         | 27 +++++++++++++++++--------
 sysdeps/x86/dl-cacheinfo.h         | 32 ++++++++++++++++++------------
 sysdeps/x86/include/cpu-features.h |  3 +++
 3 files changed, 41 insertions(+), 21 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 29b8c8c133..ba789d6fc1 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -635,6 +635,7 @@ init_cpu_features (struct cpu_features *cpu_features)
   unsigned int stepping = 0;
   enum cpu_features_kind kind;
 
+  cpu_features->cachesize_non_temporal_divisor = 4;
 #if !HAS_CPUID
   if (__get_cpuid_max (0, 0) == 0)
     {
@@ -714,12 +715,13 @@ init_cpu_features (struct cpu_features *cpu_features)
 
 	      /* Bigcore/Default Tuning.  */
 	    default:
+	    default_tuning:
 	      /* Unknown family 0x06 processors.  Assuming this is one
 		 of Core i3/i5/i7 processors if AVX is available.  */
 	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
 		break;
-	    case INTEL_BIGCORE_NEHALEM:
-	    case INTEL_BIGCORE_WESTMERE:
+
+	    enable_modern_features:
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
@@ -728,12 +730,20 @@ init_cpu_features (struct cpu_features *cpu_features)
 		      | bit_arch_Prefer_PMINUB_for_stringop);
 	      break;
 
-	   /*
-	    Default tuned Bigcore microarch.
+	    case INTEL_BIGCORE_NEHALEM:
+	    case INTEL_BIGCORE_WESTMERE:
+	      /* Older CPUs prefer non-temporal stores at lower threshold.  */
+	      cpu_features->cachesize_non_temporal_divisor = 8;
+	      goto enable_modern_features;
+
+	      /* Default tuned Bigcore microarch.  */
 	    case INTEL_BIGCORE_SANDYBRIDGE:
 	    case INTEL_BIGCORE_IVYBRIDGE:
 	    case INTEL_BIGCORE_HASWELL:
 	    case INTEL_BIGCORE_BROADWELL:
+	      cpu_features->cachesize_non_temporal_divisor = 8;
+	      goto default_tuning;
+
 	    case INTEL_BIGCORE_SKYLAKE:
 	    case INTEL_BIGCORE_KABYLAKE:
 	    case INTEL_BIGCORE_COMETLAKE:
@@ -749,13 +759,14 @@ init_cpu_features (struct cpu_features *cpu_features)
 	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
 	    case INTEL_BIGCORE_EMERALDRAPIDS:
 	    case INTEL_BIGCORE_GRANITERAPIDS:
-	    */
+	      cpu_features->cachesize_non_temporal_divisor = 2;
+	      goto default_tuning;
 
-	   /*
-	    Default tuned Mixed (bigcore + atom SOC).
+	      /* Default tuned Mixed (bigcore + atom SOC). */
 	    case INTEL_MIXED_LAKEFIELD:
 	    case INTEL_MIXED_ALDERLAKE:
-	    */
+	      cpu_features->cachesize_non_temporal_divisor = 2;
+	      goto default_tuning;
 	    }
 
 	      /* Disable TSX on some processors to avoid TSX on kernels that
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index 4a1a5423ff..864b00a521 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -738,19 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 1/4 of the size
-     of the chip's cache. For most Intel and AMD processors with an
-     initial release date between 2017 and 2023, a thread's typical
-     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
-     estimate the point where non-temporal stores begin outcompeting
-     REP MOVSB, as well as the point where the write-back to main memory
-     forced by non-temporal stores would have already occurred for the
-     majority of the lines in the copy. Note, concerns about the
-     entire L3 cache being evicted by the copy are mostly alleviated
-     by the fact that modern HW detects streaming patterns and
-     provides proper LRU hints so that the maximum thrashing is
-     capped at 1/associativity. */
-  unsigned long int non_temporal_threshold = shared / 4;
+  unsigned long int cachesize_non_temporal_divisor
+      = cpu_features->cachesize_non_temporal_divisor;
+  if (cachesize_non_temporal_divisor <= 0)
+    cachesize_non_temporal_divisor = 4;
+
+  /* The default setting for the non_temporal threshold is [1/8, 1/2] of the
+     size of the chip's cache (depending on `cachesize_non_temporal_divisor`,
+     which is microarch specific; the default is 1/4). For most Intel and AMD
+     processors with an initial release date between 2017 and 2023, a thread's
+     typical share of the cache is from 18-64MB. Using a reasonable size
+     fraction of L3 is meant to estimate the point where non-temporal stores
+     begin outcompeting REP MOVSB, as well as the point where the write-back to
+     main memory forced by non-temporal stores would have already occurred for
+     the majority of the lines in the copy. Note, concerns about the entire
+     L3 cache being evicted by the copy are mostly alleviated by the fact that
+     modern HW detects streaming patterns and provides proper LRU hints so that
+     the maximum thrashing is capped at 1/associativity. */
+  unsigned long int non_temporal_threshold
+      = shared / cachesize_non_temporal_divisor;
   /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
      a higher risk of actually thrashing the cache as they don't have a HW LRU
      hint. As well, their performance in highly parallel situations is
diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
index 40b8129d6a..f5b9dd54fe 100644
--- a/sysdeps/x86/include/cpu-features.h
+++ b/sysdeps/x86/include/cpu-features.h
@@ -915,6 +915,9 @@ struct cpu_features
   unsigned long int shared_cache_size;
   /* Threshold to use non temporal store.  */
   unsigned long int non_temporal_threshold;
+  /* When no user non_temporal_threshold is specified, we default to
+     cachesize / cachesize_non_temporal_divisor.  */
+  unsigned long int cachesize_non_temporal_divisor;
   /* Threshold to use "rep movsb".  */
   unsigned long int rep_movsb_threshold;
   /* Threshold to stop using "rep movsb".  */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-05-13  5:19 ` [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
  2023-05-13  5:19   ` [PATCH v9 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
  2023-05-13  5:19   ` [PATCH v9 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
@ 2023-05-15 18:29   ` Noah Goldstein
  2023-05-17 12:00     ` Carlos O'Donell
  2023-05-26  3:34   ` DJ Delorie
  3 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-05-15 18:29 UTC (permalink / raw)
  To: libc-alpha; +Cc: hjl.tools, carlos

On Sat, May 13, 2023 at 12:19 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> ncores_per_socket'. This patch updates that value to roughly
> 'sizeof_L3 / 4`
>
> The original value (specifically dividing the `ncores_per_socket`) was
> done to limit the amount of other threads' data a `memcpy`/`memset`
> could evict.
>
> Dividing by 'ncores_per_socket', however leads to exceedingly low
> non-temporal thresholds and leads to using non-temporal stores in
> cases where REP MOVSB is multiple times faster.
>
> Furthermore, non-temporal stores are written directly to main memory
> so using it at a size much smaller than L3 can place soon to be
> accessed data much further away than it otherwise could be. As well,
> modern machines are able to detect streaming patterns (especially if
> REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> in affect caps the total amount of eviction at 1/cache_associativity,
> far below meaningfully thrashing the entire cache.
>
> As best I can tell, the benchmarks that lead this small threshold
> where done comparing non-temporal stores versus standard cacheable
> stores. A better comparison (linked below) is to be REP MOVSB which,
> on the measure systems, is nearly 2x faster than non-temporal stores
> at the low-end of the previous threshold, and within 10% for over
> 100MB copies (well past even the current threshold). In cases with a
> low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> up to `sizeof_L3`.
>
> The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> such as Broadwell prefer something closer to `8`. This patch is meant
> to be followed up by another one to make the divisor cpu-specific, but
> in the meantime (and for easier backporting), this patch settles on
> `4` as a middle-ground.
>
> Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> stores where done using:
> https://github.com/goldsteinn/memcpy-nt-benchmarks
>
> Sheets results (also available in pdf on the github):
> https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> ---
>  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
>  1 file changed, 43 insertions(+), 27 deletions(-)
>
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index ec88945b39..4a1a5423ff 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
>  }
>
>  static void
> -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
>                  long int core)
>  {
>    unsigned int eax;
> @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>    unsigned int family = cpu_features->basic.family;
>    unsigned int model = cpu_features->basic.model;
>    long int shared = *shared_ptr;
> +  long int shared_per_thread = *shared_per_thread_ptr;
>    unsigned int threads = *threads_ptr;
>    bool inclusive_cache = true;
>    bool support_count_mask = true;
> @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>        /* Try L2 otherwise.  */
>        level  = 2;
>        shared = core;
> +      shared_per_thread = core;
>        threads_l2 = 0;
>        threads_l3 = -1;
>      }
> @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>          }
>        else
>          {
> -intel_bug_no_cache_info:
> -          /* Assume that all logical threads share the highest cache
> -             level.  */
> -          threads
> -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> -              & 0xff);
> -        }
> -
> -        /* Cap usage of highest cache level to the number of supported
> -           threads.  */
> -        if (shared > 0 && threads > 0)
> -          shared /= threads;
> +       intel_bug_no_cache_info:
> +         /* Assume that all logical threads share the highest cache
> +            level.  */
> +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> +                    & 0xff);
> +
> +         /* Get per-thread size of highest level cache.  */
> +         if (shared_per_thread > 0 && threads > 0)
> +           shared_per_thread /= threads;
> +       }
>      }
>
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
>      {
>        if (threads_l2 > 0)
> -        core /= threads_l2;
> +       shared_per_thread += core / threads_l2;
>        shared += core;
>      }
>
>    *shared_ptr = shared;
> +  *shared_per_thread_ptr = shared_per_thread;
>    *threads_ptr = threads;
>  }
>
> @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    /* Find out what brand of processor.  */
>    long int data = -1;
>    long int shared = -1;
> +  long int shared_per_thread = -1;
>    long int core = -1;
>    unsigned int threads = 0;
>    unsigned long int level1_icache_size = -1;
> @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
>        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
>        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> +      shared_per_thread = shared;
>
>        level1_icache_size
>         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level4_cache_size
>         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
>      {
>        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
>        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_amd)
>      {
>        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        if (shared <= 0)
>          /* No shared L3 cache.  All we have is the L2 cache.  */
>         shared = core;
> +
> +      if (shared_per_thread <= 0)
> +       shared_per_thread = shared;
>      }
>
>    cpu_features->level1_icache_size = level1_icache_size;
> @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip. */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/4 of size
> +     of the chip's cache. For most Intel and AMD processors with an
> +     initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> +     estimate the point where non-temporal stores begin outcompeting
> +     REP MOVSB. As well the point where the fact that non-temporal
> +     stores are forced back to main memory would already occurred to the
> +     majority of the lines in the copy. Note, concerns about the
> +     entire L3 cache being evicted by the copy are mostly alleviated
> +     by the fact that modern HW detects streaming patterns and
> +     provides proper LRU hints so that the maximum thrashing
> +     capped at 1/associativity. */
> +  unsigned long int non_temporal_threshold = shared / 4;
> +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> +     hint. As well, there performance in highly parallel situations is
> +     noticeably worse.  */
> +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> +    non_temporal_threshold = shared_per_thread * 3 / 4;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> --
> 2.34.1
>

Carlos, any update on reproducing this on ICX?

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v9 2/3] x86: Refactor Intel `init_cpu_features`
  2023-05-13  5:19   ` [PATCH v9 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
@ 2023-05-15 20:57     ` H.J. Lu
  2023-05-26  3:34     ` DJ Delorie
  1 sibling, 0 replies; 76+ messages in thread
From: H.J. Lu @ 2023-05-15 20:57 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, carlos

On Fri, May 12, 2023 at 10:19 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> This patch should have no effect on existing functionality.
>
> The current code, which has a single switch for model detection and
> setting preferred features, is difficult to follow/extend. The cases
> use magic numbers and many microarchitectures are missing. This makes
> it difficult to reason about what is implemented so far and/or
> how/where to add support for new features.
>
> This patch splits the model detection and preference setting stages so
> that CPU preferences can be set based on a complete list of available
> microarchitectures, rather than based on model magic numbers.
> ---
>  sysdeps/x86/cpu-features.c | 391 +++++++++++++++++++++++++++++--------
>  1 file changed, 307 insertions(+), 84 deletions(-)
>
> diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> index 5bff8ec0b4..29b8c8c133 100644
> --- a/sysdeps/x86/cpu-features.c
> +++ b/sysdeps/x86/cpu-features.c
> @@ -417,6 +417,215 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
>                      == index_arch_Fast_Copy_Backward)),
>                 "Incorrect index_arch_Fast_Unaligned_Load");
>
> +
> +/* Intel Family-6 microarch list.  */
> +enum
> +{
> +  /* Atom processors.  */
> +  INTEL_ATOM_BONNELL,
> +  INTEL_ATOM_SILVERMONT,
> +  INTEL_ATOM_AIRMONT,
> +  INTEL_ATOM_GOLDMONT,
> +  INTEL_ATOM_GOLDMONT_PLUS,
> +  INTEL_ATOM_SIERRAFOREST,
> +  INTEL_ATOM_GRANDRIDGE,
> +  INTEL_ATOM_TREMONT,
> +
> +  /* Bigcore processors.  */
> +  INTEL_BIGCORE_MEROM,
> +  INTEL_BIGCORE_PENRYN,
> +  INTEL_BIGCORE_DUNNINGTON,
> +  INTEL_BIGCORE_NEHALEM,
> +  INTEL_BIGCORE_WESTMERE,
> +  INTEL_BIGCORE_SANDYBRIDGE,
> +  INTEL_BIGCORE_IVYBRIDGE,
> +  INTEL_BIGCORE_HASWELL,
> +  INTEL_BIGCORE_BROADWELL,
> +  INTEL_BIGCORE_SKYLAKE,
> +  INTEL_BIGCORE_KABYLAKE,
> +  INTEL_BIGCORE_COMETLAKE,
> +  INTEL_BIGCORE_SKYLAKE_AVX512,
> +  INTEL_BIGCORE_CANNONLAKE,
> +  INTEL_BIGCORE_ICELAKE,
> +  INTEL_BIGCORE_TIGERLAKE,
> +  INTEL_BIGCORE_ROCKETLAKE,
> +  INTEL_BIGCORE_SAPPHIRERAPIDS,
> +  INTEL_BIGCORE_RAPTORLAKE,
> +  INTEL_BIGCORE_EMERALDRAPIDS,
> +  INTEL_BIGCORE_METEORLAKE,
> +  INTEL_BIGCORE_LUNARLAKE,
> +  INTEL_BIGCORE_ARROWLAKE,
> +  INTEL_BIGCORE_GRANITERAPIDS,
> +
> +  /* Mixed (bigcore + atom SOC).  */
> +  INTEL_MIXED_LAKEFIELD,
> +  INTEL_MIXED_ALDERLAKE,
> +
> +  /* KNL.  */
> +  INTEL_KNIGHTS_MILL,
> +  INTEL_KNIGHTS_LANDING,
> +
> +  /* Unknown.  */
> +  INTEL_UNKNOWN,
> +};
> +
> +static unsigned int
> +intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
> +{
> +  switch (model)
> +    {
> +    case 0x1C:
> +    case 0x26:
> +      return INTEL_ATOM_BONNELL;
> +    case 0x27:
> +    case 0x35:
> +    case 0x36:
> +      /* Really Saltwell, but Saltwell is just a die shrink of Bonnell
> +         (microarchitecturally identical).  */
> +      return INTEL_ATOM_BONNELL;
> +    case 0x37:
> +    case 0x4A:
> +    case 0x4D:
> +    case 0x5D:
> +      return INTEL_ATOM_SILVERMONT;
> +    case 0x4C:
> +    case 0x5A:
> +    case 0x75:
> +      return INTEL_ATOM_AIRMONT;
> +    case 0x5C:
> +    case 0x5F:
> +      return INTEL_ATOM_GOLDMONT;
> +    case 0x7A:
> +      return INTEL_ATOM_GOLDMONT_PLUS;
> +    case 0xAF:
> +      return INTEL_ATOM_SIERRAFOREST;
> +    case 0xB6:
> +      return INTEL_ATOM_GRANDRIDGE;
> +    case 0x86:
> +    case 0x96:
> +    case 0x9C:
> +      return INTEL_ATOM_TREMONT;
> +    case 0x0F:
> +    case 0x16:
> +      return INTEL_BIGCORE_MEROM;
> +    case 0x17:
> +      return INTEL_BIGCORE_PENRYN;
> +    case 0x1D:
> +      return INTEL_BIGCORE_DUNNINGTON;
> +    case 0x1A:
> +    case 0x1E:
> +    case 0x1F:
> +    case 0x2E:
> +      return INTEL_BIGCORE_NEHALEM;
> +    case 0x25:
> +    case 0x2C:
> +    case 0x2F:
> +      return INTEL_BIGCORE_WESTMERE;
> +    case 0x2A:
> +    case 0x2D:
> +      return INTEL_BIGCORE_SANDYBRIDGE;
> +    case 0x3A:
> +    case 0x3E:
> +      return INTEL_BIGCORE_IVYBRIDGE;
> +    case 0x3C:
> +    case 0x3F:
> +    case 0x45:
> +    case 0x46:
> +      return INTEL_BIGCORE_HASWELL;
> +    case 0x3D:
> +    case 0x47:
> +    case 0x4F:
> +    case 0x56:
> +      return INTEL_BIGCORE_BROADWELL;
> +    case 0x4E:
> +    case 0x5E:
> +      return INTEL_BIGCORE_SKYLAKE;
> +    case 0x8E:
> +    /*
> +     Stepping = {9}
> +        -> Amberlake
> +     Stepping = {10}
> +        -> Coffeelake
> +     Stepping = {11, 12}
> +        -> Whiskeylake
> +     else
> +        -> Kabylake
> +
> +     All of these are derivatives of Kabylake (Skylake client).
> +     */
> +         return INTEL_BIGCORE_KABYLAKE;
> +    case 0x9E:
> +    /*
> +     Stepping = {10, 11, 12, 13}
> +        -> Coffeelake
> +     else
> +        -> Kabylake
> +
> +     Coffeelake is a derivatives of Kabylake (Skylake client).
> +     */
> +         return INTEL_BIGCORE_KABYLAKE;
> +    case 0xA5:
> +    case 0xA6:
> +      return INTEL_BIGCORE_COMETLAKE;
> +    case 0x66:
> +      return INTEL_BIGCORE_CANNONLAKE;
> +    case 0x55:
> +    /*
> +     Stepping = {6, 7}
> +        -> Cascadelake
> +     Stepping = {11}
> +        -> Cooperlake
> +     else
> +        -> Skylake-avx512
> +
> +     These are all microarchitecturally indentical, so use
> +     Skylake-avx512 for all of them.
> +     */
> +      return INTEL_BIGCORE_SKYLAKE_AVX512;
> +    case 0x6A:
> +    case 0x6C:
> +    case 0x7D:
> +    case 0x7E:
> +    case 0x9D:
> +      return INTEL_BIGCORE_ICELAKE;
> +    case 0x8C:
> +    case 0x8D:
> +      return INTEL_BIGCORE_TIGERLAKE;
> +    case 0xA7:
> +      return INTEL_BIGCORE_ROCKETLAKE;
> +    case 0x8F:
> +      return INTEL_BIGCORE_SAPPHIRERAPIDS;
> +    case 0xB7:
> +    case 0xBA:
> +    case 0xBF:
> +      return INTEL_BIGCORE_RAPTORLAKE;
> +    case 0xCF:
> +      return INTEL_BIGCORE_EMERALDRAPIDS;
> +    case 0xAA:
> +    case 0xAC:
> +      return INTEL_BIGCORE_METEORLAKE;
> +    case 0xbd:
> +      return INTEL_BIGCORE_LUNARLAKE;
> +    case 0xc6:
> +      return INTEL_BIGCORE_ARROWLAKE;
> +    case 0xAD:
> +    case 0xAE:
> +      return INTEL_BIGCORE_GRANITERAPIDS;
> +    case 0x8A:
> +      return INTEL_MIXED_LAKEFIELD;
> +    case 0x97:
> +    case 0x9A:
> +    case 0xBE:
> +      return INTEL_MIXED_ALDERLAKE;
> +    case 0x85:
> +      return INTEL_KNIGHTS_MILL;
> +    case 0x57:
> +      return INTEL_KNIGHTS_LANDING;
> +    default:
> +      return INTEL_UNKNOWN;
> +    }
> +}
> +
>  static inline void
>  init_cpu_features (struct cpu_features *cpu_features)
>  {
> @@ -453,129 +662,143 @@ init_cpu_features (struct cpu_features *cpu_features)
>        if (family == 0x06)
>         {
>           model += extended_model;
> -         switch (model)
> +         unsigned int microarch
> +             = intel_get_fam6_microarch (model, stepping);
> +
> +         switch (microarch)
>             {
> -           case 0x1c:
> -           case 0x26:
> -             /* BSF is slow on Atom.  */
> +             /* Atom / KNL tuning.  */
> +           case INTEL_ATOM_BONNELL:
> +             /* BSF is slow on Bonnell.  */
>               cpu_features->preferred[index_arch_Slow_BSF]
> -               |= bit_arch_Slow_BSF;
> +                 |= bit_arch_Slow_BSF;
>               break;
>
> -           case 0x57:
> -             /* Knights Landing.  Enable Silvermont optimizations.  */
> -
> -           case 0x7a:
> -             /* Unaligned load versions are faster than SSSE3
> -                on Goldmont Plus.  */
> -
> -           case 0x5c:
> -           case 0x5f:
>               /* Unaligned load versions are faster than SSSE3
> -                on Goldmont.  */
> +                    on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
> +           case INTEL_ATOM_AIRMONT:
> +           case INTEL_ATOM_SILVERMONT:
> +           case INTEL_ATOM_GOLDMONT:
> +           case INTEL_ATOM_GOLDMONT_PLUS:
>
> -           case 0x4c:
> -           case 0x5a:
> -           case 0x75:
> -             /* Airmont is a die shrink of Silvermont.  */
> +          /* Knights Landing.  Enable Silvermont optimizations.  */
> +           case INTEL_KNIGHTS_LANDING:
>
> -           case 0x37:
> -           case 0x4a:
> -           case 0x4d:
> -           case 0x5d:
> -             /* Unaligned load versions are faster than SSSE3
> -                on Silvermont.  */
>               cpu_features->preferred[index_arch_Fast_Unaligned_Load]
> -               |= (bit_arch_Fast_Unaligned_Load
> -                   | bit_arch_Fast_Unaligned_Copy
> -                   | bit_arch_Prefer_PMINUB_for_stringop
> -                   | bit_arch_Slow_SSE4_2);
> +                 |= (bit_arch_Fast_Unaligned_Load
> +                     | bit_arch_Fast_Unaligned_Copy
> +                     | bit_arch_Prefer_PMINUB_for_stringop
> +                     | bit_arch_Slow_SSE4_2);
>               break;
>
> -           case 0x86:
> -           case 0x96:
> -           case 0x9c:
> +           case INTEL_ATOM_TREMONT:
>               /* Enable rep string instructions, unaligned load, unaligned
> -                copy, pminub and avoid SSE 4.2 on Tremont.  */
> +                copy, pminub and avoid SSE 4.2 on Tremont.  */
>               cpu_features->preferred[index_arch_Fast_Rep_String]
> -               |= (bit_arch_Fast_Rep_String
> -                   | bit_arch_Fast_Unaligned_Load
> -                   | bit_arch_Fast_Unaligned_Copy
> -                   | bit_arch_Prefer_PMINUB_for_stringop
> -                   | bit_arch_Slow_SSE4_2);
> +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> +                     | bit_arch_Fast_Unaligned_Copy
> +                     | bit_arch_Prefer_PMINUB_for_stringop
> +                     | bit_arch_Slow_SSE4_2);
>               break;
>
> +          /*
> +           Default tuned Knights microarch.
> +           case INTEL_KNIGHTS_MILL:
> +        */
> +
> +          /*
> +           Default tuned atom microarch.
> +           case INTEL_ATOM_SIERRAFOREST:
> +           case INTEL_ATOM_GRANDRIDGE:
> +          */
> +
> +             /* Bigcore/Default Tuning.  */
>             default:
>               /* Unknown family 0x06 processors.  Assuming this is one
>                  of Core i3/i5/i7 processors if AVX is available.  */
>               if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
>                 break;
> -             /* Fall through.  */
> -
> -           case 0x1a:
> -           case 0x1e:
> -           case 0x1f:
> -           case 0x25:
> -           case 0x2c:
> -           case 0x2e:
> -           case 0x2f:
> +           case INTEL_BIGCORE_NEHALEM:
> +           case INTEL_BIGCORE_WESTMERE:
>               /* Rep string instructions, unaligned load, unaligned copy,
>                  and pminub are fast on Intel Core i3, i5 and i7.  */
>               cpu_features->preferred[index_arch_Fast_Rep_String]
> -               |= (bit_arch_Fast_Rep_String
> -                   | bit_arch_Fast_Unaligned_Load
> -                   | bit_arch_Fast_Unaligned_Copy
> -                   | bit_arch_Prefer_PMINUB_for_stringop);
> +                 |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> +                     | bit_arch_Fast_Unaligned_Copy
> +                     | bit_arch_Prefer_PMINUB_for_stringop);
>               break;
> +
> +          /*
> +           Default tuned Bigcore microarch.
> +           case INTEL_BIGCORE_SANDYBRIDGE:
> +           case INTEL_BIGCORE_IVYBRIDGE:
> +           case INTEL_BIGCORE_HASWELL:
> +           case INTEL_BIGCORE_BROADWELL:
> +           case INTEL_BIGCORE_SKYLAKE:
> +           case INTEL_BIGCORE_KABYLAKE:
> +           case INTEL_BIGCORE_COMETLAKE:
> +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> +           case INTEL_BIGCORE_CANNONLAKE:
> +           case INTEL_BIGCORE_ICELAKE:
> +           case INTEL_BIGCORE_TIGERLAKE:
> +           case INTEL_BIGCORE_ROCKETLAKE:
> +           case INTEL_BIGCORE_RAPTORLAKE:
> +           case INTEL_BIGCORE_METEORLAKE:
> +           case INTEL_BIGCORE_LUNARLAKE:
> +           case INTEL_BIGCORE_ARROWLAKE:
> +           case INTEL_BIGCORE_SAPPHIRERAPIDS:
> +           case INTEL_BIGCORE_EMERALDRAPIDS:
> +           case INTEL_BIGCORE_GRANITERAPIDS:
> +           */
> +
> +          /*
> +           Default tuned Mixed (bigcore + atom SOC).
> +           case INTEL_MIXED_LAKEFIELD:
> +           case INTEL_MIXED_ALDERLAKE:
> +           */
>             }
>
> -        /* Disable TSX on some processors to avoid TSX on kernels that
> -           weren't updated with the latest microcode package (which
> -           disables broken feature by default).  */
> -        switch (model)
> +             /* Disable TSX on some processors to avoid TSX on kernels that
> +                weren't updated with the latest microcode package (which
> +                disables broken feature by default).  */
> +         switch (microarch)
>             {
> -           case 0x55:
> +           case INTEL_BIGCORE_SKYLAKE_AVX512:
> +             /* 0x55 (Skylake-avx512) && stepping <= 5 disable TSX. */
>               if (stepping <= 5)
>                 goto disable_tsx;
>               break;
> -           case 0x8e:
> -             /* NB: Although the errata documents that for model == 0x8e,
> -                only 0xb stepping or lower are impacted, the intention of
> -                the errata was to disable TSX on all client processors on
> -                all steppings.  Include 0xc stepping which is an Intel
> -                Core i7-8665U, a client mobile processor.  */
> -           case 0x9e:
> -             if (stepping > 0xc)
> +
> +           case INTEL_BIGCORE_SKYLAKE:
> +           case INTEL_BIGCORE_KABYLAKE:
> +               /* NB: Although the errata documents that for model == 0x8e
> +                  (skylake client), only 0xb stepping or lower are impacted,
> +                  the intention of the errata was to disable TSX on all client
> +                  processors on all steppings.  Include 0xc stepping which is
> +                  an Intel Core i7-8665U, a client mobile processor.  */
> +               if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
>                 break;
> -             /* Fall through.  */
> -           case 0x4e:
> -           case 0x5e:
> -             {
> +
>                 /* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
>                    processors listed in:
>
>  https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
>                  */
> -disable_tsx:
> +           disable_tsx:
>                 CPU_FEATURE_UNSET (cpu_features, HLE);
>                 CPU_FEATURE_UNSET (cpu_features, RTM);
>                 CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
> -             }
> -             break;
> -           case 0x3f:
> -             /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
> -             if (stepping >= 4)
>                 break;
> -             /* Fall through.  */
> -           case 0x3c:
> -           case 0x45:
> -           case 0x46:
> -             /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
> -                with stepping >= 4) to avoid TSX on kernels that weren't
> -                updated with the latest microcode package (which disables
> -                broken feature by default).  */
> -             CPU_FEATURE_UNSET (cpu_features, RTM);
> -             break;
> +
> +           case INTEL_BIGCORE_HASWELL:
> +               /* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
> +                  TSX.  Haswell also include other model numbers that have
> +                  working TSX.  */
> +               if (model == 0x3f && stepping >= 4)
> +               break;
> +
> +               CPU_FEATURE_UNSET (cpu_features, RTM);
> +               break;
>             }
>         }
>
> --
> 2.34.1
>

LGTM.

Thanks.

-- 
H.J.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-05-15 18:29   ` [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
@ 2023-05-17 12:00     ` Carlos O'Donell
  0 siblings, 0 replies; 76+ messages in thread
From: Carlos O'Donell @ 2023-05-17 12:00 UTC (permalink / raw)
  To: Noah Goldstein, libc-alpha; +Cc: hjl.tools, carlos

On 5/15/23 14:29, Noah Goldstein via Libc-alpha wrote:
> Carlos, any update on reproducing this on ICX?
 
Not yet. DJ is helping me test this on a variety of pieces of hardware.

We're also looking at STREAM benchmark results to make sure we don't
regress here, since it models certain workloads well.
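
For context, STREAM's "Copy" kernel is essentially a single streaming
pass over two large arrays, i.e. exactly the access pattern this
threshold is trying to get right.  A minimal sketch (not the
benchmark's actual harness, just the shape of the kernel):

  #include <stddef.h>

  /* STREAM-style "Copy": read a[] once, write c[] once, no reuse.  */
  static void
  stream_copy (double *restrict c, const double *restrict a, size_t n)
  {
    for (size_t j = 0; j < n; j++)
      c[j] = a[j];
  }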

I think we should have more data by the end of the week.

-- 
Cheers,
Carlos.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-05-13  5:19 ` [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
                     ` (2 preceding siblings ...)
  2023-05-15 18:29   ` [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
@ 2023-05-26  3:34   ` DJ Delorie
  3 siblings, 0 replies; 76+ messages in thread
From: DJ Delorie @ 2023-05-26  3:34 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, goldstein.w.n, hjl.tools, carlos



Ignoring the benchmarks for now, technical review...

LGTM.
Reviewed-by: DJ Delorie <dj@redhat.com>


>  static void
> -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
>                  long int core)

Adding a new parameter, ok.

> +  long int shared_per_thread = *shared_per_thread_ptr;

No NULL check, but all callers pass address-of-variable, so OK.

> +      shared_per_thread = core;

Ok.

> @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>          }
>        else
>          {
> -intel_bug_no_cache_info:
> -          /* Assume that all logical threads share the highest cache
> -             level.  */
> -          threads
> -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> -	       & 0xff);
> -        }
> -
> -        /* Cap usage of highest cache level to the number of supported
> -           threads.  */
> -        if (shared > 0 && threads > 0)
> -          shared /= threads;
> +	intel_bug_no_cache_info:
> +	  /* Assume that all logical threads share the highest cache
> +	     level.  */
> +	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> +		     & 0xff);
> +
> +	  /* Get per-thread size of highest level cache.  */
> +	  if (shared_per_thread > 0 && threads > 0)
> +	    shared_per_thread /= threads;
> +	}
>      }

Mostly whitespace, but the real change is to modify shared_per_thread
instead of shared.  Ok.

>        if (threads_l2 > 0)
> -        core /= threads_l2;
> +	shared_per_thread += core / threads_l2;
>        shared += core;

shared is still unscaled, shared_per_thread has the scaled version.  Ok.
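
To make the split concrete (the cache sizes here are made up, purely
for illustration): on a part with a 32 MB non-inclusive L3 shared by
16 threads and a 1 MB L2 shared by 2 threads, this works out to
roughly

  shared            = 32 MB + 1 MB          = 33 MB
  shared_per_thread = 32 MB / 16 + 1 MB / 2 = 2.5 MB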

>      }
>  
>    *shared_ptr = shared;
> +  *shared_per_thread_ptr = shared_per_thread;

Ok.

> @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    /* Find out what brand of processor.  */
>    long int data = -1;
>    long int shared = -1;
> +  long int shared_per_thread = -1;

Ok.

> @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
>        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
>        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> +      shared_per_thread = shared;

Ok.

> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);

Ok.

>        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;

Ok.

> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);

Ok.

>        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;

Ok.

> @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        if (shared <= 0)
>          /* No shared L3 cache.  All we have is the L2 cache.  */
>  	shared = core;
> +
> +      if (shared_per_thread <= 0)
> +	shared_per_thread = shared;

Reasonable default, ok.

> @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>  
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip. */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;

> +  /* The default setting for the non_temporal threshold is 1/4 of size
> +     of the chip's cache. For most Intel and AMD processors with an
> +     initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> +     estimate the point where non-temporal stores begin outcompeting
> +     REP MOVSB. As well the point where the fact that non-temporal
> +     stores are forced back to main memory would already occurred to the
> +     majority of the lines in the copy. Note, concerns about the
> +     entire L3 cache being evicted by the copy are mostly alleviated
> +     by the fact that modern HW detects streaming patterns and
> +     provides proper LRU hints so that the maximum thrashing
> +     capped at 1/associativity. */
> +  unsigned long int non_temporal_threshold = shared / 4;

Changing from 3/4 of a scaled amount, to 1/4 of the total amount.  Ok.
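
Rough numbers for a made-up 32 MB L3 shared by 16 threads:

  old: 32 MB / 16 * 3 / 4 = 1.5 MB
  new: 32 MB / 4          = 8 MB

i.e. on such a machine the default threshold goes up by roughly 5x.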

> +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> +     hint. As well, there performance in highly parallel situations is
> +     noticeably worse.  */
> +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> +    non_temporal_threshold = shared_per_thread * 3 / 4;

This is the same as the old default, so OK.
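
One practical note for anyone benchmarking this: whichever branch is
taken, the computed default can still be overridden at run time with
the existing tunable, e.g. (threshold value picked arbitrarily):

  GLIBC_TUNABLES=glibc.cpu.x86_non_temporal_threshold=0x800000 ./app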


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v9 2/3] x86: Refactor Intel `init_cpu_features`
  2023-05-13  5:19   ` [PATCH v9 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
  2023-05-15 20:57     ` H.J. Lu
@ 2023-05-26  3:34     ` DJ Delorie
  2023-05-27 18:46       ` Noah Goldstein
  1 sibling, 1 reply; 76+ messages in thread
From: DJ Delorie @ 2023-05-26  3:34 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, goldstein.w.n, hjl.tools, carlos


LGTM with a few nits about comments that I don't really care about, but
mention for completeness.

Reviewed-by: DJ Delorie <dj@redhat.com>

Noah Goldstein via Libc-alpha <libc-alpha@sourceware.org> writes:
> This patch should have no effect on existing functionality.

> +/* Intel Family-6 microarch list.  */
> +enum
> +{
> +  /* Atom processors.  */
> +  INTEL_ATOM_BONNELL,
> +  INTEL_ATOM_SILVERMONT,
> +  INTEL_ATOM_AIRMONT,
> +  INTEL_ATOM_GOLDMONT,
> +  INTEL_ATOM_GOLDMONT_PLUS,
> +  INTEL_ATOM_SIERRAFOREST,
> +  INTEL_ATOM_GRANDRIDGE,
> +  INTEL_ATOM_TREMONT,
> +
> +  /* Bigcore processors.  */
> +  INTEL_BIGCORE_MEROM,
> +  INTEL_BIGCORE_PENRYN,
> +  INTEL_BIGCORE_DUNNINGTON,
> +  INTEL_BIGCORE_NEHALEM,
> +  INTEL_BIGCORE_WESTMERE,
> +  INTEL_BIGCORE_SANDYBRIDGE,
> +  INTEL_BIGCORE_IVYBRIDGE,
> +  INTEL_BIGCORE_HASWELL,
> +  INTEL_BIGCORE_BROADWELL,
> +  INTEL_BIGCORE_SKYLAKE,
> +  INTEL_BIGCORE_KABYLAKE,
> +  INTEL_BIGCORE_COMETLAKE,
> +  INTEL_BIGCORE_SKYLAKE_AVX512,
> +  INTEL_BIGCORE_CANNONLAKE,
> +  INTEL_BIGCORE_ICELAKE,
> +  INTEL_BIGCORE_TIGERLAKE,
> +  INTEL_BIGCORE_ROCKETLAKE,
> +  INTEL_BIGCORE_SAPPHIRERAPIDS,
> +  INTEL_BIGCORE_RAPTORLAKE,
> +  INTEL_BIGCORE_EMERALDRAPIDS,
> +  INTEL_BIGCORE_METEORLAKE,
> +  INTEL_BIGCORE_LUNARLAKE,
> +  INTEL_BIGCORE_ARROWLAKE,
> +  INTEL_BIGCORE_GRANITERAPIDS,
> +
> +  /* Mixed (bigcore + atom SOC).  */
> +  INTEL_MIXED_LAKEFIELD,
> +  INTEL_MIXED_ALDERLAKE,
> +
> +  /* KNL.  */

(Honestly, not a useful comment... ;-)

> +  INTEL_KNIGHTS_MILL,
> +  INTEL_KNIGHTS_LANDING,
> +
> +  /* Unknown.  */
> +  INTEL_UNKNOWN,
> +};

Ok.

> +static unsigned int
> +intel_get_fam6_microarch (unsigned int model, unsigned int stepping)

stepping is never used, could it be marked notused or something?
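
Something along these lines would do (untested sketch, using the gcc
attribute directly; there may be a preferred glibc macro for this):

  static unsigned int
  intel_get_fam6_microarch (unsigned int model,
                            unsigned int stepping __attribute__ ((unused)))
  {
    switch (model)
      {
      /* ... model cases unchanged ...  */
      default:
        return INTEL_UNKNOWN;
      }
  }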

> +{
> +  switch (model)
> +    {
 . . .
> +    }
> +}

Ok.


> @@ -453,129 +662,143 @@ init_cpu_features (struct cpu_features *cpu_features)
>        if (family == 0x06)
>  	{
>  	  model += extended_model;
> -	  switch (model)
> +	  unsigned int microarch
> +	      = intel_get_fam6_microarch (model, stepping);
> +
> +	  switch (microarch)

Ok.

>  	    {
> -	    case 0x1c:
> -	    case 0x26:
> -	      /* BSF is slow on Atom.  */
> +	      /* Atom / KNL tuning.  */
> +	    case INTEL_ATOM_BONNELL:
> +	      /* BSF is slow on Bonnell.  */

Ok.

> -	    case 0x57:
> -	      /* Knights Landing.  Enable Silvermont optimizations.  */
> -
> -	    case 0x7a:
> -	      /* Unaligned load versions are faster than SSSE3
> -		 on Goldmont Plus.  */
> -
> -	    case 0x5c:
> -	    case 0x5f:
>  	      /* Unaligned load versions are faster than SSSE3
> -		 on Goldmont.  */
> +		     on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
> +	    case INTEL_ATOM_AIRMONT:
> +	    case INTEL_ATOM_SILVERMONT:
> +	    case INTEL_ATOM_GOLDMONT:
> +	    case INTEL_ATOM_GOLDMONT_PLUS:

Ok.

> -	    case 0x4c:
> -	    case 0x5a:
> -	    case 0x75:
> -	      /* Airmont is a die shrink of Silvermont.  */
> +          /* Knights Landing.  Enable Silvermont optimizations.  */
> +	    case INTEL_KNIGHTS_LANDING:
>  
> -	    case 0x37:
> -	    case 0x4a:
> -	    case 0x4d:
> -	    case 0x5d:
> -	      /* Unaligned load versions are faster than SSSE3
> -		 on Silvermont.  */

Ok.

>  	      cpu_features->preferred[index_arch_Fast_Unaligned_Load]
> -		|= (bit_arch_Fast_Unaligned_Load
> -		    | bit_arch_Fast_Unaligned_Copy
> -		    | bit_arch_Prefer_PMINUB_for_stringop
> -		    | bit_arch_Slow_SSE4_2);
> +		  |= (bit_arch_Fast_Unaligned_Load
> +		      | bit_arch_Fast_Unaligned_Copy
> +		      | bit_arch_Prefer_PMINUB_for_stringop
> +		      | bit_arch_Slow_SSE4_2);
>  	      break;

Ok.

> -	    case 0x86:
> -	    case 0x96:
> -	    case 0x9c:
> +	    case INTEL_ATOM_TREMONT:

Ok.

>  	      cpu_features->preferred[index_arch_Fast_Rep_String]
> -		|= (bit_arch_Fast_Rep_String
> -		    | bit_arch_Fast_Unaligned_Load
> -		    | bit_arch_Fast_Unaligned_Copy
> -		    | bit_arch_Prefer_PMINUB_for_stringop
> -		    | bit_arch_Slow_SSE4_2);
> +		  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load

Minor nit: I think the one-per-line is easier to follow.  Ok though.

> +		      | bit_arch_Fast_Unaligned_Copy
> +		      | bit_arch_Prefer_PMINUB_for_stringop
> +		      | bit_arch_Slow_SSE4_2);

Ok.

>  
> +	   /*
> +	    Default tuned Knights microarch.
> +	    case INTEL_KNIGHTS_MILL:
> +        */
> +
> +	   /*
> +	    Default tuned atom microarch.
> +	    case INTEL_ATOM_SIERRAFOREST:
> +	    case INTEL_ATOM_GRANDRIDGE:
> +	   */
> +
> +	      /* Bigcore/Default Tuning.  */
>  	    default:

Ok.

>  	      /* Unknown family 0x06 processors.  Assuming this is one
>  		 of Core i3/i5/i7 processors if AVX is available.  */
>  	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
>  		break;

> -	      /* Fall through.  */

This should stay, as we're still falling through.
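
i.e. I'd expect the end result to keep falling from the default case
into the Nehalem/Westmere block, roughly (same code as the patch, just
with the comment put back):

    default:
      /* Unknown family 0x06 processors.  Assuming this is one
         of Core i3/i5/i7 processors if AVX is available.  */
      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
        break;
      /* Fall through.  */
    case INTEL_BIGCORE_NEHALEM:
    case INTEL_BIGCORE_WESTMERE:
      /* Rep string instructions, unaligned load, unaligned copy,
         and pminub are fast on Intel Core i3, i5 and i7.  */
      cpu_features->preferred[index_arch_Fast_Rep_String]
        |= (bit_arch_Fast_Rep_String
            | bit_arch_Fast_Unaligned_Load
            | bit_arch_Fast_Unaligned_Copy
            | bit_arch_Prefer_PMINUB_for_stringop);
      break;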

> -	    case 0x1a:
> -	    case 0x1e:
> -	    case 0x1f:
> -	    case 0x25:
> -	    case 0x2c:
> -	    case 0x2e:
> -	    case 0x2f:
> +	    case INTEL_BIGCORE_NEHALEM:
> +	    case INTEL_BIGCORE_WESTMERE:

Ok.

>  	      /* Rep string instructions, unaligned load, unaligned copy,
>  		 and pminub are fast on Intel Core i3, i5 and i7.  */
>  	      cpu_features->preferred[index_arch_Fast_Rep_String]
> -		|= (bit_arch_Fast_Rep_String
> -		    | bit_arch_Fast_Unaligned_Load
> -		    | bit_arch_Fast_Unaligned_Copy
> -		    | bit_arch_Prefer_PMINUB_for_stringop);
> +		  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> +		      | bit_arch_Fast_Unaligned_Copy
> +		      | bit_arch_Prefer_PMINUB_for_stringop);

Same nit as before.

>  	      break;
> +
> +	   /*
> +	    Default tuned Bigcore microarch.

This is a bit confusing, because these cases are not actually active
here.  It seems to mean "these CPUs will end up here due to logic above"
but why not make that explicit, either by saying so, or activating these
cases?
[future me: they're used in 3/3, so ok]

> +	    case INTEL_BIGCORE_SANDYBRIDGE:
> +	    case INTEL_BIGCORE_IVYBRIDGE:
> +	    case INTEL_BIGCORE_HASWELL:
> +	    case INTEL_BIGCORE_BROADWELL:
> +	    case INTEL_BIGCORE_SKYLAKE:
> +	    case INTEL_BIGCORE_KABYLAKE:
> +	    case INTEL_BIGCORE_COMETLAKE:
> +	    case INTEL_BIGCORE_SKYLAKE_AVX512:
> +	    case INTEL_BIGCORE_CANNONLAKE:
> +	    case INTEL_BIGCORE_ICELAKE:
> +	    case INTEL_BIGCORE_TIGERLAKE:
> +	    case INTEL_BIGCORE_ROCKETLAKE:
> +	    case INTEL_BIGCORE_RAPTORLAKE:
> +	    case INTEL_BIGCORE_METEORLAKE:
> +	    case INTEL_BIGCORE_LUNARLAKE:
> +	    case INTEL_BIGCORE_ARROWLAKE:
> +	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
> +	    case INTEL_BIGCORE_EMERALDRAPIDS:
> +	    case INTEL_BIGCORE_GRANITERAPIDS:
> +	    */
> +
> +	   /*
> +	    Default tuned Mixed (bigcore + atom SOC).
> +	    case INTEL_MIXED_LAKEFIELD:
> +	    case INTEL_MIXED_ALDERLAKE:
> +	    */
>  	    }

Ok.

> -	 /* Disable TSX on some processors to avoid TSX on kernels that
> -	    weren't updated with the latest microcode package (which
> -	    disables broken feature by default).  */
> -	 switch (model)
> +	      /* Disable TSX on some processors to avoid TSX on kernels that
> +		 weren't updated with the latest microcode package (which
> +		 disables broken feature by default).  */
> +	  switch (microarch)

Ok.

> -	    case 0x55:
> +	    case INTEL_BIGCORE_SKYLAKE_AVX512:
> +	      /* 0x55 (Skylake-avx512) && stepping <= 5 disable TSX. */
>  	      if (stepping <= 5)
>  		goto disable_tsx;

Ok.

> -	    case 0x8e:
> -	      /* NB: Although the errata documents that for model == 0x8e,
> -		 only 0xb stepping or lower are impacted, the intention of
> -		 the errata was to disable TSX on all client processors on
> -		 all steppings.  Include 0xc stepping which is an Intel
> -		 Core i7-8665U, a client mobile processor.  */
> -	    case 0x9e:
> -	      if (stepping > 0xc)
> +
> +	    case INTEL_BIGCORE_SKYLAKE:
> +	    case INTEL_BIGCORE_KABYLAKE:
> +		/* NB: Although the errata documents that for model == 0x8e
> +		   (skylake client), only 0xb stepping or lower are impacted,
> +		   the intention of the errata was to disable TSX on all client
> +		   processors on all steppings.  Include 0xc stepping which is
> +		   an Intel Core i7-8665U, a client mobile processor.  */
> +		if ((model == 0x8e || model == 0x9e) && stepping > 0xc)

Could have just used INTEL_BIGCORE_KABYLAKE instead of model numbers
here, but ok.
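
Concretely, something like this is what I had in mind (just a sketch;
`microarch` should still be in scope at this point):

  if (microarch == INTEL_BIGCORE_KABYLAKE && stepping > 0xc)
    break;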

>  		break;
> -	      /* Fall through.  */
> -	    case 0x4e:
> -	    case 0x5e:
> -	      {
> +
Ok.

>  		   processors listed in:
>  
>  https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
>  		 */
> -disable_tsx:
> +	    disable_tsx:
>  		CPU_FEATURE_UNSET (cpu_features, HLE);
>  		CPU_FEATURE_UNSET (cpu_features, RTM);
>  		CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
> -	      }
> -	      break;

Matches brace removal above.  Ok.

> -	    case 0x3f:
> -	      /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
> -	      if (stepping >= 4)
>  		break;
> -	      /* Fall through.  */
> -	    case 0x3c:
> -	    case 0x45:
> -	    case 0x46:
> -	      /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
> -		 with stepping >= 4) to avoid TSX on kernels that weren't
> -		 updated with the latest microcode package (which disables
> -		 broken feature by default).  */
> -	      CPU_FEATURE_UNSET (cpu_features, RTM);
> -	      break;
> +
> +	    case INTEL_BIGCORE_HASWELL:
> +		/* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
> +		   TSX.  Haswell also include other model numbers that have
> +		   working TSX.  */
> +		if (model == 0x3f && stepping >= 4)
> +		break;
> +
> +		CPU_FEATURE_UNSET (cpu_features, RTM);
> +		break;
>  	    }
>  	}

Ok.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v9 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific
  2023-05-13  5:19   ` [PATCH v9 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
@ 2023-05-26  3:34     ` DJ Delorie
  2023-05-27 18:46       ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: DJ Delorie @ 2023-05-26  3:34 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, goldstein.w.n, hjl.tools, carlos


One question about upgradability, one comment nit that I don't care
about but include for completeness.

Noah Goldstein via Libc-alpha <libc-alpha@sourceware.org> writes:
> Different systems prefer different divisors.
>
> From benchmarks[1] so far the following divisors have been found:
>     ICX     : 2
>     SKX     : 2
>     BWD     : 8
>
> For Intel, we are generalizing that BWD and older prefer 8 as a
> divisor, and SKL and newer prefer 2. This number can be further tuned
> as benchmarks are run.
>
> [1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
> ---
>  sysdeps/x86/cpu-features.c         | 27 +++++++++++++++++--------
>  sysdeps/x86/dl-cacheinfo.h         | 32 ++++++++++++++++++------------
>  sysdeps/x86/include/cpu-features.h |  3 +++
>  3 files changed, 41 insertions(+), 21 deletions(-)
>

> diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
> index 40b8129d6a..f5b9dd54fe 100644
> --- a/sysdeps/x86/include/cpu-features.h
> +++ b/sysdeps/x86/include/cpu-features.h
> @@ -915,6 +915,9 @@ struct cpu_features
>    unsigned long int shared_cache_size;
>    /* Threshold to use non temporal store.  */
>    unsigned long int non_temporal_threshold;
> +  /* When no user non_temporal_threshold is specified. We default to
> +     cachesize / cachesize_non_temporal_divisor.  */
> +  unsigned long int cachesize_non_temporal_divisor;
>    /* Threshold to use "rep movsb".  */
>    unsigned long int rep_movsb_threshold;
>    /* Threshold to stop using "rep movsb".  */

This adds a new field to "struct cpu_features".  Is this structure
something that is shared between ld.so and libc.so ?  I.e. tunables
related?  If so, does this field need to be added to the end of the
struct, so as to not cause problems during an upgrade (when we have an
old ld.so and a new libc.so)?
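
To spell out the concern, it is purely about member offsets.  A toy
example (nothing glibc-specific, field names made up):

  #include <stddef.h>
  #include <stdio.h>

  struct v1     { unsigned long a; unsigned long b; };
  struct v2_mid { unsigned long a; unsigned long extra; unsigned long b; };
  struct v2_end { unsigned long a; unsigned long b; unsigned long extra; };

  int
  main (void)
  {
    /* Inserting in the middle moves 'b'; appending keeps old offsets.  */
    printf ("v1 b=%zu  v2_mid b=%zu  v2_end b=%zu\n",
            offsetof (struct v1, b),
            offsetof (struct v2_mid, b),
            offsetof (struct v2_end, b));
    return 0;
  }

Code built against the old layout keeps reading the old offset, so a
field added in the middle would make it read the wrong slot, while a
field appended at the end leaves existing offsets alone.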

> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index 4a1a5423ff..864b00a521 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -738,19 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>  
> -  /* The default setting for the non_temporal threshold is 1/4 of size
> -     of the chip's cache. For most Intel and AMD processors with an
> -     initial release date between 2017 and 2023, a thread's typical
> -     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> -     estimate the point where non-temporal stores begin outcompeting
> -     REP MOVSB. As well the point where the fact that non-temporal
> -     stores are forced back to main memory would already occurred to the
> -     majority of the lines in the copy. Note, concerns about the
> -     entire L3 cache being evicted by the copy are mostly alleviated
> -     by the fact that modern HW detects streaming patterns and
> -     provides proper LRU hints so that the maximum thrashing
> -     capped at 1/associativity. */
> -  unsigned long int non_temporal_threshold = shared / 4;

> +  unsigned long int cachesize_non_temporal_divisor
> +      = cpu_features->cachesize_non_temporal_divisor;
> +  if (cachesize_non_temporal_divisor <= 0)
> +    cachesize_non_temporal_divisor = 4;
> +
> +  /* The default setting for the non_temporal threshold is [1/2, 1/8] of size

FYI this range is backwards ;-)

> +     of the chip's cache (depending on `cachesize_non_temporal_divisor` which
> +     is microarch specific. The defeault is 1/4). For most Intel and AMD
> +     processors with an initial release date between 2017 and 2023, a thread's
> +     typical share of the cache is from 18-64MB. Using a reasonable size
> +     fraction of L3 is meant to estimate the point where non-temporal stores
> +     begin outcompeting REP MOVSB. As well the point where the fact that
> +     non-temporal stores are forced back to main memory would already occurred
> +     to the majority of the lines in the copy. Note, concerns about the entire
> +     L3 cache being evicted by the copy are mostly alleviated by the fact that
> +     modern HW detects streaming patterns and provides proper LRU hints so that
> +     the maximum thrashing capped at 1/associativity. */
> +  unsigned long int non_temporal_threshold
> +      = shared / cachesize_non_temporal_divisor;
>    /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
>       a higher risk of actually thrashing the cache as they don't have a HW LRU
>       hint. As well, there performance in highly parallel situations is

Ok, defaults to the same behavior.


> diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> index 29b8c8c133..ba789d6fc1 100644
> --- a/sysdeps/x86/cpu-features.c
> +++ b/sysdeps/x86/cpu-features.c
> @@ -635,6 +635,7 @@ init_cpu_features (struct cpu_features *cpu_features)
>    unsigned int stepping = 0;
>    enum cpu_features_kind kind;
>  
> +  cpu_features->cachesize_non_temporal_divisor = 4;

Ok.

> @@ -714,12 +715,13 @@ init_cpu_features (struct cpu_features *cpu_features)
>  
>  	      /* Bigcore/Default Tuning.  */
>  	    default:
> +	    default_tuning:
>  	      /* Unknown family 0x06 processors.  Assuming this is one
>  		 of Core i3/i5/i7 processors if AVX is available.  */
>  	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
>  		break;

Ok.

> -	    case INTEL_BIGCORE_NEHALEM:
> -	    case INTEL_BIGCORE_WESTMERE:
> +
> +	    enable_modern_features:

Ok.
>  	      /* Rep string instructions, unaligned load, unaligned copy,
>  		 and pminub are fast on Intel Core i3, i5 and i7.  */
>  	      cpu_features->preferred[index_arch_Fast_Rep_String]
> @@ -728,12 +730,20 @@ init_cpu_features (struct cpu_features *cpu_features)
>  		      | bit_arch_Prefer_PMINUB_for_stringop);
>  	      break;
>  
> -	   /*
> -	    Default tuned Bigcore microarch.

Note comment begin removed here...

> +	    case INTEL_BIGCORE_NEHALEM:
> +	    case INTEL_BIGCORE_WESTMERE:
> +	      /* Older CPUs prefer non-temporal stores at lower threshold.  */
> +	      cpu_features->cachesize_non_temporal_divisor = 8;
> +	      goto enable_modern_features;
> +
> +	      /* Default tuned Bigcore microarch.  */

Ok.

>  	    case INTEL_BIGCORE_SANDYBRIDGE:
>  	    case INTEL_BIGCORE_IVYBRIDGE:
>  	    case INTEL_BIGCORE_HASWELL:
>  	    case INTEL_BIGCORE_BROADWELL:
> +	      cpu_features->cachesize_non_temporal_divisor = 8;
> +	      goto default_tuning;
> +

Ok.

>  	    case INTEL_BIGCORE_SKYLAKE:
>  	    case INTEL_BIGCORE_KABYLAKE:
>  	    case INTEL_BIGCORE_COMETLAKE:
Note nothing but more cases here, ok.
>  	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
>  	    case INTEL_BIGCORE_EMERALDRAPIDS:
>  	    case INTEL_BIGCORE_GRANITERAPIDS:
> -	    */

... and comment end removed here.  Ok.

> +	      cpu_features->cachesize_non_temporal_divisor = 2;
> +	      goto default_tuning;

Ok.

> -	   /*
> -	    Default tuned Mixed (bigcore + atom SOC).
> +	      /* Default tuned Mixed (bigcore + atom SOC). */
>  	    case INTEL_MIXED_LAKEFIELD:
>  	    case INTEL_MIXED_ALDERLAKE:
> -	    */
> +	      cpu_features->cachesize_non_temporal_divisor = 2;
> +	      goto default_tuning;
>  	    }

Ok.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v9 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific
  2023-05-26  3:34     ` DJ Delorie
@ 2023-05-27 18:46       ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-27 18:46 UTC (permalink / raw)
  To: DJ Delorie; +Cc: libc-alpha, hjl.tools, carlos

On Thu, May 25, 2023 at 10:34 PM DJ Delorie <dj@redhat.com> wrote:
>
>
> One question about upgradability, one comment nit that I don't care
> about but include for completeness.
>
> Noah Goldstein via Libc-alpha <libc-alpha@sourceware.org> writes:
> > Different systems prefer different divisors.
> >
> > From benchmarks[1] so far the following divisors have been found:
> >     ICX     : 2
> >     SKX     : 2
> >     BWD     : 8
> >
> > For Intel, we are generalizing that BWD and older prefer 8 as a
> > divisor, and SKL and newer prefer 2. This number can be further tuned
> > as benchmarks are run.
> >
> > [1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
> > ---
> >  sysdeps/x86/cpu-features.c         | 27 +++++++++++++++++--------
> >  sysdeps/x86/dl-cacheinfo.h         | 32 ++++++++++++++++++------------
> >  sysdeps/x86/include/cpu-features.h |  3 +++
> >  3 files changed, 41 insertions(+), 21 deletions(-)
> >
>
> > diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
> > index 40b8129d6a..f5b9dd54fe 100644
> > --- a/sysdeps/x86/include/cpu-features.h
> > +++ b/sysdeps/x86/include/cpu-features.h
> > @@ -915,6 +915,9 @@ struct cpu_features
> >    unsigned long int shared_cache_size;
> >    /* Threshold to use non temporal store.  */
> >    unsigned long int non_temporal_threshold;
> > +  /* When no user non_temporal_threshold is specified. We default to
> > +     cachesize / cachesize_non_temporal_divisor.  */
> > +  unsigned long int cachesize_non_temporal_divisor;
> >    /* Threshold to use "rep movsb".  */
> >    unsigned long int rep_movsb_threshold;
> >    /* Threshold to stop using "rep movsb".  */
>
> This adds a new field to "struct cpu_features".  Is this structure
> something that is shared between ld.so and libc.so ?  I.e. tunables
> related?  If so, does this field need to be added to the end of the
> struct, so as to not cause problems during an upgrade (when we have an
> old ld.so and a new libc.so)?

Not sure. HJ do you know?

But moved for now as a kind of "why not".
>
> > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > index 4a1a5423ff..864b00a521 100644
> > --- a/sysdeps/x86/dl-cacheinfo.h
> > +++ b/sysdeps/x86/dl-cacheinfo.h
> > @@ -738,19 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> >    cpu_features->level4_cache_size = level4_cache_size;
> >
> > -  /* The default setting for the non_temporal threshold is 1/4 of size
> > -     of the chip's cache. For most Intel and AMD processors with an
> > -     initial release date between 2017 and 2023, a thread's typical
> > -     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> > -     estimate the point where non-temporal stores begin outcompeting
> > -     REP MOVSB. As well the point where the fact that non-temporal
> > -     stores are forced back to main memory would already occurred to the
> > -     majority of the lines in the copy. Note, concerns about the
> > -     entire L3 cache being evicted by the copy are mostly alleviated
> > -     by the fact that modern HW detects streaming patterns and
> > -     provides proper LRU hints so that the maximum thrashing
> > -     capped at 1/associativity. */
> > -  unsigned long int non_temporal_threshold = shared / 4;
>
> > +  unsigned long int cachesize_non_temporal_divisor
> > +      = cpu_features->cachesize_non_temporal_divisor;
> > +  if (cachesize_non_temporal_divisor <= 0)
> > +    cachesize_non_temporal_divisor = 4;
> > +
> > +  /* The default setting for the non_temporal threshold is [1/2, 1/8] of size
>
> FYI this range is backwards ;-)

Fixed.
>
> > +     of the chip's cache (depending on `cachesize_non_temporal_divisor` which
> > +     is microarch specific. The defeault is 1/4). For most Intel and AMD
> > +     processors with an initial release date between 2017 and 2023, a thread's
> > +     typical share of the cache is from 18-64MB. Using a reasonable size
> > +     fraction of L3 is meant to estimate the point where non-temporal stores
> > +     begin outcompeting REP MOVSB. As well the point where the fact that
> > +     non-temporal stores are forced back to main memory would already occurred
> > +     to the majority of the lines in the copy. Note, concerns about the entire
> > +     L3 cache being evicted by the copy are mostly alleviated by the fact that
> > +     modern HW detects streaming patterns and provides proper LRU hints so that
> > +     the maximum thrashing capped at 1/associativity. */
> > +  unsigned long int non_temporal_threshold
> > +      = shared / cachesize_non_temporal_divisor;
> >    /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> >       a higher risk of actually thrashing the cache as they don't have a HW LRU
> >       hint. As well, there performance in highly parallel situations is
>
> Ok, defaults to the same behavior.
>
>
> > diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
> > index 29b8c8c133..ba789d6fc1 100644
> > --- a/sysdeps/x86/cpu-features.c
> > +++ b/sysdeps/x86/cpu-features.c
> > @@ -635,6 +635,7 @@ init_cpu_features (struct cpu_features *cpu_features)
> >    unsigned int stepping = 0;
> >    enum cpu_features_kind kind;
> >
> > +  cpu_features->cachesize_non_temporal_divisor = 4;
>
> Ok.
>
> > @@ -714,12 +715,13 @@ init_cpu_features (struct cpu_features *cpu_features)
> >
> >             /* Bigcore/Default Tuning.  */
> >           default:
> > +         default_tuning:
> >             /* Unknown family 0x06 processors.  Assuming this is one
> >                of Core i3/i5/i7 processors if AVX is available.  */
> >             if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
> >               break;
>
> Ok.
>
> > -         case INTEL_BIGCORE_NEHALEM:
> > -         case INTEL_BIGCORE_WESTMERE:
> > +
> > +         enable_modern_features:
>
> Ok.
> >             /* Rep string instructions, unaligned load, unaligned copy,
> >                and pminub are fast on Intel Core i3, i5 and i7.  */
> >             cpu_features->preferred[index_arch_Fast_Rep_String]
> > @@ -728,12 +730,20 @@ init_cpu_features (struct cpu_features *cpu_features)
> >                     | bit_arch_Prefer_PMINUB_for_stringop);
> >             break;
> >
> > -        /*
> > -         Default tuned Bigcore microarch.
>
> Note comment begin removed here...
>
> > +         case INTEL_BIGCORE_NEHALEM:
> > +         case INTEL_BIGCORE_WESTMERE:
> > +           /* Older CPUs prefer non-temporal stores at lower threshold.  */
> > +           cpu_features->cachesize_non_temporal_divisor = 8;
> > +           goto enable_modern_features;
> > +
> > +           /* Default tuned Bigcore microarch.  */
>
> Ok.
>
> >           case INTEL_BIGCORE_SANDYBRIDGE:
> >           case INTEL_BIGCORE_IVYBRIDGE:
> >           case INTEL_BIGCORE_HASWELL:
> >           case INTEL_BIGCORE_BROADWELL:
> > +           cpu_features->cachesize_non_temporal_divisor = 8;
> > +           goto default_tuning;
> > +
>
> Ok.
>
> >           case INTEL_BIGCORE_SKYLAKE:
> >           case INTEL_BIGCORE_KABYLAKE:
> >           case INTEL_BIGCORE_COMETLAKE:
> Note nothing but more cases here, ok.
> >           case INTEL_BIGCORE_SAPPHIRERAPIDS:
> >           case INTEL_BIGCORE_EMERALDRAPIDS:
> >           case INTEL_BIGCORE_GRANITERAPIDS:
> > -         */
>
> ... and comment end removed here.  Ok.
>
> > +           cpu_features->cachesize_non_temporal_divisor = 2;
> > +           goto default_tuning;
>
> Ok.
>
> > -        /*
> > -         Default tuned Mixed (bigcore + atom SOC).
> > +           /* Default tuned Mixed (bigcore + atom SOC). */
> >           case INTEL_MIXED_LAKEFIELD:
> >           case INTEL_MIXED_ALDERLAKE:
> > -         */
> > +           cpu_features->cachesize_non_temporal_divisor = 2;
> > +           goto default_tuning;
> >           }
>
> Ok.
>

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v9 2/3] x86: Refactor Intel `init_cpu_features`
  2023-05-26  3:34     ` DJ Delorie
@ 2023-05-27 18:46       ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-27 18:46 UTC (permalink / raw)
  To: DJ Delorie; +Cc: libc-alpha, hjl.tools, carlos

On Thu, May 25, 2023 at 10:34 PM DJ Delorie <dj@redhat.com> wrote:
>
>
> LGTM with a few nits about comments that I don't really care about, but
> mention for completeness.
>
> Reviewed-by: DJ Delorie <dj@redhat.com>
>
> Noah Goldstein via Libc-alpha <libc-alpha@sourceware.org> writes:
> > This patch should have no effect on existing functionality.
>
> > +/* Intel Family-6 microarch list.  */
> > +enum
> > +{
> > +  /* Atom processors.  */
> > +  INTEL_ATOM_BONNELL,
> > +  INTEL_ATOM_SILVERMONT,
> > +  INTEL_ATOM_AIRMONT,
> > +  INTEL_ATOM_GOLDMONT,
> > +  INTEL_ATOM_GOLDMONT_PLUS,
> > +  INTEL_ATOM_SIERRAFOREST,
> > +  INTEL_ATOM_GRANDRIDGE,
> > +  INTEL_ATOM_TREMONT,
> > +
> > +  /* Bigcore processors.  */
> > +  INTEL_BIGCORE_MEROM,
> > +  INTEL_BIGCORE_PENRYN,
> > +  INTEL_BIGCORE_DUNNINGTON,
> > +  INTEL_BIGCORE_NEHALEM,
> > +  INTEL_BIGCORE_WESTMERE,
> > +  INTEL_BIGCORE_SANDYBRIDGE,
> > +  INTEL_BIGCORE_IVYBRIDGE,
> > +  INTEL_BIGCORE_HASWELL,
> > +  INTEL_BIGCORE_BROADWELL,
> > +  INTEL_BIGCORE_SKYLAKE,
> > +  INTEL_BIGCORE_KABYLAKE,
> > +  INTEL_BIGCORE_COMETLAKE,
> > +  INTEL_BIGCORE_SKYLAKE_AVX512,
> > +  INTEL_BIGCORE_CANNONLAKE,
> > +  INTEL_BIGCORE_ICELAKE,
> > +  INTEL_BIGCORE_TIGERLAKE,
> > +  INTEL_BIGCORE_ROCKETLAKE,
> > +  INTEL_BIGCORE_SAPPHIRERAPIDS,
> > +  INTEL_BIGCORE_RAPTORLAKE,
> > +  INTEL_BIGCORE_EMERALDRAPIDS,
> > +  INTEL_BIGCORE_METEORLAKE,
> > +  INTEL_BIGCORE_LUNARLAKE,
> > +  INTEL_BIGCORE_ARROWLAKE,
> > +  INTEL_BIGCORE_GRANITERAPIDS,
> > +
> > +  /* Mixed (bigcore + atom SOC).  */
> > +  INTEL_MIXED_LAKEFIELD,
> > +  INTEL_MIXED_ALDERLAKE,
> > +
> > +  /* KNL.  */
>
> (Honestly, not a useful comment... ;-)

Just want it to be consistent so we label each class.
>
> > +  INTEL_KNIGHTS_MILL,
> > +  INTEL_KNIGHTS_LANDING,
> > +
> > +  /* Unknown.  */
> > +  INTEL_UNKNOWN,
> > +};
>
> Ok.
>
> > +static unsigned int
> > +intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
>
> stepping is never used, could it be marked notused or something?
Done.
>
> > +{
> > +  switch (model)
> > +    {
>  . . .
> > +    }
> > +}
>
> Ok.
>
>
> > @@ -453,129 +662,143 @@ init_cpu_features (struct cpu_features *cpu_features)
> >        if (family == 0x06)
> >       {
> >         model += extended_model;
> > -       switch (model)
> > +       unsigned int microarch
> > +           = intel_get_fam6_microarch (model, stepping);
> > +
> > +       switch (microarch)
>
> Ok.
>
> >           {
> > -         case 0x1c:
> > -         case 0x26:
> > -           /* BSF is slow on Atom.  */
> > +           /* Atom / KNL tuning.  */
> > +         case INTEL_ATOM_BONNELL:
> > +           /* BSF is slow on Bonnell.  */
>
> Ok.
>
> > -         case 0x57:
> > -           /* Knights Landing.  Enable Silvermont optimizations.  */
> > -
> > -         case 0x7a:
> > -           /* Unaligned load versions are faster than SSSE3
> > -              on Goldmont Plus.  */
> > -
> > -         case 0x5c:
> > -         case 0x5f:
> >             /* Unaligned load versions are faster than SSSE3
> > -              on Goldmont.  */
> > +                  on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
> > +         case INTEL_ATOM_AIRMONT:
> > +         case INTEL_ATOM_SILVERMONT:
> > +         case INTEL_ATOM_GOLDMONT:
> > +         case INTEL_ATOM_GOLDMONT_PLUS:
>
> Ok.
>
> > -         case 0x4c:
> > -         case 0x5a:
> > -         case 0x75:
> > -           /* Airmont is a die shrink of Silvermont.  */
> > +          /* Knights Landing.  Enable Silvermont optimizations.  */
> > +         case INTEL_KNIGHTS_LANDING:
> >
> > -         case 0x37:
> > -         case 0x4a:
> > -         case 0x4d:
> > -         case 0x5d:
> > -           /* Unaligned load versions are faster than SSSE3
> > -              on Silvermont.  */
>
> Ok.
>
> >             cpu_features->preferred[index_arch_Fast_Unaligned_Load]
> > -             |= (bit_arch_Fast_Unaligned_Load
> > -                 | bit_arch_Fast_Unaligned_Copy
> > -                 | bit_arch_Prefer_PMINUB_for_stringop
> > -                 | bit_arch_Slow_SSE4_2);
> > +               |= (bit_arch_Fast_Unaligned_Load
> > +                   | bit_arch_Fast_Unaligned_Copy
> > +                   | bit_arch_Prefer_PMINUB_for_stringop
> > +                   | bit_arch_Slow_SSE4_2);
> >             break;
>
> Ok.
>
> > -         case 0x86:
> > -         case 0x96:
> > -         case 0x9c:
> > +         case INTEL_ATOM_TREMONT:
>
> Ok.
>
> >             cpu_features->preferred[index_arch_Fast_Rep_String]
> > -             |= (bit_arch_Fast_Rep_String
> > -                 | bit_arch_Fast_Unaligned_Load
> > -                 | bit_arch_Fast_Unaligned_Copy
> > -                 | bit_arch_Prefer_PMINUB_for_stringop
> > -                 | bit_arch_Slow_SSE4_2);
> > +               |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
>
> Minor nit: I think the one-per-line is easier to follow.  Ok though.
Done.
>
> > +                   | bit_arch_Fast_Unaligned_Copy
> > +                   | bit_arch_Prefer_PMINUB_for_stringop
> > +                   | bit_arch_Slow_SSE4_2);
>
> Ok.
>
> >
> > +        /*
> > +         Default tuned Knights microarch.
> > +         case INTEL_KNIGHTS_MILL:
> > +        */
> > +
> > +        /*
> > +         Default tuned atom microarch.
> > +         case INTEL_ATOM_SIERRAFOREST:
> > +         case INTEL_ATOM_GRANDRIDGE:
> > +        */
> > +
> > +           /* Bigcore/Default Tuning.  */
> >           default:
>
> Ok.
>
> >             /* Unknown family 0x06 processors.  Assuming this is one
> >                of Core i3/i5/i7 processors if AVX is available.  */
> >             if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
> >               break;
>
> > -           /* Fall through.  */
>
> This should stay, as we're still falling through.
>
> > -         case 0x1a:
> > -         case 0x1e:
> > -         case 0x1f:
> > -         case 0x25:
> > -         case 0x2c:
> > -         case 0x2e:
> > -         case 0x2f:
> > +         case INTEL_BIGCORE_NEHALEM:
> > +         case INTEL_BIGCORE_WESTMERE:
>
> Ok.
>
> >             /* Rep string instructions, unaligned load, unaligned copy,
> >                and pminub are fast on Intel Core i3, i5 and i7.  */
> >             cpu_features->preferred[index_arch_Fast_Rep_String]
> > -             |= (bit_arch_Fast_Rep_String
> > -                 | bit_arch_Fast_Unaligned_Load
> > -                 | bit_arch_Fast_Unaligned_Copy
> > -                 | bit_arch_Prefer_PMINUB_for_stringop);
> > +               |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
> > +                   | bit_arch_Fast_Unaligned_Copy
> > +                   | bit_arch_Prefer_PMINUB_for_stringop);
>
> Same nit as before.
>
Done.
> >             break;
> > +
> > +        /*
> > +         Default tuned Bigcore microarch.
>
> This is a bit confusing, because these cases are not actually active
> here.  It seems to mean "these CPUs will end up here due to logic above"
> but why not make that explicit, either by saying so, or activating these
> cases?
> [future me: they're used in 3/3, so ok]
>
> > +         case INTEL_BIGCORE_SANDYBRIDGE:
> > +         case INTEL_BIGCORE_IVYBRIDGE:
> > +         case INTEL_BIGCORE_HASWELL:
> > +         case INTEL_BIGCORE_BROADWELL:
> > +         case INTEL_BIGCORE_SKYLAKE:
> > +         case INTEL_BIGCORE_KABYLAKE:
> > +         case INTEL_BIGCORE_COMETLAKE:
> > +         case INTEL_BIGCORE_SKYLAKE_AVX512:
> > +         case INTEL_BIGCORE_CANNONLAKE:
> > +         case INTEL_BIGCORE_ICELAKE:
> > +         case INTEL_BIGCORE_TIGERLAKE:
> > +         case INTEL_BIGCORE_ROCKETLAKE:
> > +         case INTEL_BIGCORE_RAPTORLAKE:
> > +         case INTEL_BIGCORE_METEORLAKE:
> > +         case INTEL_BIGCORE_LUNARLAKE:
> > +         case INTEL_BIGCORE_ARROWLAKE:
> > +         case INTEL_BIGCORE_SAPPHIRERAPIDS:
> > +         case INTEL_BIGCORE_EMERALDRAPIDS:
> > +         case INTEL_BIGCORE_GRANITERAPIDS:
> > +         */
> > +
> > +        /*
> > +         Default tuned Mixed (bigcore + atom SOC).
> > +         case INTEL_MIXED_LAKEFIELD:
> > +         case INTEL_MIXED_ALDERLAKE:
> > +         */
> >           }
>
> Ok.
>
> > -      /* Disable TSX on some processors to avoid TSX on kernels that
> > -         weren't updated with the latest microcode package (which
> > -         disables broken feature by default).  */
> > -      switch (model)
> > +           /* Disable TSX on some processors to avoid TSX on kernels that
> > +              weren't updated with the latest microcode package (which
> > +              disables broken feature by default).  */
> > +       switch (microarch)
>
> Ok.
>
> > -         case 0x55:
> > +         case INTEL_BIGCORE_SKYLAKE_AVX512:
> > +           /* 0x55 (Skylake-avx512) && stepping <= 5 disable TSX. */
> >             if (stepping <= 5)
> >               goto disable_tsx;
>
> Ok.
>
> > -         case 0x8e:
> > -           /* NB: Although the errata documents that for model == 0x8e,
> > -              only 0xb stepping or lower are impacted, the intention of
> > -              the errata was to disable TSX on all client processors on
> > -              all steppings.  Include 0xc stepping which is an Intel
> > -              Core i7-8665U, a client mobile processor.  */
> > -         case 0x9e:
> > -           if (stepping > 0xc)
> > +
> > +         case INTEL_BIGCORE_SKYLAKE:
> > +         case INTEL_BIGCORE_KABYLAKE:
> > +             /* NB: Although the errata documents that for model == 0x8e
> > +                (skylake client), only 0xb stepping or lower are impacted,
> > +                the intention of the errata was to disable TSX on all client
> > +                processors on all steppings.  Include 0xc stepping which is
> > +                an Intel Core i7-8665U, a client mobile processor.  */
> > +             if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
>
> Could have just used INTEL_BIGCORE_KABYLAKE instead of model numbers
> here, but ok.

that seems cleaner. Changed.
>
> >               break;
> > -           /* Fall through.  */
> > -         case 0x4e:
> > -         case 0x5e:
> > -           {
> > +
> Ok.
>
> >                  processors listed in:
> >
> >  https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
> >                */
> > -disable_tsx:
> > +         disable_tsx:
> >               CPU_FEATURE_UNSET (cpu_features, HLE);
> >               CPU_FEATURE_UNSET (cpu_features, RTM);
> >               CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
> > -           }
> > -           break;
>
> Matches brace removal above.  Ok.
>
> > -         case 0x3f:
> > -           /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
> > -           if (stepping >= 4)
> >               break;
> > -           /* Fall through.  */
> > -         case 0x3c:
> > -         case 0x45:
> > -         case 0x46:
> > -           /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
> > -              with stepping >= 4) to avoid TSX on kernels that weren't
> > -              updated with the latest microcode package (which disables
> > -              broken feature by default).  */
> > -           CPU_FEATURE_UNSET (cpu_features, RTM);
> > -           break;
> > +
> > +         case INTEL_BIGCORE_HASWELL:
> > +             /* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
> > +                TSX.  Haswell also include other model numbers that have
> > +                working TSX.  */
> > +             if (model == 0x3f && stepping >= 4)
> > +             break;
> > +
> > +             CPU_FEATURE_UNSET (cpu_features, RTM);
> > +             break;
> >           }
> >       }
>
> Ok.
>

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v10 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-04-24  5:03 [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` Noah Goldstein
                   ` (9 preceding siblings ...)
  2023-05-13  5:19 ` [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
@ 2023-05-27 18:46 ` Noah Goldstein
  2023-05-27 18:46   ` [PATCH v10 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
                     ` (2 more replies)
  2023-06-07 18:18 ` [PATCH v11 " Noah Goldstein
  11 siblings, 3 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-27 18:46 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos, DJ Delorie

Current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
'sizeof_L3 / 4'.

The original value (specifically dividing the `ncores_per_socket`) was
done to limit the amount of other threads' data a `memcpy`/`memset`
could evict.

Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in
cases where REP MOVSB is multiple times faster.

Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise would be. As well, modern
machines are able to detect streaming patterns (especially if REP MOVSB
is used) and provide LRU hints to the memory subsystem. This in effect
caps the total amount of eviction at 1/cache_associativity, far below
meaningfully thrashing the entire cache.
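(For a concrete, hypothetical example: on a 32 MB, 16-way set-associative
L3, a detected streaming copy could displace at most roughly
32 MB / 16 = 2 MB of other threads' data.)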

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is to REP MOVSB which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% for over
100MB copies (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, REP MOVSB is ~2x faster
up to `sizeof_L3`.

The divisor of `4` is a somewhat arbitrary value. From benchmarks it
seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
such as Broadwell prefer something closer to `8`. This patch is meant
to be followed up by another one to make the divisor cpu-specific, but
in the meantime (and for easier backporting), this patch settles on
`4` as a middle-ground.
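
For illustration only, here is a minimal C sketch of the resulting
selection logic (hypothetical helper names, not the actual
dl-cacheinfo.h code):

  /* Hypothetical sketch: with ERMS (fast REP MOVSB), take a fraction of
     the whole shared L3; without it, fall back to 3/4 of one thread's
     share, since plain cacheable stores lack the HW streaming/LRU hint.  */
  static unsigned long
  pick_nt_threshold (unsigned long shared_l3, unsigned long shared_per_thread,
                     int has_erms)
  {
    if (!has_erms)
      return shared_per_thread * 3 / 4;
    return shared_l3 / 4;   /* the middle-ground divisor from this patch */
  }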

Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available in pdf on the github):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
Reviewed-by: DJ Delorie <dj@redhat.com>
---
 sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
 1 file changed, 43 insertions(+), 27 deletions(-)

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index ec88945b39..4a1a5423ff 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -407,7 +407,7 @@ handle_zhaoxin (int name)
 }
 
 static void
-get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
+get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
                 long int core)
 {
   unsigned int eax;
@@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
   unsigned int family = cpu_features->basic.family;
   unsigned int model = cpu_features->basic.model;
   long int shared = *shared_ptr;
+  long int shared_per_thread = *shared_per_thread_ptr;
   unsigned int threads = *threads_ptr;
   bool inclusive_cache = true;
   bool support_count_mask = true;
@@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
       /* Try L2 otherwise.  */
       level  = 2;
       shared = core;
+      shared_per_thread = core;
       threads_l2 = 0;
       threads_l3 = -1;
     }
@@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
         }
       else
         {
-intel_bug_no_cache_info:
-          /* Assume that all logical threads share the highest cache
-             level.  */
-          threads
-            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
-	       & 0xff);
-        }
-
-        /* Cap usage of highest cache level to the number of supported
-           threads.  */
-        if (shared > 0 && threads > 0)
-          shared /= threads;
+	intel_bug_no_cache_info:
+	  /* Assume that all logical threads share the highest cache
+	     level.  */
+	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
+		     & 0xff);
+
+	  /* Get per-thread size of highest level cache.  */
+	  if (shared_per_thread > 0 && threads > 0)
+	    shared_per_thread /= threads;
+	}
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
       if (threads_l2 > 0)
-        core /= threads_l2;
+	shared_per_thread += core / threads_l2;
       shared += core;
     }
 
   *shared_ptr = shared;
+  *shared_per_thread_ptr = shared_per_thread;
   *threads_ptr = threads;
 }
 
@@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   /* Find out what brand of processor.  */
   long int data = -1;
   long int shared = -1;
+  long int shared_per_thread = -1;
   long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
@@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
       core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
       shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
+      shared_per_thread = shared;
 
       level1_icache_size
 	= handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
@@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level4_cache_size
 	= handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_zhaoxin)
     {
       data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_amd)
     {
       data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       if (shared <= 0)
         /* No shared L3 cache.  All we have is the L2 cache.  */
 	shared = core;
+
+      if (shared_per_thread <= 0)
+	shared_per_thread = shared;
     }
 
   cpu_features->level1_icache_size = level1_icache_size;
@@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/4 of size
+     of the chip's cache. For most Intel and AMD processors with an
+     initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
+     estimate the point where non-temporal stores begin outcompeting
+     REP MOVSB. As well the point where the fact that non-temporal
+     stores are forced back to main memory would already occurred to the
+     majority of the lines in the copy. Note, concerns about the
+     entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and
+     provides proper LRU hints so that the maximum thrashing
+     capped at 1/associativity. */
+  unsigned long int non_temporal_threshold = shared / 4;
+  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
+     a higher risk of actually thrashing the cache as they don't have a HW LRU
+     hint. As well, there performance in highly parallel situations is
+     noticeably worse.  */
+  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+    non_temporal_threshold = shared_per_thread * 3 / 4;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v10 2/3] x86: Refactor Intel `init_cpu_features`
  2023-05-27 18:46 ` [PATCH v10 " Noah Goldstein
@ 2023-05-27 18:46   ` Noah Goldstein
  2023-05-27 18:46   ` [PATCH v10 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
  2023-06-07  0:15   ` [PATCH v10 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Carlos O'Donell
  2 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-27 18:46 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos, DJ Delorie

This patch should have no effect on existing functionality.

The current code, which has a single switch for model detection and
setting preferred features, is difficult to follow/extend. The cases
use magic numbers and many microarchitectures are missing. This makes
it difficult to reason about what is implemented so far and/or
how/where to add support for new features.

This patch splits the model detection and preference setting stages so
that CPU preferences can be set based on a complete list of available
microarchitectures, rather than based on model magic numbers.
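
As a rough illustration of the two-stage pattern described above (a
stripped-down, hypothetical sketch, not the actual cpu-features.c code):

  /* Stage 1: translate raw model numbers into named microarchitectures.  */
  enum uarch { UARCH_UNKNOWN, UARCH_TREMONT, UARCH_SKYLAKE };

  static enum uarch
  classify_model (unsigned int model)
  {
    switch (model)
      {
      case 0x86: case 0x96: case 0x9c: return UARCH_TREMONT;
      case 0x4e: case 0x5e:            return UARCH_SKYLAKE;
      default:                         return UARCH_UNKNOWN;
      }
  }

  /* Stage 2: set tuning preferences per named microarchitecture, so a new
     model number only needs a new entry in stage 1.  */
  static void
  set_preferences (enum uarch u)
  {
    switch (u)
      {
      case UARCH_TREMONT: /* e.g. fast rep-string tuning  */ break;
      case UARCH_SKYLAKE: /* e.g. default big-core tuning */ break;
      default: break;
      }
  }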
Reviewed-by: DJ Delorie <dj@redhat.com>
---
 sysdeps/x86/cpu-features.c | 390 +++++++++++++++++++++++++++++--------
 1 file changed, 309 insertions(+), 81 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 5bff8ec0b4..1b6e00c88f 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -417,6 +417,216 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
 		     == index_arch_Fast_Copy_Backward)),
 		"Incorrect index_arch_Fast_Unaligned_Load");
 
+
+/* Intel Family-6 microarch list.  */
+enum
+{
+  /* Atom processors.  */
+  INTEL_ATOM_BONNELL,
+  INTEL_ATOM_SILVERMONT,
+  INTEL_ATOM_AIRMONT,
+  INTEL_ATOM_GOLDMONT,
+  INTEL_ATOM_GOLDMONT_PLUS,
+  INTEL_ATOM_SIERRAFOREST,
+  INTEL_ATOM_GRANDRIDGE,
+  INTEL_ATOM_TREMONT,
+
+  /* Bigcore processors.  */
+  INTEL_BIGCORE_MEROM,
+  INTEL_BIGCORE_PENRYN,
+  INTEL_BIGCORE_DUNNINGTON,
+  INTEL_BIGCORE_NEHALEM,
+  INTEL_BIGCORE_WESTMERE,
+  INTEL_BIGCORE_SANDYBRIDGE,
+  INTEL_BIGCORE_IVYBRIDGE,
+  INTEL_BIGCORE_HASWELL,
+  INTEL_BIGCORE_BROADWELL,
+  INTEL_BIGCORE_SKYLAKE,
+  INTEL_BIGCORE_KABYLAKE,
+  INTEL_BIGCORE_COMETLAKE,
+  INTEL_BIGCORE_SKYLAKE_AVX512,
+  INTEL_BIGCORE_CANNONLAKE,
+  INTEL_BIGCORE_ICELAKE,
+  INTEL_BIGCORE_TIGERLAKE,
+  INTEL_BIGCORE_ROCKETLAKE,
+  INTEL_BIGCORE_SAPPHIRERAPIDS,
+  INTEL_BIGCORE_RAPTORLAKE,
+  INTEL_BIGCORE_EMERALDRAPIDS,
+  INTEL_BIGCORE_METEORLAKE,
+  INTEL_BIGCORE_LUNARLAKE,
+  INTEL_BIGCORE_ARROWLAKE,
+  INTEL_BIGCORE_GRANITERAPIDS,
+
+  /* Mixed (bigcore + atom SOC).  */
+  INTEL_MIXED_LAKEFIELD,
+  INTEL_MIXED_ALDERLAKE,
+
+  /* KNL.  */
+  INTEL_KNIGHTS_MILL,
+  INTEL_KNIGHTS_LANDING,
+
+  /* Unknown.  */
+  INTEL_UNKNOWN,
+};
+
+static unsigned int
+intel_get_fam6_microarch (unsigned int model,
+			  __attribute__ ((unused)) unsigned int stepping)
+{
+  switch (model)
+    {
+    case 0x1C:
+    case 0x26:
+      return INTEL_ATOM_BONNELL;
+    case 0x27:
+    case 0x35:
+    case 0x36:
+      /* Really Saltwell, but Saltwell is just a die shrink of Bonnell
+         (microarchitecturally identical).  */
+      return INTEL_ATOM_BONNELL;
+    case 0x37:
+    case 0x4A:
+    case 0x4D:
+    case 0x5D:
+      return INTEL_ATOM_SILVERMONT;
+    case 0x4C:
+    case 0x5A:
+    case 0x75:
+      return INTEL_ATOM_AIRMONT;
+    case 0x5C:
+    case 0x5F:
+      return INTEL_ATOM_GOLDMONT;
+    case 0x7A:
+      return INTEL_ATOM_GOLDMONT_PLUS;
+    case 0xAF:
+      return INTEL_ATOM_SIERRAFOREST;
+    case 0xB6:
+      return INTEL_ATOM_GRANDRIDGE;
+    case 0x86:
+    case 0x96:
+    case 0x9C:
+      return INTEL_ATOM_TREMONT;
+    case 0x0F:
+    case 0x16:
+      return INTEL_BIGCORE_MEROM;
+    case 0x17:
+      return INTEL_BIGCORE_PENRYN;
+    case 0x1D:
+      return INTEL_BIGCORE_DUNNINGTON;
+    case 0x1A:
+    case 0x1E:
+    case 0x1F:
+    case 0x2E:
+      return INTEL_BIGCORE_NEHALEM;
+    case 0x25:
+    case 0x2C:
+    case 0x2F:
+      return INTEL_BIGCORE_WESTMERE;
+    case 0x2A:
+    case 0x2D:
+      return INTEL_BIGCORE_SANDYBRIDGE;
+    case 0x3A:
+    case 0x3E:
+      return INTEL_BIGCORE_IVYBRIDGE;
+    case 0x3C:
+    case 0x3F:
+    case 0x45:
+    case 0x46:
+      return INTEL_BIGCORE_HASWELL;
+    case 0x3D:
+    case 0x47:
+    case 0x4F:
+    case 0x56:
+      return INTEL_BIGCORE_BROADWELL;
+    case 0x4E:
+    case 0x5E:
+      return INTEL_BIGCORE_SKYLAKE;
+    case 0x8E:
+    /*
+     Stepping = {9}
+        -> Amberlake
+     Stepping = {10}
+        -> Coffeelake
+     Stepping = {11, 12}
+        -> Whiskeylake
+     else
+        -> Kabylake
+
+     All of these are derivatives of Kabylake (Skylake client).
+     */
+	  return INTEL_BIGCORE_KABYLAKE;
+    case 0x9E:
+    /*
+     Stepping = {10, 11, 12, 13}
+        -> Coffeelake
+     else
+        -> Kabylake
+
+     Coffeelake is a derivatives of Kabylake (Skylake client).
+     */
+	  return INTEL_BIGCORE_KABYLAKE;
+    case 0xA5:
+    case 0xA6:
+      return INTEL_BIGCORE_COMETLAKE;
+    case 0x66:
+      return INTEL_BIGCORE_CANNONLAKE;
+    case 0x55:
+    /*
+     Stepping = {6, 7}
+        -> Cascadelake
+     Stepping = {11}
+        -> Cooperlake
+     else
+        -> Skylake-avx512
+
+     These are all microarchitecturally indentical, so use
+     Skylake-avx512 for all of them.
+     */
+      return INTEL_BIGCORE_SKYLAKE_AVX512;
+    case 0x6A:
+    case 0x6C:
+    case 0x7D:
+    case 0x7E:
+    case 0x9D:
+      return INTEL_BIGCORE_ICELAKE;
+    case 0x8C:
+    case 0x8D:
+      return INTEL_BIGCORE_TIGERLAKE;
+    case 0xA7:
+      return INTEL_BIGCORE_ROCKETLAKE;
+    case 0x8F:
+      return INTEL_BIGCORE_SAPPHIRERAPIDS;
+    case 0xB7:
+    case 0xBA:
+    case 0xBF:
+      return INTEL_BIGCORE_RAPTORLAKE;
+    case 0xCF:
+      return INTEL_BIGCORE_EMERALDRAPIDS;
+    case 0xAA:
+    case 0xAC:
+      return INTEL_BIGCORE_METEORLAKE;
+    case 0xbd:
+      return INTEL_BIGCORE_LUNARLAKE;
+    case 0xc6:
+      return INTEL_BIGCORE_ARROWLAKE;
+    case 0xAD:
+    case 0xAE:
+      return INTEL_BIGCORE_GRANITERAPIDS;
+    case 0x8A:
+      return INTEL_MIXED_LAKEFIELD;
+    case 0x97:
+    case 0x9A:
+    case 0xBE:
+      return INTEL_MIXED_ALDERLAKE;
+    case 0x85:
+      return INTEL_KNIGHTS_MILL;
+    case 0x57:
+      return INTEL_KNIGHTS_LANDING;
+    default:
+      return INTEL_UNKNOWN;
+    }
+}
+
 static inline void
 init_cpu_features (struct cpu_features *cpu_features)
 {
@@ -453,129 +663,147 @@ init_cpu_features (struct cpu_features *cpu_features)
       if (family == 0x06)
 	{
 	  model += extended_model;
-	  switch (model)
+	  unsigned int microarch
+	      = intel_get_fam6_microarch (model, stepping);
+
+	  switch (microarch)
 	    {
-	    case 0x1c:
-	    case 0x26:
-	      /* BSF is slow on Atom.  */
+	      /* Atom / KNL tuning.  */
+	    case INTEL_ATOM_BONNELL:
+	      /* BSF is slow on Bonnell.  */
 	      cpu_features->preferred[index_arch_Slow_BSF]
-		|= bit_arch_Slow_BSF;
+		  |= bit_arch_Slow_BSF;
 	      break;
 
-	    case 0x57:
-	      /* Knights Landing.  Enable Silvermont optimizations.  */
-
-	    case 0x7a:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont Plus.  */
-
-	    case 0x5c:
-	    case 0x5f:
 	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont.  */
+		     on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
+	    case INTEL_ATOM_AIRMONT:
+	    case INTEL_ATOM_SILVERMONT:
+	    case INTEL_ATOM_GOLDMONT:
+	    case INTEL_ATOM_GOLDMONT_PLUS:
 
-	    case 0x4c:
-	    case 0x5a:
-	    case 0x75:
-	      /* Airmont is a die shrink of Silvermont.  */
+          /* Knights Landing.  Enable Silvermont optimizations.  */
+	    case INTEL_KNIGHTS_LANDING:
 
-	    case 0x37:
-	    case 0x4a:
-	    case 0x4d:
-	    case 0x5d:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Silvermont.  */
 	      cpu_features->preferred[index_arch_Fast_Unaligned_Load]
-		|= (bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
 	      break;
 
-	    case 0x86:
-	    case 0x96:
-	    case 0x9c:
+	    case INTEL_ATOM_TREMONT:
 	      /* Enable rep string instructions, unaligned load, unaligned
-	         copy, pminub and avoid SSE 4.2 on Tremont.  */
+		 copy, pminub and avoid SSE 4.2 on Tremont.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Rep_String
+		      | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
 	      break;
 
+	   /*
+	    Default tuned Knights microarch.
+	    case INTEL_KNIGHTS_MILL:
+        */
+
+	   /*
+	    Default tuned atom microarch.
+	    case INTEL_ATOM_SIERRAFOREST:
+	    case INTEL_ATOM_GRANDRIDGE:
+	   */
+
+	      /* Bigcore/Default Tuning.  */
 	    default:
 	      /* Unknown family 0x06 processors.  Assuming this is one
 		 of Core i3/i5/i7 processors if AVX is available.  */
 	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
 		break;
 	      /* Fall through.  */
-
-	    case 0x1a:
-	    case 0x1e:
-	    case 0x1f:
-	    case 0x25:
-	    case 0x2c:
-	    case 0x2e:
-	    case 0x2f:
+	    case INTEL_BIGCORE_NEHALEM:
+	    case INTEL_BIGCORE_WESTMERE:
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop);
+		  |= (bit_arch_Fast_Rep_String
+		      | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop);
 	      break;
+
+	   /*
+	    Default tuned Bigcore microarch.
+	    case INTEL_BIGCORE_SANDYBRIDGE:
+	    case INTEL_BIGCORE_IVYBRIDGE:
+	    case INTEL_BIGCORE_HASWELL:
+	    case INTEL_BIGCORE_BROADWELL:
+	    case INTEL_BIGCORE_SKYLAKE:
+	    case INTEL_BIGCORE_KABYLAKE:
+	    case INTEL_BIGCORE_COMETLAKE:
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	    case INTEL_BIGCORE_CANNONLAKE:
+	    case INTEL_BIGCORE_ICELAKE:
+	    case INTEL_BIGCORE_TIGERLAKE:
+	    case INTEL_BIGCORE_ROCKETLAKE:
+	    case INTEL_BIGCORE_RAPTORLAKE:
+	    case INTEL_BIGCORE_METEORLAKE:
+	    case INTEL_BIGCORE_LUNARLAKE:
+	    case INTEL_BIGCORE_ARROWLAKE:
+	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
+	    case INTEL_BIGCORE_EMERALDRAPIDS:
+	    case INTEL_BIGCORE_GRANITERAPIDS:
+	    */
+
+	   /*
+	    Default tuned Mixed (bigcore + atom SOC).
+	    case INTEL_MIXED_LAKEFIELD:
+	    case INTEL_MIXED_ALDERLAKE:
+	    */
 	    }
 
-	 /* Disable TSX on some processors to avoid TSX on kernels that
-	    weren't updated with the latest microcode package (which
-	    disables broken feature by default).  */
-	 switch (model)
+	      /* Disable TSX on some processors to avoid TSX on kernels that
+		 weren't updated with the latest microcode package (which
+		 disables broken feature by default).  */
+	  switch (microarch)
 	    {
-	    case 0x55:
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	      /* 0x55 (Skylake-avx512) && stepping <= 5 disable TSX. */
 	      if (stepping <= 5)
 		goto disable_tsx;
 	      break;
-	    case 0x8e:
-	      /* NB: Although the errata documents that for model == 0x8e,
-		 only 0xb stepping or lower are impacted, the intention of
-		 the errata was to disable TSX on all client processors on
-		 all steppings.  Include 0xc stepping which is an Intel
-		 Core i7-8665U, a client mobile processor.  */
-	    case 0x9e:
+
+	    case INTEL_BIGCORE_KABYLAKE:
+	      /* NB: Although the errata documents that for model == 0x8e
+		     (kabylake skylake client), only 0xb stepping or lower are
+		     impacted, the intention of the errata was to disable TSX on
+		     all client processors on all steppings.  Include 0xc
+		     stepping which is an Intel Core i7-8665U, a client mobile
+		     processor.  */
 	      if (stepping > 0xc)
 		break;
 	      /* Fall through.  */
-	    case 0x4e:
-	    case 0x5e:
-	      {
+	    case INTEL_BIGCORE_SKYLAKE:
 		/* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
 		   processors listed in:
 
 https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
 		 */
-disable_tsx:
+	    disable_tsx:
 		CPU_FEATURE_UNSET (cpu_features, HLE);
 		CPU_FEATURE_UNSET (cpu_features, RTM);
 		CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
-	      }
-	      break;
-	    case 0x3f:
-	      /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
-	      if (stepping >= 4)
 		break;
-	      /* Fall through.  */
-	    case 0x3c:
-	    case 0x45:
-	    case 0x46:
-	      /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
-		 with stepping >= 4) to avoid TSX on kernels that weren't
-		 updated with the latest microcode package (which disables
-		 broken feature by default).  */
-	      CPU_FEATURE_UNSET (cpu_features, RTM);
-	      break;
+
+	    case INTEL_BIGCORE_HASWELL:
+		/* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
+		   TSX.  Haswell also include other model numbers that have
+		   working TSX.  */
+		if (model == 0x3f && stepping >= 4)
+		break;
+
+		CPU_FEATURE_UNSET (cpu_features, RTM);
+		break;
 	    }
 	}
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v10 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific
  2023-05-27 18:46 ` [PATCH v10 " Noah Goldstein
  2023-05-27 18:46   ` [PATCH v10 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
@ 2023-05-27 18:46   ` Noah Goldstein
  2023-05-31  2:33     ` DJ Delorie
  2023-07-10  5:23     ` Sajan Karumanchi
  2023-06-07  0:15   ` [PATCH v10 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Carlos O'Donell
  2 siblings, 2 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-05-27 18:46 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos

Different systems prefer different divisors.

From benchmarks[1] so far the following divisors have been found:
    ICX     : 2
    SKX     : 2
    BWD     : 8

For Intel, we are generalizing that BWD and older prefer 8 as a
divisor, and SKL and newer prefer 2. This number can be further tuned
as benchmarks are run.

[1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
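
For illustration, a minimal sketch (with hypothetical names) of how such a
per-microarch divisor would feed into the threshold from patch 1/3; the
real patch stores the value in cpu_features->cachesize_non_temporal_divisor:

  /* Hypothetical sketch: map a microarch class to the divisor applied to
     the shared L3 size when computing the non-temporal threshold.  */
  enum uarch_class { UARCH_OLD_BIGCORE, UARCH_NEW_BIGCORE, UARCH_OTHER };

  static unsigned long
  nt_divisor_for (enum uarch_class c)
  {
    switch (c)
      {
      case UARCH_OLD_BIGCORE: return 8;  /* e.g. Broadwell and older */
      case UARCH_NEW_BIGCORE: return 2;  /* e.g. Skylake/Icelake and newer */
      default:                return 4;  /* untuned default */
      }
  }
  /* non_temporal_threshold = shared_l3_size / nt_divisor_for (c);  */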
---
 sysdeps/x86/cpu-features.c         | 31 ++++++++++++++++++++---------
 sysdeps/x86/dl-cacheinfo.h         | 32 ++++++++++++++++++------------
 sysdeps/x86/dl-diagnostics-cpu.c   | 11 ++++++----
 sysdeps/x86/include/cpu-features.h |  3 +++
 4 files changed, 51 insertions(+), 26 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 1b6e00c88f..325ec2b825 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -636,6 +636,7 @@ init_cpu_features (struct cpu_features *cpu_features)
   unsigned int stepping = 0;
   enum cpu_features_kind kind;
 
+  cpu_features->cachesize_non_temporal_divisor = 4;
 #if !HAS_CPUID
   if (__get_cpuid_max (0, 0) == 0)
     {
@@ -716,13 +717,13 @@ init_cpu_features (struct cpu_features *cpu_features)
 
 	      /* Bigcore/Default Tuning.  */
 	    default:
+	    default_tuning:
 	      /* Unknown family 0x06 processors.  Assuming this is one
 		 of Core i3/i5/i7 processors if AVX is available.  */
 	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
 		break;
-	      /* Fall through.  */
-	    case INTEL_BIGCORE_NEHALEM:
-	    case INTEL_BIGCORE_WESTMERE:
+
+	    enable_modern_features:
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
@@ -732,12 +733,23 @@ init_cpu_features (struct cpu_features *cpu_features)
 		      | bit_arch_Prefer_PMINUB_for_stringop);
 	      break;
 
-	   /*
-	    Default tuned Bigcore microarch.
+	    case INTEL_BIGCORE_NEHALEM:
+	    case INTEL_BIGCORE_WESTMERE:
+	      /* Older CPUs prefer non-temporal stores at lower threshold.  */
+	      cpu_features->cachesize_non_temporal_divisor = 8;
+	      goto enable_modern_features;
+
+	      /* Older Bigcore microarch (smaller non-temporal store
+		 threshold).  */
 	    case INTEL_BIGCORE_SANDYBRIDGE:
 	    case INTEL_BIGCORE_IVYBRIDGE:
 	    case INTEL_BIGCORE_HASWELL:
 	    case INTEL_BIGCORE_BROADWELL:
+	      cpu_features->cachesize_non_temporal_divisor = 8;
+	      goto default_tuning;
+
+	      /* Newer Bigcore microarch (larger non-temporal store
+		 threshold).  */
 	    case INTEL_BIGCORE_SKYLAKE:
 	    case INTEL_BIGCORE_KABYLAKE:
 	    case INTEL_BIGCORE_COMETLAKE:
@@ -753,13 +765,14 @@ init_cpu_features (struct cpu_features *cpu_features)
 	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
 	    case INTEL_BIGCORE_EMERALDRAPIDS:
 	    case INTEL_BIGCORE_GRANITERAPIDS:
-	    */
+	      cpu_features->cachesize_non_temporal_divisor = 2;
+	      goto default_tuning;
 
-	   /*
-	    Default tuned Mixed (bigcore + atom SOC).
+	      /* Default tuned Mixed (bigcore + atom SOC). */
 	    case INTEL_MIXED_LAKEFIELD:
 	    case INTEL_MIXED_ALDERLAKE:
-	    */
+	      cpu_features->cachesize_non_temporal_divisor = 2;
+	      goto default_tuning;
 	    }
 
 	      /* Disable TSX on some processors to avoid TSX on kernels that
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index 4a1a5423ff..8292a4a50d 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -738,19 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 1/4 of size
-     of the chip's cache. For most Intel and AMD processors with an
-     initial release date between 2017 and 2023, a thread's typical
-     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
-     estimate the point where non-temporal stores begin outcompeting
-     REP MOVSB. As well the point where the fact that non-temporal
-     stores are forced back to main memory would already occurred to the
-     majority of the lines in the copy. Note, concerns about the
-     entire L3 cache being evicted by the copy are mostly alleviated
-     by the fact that modern HW detects streaming patterns and
-     provides proper LRU hints so that the maximum thrashing
-     capped at 1/associativity. */
-  unsigned long int non_temporal_threshold = shared / 4;
+  unsigned long int cachesize_non_temporal_divisor
+      = cpu_features->cachesize_non_temporal_divisor;
+  if (cachesize_non_temporal_divisor <= 0)
+    cachesize_non_temporal_divisor = 4;
+
+  /* The default setting for the non_temporal threshold is [1/8, 1/2] of size
+     of the chip's cache (depending on `cachesize_non_temporal_divisor` which
+     is microarch specific. The defeault is 1/4). For most Intel and AMD
+     processors with an initial release date between 2017 and 2023, a thread's
+     typical share of the cache is from 18-64MB. Using a reasonable size
+     fraction of L3 is meant to estimate the point where non-temporal stores
+     begin outcompeting REP MOVSB. As well the point where the fact that
+     non-temporal stores are forced back to main memory would already occurred
+     to the majority of the lines in the copy. Note, concerns about the entire
+     L3 cache being evicted by the copy are mostly alleviated by the fact that
+     modern HW detects streaming patterns and provides proper LRU hints so that
+     the maximum thrashing capped at 1/associativity. */
+  unsigned long int non_temporal_threshold
+      = shared / cachesize_non_temporal_divisor;
   /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
      a higher risk of actually thrashing the cache as they don't have a HW LRU
      hint. As well, there performance in highly parallel situations is
diff --git a/sysdeps/x86/dl-diagnostics-cpu.c b/sysdeps/x86/dl-diagnostics-cpu.c
index a1578e4665..5aab63e532 100644
--- a/sysdeps/x86/dl-diagnostics-cpu.c
+++ b/sysdeps/x86/dl-diagnostics-cpu.c
@@ -113,8 +113,11 @@ _dl_diagnostics_cpu (void)
                             cpu_features->level3_cache_linesize);
   print_cpu_features_value ("level4_cache_size",
                             cpu_features->level4_cache_size);
-  _Static_assert (offsetof (struct cpu_features, level4_cache_size)
-                  + sizeof (cpu_features->level4_cache_size)
-                  == sizeof (*cpu_features),
-                  "last cpu_features field has been printed");
+  print_cpu_features_value ("cachesize_non_temporal_divisor",
+			    cpu_features->cachesize_non_temporal_divisor);
+  _Static_assert (
+      offsetof (struct cpu_features, cachesize_non_temporal_divisor)
+	      + sizeof (cpu_features->cachesize_non_temporal_divisor)
+	  == sizeof (*cpu_features),
+      "last cpu_features field has been printed");
 }
diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
index 40b8129d6a..c740e1a5fc 100644
--- a/sysdeps/x86/include/cpu-features.h
+++ b/sysdeps/x86/include/cpu-features.h
@@ -945,6 +945,9 @@ struct cpu_features
   unsigned long int level3_cache_linesize;
   /* /_SC_LEVEL4_CACHE_SIZE.  */
   unsigned long int level4_cache_size;
+  /* When no user non_temporal_threshold is specified. We default to
+     cachesize / cachesize_non_temporal_divisor.  */
+  unsigned long int cachesize_non_temporal_divisor;
 };
 
 /* Get a pointer to the CPU features structure.  */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v10 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific
  2023-05-27 18:46   ` [PATCH v10 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
@ 2023-05-31  2:33     ` DJ Delorie
  2023-07-10  5:23     ` Sajan Karumanchi
  1 sibling, 0 replies; 76+ messages in thread
From: DJ Delorie @ 2023-05-31  2:33 UTC (permalink / raw)
  To: Noah Goldstein; +Cc: libc-alpha, goldstein.w.n, hjl.tools, carlos


LGTM.
Reviewed-by: DJ Delorie <dj@redhat.com>


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v10 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-05-27 18:46 ` [PATCH v10 " Noah Goldstein
  2023-05-27 18:46   ` [PATCH v10 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
  2023-05-27 18:46   ` [PATCH v10 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
@ 2023-06-07  0:15   ` Carlos O'Donell
  2023-06-07 18:18     ` Noah Goldstein
  2 siblings, 1 reply; 76+ messages in thread
From: Carlos O'Donell @ 2023-06-07  0:15 UTC (permalink / raw)
  To: Noah Goldstein, libc-alpha; +Cc: hjl.tools, carlos, DJ Delorie

On 5/27/23 14:46, Noah Goldstein via Libc-alpha wrote:
> Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> ncores_per_socket'. This patch updates that value to roughly
> 'sizeof_L3 / 4`

LGTM. Minor typos noted, OK for you to keep my Reviewed-by if you change only those.

Reviewed-by: Carlos O'Donell <carlos@redhat.com>

> The original value (specifically dividing the `ncores_per_socket`) was
> done to limit the amount of other threads' data a `memcpy`/`memset`
> could evict.
> 
> Dividing by 'ncores_per_socket', however leads to exceedingly low
> non-temporal thresholds and leads to using non-temporal stores in
> cases where REP MOVSB is multiple times faster.
> 
> Furthermore, non-temporal stores are written directly to main memory
> so using it at a size much smaller than L3 can place soon to be
> accessed data much further away than it otherwise could be. As well,
> modern machines are able to detect streaming patterns (especially if
> REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> in affect caps the total amount of eviction at 1/cache_associativity,
> far below meaningfully thrashing the entire cache.
> 
> As best I can tell, the benchmarks that lead this small threshold
> where done comparing non-temporal stores versus standard cacheable
> stores. A better comparison (linked below) is to be REP MOVSB which,
> on the measure systems, is nearly 2x faster than non-temporal stores
> at the low-end of the previous threshold, and within 10% for over
> 100MB copies (well past even the current threshold). In cases with a
> low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> up to `sizeof_L3`.

DJ and I reviewed the numbers and we agree with the divisor of 2 for Skylake/Icelake
and 8 for Broadwell. We considered 4 to be reasonably well balanced, and that the
middle ground chosen in this patch serves as a good starting point.

DJ also ran STREAMS on various configurations, and the changes you propose do not make
anything worse than before, which was Oracle's original complaint that motivated the
tuning. In the STREAMS case the size of the copy buffer was set following the STREAMS
recommendation, i.e. much larger than L3.

All-in-all this patch should improve things.

> The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> such as Broadwell prefer something closer to `8`. This patch is meant
> to be followed up by another one to make the divisor cpu-specific, but
> in the meantime (and for easier backporting), this patch settles on
> `4` as a middle-ground.
> 
> Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> stores where done using:
> https://github.com/goldsteinn/memcpy-nt-benchmarks
> 
> Sheets results (also available in pdf on the github):
> https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> Reviewed-by: DJ Delorie <dj@redhat.com>
> ---
>  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
>  1 file changed, 43 insertions(+), 27 deletions(-)
> 
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index ec88945b39..4a1a5423ff 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
>  }
>  
>  static void
> -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
>                  long int core)
>  {
>    unsigned int eax;
> @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>    unsigned int family = cpu_features->basic.family;
>    unsigned int model = cpu_features->basic.model;
>    long int shared = *shared_ptr;
> +  long int shared_per_thread = *shared_per_thread_ptr;
>    unsigned int threads = *threads_ptr;
>    bool inclusive_cache = true;
>    bool support_count_mask = true;
> @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>        /* Try L2 otherwise.  */
>        level  = 2;
>        shared = core;
> +      shared_per_thread = core;
>        threads_l2 = 0;
>        threads_l3 = -1;
>      }
> @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>          }
>        else
>          {
> -intel_bug_no_cache_info:
> -          /* Assume that all logical threads share the highest cache
> -             level.  */
> -          threads
> -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> -	       & 0xff);
> -        }
> -
> -        /* Cap usage of highest cache level to the number of supported
> -           threads.  */
> -        if (shared > 0 && threads > 0)
> -          shared /= threads;
> +	intel_bug_no_cache_info:
> +	  /* Assume that all logical threads share the highest cache
> +	     level.  */
> +	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> +		     & 0xff);
> +
> +	  /* Get per-thread size of highest level cache.  */
> +	  if (shared_per_thread > 0 && threads > 0)
> +	    shared_per_thread /= threads;
> +	}
>      }
>  
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
>      {
>        if (threads_l2 > 0)
> -        core /= threads_l2;
> +	shared_per_thread += core / threads_l2;
>        shared += core;
>      }
>  
>    *shared_ptr = shared;
> +  *shared_per_thread_ptr = shared_per_thread;
>    *threads_ptr = threads;
>  }
>  
> @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    /* Find out what brand of processor.  */
>    long int data = -1;
>    long int shared = -1;
> +  long int shared_per_thread = -1;
>    long int core = -1;
>    unsigned int threads = 0;
>    unsigned long int level1_icache_size = -1;
> @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
>        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
>        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> +      shared_per_thread = shared;
>  
>        level1_icache_size
>  	= handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level4_cache_size
>  	= handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
>  
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
>      {
>        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>  
>        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
>        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
>  
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_amd)
>      {
>        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>  
>        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        if (shared <= 0)
>          /* No shared L3 cache.  All we have is the L2 cache.  */
>  	shared = core;
> +
> +      if (shared_per_thread <= 0)
> +	shared_per_thread = shared;
>      }
>  
>    cpu_features->level1_icache_size = level1_icache_size;
> @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>  
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip. */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/4 of size
> +     of the chip's cache. For most Intel and AMD processors with an
> +     initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> +     estimate the point where non-temporal stores begin outcompeting

s/outcompeting/out-competing/g

> +     REP MOVSB. As well the point where the fact that non-temporal
> +     stores are forced back to main memory would already occurred to the
> +     majority of the lines in the copy. Note, concerns about the
> +     entire L3 cache being evicted by the copy are mostly alleviated
> +     by the fact that modern HW detects streaming patterns and
> +     provides proper LRU hints so that the maximum thrashing
> +     capped at 1/associativity. */
> +  unsigned long int non_temporal_threshold = shared / 4;
> +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> +     hint. As well, there performance in highly parallel situations is

s/there/their/g

> +     noticeably worse.  */
> +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> +    non_temporal_threshold = shared_per_thread * 3 / 4;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the

-- 
Cheers,
Carlos.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-04-24  5:03 [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` Noah Goldstein
                   ` (10 preceding siblings ...)
  2023-05-27 18:46 ` [PATCH v10 " Noah Goldstein
@ 2023-06-07 18:18 ` Noah Goldstein
  2023-06-07 18:18   ` [PATCH v11 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
                     ` (3 more replies)
  11 siblings, 4 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-06-07 18:18 UTC (permalink / raw)
  To: libc-alpha
  Cc: goldstein.w.n, hjl.tools, carlos, DJ Delorie, Carlos O'Donell

Current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
'sizeof_L3 / 4'.

The original value (specifically dividing the `ncores_per_socket`) was
done to limit the amount of other threads' data a `memcpy`/`memset`
could evict.

Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in
cases where REP MOVSB is multiple times faster.

Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise would be. As well, modern
machines are able to detect streaming patterns (especially if REP MOVSB
is used) and provide LRU hints to the memory subsystem. This in effect
caps the total amount of eviction at 1/cache_associativity, far below
meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is to REP MOVSB which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% for over
100MB copies (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, REP MOVSB is ~2x faster
up to `sizeof_L3`.

The divisor of `4` is a somewhat arbitrary value. From benchmarks it
seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
such as Broadwell prefer something closer to `8`. This patch is meant
to be followed up by another one to make the divisor cpu-specific, but
in the meantime (and for easier backporting), this patch settles on
`4` as a middle-ground.
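
As a rough sketch of the resulting selection logic (simplified from the
diff below; `shared`, `shared_per_thread` and `has_erms` stand in for
the real cpu_features plumbing in sysdeps/x86/dl-cacheinfo.h):

```
#include <stdbool.h>

/* Simplified model of the new default; not the actual glibc code.  */
static unsigned long
pick_non_temporal_threshold (long shared, long shared_per_thread,
                             bool has_erms)
{
  /* With ERMS (REP MOVSB), chunk on total L3 size / 4.  */
  unsigned long threshold = shared / 4;

  /* Without ERMS, keep per-thread style chunking: plain cacheable
     stores lack the HW streaming/LRU hint and thrash more easily.  */
  if (!has_erms)
    threshold = shared_per_thread * 3 / 4;

  return threshold;
}
```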

Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available in pdf on the github):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
Reviewed-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
---
 sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
 1 file changed, 43 insertions(+), 27 deletions(-)

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index 877e73d700..3bd3b3ec1b 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -407,7 +407,7 @@ handle_zhaoxin (int name)
 }
 
 static void
-get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
+get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
                 long int core)
 {
   unsigned int eax;
@@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
   unsigned int family = cpu_features->basic.family;
   unsigned int model = cpu_features->basic.model;
   long int shared = *shared_ptr;
+  long int shared_per_thread = *shared_per_thread_ptr;
   unsigned int threads = *threads_ptr;
   bool inclusive_cache = true;
   bool support_count_mask = true;
@@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
       /* Try L2 otherwise.  */
       level  = 2;
       shared = core;
+      shared_per_thread = core;
       threads_l2 = 0;
       threads_l3 = -1;
     }
@@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
         }
       else
         {
-intel_bug_no_cache_info:
-          /* Assume that all logical threads share the highest cache
-             level.  */
-          threads
-            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
-	       & 0xff);
-        }
-
-        /* Cap usage of highest cache level to the number of supported
-           threads.  */
-        if (shared > 0 && threads > 0)
-          shared /= threads;
+	intel_bug_no_cache_info:
+	  /* Assume that all logical threads share the highest cache
+	     level.  */
+	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
+		     & 0xff);
+
+	  /* Get per-thread size of highest level cache.  */
+	  if (shared_per_thread > 0 && threads > 0)
+	    shared_per_thread /= threads;
+	}
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
       if (threads_l2 > 0)
-        core /= threads_l2;
+	shared_per_thread += core / threads_l2;
       shared += core;
     }
 
   *shared_ptr = shared;
+  *shared_per_thread_ptr = shared_per_thread;
   *threads_ptr = threads;
 }
 
@@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   /* Find out what brand of processor.  */
   long int data = -1;
   long int shared = -1;
+  long int shared_per_thread = -1;
   long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
@@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
       core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
       shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
+      shared_per_thread = shared;
 
       level1_icache_size
 	= handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
@@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level4_cache_size
 	= handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_zhaoxin)
     {
       data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_amd)
     {
       data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       if (shared <= 0)
         /* No shared L3 cache.  All we have is the L2 cache.  */
 	shared = core;
+
+      if (shared_per_thread <= 0)
+	shared_per_thread = shared;
     }
 
   cpu_features->level1_icache_size = level1_icache_size;
@@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/4 of size
+     of the chip's cache. For most Intel and AMD processors with an
+     initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
+     estimate the point where non-temporal stores begin out-competing
+     REP MOVSB. As well the point where the fact that non-temporal
+     stores are forced back to main memory would already occurred to the
+     majority of the lines in the copy. Note, concerns about the
+     entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and
+     provides proper LRU hints so that the maximum thrashing
+     capped at 1/associativity. */
+  unsigned long int non_temporal_threshold = shared / 4;
+  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
+     a higher risk of actually thrashing the cache as they don't have a HW LRU
+     hint. As well, their performance in highly parallel situations is
+     noticeably worse.  */
+  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
+    non_temporal_threshold = shared_per_thread * 3 / 4;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v11 2/3] x86: Refactor Intel `init_cpu_features`
  2023-06-07 18:18 ` [PATCH v11 " Noah Goldstein
@ 2023-06-07 18:18   ` Noah Goldstein
  2023-06-07 18:18   ` [PATCH v11 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-06-07 18:18 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos, DJ Delorie

This patch should have no effect on existing functionality.

The current code, which has a single switch for model detection and
setting preferred features, is difficult to follow/extend. The cases
use magic numbers and many microarchitectures are missing. This makes
it difficult to reason about what is implemented so far and/or
how/where to add support for new features.

This patch splits the model detection and preference setting stages so
that CPU preferences can be set based on a complete list of available
microarchitectures, rather than based on model magic numbers.
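
As an illustration of the split (heavily abridged; only a couple of
model numbers from the table below are shown, and the function names
here are illustrative, not the ones added by the patch):

```
/* Stage 1: map a family-6 model number to a named microarchitecture.  */
enum intel_microarch
{
  INTEL_BIGCORE_SKYLAKE,
  INTEL_BIGCORE_ICELAKE,
  INTEL_UNKNOWN,
};

static enum intel_microarch
model_to_microarch (unsigned int model)
{
  switch (model)
    {
    case 0x4E:
    case 0x5E:
      return INTEL_BIGCORE_SKYLAKE;
    case 0x6A:
    case 0x6C:
    case 0x7D:
    case 0x7E:
      return INTEL_BIGCORE_ICELAKE;
    default:
      return INTEL_UNKNOWN;
    }
}

/* Stage 2: tune by microarchitecture name rather than by raw model
   number, so supporting a new model only needs a stage-1 entry.  */
static void
tune_for_microarch (enum intel_microarch march)
{
  switch (march)
    {
    case INTEL_BIGCORE_SKYLAKE:
    case INTEL_BIGCORE_ICELAKE:
      /* Set preferred-feature bits for big cores here.  */
      break;
    default:
      break;
    }
}
```
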
Reviewed-by: DJ Delorie <dj@redhat.com>
---
 sysdeps/x86/cpu-features.c | 390 +++++++++++++++++++++++++++++--------
 1 file changed, 309 insertions(+), 81 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 0a99efdb28..d52a718e92 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -417,6 +417,216 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
 		     == index_arch_Fast_Copy_Backward)),
 		"Incorrect index_arch_Fast_Unaligned_Load");
 
+
+/* Intel Family-6 microarch list.  */
+enum
+{
+  /* Atom processors.  */
+  INTEL_ATOM_BONNELL,
+  INTEL_ATOM_SILVERMONT,
+  INTEL_ATOM_AIRMONT,
+  INTEL_ATOM_GOLDMONT,
+  INTEL_ATOM_GOLDMONT_PLUS,
+  INTEL_ATOM_SIERRAFOREST,
+  INTEL_ATOM_GRANDRIDGE,
+  INTEL_ATOM_TREMONT,
+
+  /* Bigcore processors.  */
+  INTEL_BIGCORE_MEROM,
+  INTEL_BIGCORE_PENRYN,
+  INTEL_BIGCORE_DUNNINGTON,
+  INTEL_BIGCORE_NEHALEM,
+  INTEL_BIGCORE_WESTMERE,
+  INTEL_BIGCORE_SANDYBRIDGE,
+  INTEL_BIGCORE_IVYBRIDGE,
+  INTEL_BIGCORE_HASWELL,
+  INTEL_BIGCORE_BROADWELL,
+  INTEL_BIGCORE_SKYLAKE,
+  INTEL_BIGCORE_KABYLAKE,
+  INTEL_BIGCORE_COMETLAKE,
+  INTEL_BIGCORE_SKYLAKE_AVX512,
+  INTEL_BIGCORE_CANNONLAKE,
+  INTEL_BIGCORE_ICELAKE,
+  INTEL_BIGCORE_TIGERLAKE,
+  INTEL_BIGCORE_ROCKETLAKE,
+  INTEL_BIGCORE_SAPPHIRERAPIDS,
+  INTEL_BIGCORE_RAPTORLAKE,
+  INTEL_BIGCORE_EMERALDRAPIDS,
+  INTEL_BIGCORE_METEORLAKE,
+  INTEL_BIGCORE_LUNARLAKE,
+  INTEL_BIGCORE_ARROWLAKE,
+  INTEL_BIGCORE_GRANITERAPIDS,
+
+  /* Mixed (bigcore + atom SOC).  */
+  INTEL_MIXED_LAKEFIELD,
+  INTEL_MIXED_ALDERLAKE,
+
+  /* KNL.  */
+  INTEL_KNIGHTS_MILL,
+  INTEL_KNIGHTS_LANDING,
+
+  /* Unknown.  */
+  INTEL_UNKNOWN,
+};
+
+static unsigned int
+intel_get_fam6_microarch (unsigned int model,
+			  __attribute__ ((unused)) unsigned int stepping)
+{
+  switch (model)
+    {
+    case 0x1C:
+    case 0x26:
+      return INTEL_ATOM_BONNELL;
+    case 0x27:
+    case 0x35:
+    case 0x36:
+      /* Really Saltwell, but Saltwell is just a die shrink of Bonnell
+         (microarchitecturally identical).  */
+      return INTEL_ATOM_BONNELL;
+    case 0x37:
+    case 0x4A:
+    case 0x4D:
+    case 0x5D:
+      return INTEL_ATOM_SILVERMONT;
+    case 0x4C:
+    case 0x5A:
+    case 0x75:
+      return INTEL_ATOM_AIRMONT;
+    case 0x5C:
+    case 0x5F:
+      return INTEL_ATOM_GOLDMONT;
+    case 0x7A:
+      return INTEL_ATOM_GOLDMONT_PLUS;
+    case 0xAF:
+      return INTEL_ATOM_SIERRAFOREST;
+    case 0xB6:
+      return INTEL_ATOM_GRANDRIDGE;
+    case 0x86:
+    case 0x96:
+    case 0x9C:
+      return INTEL_ATOM_TREMONT;
+    case 0x0F:
+    case 0x16:
+      return INTEL_BIGCORE_MEROM;
+    case 0x17:
+      return INTEL_BIGCORE_PENRYN;
+    case 0x1D:
+      return INTEL_BIGCORE_DUNNINGTON;
+    case 0x1A:
+    case 0x1E:
+    case 0x1F:
+    case 0x2E:
+      return INTEL_BIGCORE_NEHALEM;
+    case 0x25:
+    case 0x2C:
+    case 0x2F:
+      return INTEL_BIGCORE_WESTMERE;
+    case 0x2A:
+    case 0x2D:
+      return INTEL_BIGCORE_SANDYBRIDGE;
+    case 0x3A:
+    case 0x3E:
+      return INTEL_BIGCORE_IVYBRIDGE;
+    case 0x3C:
+    case 0x3F:
+    case 0x45:
+    case 0x46:
+      return INTEL_BIGCORE_HASWELL;
+    case 0x3D:
+    case 0x47:
+    case 0x4F:
+    case 0x56:
+      return INTEL_BIGCORE_BROADWELL;
+    case 0x4E:
+    case 0x5E:
+      return INTEL_BIGCORE_SKYLAKE;
+    case 0x8E:
+    /*
+     Stepping = {9}
+        -> Amberlake
+     Stepping = {10}
+        -> Coffeelake
+     Stepping = {11, 12}
+        -> Whiskeylake
+     else
+        -> Kabylake
+
+     All of these are derivatives of Kabylake (Skylake client).
+     */
+	  return INTEL_BIGCORE_KABYLAKE;
+    case 0x9E:
+    /*
+     Stepping = {10, 11, 12, 13}
+        -> Coffeelake
+     else
+        -> Kabylake
+
+     Coffeelake is a derivative of Kabylake (Skylake client).
+     */
+	  return INTEL_BIGCORE_KABYLAKE;
+    case 0xA5:
+    case 0xA6:
+      return INTEL_BIGCORE_COMETLAKE;
+    case 0x66:
+      return INTEL_BIGCORE_CANNONLAKE;
+    case 0x55:
+    /*
+     Stepping = {6, 7}
+        -> Cascadelake
+     Stepping = {11}
+        -> Cooperlake
+     else
+        -> Skylake-avx512
+
+     These are all microarchitecturally identical, so use
+     Skylake-avx512 for all of them.
+     */
+      return INTEL_BIGCORE_SKYLAKE_AVX512;
+    case 0x6A:
+    case 0x6C:
+    case 0x7D:
+    case 0x7E:
+    case 0x9D:
+      return INTEL_BIGCORE_ICELAKE;
+    case 0x8C:
+    case 0x8D:
+      return INTEL_BIGCORE_TIGERLAKE;
+    case 0xA7:
+      return INTEL_BIGCORE_ROCKETLAKE;
+    case 0x8F:
+      return INTEL_BIGCORE_SAPPHIRERAPIDS;
+    case 0xB7:
+    case 0xBA:
+    case 0xBF:
+      return INTEL_BIGCORE_RAPTORLAKE;
+    case 0xCF:
+      return INTEL_BIGCORE_EMERALDRAPIDS;
+    case 0xAA:
+    case 0xAC:
+      return INTEL_BIGCORE_METEORLAKE;
+    case 0xbd:
+      return INTEL_BIGCORE_LUNARLAKE;
+    case 0xc6:
+      return INTEL_BIGCORE_ARROWLAKE;
+    case 0xAD:
+    case 0xAE:
+      return INTEL_BIGCORE_GRANITERAPIDS;
+    case 0x8A:
+      return INTEL_MIXED_LAKEFIELD;
+    case 0x97:
+    case 0x9A:
+    case 0xBE:
+      return INTEL_MIXED_ALDERLAKE;
+    case 0x85:
+      return INTEL_KNIGHTS_MILL;
+    case 0x57:
+      return INTEL_KNIGHTS_LANDING;
+    default:
+      return INTEL_UNKNOWN;
+    }
+}
+
 static inline void
 init_cpu_features (struct cpu_features *cpu_features)
 {
@@ -453,129 +663,147 @@ init_cpu_features (struct cpu_features *cpu_features)
       if (family == 0x06)
 	{
 	  model += extended_model;
-	  switch (model)
+	  unsigned int microarch
+	      = intel_get_fam6_microarch (model, stepping);
+
+	  switch (microarch)
 	    {
-	    case 0x1c:
-	    case 0x26:
-	      /* BSF is slow on Atom.  */
+	      /* Atom / KNL tuning.  */
+	    case INTEL_ATOM_BONNELL:
+	      /* BSF is slow on Bonnell.  */
 	      cpu_features->preferred[index_arch_Slow_BSF]
-		|= bit_arch_Slow_BSF;
+		  |= bit_arch_Slow_BSF;
 	      break;
 
-	    case 0x57:
-	      /* Knights Landing.  Enable Silvermont optimizations.  */
-
-	    case 0x7a:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont Plus.  */
-
-	    case 0x5c:
-	    case 0x5f:
 	      /* Unaligned load versions are faster than SSSE3
-		 on Goldmont.  */
+		     on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
+	    case INTEL_ATOM_AIRMONT:
+	    case INTEL_ATOM_SILVERMONT:
+	    case INTEL_ATOM_GOLDMONT:
+	    case INTEL_ATOM_GOLDMONT_PLUS:
 
-	    case 0x4c:
-	    case 0x5a:
-	    case 0x75:
-	      /* Airmont is a die shrink of Silvermont.  */
+          /* Knights Landing.  Enable Silvermont optimizations.  */
+	    case INTEL_KNIGHTS_LANDING:
 
-	    case 0x37:
-	    case 0x4a:
-	    case 0x4d:
-	    case 0x5d:
-	      /* Unaligned load versions are faster than SSSE3
-		 on Silvermont.  */
 	      cpu_features->preferred[index_arch_Fast_Unaligned_Load]
-		|= (bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
 	      break;
 
-	    case 0x86:
-	    case 0x96:
-	    case 0x9c:
+	    case INTEL_ATOM_TREMONT:
 	      /* Enable rep string instructions, unaligned load, unaligned
-	         copy, pminub and avoid SSE 4.2 on Tremont.  */
+		 copy, pminub and avoid SSE 4.2 on Tremont.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop
-		    | bit_arch_Slow_SSE4_2);
+		  |= (bit_arch_Fast_Rep_String
+		      | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop
+		      | bit_arch_Slow_SSE4_2);
 	      break;
 
+	   /*
+	    Default tuned Knights microarch.
+	    case INTEL_KNIGHTS_MILL:
+        */
+
+	   /*
+	    Default tuned atom microarch.
+	    case INTEL_ATOM_SIERRAFOREST:
+	    case INTEL_ATOM_GRANDRIDGE:
+	   */
+
+	      /* Bigcore/Default Tuning.  */
 	    default:
 	      /* Unknown family 0x06 processors.  Assuming this is one
 		 of Core i3/i5/i7 processors if AVX is available.  */
 	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
 		break;
 	      /* Fall through.  */
-
-	    case 0x1a:
-	    case 0x1e:
-	    case 0x1f:
-	    case 0x25:
-	    case 0x2c:
-	    case 0x2e:
-	    case 0x2f:
+	    case INTEL_BIGCORE_NEHALEM:
+	    case INTEL_BIGCORE_WESTMERE:
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
-		|= (bit_arch_Fast_Rep_String
-		    | bit_arch_Fast_Unaligned_Load
-		    | bit_arch_Fast_Unaligned_Copy
-		    | bit_arch_Prefer_PMINUB_for_stringop);
+		  |= (bit_arch_Fast_Rep_String
+		      | bit_arch_Fast_Unaligned_Load
+		      | bit_arch_Fast_Unaligned_Copy
+		      | bit_arch_Prefer_PMINUB_for_stringop);
 	      break;
+
+	   /*
+	    Default tuned Bigcore microarch.
+	    case INTEL_BIGCORE_SANDYBRIDGE:
+	    case INTEL_BIGCORE_IVYBRIDGE:
+	    case INTEL_BIGCORE_HASWELL:
+	    case INTEL_BIGCORE_BROADWELL:
+	    case INTEL_BIGCORE_SKYLAKE:
+	    case INTEL_BIGCORE_KABYLAKE:
+	    case INTEL_BIGCORE_COMETLAKE:
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	    case INTEL_BIGCORE_CANNONLAKE:
+	    case INTEL_BIGCORE_ICELAKE:
+	    case INTEL_BIGCORE_TIGERLAKE:
+	    case INTEL_BIGCORE_ROCKETLAKE:
+	    case INTEL_BIGCORE_RAPTORLAKE:
+	    case INTEL_BIGCORE_METEORLAKE:
+	    case INTEL_BIGCORE_LUNARLAKE:
+	    case INTEL_BIGCORE_ARROWLAKE:
+	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
+	    case INTEL_BIGCORE_EMERALDRAPIDS:
+	    case INTEL_BIGCORE_GRANITERAPIDS:
+	    */
+
+	   /*
+	    Default tuned Mixed (bigcore + atom SOC).
+	    case INTEL_MIXED_LAKEFIELD:
+	    case INTEL_MIXED_ALDERLAKE:
+	    */
 	    }
 
-	 /* Disable TSX on some processors to avoid TSX on kernels that
-	    weren't updated with the latest microcode package (which
-	    disables broken feature by default).  */
-	 switch (model)
+	      /* Disable TSX on some processors to avoid TSX on kernels that
+		 weren't updated with the latest microcode package (which
+		 disables broken feature by default).  */
+	  switch (microarch)
 	    {
-	    case 0x55:
+	    case INTEL_BIGCORE_SKYLAKE_AVX512:
+	      /* 0x55 (Skylake-avx512) && stepping <= 5 disable TSX. */
 	      if (stepping <= 5)
 		goto disable_tsx;
 	      break;
-	    case 0x8e:
-	      /* NB: Although the errata documents that for model == 0x8e,
-		 only 0xb stepping or lower are impacted, the intention of
-		 the errata was to disable TSX on all client processors on
-		 all steppings.  Include 0xc stepping which is an Intel
-		 Core i7-8665U, a client mobile processor.  */
-	    case 0x9e:
+
+	    case INTEL_BIGCORE_KABYLAKE:
+	      /* NB: Although the errata documents that for model == 0x8e
+		     (kabylake skylake client), only 0xb stepping or lower are
+		     impacted, the intention of the errata was to disable TSX on
+		     all client processors on all steppings.  Include 0xc
+		     stepping which is an Intel Core i7-8665U, a client mobile
+		     processor.  */
 	      if (stepping > 0xc)
 		break;
 	      /* Fall through.  */
-	    case 0x4e:
-	    case 0x5e:
-	      {
+	    case INTEL_BIGCORE_SKYLAKE:
 		/* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
 		   processors listed in:
 
 https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
 		 */
-disable_tsx:
+	    disable_tsx:
 		CPU_FEATURE_UNSET (cpu_features, HLE);
 		CPU_FEATURE_UNSET (cpu_features, RTM);
 		CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
-	      }
-	      break;
-	    case 0x3f:
-	      /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
-	      if (stepping >= 4)
 		break;
-	      /* Fall through.  */
-	    case 0x3c:
-	    case 0x45:
-	    case 0x46:
-	      /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
-		 with stepping >= 4) to avoid TSX on kernels that weren't
-		 updated with the latest microcode package (which disables
-		 broken feature by default).  */
-	      CPU_FEATURE_UNSET (cpu_features, RTM);
-	      break;
+
+	    case INTEL_BIGCORE_HASWELL:
+		/* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
+		   TSX.  Haswell also include other model numbers that have
+		   working TSX.  */
+		if (model == 0x3f && stepping >= 4)
+		break;
+
+		CPU_FEATURE_UNSET (cpu_features, RTM);
+		break;
 	    }
 	}
 
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v10 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-06-07  0:15   ` [PATCH v10 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Carlos O'Donell
@ 2023-06-07 18:18     ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-06-07 18:18 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: libc-alpha, hjl.tools, carlos, DJ Delorie

On Tue, Jun 6, 2023 at 7:16 PM Carlos O'Donell <carlos@redhat.com> wrote:
>
> On 5/27/23 14:46, Noah Goldstein via Libc-alpha wrote:
> > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > ncores_per_socket'. This patch updates that value to roughly
> > 'sizeof_L3 / 4`
>
> LGTM. Minor typos noted, OK for you to keep my Reviewed-by if you change only those.
>
> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
>
> > The original value (specifically dividing the `ncores_per_socket`) was
> > done to limit the amount of other threads' data a `memcpy`/`memset`
> > could evict.
> >
> > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > non-temporal thresholds and leads to using non-temporal stores in
> > cases where REP MOVSB is multiple times faster.
> >
> > Furthermore, non-temporal stores are written directly to main memory
> > so using it at a size much smaller than L3 can place soon to be
> > accessed data much further away than it otherwise could be. As well,
> > modern machines are able to detect streaming patterns (especially if
> > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > in affect caps the total amount of eviction at 1/cache_associativity,
> > far below meaningfully thrashing the entire cache.
> >
> > As best I can tell, the benchmarks that lead this small threshold
> > where done comparing non-temporal stores versus standard cacheable
> > stores. A better comparison (linked below) is to be REP MOVSB which,
> > on the measure systems, is nearly 2x faster than non-temporal stores
> > at the low-end of the previous threshold, and within 10% for over
> > 100MB copies (well past even the current threshold). In cases with a
> > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > up to `sizeof_L3`.
>
> DJ and I reviewed the numbers and we agree with the divisor of 2 for Skylake/Icelake
> and 8 for Broadwell. We considered 4 was reasonably well balanced, and that a middle
> ground in this patch serves as a good starting point.
>
> DJ also ran STREAMS on various configurations and the changes you propose do not make
> anything worse than before which was Oracle's original complaint for the tuning. In
> the STREAMS case the size of the copy buffer was set following the STREAMS
> recommendation i.e. much larger than L3.
>
> All-in-all this patch should improve things.
>
> > The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> > seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> > such as Broadwell prefer something closer to `8`. This patch is meant
> > to be followed up by another one to make the divisor cpu-specific, but
> > in the meantime (and for easier backporting), this patch settles on
> > `4` as a middle-ground.
> >
> > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > stores where done using:
> > https://github.com/goldsteinn/memcpy-nt-benchmarks
> >
> > Sheets results (also available in pdf on the github):
> > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > Reviewed-by: DJ Delorie <dj@redhat.com>
> > ---
> >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> >  1 file changed, 43 insertions(+), 27 deletions(-)
> >
> > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > index ec88945b39..4a1a5423ff 100644
> > --- a/sysdeps/x86/dl-cacheinfo.h
> > +++ b/sysdeps/x86/dl-cacheinfo.h
> > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> >  }
> >
> >  static void
> > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> >                  long int core)
> >  {
> >    unsigned int eax;
> > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >    unsigned int family = cpu_features->basic.family;
> >    unsigned int model = cpu_features->basic.model;
> >    long int shared = *shared_ptr;
> > +  long int shared_per_thread = *shared_per_thread_ptr;
> >    unsigned int threads = *threads_ptr;
> >    bool inclusive_cache = true;
> >    bool support_count_mask = true;
> > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >        /* Try L2 otherwise.  */
> >        level  = 2;
> >        shared = core;
> > +      shared_per_thread = core;
> >        threads_l2 = 0;
> >        threads_l3 = -1;
> >      }
> > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >          }
> >        else
> >          {
> > -intel_bug_no_cache_info:
> > -          /* Assume that all logical threads share the highest cache
> > -             level.  */
> > -          threads
> > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > -            & 0xff);
> > -        }
> > -
> > -        /* Cap usage of highest cache level to the number of supported
> > -           threads.  */
> > -        if (shared > 0 && threads > 0)
> > -          shared /= threads;
> > +     intel_bug_no_cache_info:
> > +       /* Assume that all logical threads share the highest cache
> > +          level.  */
> > +       threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > +                  & 0xff);
> > +
> > +       /* Get per-thread size of highest level cache.  */
> > +       if (shared_per_thread > 0 && threads > 0)
> > +         shared_per_thread /= threads;
> > +     }
> >      }
> >
> >    /* Account for non-inclusive L2 and L3 caches.  */
> >    if (!inclusive_cache)
> >      {
> >        if (threads_l2 > 0)
> > -        core /= threads_l2;
> > +     shared_per_thread += core / threads_l2;
> >        shared += core;
> >      }
> >
> >    *shared_ptr = shared;
> > +  *shared_per_thread_ptr = shared_per_thread;
> >    *threads_ptr = threads;
> >  }
> >
> > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    /* Find out what brand of processor.  */
> >    long int data = -1;
> >    long int shared = -1;
> > +  long int shared_per_thread = -1;
> >    long int core = -1;
> >    unsigned int threads = 0;
> >    unsigned long int level1_icache_size = -1;
> > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size
> >       = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level4_cache_size
> >       = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> >      {
> >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_amd)
> >      {
> >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        if (shared <= 0)
> >          /* No shared L3 cache.  All we have is the L2 cache.  */
> >       shared = core;
> > +
> > +      if (shared_per_thread <= 0)
> > +     shared_per_thread = shared;
> >      }
> >
> >    cpu_features->level1_icache_size = level1_icache_size;
> > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> >    cpu_features->level4_cache_size = level4_cache_size;
> >
> > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > -     thread's share of the chip's cache. For most Intel and AMD processors
> > -     with an initial release date between 2017 and 2020, a thread's typical
> > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > -     in cache after a maximum temporal copy, which will maintain
> > -     in cache a reasonable portion of the thread's stack and other
> > -     active data. If the threshold is set higher than one thread's
> > -     share of the cache, it has a substantial risk of negatively
> > -     impacting the performance of other threads running on the chip. */
> > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > +  /* The default setting for the non_temporal threshold is 1/4 of size
> > +     of the chip's cache. For most Intel and AMD processors with an
> > +     initial release date between 2017 and 2023, a thread's typical
> > +     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> > +     estimate the point where non-temporal stores begin outcompeting
>
> s/outcompeting/out-competing/g

Fixed.

>
> > +     REP MOVSB. As well the point where the fact that non-temporal
> > +     stores are forced back to main memory would already occurred to the
> > +     majority of the lines in the copy. Note, concerns about the
> > +     entire L3 cache being evicted by the copy are mostly alleviated
> > +     by the fact that modern HW detects streaming patterns and
> > +     provides proper LRU hints so that the maximum thrashing
> > +     capped at 1/associativity. */
> > +  unsigned long int non_temporal_threshold = shared / 4;
> > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > +     hint. As well, there performance in highly parallel situations is
>
> s/there/their/g
Fixed.
>
> > +     noticeably worse.  */
> > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
>
> --
> Cheers,
> Carlos.
>

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH v11 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific
  2023-06-07 18:18 ` [PATCH v11 " Noah Goldstein
  2023-06-07 18:18   ` [PATCH v11 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
@ 2023-06-07 18:18   ` Noah Goldstein
  2023-06-07 18:19   ` [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
  2023-08-14 23:00   ` Noah Goldstein
  3 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-06-07 18:18 UTC (permalink / raw)
  To: libc-alpha; +Cc: goldstein.w.n, hjl.tools, carlos, DJ Delorie

Different systems prefer different divisors.

From benchmarks[1] so far the following divisors have been found:
    ICX     : 2
    SKX     : 2
    BWD     : 8

For Intel, we are generalizing that BWD and older prefer 8 as a
divisor, and SKL and newer prefer 2. This number can be further tuned
as benchmarks are run.
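
A minimal sketch of the intent (simplified; the real patch stores the
divisor in cpu_features during init_cpu_features and applies it in
dl-cacheinfo.h, as shown in the diff below):

```
/* Illustrative only: pick a divisor per microarchitecture, default to
   4 when nothing more specific is known, and apply it to the total
   shared cache size.  */
static unsigned long
compute_non_temporal_threshold (long shared, long divisor)
{
  if (divisor <= 0)
    divisor = 4;
  return shared / divisor;
}

/* e.g. divisor = 8 for Broadwell and older, 2 for Skylake and newer.  */
```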

[1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
Reviewed-by: DJ Delorie <dj@redhat.com>
---
 sysdeps/x86/cpu-features.c         | 31 ++++++++++++++++++++---------
 sysdeps/x86/dl-cacheinfo.h         | 32 ++++++++++++++++++------------
 sysdeps/x86/dl-diagnostics-cpu.c   | 11 ++++++----
 sysdeps/x86/include/cpu-features.h |  3 +++
 4 files changed, 51 insertions(+), 26 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index d52a718e92..525828f59c 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -636,6 +636,7 @@ init_cpu_features (struct cpu_features *cpu_features)
   unsigned int stepping = 0;
   enum cpu_features_kind kind;
 
+  cpu_features->cachesize_non_temporal_divisor = 4;
 #if !HAS_CPUID
   if (__get_cpuid_max (0, 0) == 0)
     {
@@ -716,13 +717,13 @@ init_cpu_features (struct cpu_features *cpu_features)
 
 	      /* Bigcore/Default Tuning.  */
 	    default:
+	    default_tuning:
 	      /* Unknown family 0x06 processors.  Assuming this is one
 		 of Core i3/i5/i7 processors if AVX is available.  */
 	      if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
 		break;
-	      /* Fall through.  */
-	    case INTEL_BIGCORE_NEHALEM:
-	    case INTEL_BIGCORE_WESTMERE:
+
+	    enable_modern_features:
 	      /* Rep string instructions, unaligned load, unaligned copy,
 		 and pminub are fast on Intel Core i3, i5 and i7.  */
 	      cpu_features->preferred[index_arch_Fast_Rep_String]
@@ -732,12 +733,23 @@ init_cpu_features (struct cpu_features *cpu_features)
 		      | bit_arch_Prefer_PMINUB_for_stringop);
 	      break;
 
-	   /*
-	    Default tuned Bigcore microarch.
+	    case INTEL_BIGCORE_NEHALEM:
+	    case INTEL_BIGCORE_WESTMERE:
+	      /* Older CPUs prefer non-temporal stores at lower threshold.  */
+	      cpu_features->cachesize_non_temporal_divisor = 8;
+	      goto enable_modern_features;
+
+	      /* Older Bigcore microarch (smaller non-temporal store
+		 threshold).  */
 	    case INTEL_BIGCORE_SANDYBRIDGE:
 	    case INTEL_BIGCORE_IVYBRIDGE:
 	    case INTEL_BIGCORE_HASWELL:
 	    case INTEL_BIGCORE_BROADWELL:
+	      cpu_features->cachesize_non_temporal_divisor = 8;
+	      goto default_tuning;
+
+	      /* Newer Bigcore microarch (larger non-temporal store
+		 threshold).  */
 	    case INTEL_BIGCORE_SKYLAKE:
 	    case INTEL_BIGCORE_KABYLAKE:
 	    case INTEL_BIGCORE_COMETLAKE:
@@ -753,13 +765,14 @@ init_cpu_features (struct cpu_features *cpu_features)
 	    case INTEL_BIGCORE_SAPPHIRERAPIDS:
 	    case INTEL_BIGCORE_EMERALDRAPIDS:
 	    case INTEL_BIGCORE_GRANITERAPIDS:
-	    */
+	      cpu_features->cachesize_non_temporal_divisor = 2;
+	      goto default_tuning;
 
-	   /*
-	    Default tuned Mixed (bigcore + atom SOC).
+	      /* Default tuned Mixed (bigcore + atom SOC). */
 	    case INTEL_MIXED_LAKEFIELD:
 	    case INTEL_MIXED_ALDERLAKE:
-	    */
+	      cpu_features->cachesize_non_temporal_divisor = 2;
+	      goto default_tuning;
 	    }
 
 	      /* Disable TSX on some processors to avoid TSX on kernels that
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index 3bd3b3ec1b..fb1a6cf4a9 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -738,19 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 1/4 of size
-     of the chip's cache. For most Intel and AMD processors with an
-     initial release date between 2017 and 2023, a thread's typical
-     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
-     estimate the point where non-temporal stores begin out-competing
-     REP MOVSB. As well the point where the fact that non-temporal
-     stores are forced back to main memory would already occurred to the
-     majority of the lines in the copy. Note, concerns about the
-     entire L3 cache being evicted by the copy are mostly alleviated
-     by the fact that modern HW detects streaming patterns and
-     provides proper LRU hints so that the maximum thrashing
-     capped at 1/associativity. */
-  unsigned long int non_temporal_threshold = shared / 4;
+  unsigned long int cachesize_non_temporal_divisor
+      = cpu_features->cachesize_non_temporal_divisor;
+  if (cachesize_non_temporal_divisor <= 0)
+    cachesize_non_temporal_divisor = 4;
+
+  /* The default setting for the non_temporal threshold is [1/8, 1/2] of size
+     of the chip's cache (depending on `cachesize_non_temporal_divisor` which
+     is microarch specific. The default is 1/4). For most Intel and AMD
+     processors with an initial release date between 2017 and 2023, a thread's
+     typical share of the cache is from 18-64MB. Using a reasonable size
+     fraction of L3 is meant to estimate the point where non-temporal stores
+     begin out-competing REP MOVSB. As well the point where the fact that
+     non-temporal stores are forced back to main memory would already occurred
+     to the majority of the lines in the copy. Note, concerns about the entire
+     L3 cache being evicted by the copy are mostly alleviated by the fact that
+     modern HW detects streaming patterns and provides proper LRU hints so that
+     the maximum thrashing capped at 1/associativity. */
+  unsigned long int non_temporal_threshold
+      = shared / cachesize_non_temporal_divisor;
   /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
      a higher risk of actually thrashing the cache as they don't have a HW LRU
      hint. As well, their performance in highly parallel situations is
diff --git a/sysdeps/x86/dl-diagnostics-cpu.c b/sysdeps/x86/dl-diagnostics-cpu.c
index a1578e4665..5aab63e532 100644
--- a/sysdeps/x86/dl-diagnostics-cpu.c
+++ b/sysdeps/x86/dl-diagnostics-cpu.c
@@ -113,8 +113,11 @@ _dl_diagnostics_cpu (void)
                             cpu_features->level3_cache_linesize);
   print_cpu_features_value ("level4_cache_size",
                             cpu_features->level4_cache_size);
-  _Static_assert (offsetof (struct cpu_features, level4_cache_size)
-                  + sizeof (cpu_features->level4_cache_size)
-                  == sizeof (*cpu_features),
-                  "last cpu_features field has been printed");
+  print_cpu_features_value ("cachesize_non_temporal_divisor",
+			    cpu_features->cachesize_non_temporal_divisor);
+  _Static_assert (
+      offsetof (struct cpu_features, cachesize_non_temporal_divisor)
+	      + sizeof (cpu_features->cachesize_non_temporal_divisor)
+	  == sizeof (*cpu_features),
+      "last cpu_features field has been printed");
 }
diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
index 40b8129d6a..c740e1a5fc 100644
--- a/sysdeps/x86/include/cpu-features.h
+++ b/sysdeps/x86/include/cpu-features.h
@@ -945,6 +945,9 @@ struct cpu_features
   unsigned long int level3_cache_linesize;
   /* /_SC_LEVEL4_CACHE_SIZE.  */
   unsigned long int level4_cache_size;
+  /* When no user non_temporal_threshold is specified. We default to
+     cachesize / cachesize_non_temporal_divisor.  */
+  unsigned long int cachesize_non_temporal_divisor;
 };
 
 /* Get a pointer to the CPU features structure.  */
-- 
2.34.1


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-06-07 18:18 ` [PATCH v11 " Noah Goldstein
  2023-06-07 18:18   ` [PATCH v11 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
  2023-06-07 18:18   ` [PATCH v11 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
@ 2023-06-07 18:19   ` Noah Goldstein
  2023-08-14 23:00   ` Noah Goldstein
  3 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-06-07 18:19 UTC (permalink / raw)
  To: libc-alpha; +Cc: hjl.tools, carlos, DJ Delorie, Carlos O'Donell

On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> ncores_per_socket'. This patch updates that value to roughly
> 'sizeof_L3 / 4`
>
> The original value (specifically dividing the `ncores_per_socket`) was
> done to limit the amount of other threads' data a `memcpy`/`memset`
> could evict.
>
> Dividing by 'ncores_per_socket', however leads to exceedingly low
> non-temporal thresholds and leads to using non-temporal stores in
> cases where REP MOVSB is multiple times faster.
>
> Furthermore, non-temporal stores are written directly to main memory
> so using it at a size much smaller than L3 can place soon to be
> accessed data much further away than it otherwise could be. As well,
> modern machines are able to detect streaming patterns (especially if
> REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> in affect caps the total amount of eviction at 1/cache_associativity,
> far below meaningfully thrashing the entire cache.
>
> As best I can tell, the benchmarks that lead this small threshold
> where done comparing non-temporal stores versus standard cacheable
> stores. A better comparison (linked below) is to be REP MOVSB which,
> on the measure systems, is nearly 2x faster than non-temporal stores
> at the low-end of the previous threshold, and within 10% for over
> 100MB copies (well past even the current threshold). In cases with a
> low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> up to `sizeof_L3`.
>
> The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> such as Broadwell prefer something closer to `8`. This patch is meant
> to be followed up by another one to make the divisor cpu-specific, but
> in the meantime (and for easier backporting), this patch settles on
> `4` as a middle-ground.
>
> Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> stores where done using:
> https://github.com/goldsteinn/memcpy-nt-benchmarks
>
> Sheets results (also available in pdf on the github):
> https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> Reviewed-by: DJ Delorie <dj@redhat.com>
> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
> ---
>  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
>  1 file changed, 43 insertions(+), 27 deletions(-)
>
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index 877e73d700..3bd3b3ec1b 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
>  }
>
>  static void
> -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
>                  long int core)
>  {
>    unsigned int eax;
> @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>    unsigned int family = cpu_features->basic.family;
>    unsigned int model = cpu_features->basic.model;
>    long int shared = *shared_ptr;
> +  long int shared_per_thread = *shared_per_thread_ptr;
>    unsigned int threads = *threads_ptr;
>    bool inclusive_cache = true;
>    bool support_count_mask = true;
> @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>        /* Try L2 otherwise.  */
>        level  = 2;
>        shared = core;
> +      shared_per_thread = core;
>        threads_l2 = 0;
>        threads_l3 = -1;
>      }
> @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>          }
>        else
>          {
> -intel_bug_no_cache_info:
> -          /* Assume that all logical threads share the highest cache
> -             level.  */
> -          threads
> -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> -              & 0xff);
> -        }
> -
> -        /* Cap usage of highest cache level to the number of supported
> -           threads.  */
> -        if (shared > 0 && threads > 0)
> -          shared /= threads;
> +       intel_bug_no_cache_info:
> +         /* Assume that all logical threads share the highest cache
> +            level.  */
> +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> +                    & 0xff);
> +
> +         /* Get per-thread size of highest level cache.  */
> +         if (shared_per_thread > 0 && threads > 0)
> +           shared_per_thread /= threads;
> +       }
>      }
>
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
>      {
>        if (threads_l2 > 0)
> -        core /= threads_l2;
> +       shared_per_thread += core / threads_l2;
>        shared += core;
>      }
>
>    *shared_ptr = shared;
> +  *shared_per_thread_ptr = shared_per_thread;
>    *threads_ptr = threads;
>  }
>
> @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    /* Find out what brand of processor.  */
>    long int data = -1;
>    long int shared = -1;
> +  long int shared_per_thread = -1;
>    long int core = -1;
>    unsigned int threads = 0;
>    unsigned long int level1_icache_size = -1;
> @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
>        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
>        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> +      shared_per_thread = shared;
>
>        level1_icache_size
>         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level4_cache_size
>         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
>      {
>        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
>        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_amd)
>      {
>        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        if (shared <= 0)
>          /* No shared L3 cache.  All we have is the L2 cache.  */
>         shared = core;
> +
> +      if (shared_per_thread <= 0)
> +       shared_per_thread = shared;
>      }
>
>    cpu_features->level1_icache_size = level1_icache_size;
> @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip. */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/4 of size
> +     of the chip's cache. For most Intel and AMD processors with an
> +     initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> +     estimate the point where non-temporal stores begin out-competing
> +     REP MOVSB. As well the point where the fact that non-temporal
> +     stores are forced back to main memory would already occurred to the
> +     majority of the lines in the copy. Note, concerns about the
> +     entire L3 cache being evicted by the copy are mostly alleviated
> +     by the fact that modern HW detects streaming patterns and
> +     provides proper LRU hints so that the maximum thrashing
> +     capped at 1/associativity. */
> +  unsigned long int non_temporal_threshold = shared / 4;
> +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> +     hint. As well, their performance in highly parallel situations is
> +     noticeably worse.  */
> +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> +    non_temporal_threshold = shared_per_thread * 3 / 4;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> --
> 2.34.1
>

Now that Carlos, DJ, and HJ have signed off on this and Carlos + DJ
have reproduced
the results, I'm going to push this shortly.

Thank you all for the review!

^ permalink raw reply	[flat|nested] 76+ messages in thread

* (no subject)
  2023-05-27 18:46   ` [PATCH v10 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
  2023-05-31  2:33     ` DJ Delorie
@ 2023-07-10  5:23     ` Sajan Karumanchi
  2023-07-10 15:58       ` Noah Goldstein
  1 sibling, 1 reply; 76+ messages in thread
From: Sajan Karumanchi @ 2023-07-10  5:23 UTC (permalink / raw)
  To: goldstein.w.n; +Cc: premachandra.mallappa, dj, hjl.tools, libc-alpha, carlos

Noah,
I verified your patches on the master branch that impact the non-temporal
threshold parameter on x86 CPUs. This patch modifies the non-temporal
threshold value from 24MB (3/4th of L3$) to 8MB (1/4th of L3$) on ZEN4.
From the Glibc benchmarks, we saw a significant performance drop ranging
from 15% to 99% for size ranges of 8MB to 16MB.
I also ran the new tool you developed on all Zen architectures, and the
results indicate that 3/4th of the L3 size holds good on AMD CPUs.
Hence the current patch degrades the performance of AMD CPUs.
We strongly recommend marking this change to Intel CPUs only.

Thanks,
Sajan K.


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re:
  2023-07-10  5:23     ` Sajan Karumanchi
@ 2023-07-10 15:58       ` Noah Goldstein
  2023-07-14  2:21         ` Re: Noah Goldstein
  2023-07-14  7:39         ` Re: sajan karumanchi
  0 siblings, 2 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-07-10 15:58 UTC (permalink / raw)
  To: Sajan Karumanchi; +Cc: premachandra.mallappa, dj, hjl.tools, libc-alpha, carlos

On Mon, Jul 10, 2023 at 12:23 AM Sajan Karumanchi
<sajan.karumanchi@gmail.com> wrote:
>
> Noah,
> I verified your patches on the master branch that impacts the non-threshold
>  parameter on x86 CPUs. This patch modifies the non-temporal threshold value
> from 24MB(3/4th of L3$) to 8MB(1/4th of L3$) on ZEN4.
> From the Glibc benchmarks, we saw a significant performance drop ranging
> from 15% to 99% for size ranges of 8MB to 16MB.
> I also ran the new tool developed by you on all Zen architectures and the
> results conclude that 3/4th L3 size holds good on AMD CPUs.
> Hence the current patch degrades the performance of AMD CPUs.
> We strongly recommend marking this change to Intel CPUs only.
>

So it shouldn't actually go down. I think what is missing is:
```
get_common_cache_info (&shared, &shared_per_thread, &threads, core);
```

In the AMD case, shared == shared_per_thread, which shouldn't really
be the case.

The intended new calculation is: Total_L3_Size / Scale
as opposed to: (L3_Size / NThread) / Scale
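
For concreteness, a tiny sketch of the difference between the two
formulas; the 32MB L3 and 16-thread figures are assumptions picked only
for the example, not measurements from any particular part:

```
#include <stdio.h>

int
main (void)
{
  unsigned long l3 = 32UL << 20;   /* Total L3, assumed 32MB.  */
  unsigned long nthreads = 16;     /* Threads sharing it, assumed.  */
  unsigned long scale = 4;

  unsigned long intended = l3 / scale;                 /* Total_L3_Size / Scale.  */
  unsigned long per_thread = (l3 / nthreads) / scale;  /* (L3_Size / NThread) / Scale.  */

  printf ("intended: %lu MB, per-thread: %lu KB\n",
          intended >> 20, per_thread >> 10);
  return 0;
}
```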

Before just going with default for AMD, maybe try out the following patch?

```
---
 sysdeps/x86/dl-cacheinfo.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index c98fa57a7b..c1866ca898 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -717,6 +717,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_amd (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_amd (_SC_LEVEL3_CACHE_LINESIZE);

+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
       if (shared <= 0)
         /* No shared L3 cache.  All we have is the L2 cache.  */
  shared = core;
-- 
2.34.1
```
> Thanks,
> Sajan K.
>

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re:
  2023-07-10 15:58       ` Noah Goldstein
@ 2023-07-14  2:21         ` Noah Goldstein
  2023-07-14  7:39         ` Re: sajan karumanchi
  1 sibling, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-07-14  2:21 UTC (permalink / raw)
  To: Sajan Karumanchi; +Cc: premachandra.mallappa, dj, hjl.tools, libc-alpha, carlos

On Mon, Jul 10, 2023 at 10:58 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Mon, Jul 10, 2023 at 12:23 AM Sajan Karumanchi
> <sajan.karumanchi@gmail.com> wrote:
> >
> > Noah,
> > I verified your patches on the master branch that impacts the non-threshold
> >  parameter on x86 CPUs. This patch modifies the non-temporal threshold value
> > from 24MB(3/4th of L3$) to 8MB(1/4th of L3$) on ZEN4.
> > From the Glibc benchmarks, we saw a significant performance drop ranging
> > from 15% to 99% for size ranges of 8MB to 16MB.
> > I also ran the new tool developed by you on all Zen architectures and the
> > results conclude that 3/4th L3 size holds good on AMD CPUs.
> > Hence the current patch degrades the performance of AMD CPUs.
> > We strongly recommend marking this change to Intel CPUs only.
> >
>
> So it shouldn't actually go down. I think what is missing is:
> ```
> get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> ```
>
> In the AMD case shared == shared_per_thread which shouldn't really
> be the case.
>
> The intended new calculation is: Total_L3_Size / Scale
> as opposed to: (L3_Size / NThread) / Scale"
>
> Before just going with default for AMD, maybe try out the following patch?
>
> ```
> ---
>  sysdeps/x86/dl-cacheinfo.h | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index c98fa57a7b..c1866ca898 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -717,6 +717,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level3_cache_assoc = handle_amd (_SC_LEVEL3_CACHE_ASSOC);
>        level3_cache_linesize = handle_amd (_SC_LEVEL3_CACHE_LINESIZE);
>
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>        if (shared <= 0)
>          /* No shared L3 cache.  All we have is the L2 cache.  */
>   shared = core;
> --
> 2.34.1
> ```
> > Thanks,
> > Sajan K.
> >

Ping. 2.38 is approaching and I expect you'll want to get any fixes in before
then.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re:
  2023-07-10 15:58       ` Noah Goldstein
  2023-07-14  2:21         ` Re: Noah Goldstein
@ 2023-07-14  7:39         ` sajan karumanchi
  1 sibling, 0 replies; 76+ messages in thread
From: sajan karumanchi @ 2023-07-14  7:39 UTC (permalink / raw)
  To: Noah Goldstein
  Cc: premachandra.mallappa, dj, hjl.tools, libc-alpha, carlos,
	Sajan Karumanchi

Noah,
On Mon, Jul 10, 2023 at 9:28 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Mon, Jul 10, 2023 at 12:23 AM Sajan Karumanchi
> <sajan.karumanchi@gmail.com> wrote:
> >
> > Noah,
> > I verified your patches on the master branch that impacts the non-threshold
> >  parameter on x86 CPUs. This patch modifies the non-temporal threshold value
> > from 24MB(3/4th of L3$) to 8MB(1/4th of L3$) on ZEN4.
> > From the Glibc benchmarks, we saw a significant performance drop ranging
> > from 15% to 99% for size ranges of 8MB to 16MB.
> > I also ran the new tool developed by you on all Zen architectures and the
> > results conclude that 3/4th L3 size holds good on AMD CPUs.
> > Hence the current patch degrades the performance of AMD CPUs.
> > We strongly recommend marking this change to Intel CPUs only.
> >
>
> So it shouldn't actually go down. I think what is missing is:
> ```
> get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> ```
>
The cache info of AMD CPUs is spread across the CPUID leaves
0x80000005, 0x80000006, and 0x8000001D.
But 'get_common_cache_info(...)' uses CPUID leaf 0x00000004 to
enumerate cache details, which leads to an infinite loop in the
initialization stage when enumerating the cache details on AMD CPUs.
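
For reference, a minimal standalone sketch (not glibc code) of walking
leaf 0x8000001D the same way leaf 0x00000004 is walked for Intel; the
field decoding follows the published CPUID layout, and the program is
only meaningful on CPUs that actually expose this leaf:

```
#include <cpuid.h>
#include <stdio.h>

int
main (void)
{
  unsigned int eax, ebx, ecx, edx;

  for (unsigned int subleaf = 0; ; subleaf++)
    {
      if (!__get_cpuid_count (0x8000001d, subleaf, &eax, &ebx, &ecx, &edx))
        break;

      unsigned int type = eax & 0x1f;   /* 0 means no more cache levels.  */
      if (type == 0)
        break;

      unsigned int level = (eax >> 5) & 0x7;
      unsigned long ways = ((ebx >> 22) & 0x3ff) + 1;
      unsigned long parts = ((ebx >> 12) & 0x3ff) + 1;
      unsigned long line = (ebx & 0xfff) + 1;
      unsigned long sets = (unsigned long) ecx + 1;

      printf ("L%u %s cache: %lu bytes\n", level,
              type == 2 ? "instruction" : type == 1 ? "data" : "unified",
              ways * parts * line * sets);
    }
  return 0;
}
```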

> In the AMD case shared == shared_per_thread which shouldn't really
> be the case.
>
> The intended new calculation is: Total_L3_Size / Scale
> as opposed to: (L3_Size / NThread) / Scale"
>
AMD Zen CPUs are chiplet-based, so we consider only the L3 per CCX for
computing the nt_threshold.
* handle_amd(_SC_LEVEL3_CACHE_SIZE) initializes the 'shared' variable with
'l3_cache_per_ccx' for Zen architectures and 'l3_cache_per_thread' for
pre-Zen architectures.
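
As a back-of-the-envelope check, assuming the 32 MB per-CCX L3 implied by
the 24MB/8MB figures above (an inferred, illustrative number):

```
#include <stdio.h>

int
main (void)
{
  /* 32 MiB per-CCX L3, as implied by the 24 MiB / 8 MiB numbers above;
     roughly what handle_amd (_SC_LEVEL3_CACHE_SIZE) reports per CCX.  */
  long shared = 32L << 20;

  long before = shared * 3 / 4; /* old default: 3/4 of L3 -> 24 MiB */
  long after = shared / 4;      /* new default: 1/4 of L3 ->  8 MiB */

  printf ("before: %ld MiB, after: %ld MiB\n", before >> 20, after >> 20);
  return 0;
}
```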

> Before just going with default for AMD, maybe try out the following patch?
>
Since the cache info leaves and the approach to computing the cache
details on AMD differ from Intel's, we cannot use the patch
below.
> ```
> ---
>  sysdeps/x86/dl-cacheinfo.h | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index c98fa57a7b..c1866ca898 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -717,6 +717,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level3_cache_assoc = handle_amd (_SC_LEVEL3_CACHE_ASSOC);
>        level3_cache_linesize = handle_amd (_SC_LEVEL3_CACHE_LINESIZE);
>
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>        if (shared <= 0)
>          /* No shared L3 cache.  All we have is the L2 cache.  */
>   shared = core;
> --
> 2.34.1
> ```
> > Thanks,
> > Sajan K.
> >

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-06-07 18:18 ` [PATCH v11 " Noah Goldstein
                     ` (2 preceding siblings ...)
  2023-06-07 18:19   ` [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
@ 2023-08-14 23:00   ` Noah Goldstein
  2023-08-22 15:11     ` Noah Goldstein
  3 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-08-14 23:00 UTC (permalink / raw)
  To: libc-alpha; +Cc: hjl.tools, carlos, DJ Delorie, Carlos O'Donell

On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> ncores_per_socket'. This patch updates that value to roughly
> 'sizeof_L3 / 4`
>
> The original value (specifically dividing the `ncores_per_socket`) was
> done to limit the amount of other threads' data a `memcpy`/`memset`
> could evict.
>
> Dividing by 'ncores_per_socket', however leads to exceedingly low
> non-temporal thresholds and leads to using non-temporal stores in
> cases where REP MOVSB is multiple times faster.
>
> Furthermore, non-temporal stores are written directly to main memory
> so using it at a size much smaller than L3 can place soon to be
> accessed data much further away than it otherwise could be. As well,
> modern machines are able to detect streaming patterns (especially if
> REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> in affect caps the total amount of eviction at 1/cache_associativity,
> far below meaningfully thrashing the entire cache.
>
> As best I can tell, the benchmarks that lead this small threshold
> where done comparing non-temporal stores versus standard cacheable
> stores. A better comparison (linked below) is to be REP MOVSB which,
> on the measure systems, is nearly 2x faster than non-temporal stores
> at the low-end of the previous threshold, and within 10% for over
> 100MB copies (well past even the current threshold). In cases with a
> low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> up to `sizeof_L3`.
>
> The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> such as Broadwell prefer something closer to `8`. This patch is meant
> to be followed up by another one to make the divisor cpu-specific, but
> in the meantime (and for easier backporting), this patch settles on
> `4` as a middle-ground.
>
> Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> stores where done using:
> https://github.com/goldsteinn/memcpy-nt-benchmarks
>
> Sheets results (also available in pdf on the github):
> https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> Reviewed-by: DJ Delorie <dj@redhat.com>
> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
> ---
>  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
>  1 file changed, 43 insertions(+), 27 deletions(-)
>
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index 877e73d700..3bd3b3ec1b 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
>  }
>
>  static void
> -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
>                  long int core)
>  {
>    unsigned int eax;
> @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>    unsigned int family = cpu_features->basic.family;
>    unsigned int model = cpu_features->basic.model;
>    long int shared = *shared_ptr;
> +  long int shared_per_thread = *shared_per_thread_ptr;
>    unsigned int threads = *threads_ptr;
>    bool inclusive_cache = true;
>    bool support_count_mask = true;
> @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>        /* Try L2 otherwise.  */
>        level  = 2;
>        shared = core;
> +      shared_per_thread = core;
>        threads_l2 = 0;
>        threads_l3 = -1;
>      }
> @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>          }
>        else
>          {
> -intel_bug_no_cache_info:
> -          /* Assume that all logical threads share the highest cache
> -             level.  */
> -          threads
> -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> -              & 0xff);
> -        }
> -
> -        /* Cap usage of highest cache level to the number of supported
> -           threads.  */
> -        if (shared > 0 && threads > 0)
> -          shared /= threads;
> +       intel_bug_no_cache_info:
> +         /* Assume that all logical threads share the highest cache
> +            level.  */
> +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> +                    & 0xff);
> +
> +         /* Get per-thread size of highest level cache.  */
> +         if (shared_per_thread > 0 && threads > 0)
> +           shared_per_thread /= threads;
> +       }
>      }
>
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
>      {
>        if (threads_l2 > 0)
> -        core /= threads_l2;
> +       shared_per_thread += core / threads_l2;
>        shared += core;
>      }
>
>    *shared_ptr = shared;
> +  *shared_per_thread_ptr = shared_per_thread;
>    *threads_ptr = threads;
>  }
>
> @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    /* Find out what brand of processor.  */
>    long int data = -1;
>    long int shared = -1;
> +  long int shared_per_thread = -1;
>    long int core = -1;
>    unsigned int threads = 0;
>    unsigned long int level1_icache_size = -1;
> @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
>        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
>        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> +      shared_per_thread = shared;
>
>        level1_icache_size
>         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level4_cache_size
>         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
>      {
>        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
>        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_amd)
>      {
>        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        if (shared <= 0)
>          /* No shared L3 cache.  All we have is the L2 cache.  */
>         shared = core;
> +
> +      if (shared_per_thread <= 0)
> +       shared_per_thread = shared;
>      }
>
>    cpu_features->level1_icache_size = level1_icache_size;
> @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip. */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/4 of size
> +     of the chip's cache. For most Intel and AMD processors with an
> +     initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> +     estimate the point where non-temporal stores begin out-competing
> +     REP MOVSB. As well the point where the fact that non-temporal
> +     stores are forced back to main memory would already occurred to the
> +     majority of the lines in the copy. Note, concerns about the
> +     entire L3 cache being evicted by the copy are mostly alleviated
> +     by the fact that modern HW detects streaming patterns and
> +     provides proper LRU hints so that the maximum thrashing
> +     capped at 1/associativity. */
> +  unsigned long int non_temporal_threshold = shared / 4;
> +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> +     hint. As well, their performance in highly parallel situations is
> +     noticeably worse.  */
> +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> +    non_temporal_threshold = shared_per_thread * 3 / 4;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> --
> 2.34.1
>

Hi All,

I want to backport this series (minus the CPUID code) to 2.28 - 2.37

The patches I want to backport are:

1/4
```
commit af992e7abdc9049714da76cae1e5e18bc4838fb8
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Wed Jun 7 13:18:01 2023 -0500

    x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
```

2/4
```
commit 47f747217811db35854ea06741be3685e8bbd44d
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Mon Jul 17 23:14:33 2023 -0500

    x86: Fix slight bug in `shared_per_thread` cache size calculation.
```

3/4
```
commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Tue Jul 18 10:27:59 2023 -0500

    [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for
NT threshold.
```

4/4
```
commit 084fb31bc2c5f95ae0b9e6df4d3cf0ff43471ede (origin/master,
origin/HEAD, master)
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Thu Aug 10 19:28:24 2023 -0500

    x86: Fix incorrect scope of setting `shared_per_thread` [BZ# 30745]
```

The proposed patches are at:
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-28
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-29
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-30
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-31
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-32
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-33
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-34
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-35
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-36
https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-37

I know the protocol is normally not to backport optimizations, but I'd argue
these are closer to bug fixes for a severe misconfiguration than a proper
optimization series.  As well, the risk of introducing a correctness-related
bug is exceedingly low.

Typically, the type of optimization patch that is discouraged is one that
actually changes a particular function, i.e. if these fixes were made directly
to the memmove implementation. These patches, however, don't touch
any of the memmove code itself, and just re-tune a value used by
memmove, which seems categorically different.

The value also only informs memmove's strategy. If these patches turn
out to be deeply buggy and set the new threshold incorrectly, the
blowback is limited to bad performance (which we already have),
and is extremely unlikely to affect correctness in any way.

Thoughts?

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-08-14 23:00   ` Noah Goldstein
@ 2023-08-22 15:11     ` Noah Goldstein
  2023-08-24 17:06       ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-08-22 15:11 UTC (permalink / raw)
  To: libc-alpha; +Cc: hjl.tools, carlos, DJ Delorie, Carlos O'Donell

On Mon, Aug 14, 2023 at 6:00 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > ncores_per_socket'. This patch updates that value to roughly
> > 'sizeof_L3 / 4`
> >
> > The original value (specifically dividing the `ncores_per_socket`) was
> > done to limit the amount of other threads' data a `memcpy`/`memset`
> > could evict.
> >
> > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > non-temporal thresholds and leads to using non-temporal stores in
> > cases where REP MOVSB is multiple times faster.
> >
> > Furthermore, non-temporal stores are written directly to main memory
> > so using it at a size much smaller than L3 can place soon to be
> > accessed data much further away than it otherwise could be. As well,
> > modern machines are able to detect streaming patterns (especially if
> > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > in affect caps the total amount of eviction at 1/cache_associativity,
> > far below meaningfully thrashing the entire cache.
> >
> > As best I can tell, the benchmarks that lead this small threshold
> > where done comparing non-temporal stores versus standard cacheable
> > stores. A better comparison (linked below) is to be REP MOVSB which,
> > on the measure systems, is nearly 2x faster than non-temporal stores
> > at the low-end of the previous threshold, and within 10% for over
> > 100MB copies (well past even the current threshold). In cases with a
> > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > up to `sizeof_L3`.
> >
> > The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> > seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> > such as Broadwell prefer something closer to `8`. This patch is meant
> > to be followed up by another one to make the divisor cpu-specific, but
> > in the meantime (and for easier backporting), this patch settles on
> > `4` as a middle-ground.
> >
> > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > stores where done using:
> > https://github.com/goldsteinn/memcpy-nt-benchmarks
> >
> > Sheets results (also available in pdf on the github):
> > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > Reviewed-by: DJ Delorie <dj@redhat.com>
> > Reviewed-by: Carlos O'Donell <carlos@redhat.com>
> > ---
> >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> >  1 file changed, 43 insertions(+), 27 deletions(-)
> >
> > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > index 877e73d700..3bd3b3ec1b 100644
> > --- a/sysdeps/x86/dl-cacheinfo.h
> > +++ b/sysdeps/x86/dl-cacheinfo.h
> > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> >  }
> >
> >  static void
> > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> >                  long int core)
> >  {
> >    unsigned int eax;
> > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >    unsigned int family = cpu_features->basic.family;
> >    unsigned int model = cpu_features->basic.model;
> >    long int shared = *shared_ptr;
> > +  long int shared_per_thread = *shared_per_thread_ptr;
> >    unsigned int threads = *threads_ptr;
> >    bool inclusive_cache = true;
> >    bool support_count_mask = true;
> > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >        /* Try L2 otherwise.  */
> >        level  = 2;
> >        shared = core;
> > +      shared_per_thread = core;
> >        threads_l2 = 0;
> >        threads_l3 = -1;
> >      }
> > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >          }
> >        else
> >          {
> > -intel_bug_no_cache_info:
> > -          /* Assume that all logical threads share the highest cache
> > -             level.  */
> > -          threads
> > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > -              & 0xff);
> > -        }
> > -
> > -        /* Cap usage of highest cache level to the number of supported
> > -           threads.  */
> > -        if (shared > 0 && threads > 0)
> > -          shared /= threads;
> > +       intel_bug_no_cache_info:
> > +         /* Assume that all logical threads share the highest cache
> > +            level.  */
> > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > +                    & 0xff);
> > +
> > +         /* Get per-thread size of highest level cache.  */
> > +         if (shared_per_thread > 0 && threads > 0)
> > +           shared_per_thread /= threads;
> > +       }
> >      }
> >
> >    /* Account for non-inclusive L2 and L3 caches.  */
> >    if (!inclusive_cache)
> >      {
> >        if (threads_l2 > 0)
> > -        core /= threads_l2;
> > +       shared_per_thread += core / threads_l2;
> >        shared += core;
> >      }
> >
> >    *shared_ptr = shared;
> > +  *shared_per_thread_ptr = shared_per_thread;
> >    *threads_ptr = threads;
> >  }
> >
> > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    /* Find out what brand of processor.  */
> >    long int data = -1;
> >    long int shared = -1;
> > +  long int shared_per_thread = -1;
> >    long int core = -1;
> >    unsigned int threads = 0;
> >    unsigned long int level1_icache_size = -1;
> > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size
> >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level4_cache_size
> >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> >      {
> >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_amd)
> >      {
> >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        if (shared <= 0)
> >          /* No shared L3 cache.  All we have is the L2 cache.  */
> >         shared = core;
> > +
> > +      if (shared_per_thread <= 0)
> > +       shared_per_thread = shared;
> >      }
> >
> >    cpu_features->level1_icache_size = level1_icache_size;
> > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> >    cpu_features->level4_cache_size = level4_cache_size;
> >
> > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > -     thread's share of the chip's cache. For most Intel and AMD processors
> > -     with an initial release date between 2017 and 2020, a thread's typical
> > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > -     in cache after a maximum temporal copy, which will maintain
> > -     in cache a reasonable portion of the thread's stack and other
> > -     active data. If the threshold is set higher than one thread's
> > -     share of the cache, it has a substantial risk of negatively
> > -     impacting the performance of other threads running on the chip. */
> > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > +  /* The default setting for the non_temporal threshold is 1/4 of size
> > +     of the chip's cache. For most Intel and AMD processors with an
> > +     initial release date between 2017 and 2023, a thread's typical
> > +     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> > +     estimate the point where non-temporal stores begin out-competing
> > +     REP MOVSB. As well the point where the fact that non-temporal
> > +     stores are forced back to main memory would already occurred to the
> > +     majority of the lines in the copy. Note, concerns about the
> > +     entire L3 cache being evicted by the copy are mostly alleviated
> > +     by the fact that modern HW detects streaming patterns and
> > +     provides proper LRU hints so that the maximum thrashing
> > +     capped at 1/associativity. */
> > +  unsigned long int non_temporal_threshold = shared / 4;
> > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > +     hint. As well, their performance in highly parallel situations is
> > +     noticeably worse.  */
> > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > --
> > 2.34.1
> >
>
> Hi All,
>
> I want to backport this series (minus CPUID codes) too 2.28 - 2.37
>
> The patches I want to backport are:
>
> 1/4
> ```
> commit af992e7abdc9049714da76cae1e5e18bc4838fb8
> Author: Noah Goldstein <goldstein.w.n@gmail.com>
> Date:   Wed Jun 7 13:18:01 2023 -0500
>
>     x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
> ```
>
> 2/4
> ```
> commit 47f747217811db35854ea06741be3685e8bbd44d
> Author: Noah Goldstein <goldstein.w.n@gmail.com>
> Date:   Mon Jul 17 23:14:33 2023 -0500
>
>     x86: Fix slight bug in `shared_per_thread` cache size calculation.
> ```
>
> 3/4
> ```
> commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
> Author: Noah Goldstein <goldstein.w.n@gmail.com>
> Date:   Tue Jul 18 10:27:59 2023 -0500
>
>     [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for
> NT threshold.
> ```
>
> 4/4
> ```
> commit 084fb31bc2c5f95ae0b9e6df4d3cf0ff43471ede (origin/master,
> origin/HEAD, master)
> Author: Noah Goldstein <goldstein.w.n@gmail.com>
> Date:   Thu Aug 10 19:28:24 2023 -0500
>
>     x86: Fix incorrect scope of setting `shared_per_thread` [BZ# 30745]
> ```
>
> The proposed patches are at:
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-28
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-29
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-30
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-31
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-32
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-33
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-34
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-35
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-36
> https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-37
>
> I know the protocol is not to normally backport optimizations, but I'd argue
> these are closer to bug fixes for a severe misconfiguration than a proper
> optimization series.  As well, the risk of introducing a correctness related
> bug is exceedingly low.
>
> Typically the type of optimization patch this is discouraged are the ones
> that actually change a particular function. I.e if these fixes where directly
> to the memmove implementation. These patches, however, don't touch
> any of the memmove code itself, and are just re-tuning a value used by
> memmove which seems categorically different.
>
> The value also only informs memmove strategy. If these patches turn
> out to be deeply buggy and set the new threshold incorrectly, the
> blowback is limited to a bad performance (which we already have),
> and is extremely unlikely to affect correctness in any way.
>
> Thoughts?

Ping. Any objections to me backporting?

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-08-22 15:11     ` Noah Goldstein
@ 2023-08-24 17:06       ` Noah Goldstein
  2023-08-28 20:02         ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-08-24 17:06 UTC (permalink / raw)
  To: libc-alpha; +Cc: hjl.tools, carlos, DJ Delorie, Carlos O'Donell

On Tue, Aug 22, 2023 at 10:11 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Mon, Aug 14, 2023 at 6:00 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > ncores_per_socket'. This patch updates that value to roughly
> > > 'sizeof_L3 / 4`
> > >
> > > The original value (specifically dividing the `ncores_per_socket`) was
> > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > could evict.
> > >
> > > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > > non-temporal thresholds and leads to using non-temporal stores in
> > > cases where REP MOVSB is multiple times faster.
> > >
> > > Furthermore, non-temporal stores are written directly to main memory
> > > so using it at a size much smaller than L3 can place soon to be
> > > accessed data much further away than it otherwise could be. As well,
> > > modern machines are able to detect streaming patterns (especially if
> > > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > > in affect caps the total amount of eviction at 1/cache_associativity,
> > > far below meaningfully thrashing the entire cache.
> > >
> > > As best I can tell, the benchmarks that lead this small threshold
> > > where done comparing non-temporal stores versus standard cacheable
> > > stores. A better comparison (linked below) is to be REP MOVSB which,
> > > on the measure systems, is nearly 2x faster than non-temporal stores
> > > at the low-end of the previous threshold, and within 10% for over
> > > 100MB copies (well past even the current threshold). In cases with a
> > > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > > up to `sizeof_L3`.
> > >
> > > The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> > > seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> > > such as Broadwell prefer something closer to `8`. This patch is meant
> > > to be followed up by another one to make the divisor cpu-specific, but
> > > in the meantime (and for easier backporting), this patch settles on
> > > `4` as a middle-ground.
> > >
> > > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > > stores where done using:
> > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > >
> > > Sheets results (also available in pdf on the github):
> > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > Reviewed-by: DJ Delorie <dj@redhat.com>
> > > Reviewed-by: Carlos O'Donell <carlos@redhat.com>
> > > ---
> > >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> > >  1 file changed, 43 insertions(+), 27 deletions(-)
> > >
> > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > index 877e73d700..3bd3b3ec1b 100644
> > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > >  }
> > >
> > >  static void
> > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > >                  long int core)
> > >  {
> > >    unsigned int eax;
> > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > >    unsigned int family = cpu_features->basic.family;
> > >    unsigned int model = cpu_features->basic.model;
> > >    long int shared = *shared_ptr;
> > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > >    unsigned int threads = *threads_ptr;
> > >    bool inclusive_cache = true;
> > >    bool support_count_mask = true;
> > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > >        /* Try L2 otherwise.  */
> > >        level  = 2;
> > >        shared = core;
> > > +      shared_per_thread = core;
> > >        threads_l2 = 0;
> > >        threads_l3 = -1;
> > >      }
> > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > >          }
> > >        else
> > >          {
> > > -intel_bug_no_cache_info:
> > > -          /* Assume that all logical threads share the highest cache
> > > -             level.  */
> > > -          threads
> > > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > -              & 0xff);
> > > -        }
> > > -
> > > -        /* Cap usage of highest cache level to the number of supported
> > > -           threads.  */
> > > -        if (shared > 0 && threads > 0)
> > > -          shared /= threads;
> > > +       intel_bug_no_cache_info:
> > > +         /* Assume that all logical threads share the highest cache
> > > +            level.  */
> > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > +                    & 0xff);
> > > +
> > > +         /* Get per-thread size of highest level cache.  */
> > > +         if (shared_per_thread > 0 && threads > 0)
> > > +           shared_per_thread /= threads;
> > > +       }
> > >      }
> > >
> > >    /* Account for non-inclusive L2 and L3 caches.  */
> > >    if (!inclusive_cache)
> > >      {
> > >        if (threads_l2 > 0)
> > > -        core /= threads_l2;
> > > +       shared_per_thread += core / threads_l2;
> > >        shared += core;
> > >      }
> > >
> > >    *shared_ptr = shared;
> > > +  *shared_per_thread_ptr = shared_per_thread;
> > >    *threads_ptr = threads;
> > >  }
> > >
> > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >    /* Find out what brand of processor.  */
> > >    long int data = -1;
> > >    long int shared = -1;
> > > +  long int shared_per_thread = -1;
> > >    long int core = -1;
> > >    unsigned int threads = 0;
> > >    unsigned long int level1_icache_size = -1;
> > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > +      shared_per_thread = shared;
> > >
> > >        level1_icache_size
> > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        level4_cache_size
> > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > >
> > > -      get_common_cache_info (&shared, &threads, core);
> > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > >      }
> > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > >      {
> > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > +      shared_per_thread = shared;
> > >
> > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > >
> > > -      get_common_cache_info (&shared, &threads, core);
> > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > >      }
> > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > >      {
> > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > +      shared_per_thread = shared;
> > >
> > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        if (shared <= 0)
> > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > >         shared = core;
> > > +
> > > +      if (shared_per_thread <= 0)
> > > +       shared_per_thread = shared;
> > >      }
> > >
> > >    cpu_features->level1_icache_size = level1_icache_size;
> > > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > >    cpu_features->level4_cache_size = level4_cache_size;
> > >
> > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > -     in cache after a maximum temporal copy, which will maintain
> > > -     in cache a reasonable portion of the thread's stack and other
> > > -     active data. If the threshold is set higher than one thread's
> > > -     share of the cache, it has a substantial risk of negatively
> > > -     impacting the performance of other threads running on the chip. */
> > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > +  /* The default setting for the non_temporal threshold is 1/4 of size
> > > +     of the chip's cache. For most Intel and AMD processors with an
> > > +     initial release date between 2017 and 2023, a thread's typical
> > > +     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
> > > +     estimate the point where non-temporal stores begin out-competing
> > > +     REP MOVSB. As well the point where the fact that non-temporal
> > > +     stores are forced back to main memory would already occurred to the
> > > +     majority of the lines in the copy. Note, concerns about the
> > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > +     by the fact that modern HW detects streaming patterns and
> > > +     provides proper LRU hints so that the maximum thrashing
> > > +     capped at 1/associativity. */
> > > +  unsigned long int non_temporal_threshold = shared / 4;
> > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > +     hint. As well, their performance in highly parallel situations is
> > > +     noticeably worse.  */
> > > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > --
> > > 2.34.1
> > >
> >
> > Hi All,
> >
> > I want to backport this series (minus CPUID codes) too 2.28 - 2.37
> >
> > The patches I want to backport are:
> >
> > 1/4
> > ```
> > commit af992e7abdc9049714da76cae1e5e18bc4838fb8
> > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > Date:   Wed Jun 7 13:18:01 2023 -0500
> >
> >     x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
> > ```
> >
> > 2/4
> > ```
> > commit 47f747217811db35854ea06741be3685e8bbd44d
> > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > Date:   Mon Jul 17 23:14:33 2023 -0500
> >
> >     x86: Fix slight bug in `shared_per_thread` cache size calculation.
> > ```
> >
> > 3/4
> > ```
> > commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
> > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > Date:   Tue Jul 18 10:27:59 2023 -0500
> >
> >     [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for
> > NT threshold.
> > ```
> >
> > 4/4
> > ```
> > commit 084fb31bc2c5f95ae0b9e6df4d3cf0ff43471ede (origin/master,
> > origin/HEAD, master)
> > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > Date:   Thu Aug 10 19:28:24 2023 -0500
> >
> >     x86: Fix incorrect scope of setting `shared_per_thread` [BZ# 30745]
> > ```
> >
> > The proposed patches are at:
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-28
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-29
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-30
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-31
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-32
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-33
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-34
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-35
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-36
> > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-37
> >
> > I know the protocol is not to normally backport optimizations, but I'd argue
> > these are closer to bug fixes for a severe misconfiguration than a proper
> > optimization series.  As well, the risk of introducing a correctness related
> > bug is exceedingly low.
> >
> > Typically the type of optimization patch this is discouraged are the ones
> > that actually change a particular function. I.e if these fixes where directly
> > to the memmove implementation. These patches, however, don't touch
> > any of the memmove code itself, and are just re-tuning a value used by
> > memmove which seems categorically different.
> >
> > The value also only informs memmove strategy. If these patches turn
> > out to be deeply buggy and set the new threshold incorrectly, the
> > blowback is limited to a bad performance (which we already have),
> > and is extremely unlikely to affect correctness in any way.
> >
> > Thoughts?
>
> Ping/Any Objections to me backporting?

I am going to take the continued lack of objections to mean no one
has an issue with me backporting these.

I will start backporting next week. I will do so piecemeal to give time
for issues to emerge before fully committing.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-08-24 17:06       ` Noah Goldstein
@ 2023-08-28 20:02         ` Noah Goldstein
  2023-09-05 15:37           ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-08-28 20:02 UTC (permalink / raw)
  To: libc-alpha; +Cc: hjl.tools, carlos, DJ Delorie, Carlos O'Donell

On Thu, Aug 24, 2023 at 12:06 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Tue, Aug 22, 2023 at 10:11 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Mon, Aug 14, 2023 at 6:00 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > >
> > > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > > ncores_per_socket'. This patch updates that value to roughly
> > > > 'sizeof_L3 / 4`
> > > >
> > > > The original value (specifically dividing the `ncores_per_socket`) was
> > > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > > could evict.
> > > >
> > > > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > > > non-temporal thresholds and leads to using non-temporal stores in
> > > > cases where REP MOVSB is multiple times faster.
> > > >
> > > > Furthermore, non-temporal stores are written directly to main memory
> > > > so using it at a size much smaller than L3 can place soon to be
> > > > accessed data much further away than it otherwise could be. As well,
> > > > modern machines are able to detect streaming patterns (especially if
> > > > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > > > in affect caps the total amount of eviction at 1/cache_associativity,
> > > > far below meaningfully thrashing the entire cache.
> > > >
> > > > As best I can tell, the benchmarks that lead this small threshold
> > > > where done comparing non-temporal stores versus standard cacheable
> > > > stores. A better comparison (linked below) is to be REP MOVSB which,
> > > > on the measure systems, is nearly 2x faster than non-temporal stores
> > > > at the low-end of the previous threshold, and within 10% for over
> > > > 100MB copies (well past even the current threshold). In cases with a
> > > > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > > > up to `sizeof_L3`.
> > > >
> > > > The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> > > > seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> > > > such as Broadwell prefer something closer to `8`. This patch is meant
> > > > to be followed up by another one to make the divisor cpu-specific, but
> > > > in the meantime (and for easier backporting), this patch settles on
> > > > `4` as a middle-ground.
> > > >
> > > > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > > > stores where done using:
> > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > >
> > > > Sheets results (also available in pdf on the github):
> > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > > Reviewed-by: DJ Delorie <dj@redhat.com>
> > > > Reviewed-by: Carlos O'Donell <carlos@redhat.com>
> > > > ---
> > > >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> > > >  1 file changed, 43 insertions(+), 27 deletions(-)
> > > >
> > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > index 877e73d700..3bd3b3ec1b 100644
> > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > > >  }
> > > >
> > > >  static void
> > > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > > >                  long int core)
> > > >  {
> > > >    unsigned int eax;
> > > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >    unsigned int family = cpu_features->basic.family;
> > > >    unsigned int model = cpu_features->basic.model;
> > > >    long int shared = *shared_ptr;
> > > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > > >    unsigned int threads = *threads_ptr;
> > > >    bool inclusive_cache = true;
> > > >    bool support_count_mask = true;
> > > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >        /* Try L2 otherwise.  */
> > > >        level  = 2;
> > > >        shared = core;
> > > > +      shared_per_thread = core;
> > > >        threads_l2 = 0;
> > > >        threads_l3 = -1;
> > > >      }
> > > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >          }
> > > >        else
> > > >          {
> > > > -intel_bug_no_cache_info:
> > > > -          /* Assume that all logical threads share the highest cache
> > > > -             level.  */
> > > > -          threads
> > > > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > -              & 0xff);
> > > > -        }
> > > > -
> > > > -        /* Cap usage of highest cache level to the number of supported
> > > > -           threads.  */
> > > > -        if (shared > 0 && threads > 0)
> > > > -          shared /= threads;
> > > > +       intel_bug_no_cache_info:
> > > > +         /* Assume that all logical threads share the highest cache
> > > > +            level.  */
> > > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > +                    & 0xff);
> > > > +
> > > > +         /* Get per-thread size of highest level cache.  */
> > > > +         if (shared_per_thread > 0 && threads > 0)
> > > > +           shared_per_thread /= threads;
> > > > +       }
> > > >      }
> > > >
> > > >    /* Account for non-inclusive L2 and L3 caches.  */
> > > >    if (!inclusive_cache)
> > > >      {
> > > >        if (threads_l2 > 0)
> > > > -        core /= threads_l2;
> > > > +       shared_per_thread += core / threads_l2;
> > > >        shared += core;
> > > >      }
> > > >
> > > >    *shared_ptr = shared;
> > > > +  *shared_per_thread_ptr = shared_per_thread;
> > > >    *threads_ptr = threads;
> > > >  }
> > > >
> > > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    /* Find out what brand of processor.  */
> > > >    long int data = -1;
> > > >    long int shared = -1;
> > > > +  long int shared_per_thread = -1;
> > > >    long int core = -1;
> > > >    unsigned int threads = 0;
> > > >    unsigned long int level1_icache_size = -1;
> > > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size
> > > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        level4_cache_size
> > > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > > >
> > > > -      get_common_cache_info (&shared, &threads, core);
> > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > >      }
> > > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > > >      {
> > > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > > >
> > > > -      get_common_cache_info (&shared, &threads, core);
> > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > >      }
> > > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > > >      {
> > > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        if (shared <= 0)
> > > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > > >         shared = core;
> > > > +
> > > > +      if (shared_per_thread <= 0)
> > > > +       shared_per_thread = shared;
> > > >      }
> > > >
> > > >    cpu_features->level1_icache_size = level1_icache_size;
> > > > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > > >    cpu_features->level4_cache_size = level4_cache_size;
> > > >
> > > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > -     in cache after a maximum temporal copy, which will maintain
> > > > -     in cache a reasonable portion of the thread's stack and other
> > > > -     active data. If the threshold is set higher than one thread's
> > > > -     share of the cache, it has a substantial risk of negatively
> > > > -     impacting the performance of other threads running on the chip. */
> > > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > > +  /* The default setting for the non_temporal threshold is 1/4 of the
> > > > +     size of the chip's cache. For most Intel and AMD processors with
> > > > +     an initial release date between 2017 and 2023, the shared L3
> > > > +     cache is typically 18-64MB. Using 1/4 of the L3 is meant to
> > > > +     estimate the point where non-temporal stores begin out-competing
> > > > +     REP MOVSB, as well as the point where the write-back to main
> > > > +     memory forced by non-temporal stores would already have occurred
> > > > +     for the majority of the lines in the copy. Note, concerns about
> > > > +     the entire L3 cache being evicted by the copy are mostly
> > > > +     alleviated by the fact that modern HW detects streaming patterns
> > > > +     and provides proper LRU hints so that the maximum thrashing is
> > > > +     capped at 1/associativity. */
> > > > +  unsigned long int non_temporal_threshold = shared / 4;
> > > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > > +     hint. As well, their performance in highly parallel situations is
> > > > +     noticeably worse.  */
> > > > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > > > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> > > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > > --
> > > > 2.34.1
> > > >
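To make the threshold selection in the hunk above concrete, here is a minimal standalone sketch, not the glibc code itself: the `shared / 4` default, the `shared_per_thread * 3 / 4` fallback without ERMS, and the `SIZE_MAX >> 4` / `0x4040` clamp follow the patch text, while `pick_nt_threshold` and the example cache sizes are invented for illustration.

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical helper mirroring the selection logic described in the patch.  */
static unsigned long int
pick_nt_threshold (long int shared, long int shared_per_thread, bool have_erms)
{
  /* Default: 1/4 of the full shared (L3) cache when ERMS (rep movsb) is
     usable.  */
  unsigned long int nt = shared / 4;
  /* Without ERMS, fall back to 3/4 of one thread's share of the cache,
     since plain cacheable stores lack the HW streaming/LRU hints.  */
  if (!have_erms)
    nt = shared_per_thread * 3 / 4;
  /* Clamp as described in the patch text: small enough that a 4-bit right
     shift cannot overflow, and no smaller than 0x4040 (16448).  */
  if (nt > (SIZE_MAX >> 4))
    nt = SIZE_MAX >> 4;
  if (nt < 0x4040)
    nt = 0x4040;
  return nt;
}

int
main (void)
{
  /* Example: a 32 MiB L3 shared by 16 hardware threads.  */
  long int shared = 32L * 1024 * 1024;
  long int shared_per_thread = shared / 16;
  printf ("with ERMS:    %lu\n", pick_nt_threshold (shared, shared_per_thread, true));
  printf ("without ERMS: %lu\n", pick_nt_threshold (shared, shared_per_thread, false));
  return 0;
}
```

With those example numbers the ERMS path yields 8 MiB and the non-ERMS path 1.5 MiB.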
> > >
> > > Hi All,
> > >
> > > I want to backport this series (minus CPUID codes) to 2.28 - 2.37.
> > >
> > > The patches I want to backport are:
> > >
> > > 1/4
> > > ```
> > > commit af992e7abdc9049714da76cae1e5e18bc4838fb8
> > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > Date:   Wed Jun 7 13:18:01 2023 -0500
> > >
> > >     x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
> > > ```
> > >
> > > 2/4
> > > ```
> > > commit 47f747217811db35854ea06741be3685e8bbd44d
> > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > Date:   Mon Jul 17 23:14:33 2023 -0500
> > >
> > >     x86: Fix slight bug in `shared_per_thread` cache size calculation.
> > > ```
> > >
> > > 3/4
> > > ```
> > > commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
> > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > Date:   Tue Jul 18 10:27:59 2023 -0500
> > >
> > >     [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for
> > > NT threshold.
> > > ```
> > >
> > > 4/4
> > > ```
> > > commit 084fb31bc2c5f95ae0b9e6df4d3cf0ff43471ede (origin/master,
> > > origin/HEAD, master)
> > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > Date:   Thu Aug 10 19:28:24 2023 -0500
> > >
> > >     x86: Fix incorrect scope of setting `shared_per_thread` [BZ# 30745]
> > > ```
> > >
> > > The proposed patches are at:
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-28
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-29
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-30
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-31
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-32
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-33
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-34
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-35
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-36
> > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-37
> > >
> > > I know the protocol is normally not to backport optimizations, but I'd argue
> > > these are closer to bug fixes for a severe misconfiguration than a proper
> > > optimization series.  As well, the risk of introducing a correctness-related
> > > bug is exceedingly low.
> > >
> > > Typically the kind of optimization patch that is discouraged is one that
> > > actually changes a particular function, i.e. if these fixes were made directly
> > > to the memmove implementation. These patches, however, don't touch
> > > any of the memmove code itself; they just re-tune a value used by
> > > memmove, which seems categorically different.
> > >
> > > The value also only informs memmove strategy. If these patches turn
> > > out to be deeply buggy and set the new threshold incorrectly, the
> > > blowback is limited to bad performance (which we already have),
> > > and is extremely unlikely to affect correctness in any way.
> > >
> > > Thoughts?
> >
> > Ping/Any Objections to me backporting?
>
> I am going to take the continued lack of objections to mean no one
> has issue with me backporting these.
>
> I will start backporting next week. Will do so piecemeal to give time
> > > for issues to emerge before fully committing.

Have done the 2.37 backport. If nothing comes up this week will proceed
to 2.36 and further.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-08-28 20:02         ` Noah Goldstein
@ 2023-09-05 15:37           ` Noah Goldstein
  2023-09-12  3:50             ` Noah Goldstein
  0 siblings, 1 reply; 76+ messages in thread
From: Noah Goldstein @ 2023-09-05 15:37 UTC (permalink / raw)
  To: libc-alpha; +Cc: hjl.tools, carlos, DJ Delorie, Carlos O'Donell

On Mon, Aug 28, 2023 at 3:02 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Thu, Aug 24, 2023 at 12:06 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Tue, Aug 22, 2023 at 10:11 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > On Mon, Aug 14, 2023 at 6:00 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > >
> > > > On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > > >
> > > > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > > > ncores_per_socket'. This patch updates that value to roughly
> > > > > 'sizeof_L3 / 4`
> > > > >
> > > > > The original value (specifically dividing the `ncores_per_socket`) was
> > > > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > > > could evict.
> > > > >
> > > > > Dividing by 'ncores_per_socket', however, leads to exceedingly low
> > > > > non-temporal thresholds and to using non-temporal stores in
> > > > > cases where REP MOVSB is multiple times faster.
> > > > >
> > > > > Furthermore, non-temporal stores are written directly to main memory
> > > > > so using them at a size much smaller than L3 can place soon-to-be-accessed
> > > > > data much further away than it otherwise could be. As well,
> > > > > modern machines are able to detect streaming patterns (especially if
> > > > > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > > > > in effect caps the total amount of eviction at 1/cache_associativity,
> > > > > far below meaningfully thrashing the entire cache.
> > > > >
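For example, on a hypothetical 16-way set-associative L3, a streaming copy whose lines carry such an LRU hint can displace at most one way per set, i.e. at most 1/16 of the cache, regardless of how large the copy is.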
> > > > > As best I can tell, the benchmarks that led to this small threshold
> > > > > were done comparing non-temporal stores versus standard cacheable
> > > > > stores. A better comparison (linked below) is to REP MOVSB, which,
> > > > > on the measured systems, is nearly 2x faster than non-temporal stores
> > > > > at the low-end of the previous threshold, and within 10% for over
> > > > > 100MB copies (well past even the current threshold). In cases with a
> > > > > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > > > > up to `sizeof_L3`.
> > > > >
> > > > > The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> > > > > seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> > > > > such as Broadwell prefer something closer to `8`. This patch is meant
> > > > > to be followed up by another one to make the divisor cpu-specific, but
> > > > > in the meantime (and for easier backporting), this patch settles on
> > > > > `4` as a middle-ground.
> > > > >
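A rough sketch of what the cpu-specific divisor mentioned above could look like; the divisor values are the ones quoted in that paragraph, while the enum and `nt_divisor_for` names are invented for illustration and are not from the patch series.

```c
/* Hypothetical per-microarchitecture divisor table, purely illustrative.  */
enum uarch { UARCH_BROADWELL, UARCH_SKYLAKE, UARCH_ICELAKE, UARCH_OTHER };

static int
nt_divisor_for (enum uarch u)
{
  switch (u)
    {
    case UARCH_SKYLAKE:
    case UARCH_ICELAKE:
      return 2;   /* Benchmarks suggest these prefer sizeof_L3 / 2.  */
    case UARCH_BROADWELL:
      return 8;   /* Older parts prefer closer to sizeof_L3 / 8.  */
    default:
      return 4;   /* Middle-ground default chosen by this patch.  */
    }
}
```

The default above would then become `shared / nt_divisor_for (u)` rather than the fixed `shared / 4`.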
> > > > > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > > > > stores were done using:
> > > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > > >
> > > > > Sheets results (also available in pdf on the github):
> > > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > > > Reviewed-by: DJ Delorie <dj@redhat.com>
> > > > > Reviewed-by: Carlos O'Donell <carlos@redhat.com>
> > > > > ---
> > > > >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> > > > >  1 file changed, 43 insertions(+), 27 deletions(-)
> > > > >
> > > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > > index 877e73d700..3bd3b3ec1b 100644
> > > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > > > >  }
> > > > >
> > > > >  static void
> > > > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > > > >                  long int core)
> > > > >  {
> > > > >    unsigned int eax;
> > > > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > >    unsigned int family = cpu_features->basic.family;
> > > > >    unsigned int model = cpu_features->basic.model;
> > > > >    long int shared = *shared_ptr;
> > > > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > > > >    unsigned int threads = *threads_ptr;
> > > > >    bool inclusive_cache = true;
> > > > >    bool support_count_mask = true;
> > > > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > >        /* Try L2 otherwise.  */
> > > > >        level  = 2;
> > > > >        shared = core;
> > > > > +      shared_per_thread = core;
> > > > >        threads_l2 = 0;
> > > > >        threads_l3 = -1;
> > > > >      }
> > > > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > >          }
> > > > >        else
> > > > >          {
> > > > > -intel_bug_no_cache_info:
> > > > > -          /* Assume that all logical threads share the highest cache
> > > > > -             level.  */
> > > > > -          threads
> > > > > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > > -              & 0xff);
> > > > > -        }
> > > > > -
> > > > > -        /* Cap usage of highest cache level to the number of supported
> > > > > -           threads.  */
> > > > > -        if (shared > 0 && threads > 0)
> > > > > -          shared /= threads;
> > > > > +       intel_bug_no_cache_info:
> > > > > +         /* Assume that all logical threads share the highest cache
> > > > > +            level.  */
> > > > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > > +                    & 0xff);
> > > > > +
> > > > > +         /* Get per-thread size of highest level cache.  */
> > > > > +         if (shared_per_thread > 0 && threads > 0)
> > > > > +           shared_per_thread /= threads;
> > > > > +       }
> > > > >      }
> > > > >
> > > > >    /* Account for non-inclusive L2 and L3 caches.  */
> > > > >    if (!inclusive_cache)
> > > > >      {
> > > > >        if (threads_l2 > 0)
> > > > > -        core /= threads_l2;
> > > > > +       shared_per_thread += core / threads_l2;
> > > > >        shared += core;
> > > > >      }
> > > > >
> > > > >    *shared_ptr = shared;
> > > > > +  *shared_per_thread_ptr = shared_per_thread;
> > > > >    *threads_ptr = threads;
> > > > >  }
> > > > >
> > > > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >    /* Find out what brand of processor.  */
> > > > >    long int data = -1;
> > > > >    long int shared = -1;
> > > > > +  long int shared_per_thread = -1;
> > > > >    long int core = -1;
> > > > >    unsigned int threads = 0;
> > > > >    unsigned long int level1_icache_size = -1;
> > > > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > > > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > > > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > > > +      shared_per_thread = shared;
> > > > >
> > > > >        level1_icache_size
> > > > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >        level4_cache_size
> > > > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > > > >
> > > > > -      get_common_cache_info (&shared, &threads, core);
> > > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > > >      }
> > > > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > > > >      {
> > > > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > > > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > > > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > > > +      shared_per_thread = shared;
> > > > >
> > > > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > > > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > > > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > > > >
> > > > > -      get_common_cache_info (&shared, &threads, core);
> > > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > > >      }
> > > > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > > > >      {
> > > > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > > > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > > > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > > > +      shared_per_thread = shared;
> > > > >
> > > > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > > > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >        if (shared <= 0)
> > > > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > > > >         shared = core;
> > > > > +
> > > > > +      if (shared_per_thread <= 0)
> > > > > +       shared_per_thread = shared;
> > > > >      }
> > > > >
> > > > >    cpu_features->level1_icache_size = level1_icache_size;
> > > > > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > > > >    cpu_features->level4_cache_size = level4_cache_size;
> > > > >
> > > > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > > -     in cache after a maximum temporal copy, which will maintain
> > > > > -     in cache a reasonable portion of the thread's stack and other
> > > > > -     active data. If the threshold is set higher than one thread's
> > > > > -     share of the cache, it has a substantial risk of negatively
> > > > > -     impacting the performance of other threads running on the chip. */
> > > > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > > > +  /* The default setting for the non_temporal threshold is 1/4 of the
> > > > > +     size of the chip's cache. For most Intel and AMD processors with
> > > > > +     an initial release date between 2017 and 2023, the shared L3
> > > > > +     cache is typically 18-64MB. Using 1/4 of the L3 is meant to
> > > > > +     estimate the point where non-temporal stores begin out-competing
> > > > > +     REP MOVSB, as well as the point where the write-back to main
> > > > > +     memory forced by non-temporal stores would already have occurred
> > > > > +     for the majority of the lines in the copy. Note, concerns about
> > > > > +     the entire L3 cache being evicted by the copy are mostly
> > > > > +     alleviated by the fact that modern HW detects streaming patterns
> > > > > +     and provides proper LRU hints so that the maximum thrashing is
> > > > > +     capped at 1/associativity. */
> > > > > +  unsigned long int non_temporal_threshold = shared / 4;
> > > > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > > > +     hint. As well, their performance in highly parallel situations is
> > > > > +     noticeably worse.  */
> > > > > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > > > > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> > > > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > > > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > > > --
> > > > > 2.34.1
> > > > >
> > > >
> > > > Hi All,
> > > >
> > > > I want to backport this series (minus CPUID codes) to 2.28 - 2.37.
> > > >
> > > > The patches I want to backport are:
> > > >
> > > > 1/4
> > > > ```
> > > > commit af992e7abdc9049714da76cae1e5e18bc4838fb8
> > > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > > Date:   Wed Jun 7 13:18:01 2023 -0500
> > > >
> > > >     x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
> > > > ```
> > > >
> > > > 2/4
> > > > ```
> > > > commit 47f747217811db35854ea06741be3685e8bbd44d
> > > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > > Date:   Mon Jul 17 23:14:33 2023 -0500
> > > >
> > > >     x86: Fix slight bug in `shared_per_thread` cache size calculation.
> > > > ```
> > > >
> > > > 3/4
> > > > ```
> > > > commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
> > > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > > Date:   Tue Jul 18 10:27:59 2023 -0500
> > > >
> > > >     [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for
> > > > NT threshold.
> > > > ```
> > > >
> > > > 4/4
> > > > ```
> > > > commit 084fb31bc2c5f95ae0b9e6df4d3cf0ff43471ede (origin/master,
> > > > origin/HEAD, master)
> > > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > > Date:   Thu Aug 10 19:28:24 2023 -0500
> > > >
> > > >     x86: Fix incorrect scope of setting `shared_per_thread` [BZ# 30745]
> > > > ```
> > > >
> > > > The proposed patches are at:
> > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-28
> > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-29
> > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-30
> > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-31
> > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-32
> > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-33
> > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-34
> > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-35
> > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-36
> > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-37
> > > >
> > > > I know the protocol is normally not to backport optimizations, but I'd argue
> > > > these are closer to bug fixes for a severe misconfiguration than a proper
> > > > optimization series.  As well, the risk of introducing a correctness-related
> > > > bug is exceedingly low.
> > > >
> > > > Typically the kind of optimization patch that is discouraged is one that
> > > > actually changes a particular function, i.e. if these fixes were made directly
> > > > to the memmove implementation. These patches, however, don't touch
> > > > any of the memmove code itself; they just re-tune a value used by
> > > > memmove, which seems categorically different.
> > > >
> > > > The value also only informs memmove strategy. If these patches turn
> > > > out to be deeply buggy and set the new threshold incorrectly, the
> > > > blowback is limited to bad performance (which we already have),
> > > > and is extremely unlikely to affect correctness in any way.
> > > >
> > > > Thoughts?
> > >
> > > Ping/Any Objections to me backporting?
> >
> > I am going to take the continued lack of objections to mean no one
> > has issue with me backporting these.
> >
> > I will start backporting next week. Will do so piecemeal to give time
> > > > for issues to emerge before fully committing.
>
> Have done the 2.37 backport. If nothing comes up this week will proceed
> to 2.36 and further.
Have seen no issues with the 2.37 backport.
Going to backport to 2.36/2.35/2.34. If no issues arise, I will complete
backporting through 2.28 next week.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
  2023-09-05 15:37           ` Noah Goldstein
@ 2023-09-12  3:50             ` Noah Goldstein
  0 siblings, 0 replies; 76+ messages in thread
From: Noah Goldstein @ 2023-09-12  3:50 UTC (permalink / raw)
  To: libc-alpha; +Cc: hjl.tools, carlos, DJ Delorie, Carlos O'Donell

On Tue, Sep 5, 2023 at 8:37 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Mon, Aug 28, 2023 at 3:02 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Thu, Aug 24, 2023 at 12:06 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > On Tue, Aug 22, 2023 at 10:11 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > >
> > > > On Mon, Aug 14, 2023 at 6:00 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > > >
> > > > > On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > > > >
> > > > > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > > > > ncores_per_socket'. This patch updates that value to roughly
> > > > > > 'sizeof_L3 / 4`
> > > > > >
> > > > > > The original value (specifically dividing the `ncores_per_socket`) was
> > > > > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > > > > could evict.
> > > > > >
> > > > > > Dividing by 'ncores_per_socket', however, leads to exceedingly low
> > > > > > non-temporal thresholds and to using non-temporal stores in
> > > > > > cases where REP MOVSB is multiple times faster.
> > > > > >
> > > > > > Furthermore, non-temporal stores are written directly to main memory
> > > > > > so using them at a size much smaller than L3 can place soon-to-be-accessed
> > > > > > data much further away than it otherwise could be. As well,
> > > > > > modern machines are able to detect streaming patterns (especially if
> > > > > > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > > > > > in effect caps the total amount of eviction at 1/cache_associativity,
> > > > > > far below meaningfully thrashing the entire cache.
> > > > > >
> > > > > > As best I can tell, the benchmarks that led to this small threshold
> > > > > > were done comparing non-temporal stores versus standard cacheable
> > > > > > stores. A better comparison (linked below) is to REP MOVSB, which,
> > > > > > on the measured systems, is nearly 2x faster than non-temporal stores
> > > > > > at the low-end of the previous threshold, and within 10% for over
> > > > > > 100MB copies (well past even the current threshold). In cases with a
> > > > > > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > > > > > up to `sizeof_L3`.
> > > > > >
> > > > > > The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> > > > > > seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> > > > > > such as Broadwell prefer something closer to `8`. This patch is meant
> > > > > > to be followed up by another one to make the divisor cpu-specific, but
> > > > > > in the meantime (and for easier backporting), this patch settles on
> > > > > > `4` as a middle-ground.
> > > > > >
> > > > > > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > > > > > stores were done using:
> > > > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > > > >
> > > > > > Sheets results (also available in pdf on the github):
> > > > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > > > > Reviewed-by: DJ Delorie <dj@redhat.com>
> > > > > > Reviewed-by: Carlos O'Donell <carlos@redhat.com>
> > > > > > ---
> > > > > >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> > > > > >  1 file changed, 43 insertions(+), 27 deletions(-)
> > > > > >
> > > > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > > > index 877e73d700..3bd3b3ec1b 100644
> > > > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > > > > >  }
> > > > > >
> > > > > >  static void
> > > > > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > > > > >                  long int core)
> > > > > >  {
> > > > > >    unsigned int eax;
> > > > > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > > >    unsigned int family = cpu_features->basic.family;
> > > > > >    unsigned int model = cpu_features->basic.model;
> > > > > >    long int shared = *shared_ptr;
> > > > > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > > > > >    unsigned int threads = *threads_ptr;
> > > > > >    bool inclusive_cache = true;
> > > > > >    bool support_count_mask = true;
> > > > > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > > >        /* Try L2 otherwise.  */
> > > > > >        level  = 2;
> > > > > >        shared = core;
> > > > > > +      shared_per_thread = core;
> > > > > >        threads_l2 = 0;
> > > > > >        threads_l3 = -1;
> > > > > >      }
> > > > > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > > >          }
> > > > > >        else
> > > > > >          {
> > > > > > -intel_bug_no_cache_info:
> > > > > > -          /* Assume that all logical threads share the highest cache
> > > > > > -             level.  */
> > > > > > -          threads
> > > > > > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > > > -              & 0xff);
> > > > > > -        }
> > > > > > -
> > > > > > -        /* Cap usage of highest cache level to the number of supported
> > > > > > -           threads.  */
> > > > > > -        if (shared > 0 && threads > 0)
> > > > > > -          shared /= threads;
> > > > > > +       intel_bug_no_cache_info:
> > > > > > +         /* Assume that all logical threads share the highest cache
> > > > > > +            level.  */
> > > > > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > > > +                    & 0xff);
> > > > > > +
> > > > > > +         /* Get per-thread size of highest level cache.  */
> > > > > > +         if (shared_per_thread > 0 && threads > 0)
> > > > > > +           shared_per_thread /= threads;
> > > > > > +       }
> > > > > >      }
> > > > > >
> > > > > >    /* Account for non-inclusive L2 and L3 caches.  */
> > > > > >    if (!inclusive_cache)
> > > > > >      {
> > > > > >        if (threads_l2 > 0)
> > > > > > -        core /= threads_l2;
> > > > > > +       shared_per_thread += core / threads_l2;
> > > > > >        shared += core;
> > > > > >      }
> > > > > >
> > > > > >    *shared_ptr = shared;
> > > > > > +  *shared_per_thread_ptr = shared_per_thread;
> > > > > >    *threads_ptr = threads;
> > > > > >  }
> > > > > >
> > > > > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > > >    /* Find out what brand of processor.  */
> > > > > >    long int data = -1;
> > > > > >    long int shared = -1;
> > > > > > +  long int shared_per_thread = -1;
> > > > > >    long int core = -1;
> > > > > >    unsigned int threads = 0;
> > > > > >    unsigned long int level1_icache_size = -1;
> > > > > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > > > > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > > > > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > > > > +      shared_per_thread = shared;
> > > > > >
> > > > > >        level1_icache_size
> > > > > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > > > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > > >        level4_cache_size
> > > > > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > > > > >
> > > > > > -      get_common_cache_info (&shared, &threads, core);
> > > > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > > > >      }
> > > > > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > > > > >      {
> > > > > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > > > > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > > > > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > > > > +      shared_per_thread = shared;
> > > > > >
> > > > > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > > > > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > > > > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > > > > >
> > > > > > -      get_common_cache_info (&shared, &threads, core);
> > > > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > > > >      }
> > > > > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > > > > >      {
> > > > > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > > > > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > > > > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > > > > +      shared_per_thread = shared;
> > > > > >
> > > > > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > > > > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > > >        if (shared <= 0)
> > > > > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > > > > >         shared = core;
> > > > > > +
> > > > > > +      if (shared_per_thread <= 0)
> > > > > > +       shared_per_thread = shared;
> > > > > >      }
> > > > > >
> > > > > >    cpu_features->level1_icache_size = level1_icache_size;
> > > > > > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > > > > >    cpu_features->level4_cache_size = level4_cache_size;
> > > > > >
> > > > > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > > > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > > > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > > > -     in cache after a maximum temporal copy, which will maintain
> > > > > > -     in cache a reasonable portion of the thread's stack and other
> > > > > > -     active data. If the threshold is set higher than one thread's
> > > > > > -     share of the cache, it has a substantial risk of negatively
> > > > > > -     impacting the performance of other threads running on the chip. */
> > > > > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > > > > +  /* The default setting for the non_temporal threshold is 1/4 of the
> > > > > > +     size of the chip's cache. For most Intel and AMD processors with
> > > > > > +     an initial release date between 2017 and 2023, the shared L3
> > > > > > +     cache is typically 18-64MB. Using 1/4 of the L3 is meant to
> > > > > > +     estimate the point where non-temporal stores begin out-competing
> > > > > > +     REP MOVSB, as well as the point where the write-back to main
> > > > > > +     memory forced by non-temporal stores would already have occurred
> > > > > > +     for the majority of the lines in the copy. Note, concerns about
> > > > > > +     the entire L3 cache being evicted by the copy are mostly
> > > > > > +     alleviated by the fact that modern HW detects streaming patterns
> > > > > > +     and provides proper LRU hints so that the maximum thrashing is
> > > > > > +     capped at 1/associativity. */
> > > > > > +  unsigned long int non_temporal_threshold = shared / 4;
> > > > > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > > > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > > > > +     hint. As well, their performance in highly parallel situations is
> > > > > > +     noticeably worse.  */
> > > > > > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > > > > > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> > > > > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > > > > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > > > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > > > > --
> > > > > > 2.34.1
> > > > > >
> > > > >
> > > > > Hi All,
> > > > >
> > > > > I want to backport this series (minus CPUID codes) to 2.28 - 2.37.
> > > > >
> > > > > The patches I want to backport are:
> > > > >
> > > > > 1/4
> > > > > ```
> > > > > commit af992e7abdc9049714da76cae1e5e18bc4838fb8
> > > > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > > > Date:   Wed Jun 7 13:18:01 2023 -0500
> > > > >
> > > > >     x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
> > > > > ```
> > > > >
> > > > > 2/4
> > > > > ```
> > > > > commit 47f747217811db35854ea06741be3685e8bbd44d
> > > > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > > > Date:   Mon Jul 17 23:14:33 2023 -0500
> > > > >
> > > > >     x86: Fix slight bug in `shared_per_thread` cache size calculation.
> > > > > ```
> > > > >
> > > > > 3/4
> > > > > ```
> > > > > commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
> > > > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > > > Date:   Tue Jul 18 10:27:59 2023 -0500
> > > > >
> > > > >     [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for
> > > > > NT threshold.
> > > > > ```
> > > > >
> > > > > 4/4
> > > > > ```
> > > > > commit 084fb31bc2c5f95ae0b9e6df4d3cf0ff43471ede (origin/master,
> > > > > origin/HEAD, master)
> > > > > Author: Noah Goldstein <goldstein.w.n@gmail.com>
> > > > > Date:   Thu Aug 10 19:28:24 2023 -0500
> > > > >
> > > > >     x86: Fix incorrect scope of setting `shared_per_thread` [BZ# 30745]
> > > > > ```
> > > > >
> > > > > The proposed patches are at:
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-28
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-29
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-30
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-31
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-32
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-33
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-34
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-35
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-36
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-37
> > > > >
> > > > > I know the protocol is normally not to backport optimizations, but I'd argue
> > > > > these are closer to bug fixes for a severe misconfiguration than a proper
> > > > > optimization series.  As well, the risk of introducing a correctness-related
> > > > > bug is exceedingly low.
> > > > >
> > > > > Typically the kind of optimization patch that is discouraged is one that
> > > > > actually changes a particular function, i.e. if these fixes were made directly
> > > > > to the memmove implementation. These patches, however, don't touch
> > > > > any of the memmove code itself; they just re-tune a value used by
> > > > > memmove, which seems categorically different.
> > > > >
> > > > > The value also only informs memmove strategy. If these patches turn
> > > > > out to be deeply buggy and set the new threshold incorrectly, the
> > > > > blowback is limited to bad performance (which we already have),
> > > > > and is extremely unlikely to affect correctness in any way.
> > > > >
> > > > > Thoughts?
> > > >
> > > > Ping/Any Objections to me backporting?
> > >
> > > I am going to take the continued lack of objections to mean no one
> > > has issue with me backporting these.
> > >
> > > I will start backporting next week. Will do so piecemeal to give time
> > > > > for issues to emerge before fully committing.
> >
> > Have done the 2.37 backport. If nothing comes up this week will proceed
> > to 2.36 and further.
> Have seen no issues with the 2.37 backport.
> Going to backport to 2.36/2.35/2.34. If no issues arise, I will complete
> backporting through 2.28 next week.

I have backported these patches to 2.28-2.33 and expect to be done with this
now.

^ permalink raw reply	[flat|nested] 76+ messages in thread

end of thread, other threads:[~2023-09-12  3:50 UTC | newest]

Thread overview: 76+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-04-24  5:03 [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` Noah Goldstein
2023-04-24 18:09 ` H.J. Lu
2023-04-24 18:34   ` Noah Goldstein
2023-04-24 20:44     ` H.J. Lu
2023-04-24 22:30       ` Noah Goldstein
2023-04-24 22:30 ` [PATCH v2] " Noah Goldstein
2023-04-24 22:48   ` H.J. Lu
2023-04-25  2:05     ` Noah Goldstein
2023-04-25  2:55       ` H.J. Lu
2023-04-25  3:43         ` Noah Goldstein
2023-04-25  3:43 ` [PATCH v3] " Noah Goldstein
2023-04-25 17:42   ` H.J. Lu
2023-04-25 21:45     ` Noah Goldstein
2023-04-25 21:45 ` [PATCH v4] " Noah Goldstein
2023-04-26 15:59   ` H.J. Lu
2023-04-26 17:15     ` Noah Goldstein
2023-05-04  3:28       ` Noah Goldstein
2023-05-05 18:06         ` H.J. Lu
2023-05-09  3:14           ` Noah Goldstein
2023-05-09  3:13 ` [PATCH v5 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
2023-05-09  3:13   ` [PATCH v5 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
2023-05-09 21:58     ` H.J. Lu
2023-05-10  0:33       ` Noah Goldstein
2023-05-09  3:13   ` [PATCH v5 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
2023-05-10  0:33 ` [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
2023-05-10  0:33   ` [PATCH v6 2/4] x86: Refactor Intel `init_cpu_features` Noah Goldstein
2023-05-10 22:13     ` H.J. Lu
2023-05-10 23:17       ` Noah Goldstein
2023-05-11 21:36         ` H.J. Lu
2023-05-12  5:11           ` Noah Goldstein
2023-05-10  0:33   ` [PATCH v6 3/4] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
2023-05-10  0:33   ` [PATCH v6 4/4] x86: Tune 'Saltwell' microarch the same was a 'Bonnell' Noah Goldstein
2023-05-10 22:04     ` H.J. Lu
2023-05-10 22:12       ` Noah Goldstein
2023-05-10 15:55   ` [PATCH v6 1/4] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` H.J. Lu
2023-05-10 16:07     ` Noah Goldstein
2023-05-10 22:12 ` [PATCH v7 2/4] x86: Refactor Intel `init_cpu_features` Noah Goldstein
2023-05-10 22:12   ` [PATCH v7 3/4] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
2023-05-10 22:12   ` [PATCH v7 4/4] x86: Tune 'Saltwell' microarch the same was a 'Bonnell' Noah Goldstein
2023-05-12  5:12     ` Noah Goldstein
2023-05-12  5:10 ` [PATCH v8 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
2023-05-12  5:10   ` [PATCH v8 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
2023-05-12 22:17     ` H.J. Lu
2023-05-13  5:18       ` Noah Goldstein
2023-05-12 22:03 ` [PATCH v8 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
2023-05-13  5:19 ` [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
2023-05-13  5:19   ` [PATCH v9 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
2023-05-15 20:57     ` H.J. Lu
2023-05-26  3:34     ` DJ Delorie
2023-05-27 18:46       ` Noah Goldstein
2023-05-13  5:19   ` [PATCH v9 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
2023-05-26  3:34     ` DJ Delorie
2023-05-27 18:46       ` Noah Goldstein
2023-05-15 18:29   ` [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
2023-05-17 12:00     ` Carlos O'Donell
2023-05-26  3:34   ` DJ Delorie
2023-05-27 18:46 ` [PATCH v10 " Noah Goldstein
2023-05-27 18:46   ` [PATCH v10 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
2023-05-27 18:46   ` [PATCH v10 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
2023-05-31  2:33     ` DJ Delorie
2023-07-10  5:23     ` Sajan Karumanchi
2023-07-10 15:58       ` Noah Goldstein
2023-07-14  2:21         ` Re: Noah Goldstein
2023-07-14  7:39         ` Re: sajan karumanchi
2023-06-07  0:15   ` [PATCH v10 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Carlos O'Donell
2023-06-07 18:18     ` Noah Goldstein
2023-06-07 18:18 ` [PATCH v11 " Noah Goldstein
2023-06-07 18:18   ` [PATCH v11 2/3] x86: Refactor Intel `init_cpu_features` Noah Goldstein
2023-06-07 18:18   ` [PATCH v11 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Noah Goldstein
2023-06-07 18:19   ` [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4` Noah Goldstein
2023-08-14 23:00   ` Noah Goldstein
2023-08-22 15:11     ` Noah Goldstein
2023-08-24 17:06       ` Noah Goldstein
2023-08-28 20:02         ` Noah Goldstein
2023-09-05 15:37           ` Noah Goldstein
2023-09-12  3:50             ` Noah Goldstein

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).