public inbox for glibc-cvs@sourceware.org
* [glibc/release/2.29/master] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for NT threshold.
@ 2023-09-12  3:48 Noah Goldstein
From: Noah Goldstein @ 2023-09-12  3:48 UTC (permalink / raw)
  To: glibc-cvs

https://sourceware.org/git/gitweb.cgi?p=glibc.git;h=83ce856e77fc50e213195a3f59aefb1e7dde6d71

commit 83ce856e77fc50e213195a3f59aefb1e7dde6d71
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Fri Aug 11 18:47:17 2023 -0500

    x86: Use `3/4*sizeof(per-thread-L3)` as low bound for NT threshold.
    
    On some machines we end up with incomplete cache information. This can
    make the new calculation of `sizeof(total-L3)/custom-divisor` end up
    lower than intended (and lower than the prior value). So reintroduce
    the old bound as a lower bound to avoid potentially regressing code
    where we don't have complete information to make the decision.
    Reviewed-by: DJ Delorie <dj@redhat.com>
    
    (cherry picked from commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da)

Diff:
---
 sysdeps/x86/cacheinfo.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
index 62986a7327..0080e8dad5 100644
--- a/sysdeps/x86/cacheinfo.c
+++ b/sysdeps/x86/cacheinfo.c
@@ -797,12 +797,21 @@ init_cacheinfo (void)
      modern HW detects streaming patterns and provides proper LRU hints so that
      the maximum thrashing capped at 1/associativity. */
   unsigned long int non_temporal_threshold = shared / 4;
+
+  /* If the computed non_temporal_threshold <= 3/4 * per-thread L3, we most
+     likely have incorrect/incomplete cache info in which case, default to
+     3/4 * per-thread L3 to avoid regressions.  */
+  unsigned long int non_temporal_threshold_lowbound
+      = shared_per_thread * 3 / 4;
+  if (non_temporal_threshold < non_temporal_threshold_lowbound)
+    non_temporal_threshold = non_temporal_threshold_lowbound;
+
   /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
      a higher risk of actually thrashing the cache as they don't have a HW LRU
      hint. As well, their performance in highly parallel situations is
      noticeably worse.  */
   if (!CPU_FEATURES_CPU_P (cpu_features, ERMS))
-    non_temporal_threshold = shared_per_thread * 3 / 4;
+    non_temporal_threshold = non_temporal_threshold_lowbound;
 
   __x86_shared_non_temporal_threshold
     = (cpu_features->non_temporal_threshold != 0
