From: DJ Delorie <dj@redhat.com>
To: Noah Goldstein <goldstein.w.n@gmail.com>
Cc: libc-alpha@sourceware.org, goldstein.w.n@gmail.com,
hjl.tools@gmail.com, carlos@systemhalted.org
Subject: Re: [PATCH v9 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
Date: Thu, 25 May 2023 23:34:42 -0400 [thread overview]
Message-ID: <xnwn0v7okt.fsf@greed.delorie.com> (raw)
In-Reply-To: <20230513051906.1287611-1-goldstein.w.n@gmail.com>
Ignoring the benchmarks for now, technical review...
LGTM.
Reviewed-by: DJ Delorie <dj@redhat.com>
> static void
> -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> long int core)
Adding a new parameter, ok.
> + long int shared_per_thread = *shared_per_thread_ptr;
No NULL check, but all callers pass address-of-variable, so OK.
> + shared_per_thread = core;
Ok.
> @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> }
> else
> {
> -intel_bug_no_cache_info:
> - /* Assume that all logical threads share the highest cache
> - level. */
> - threads
> - = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> - & 0xff);
> - }
> -
> - /* Cap usage of highest cache level to the number of supported
> - threads. */
> - if (shared > 0 && threads > 0)
> - shared /= threads;
> + intel_bug_no_cache_info:
> + /* Assume that all logical threads share the highest cache
> + level. */
> + threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> + & 0xff);
> +
> + /* Get per-thread size of highest level cache. */
> + if (shared_per_thread > 0 && threads > 0)
> + shared_per_thread /= threads;
> + }
> }
Mostly whitespace, but the real change is to modify shared_per_thread
instead of shared. Ok.
> if (threads_l2 > 0)
> - core /= threads_l2;
> + shared_per_thread += core / threads_l2;
> shared += core;
shared is still unscaled; shared_per_thread has the scaled version. Ok.
> }
>
> *shared_ptr = shared;
> + *shared_per_thread_ptr = shared_per_thread;
Ok.
> @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> /* Find out what brand of processor. */
> long int data = -1;
> long int shared = -1;
> + long int shared_per_thread = -1;
Ok.
> @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> + shared_per_thread = shared;
Ok.
> - get_common_cache_info (&shared, &threads, core);
> + get_common_cache_info (&shared, &shared_per_thread, &threads, core);
Ok.
> shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> + shared_per_thread = shared;
Ok.
> - get_common_cache_info (&shared, &threads, core);
> + get_common_cache_info (&shared, &shared_per_thread, &threads, core);
Ok.
> shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> + shared_per_thread = shared;
Ok.
> @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> if (shared <= 0)
> /* No shared L3 cache. All we have is the L2 cache. */
> shared = core;
> +
> + if (shared_per_thread <= 0)
> + shared_per_thread = shared;
Reasonable default, ok.
> @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> cpu_features->level3_cache_linesize = level3_cache_linesize;
> cpu_features->level4_cache_size = level4_cache_size;
>
> - /* The default setting for the non_temporal threshold is 3/4 of one
> - thread's share of the chip's cache. For most Intel and AMD processors
> - with an initial release date between 2017 and 2020, a thread's typical
> - share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> - threshold leaves 125 KBytes to 500 KBytes of the thread's data
> - in cache after a maximum temporal copy, which will maintain
> - in cache a reasonable portion of the thread's stack and other
> - active data. If the threshold is set higher than one thread's
> - share of the cache, it has a substantial risk of negatively
> - impacting the performance of other threads running on the chip. */
> - unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/4 of the
> +     size of the chip's cache. For most Intel and AMD processors with
> +     an initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using 1/4 of the L3 is meant
> +     to estimate the point where non-temporal stores begin outcompeting
> +     REP MOVSB, as well as the point where the write-back to main
> +     memory forced by non-temporal stores would already have occurred
> +     for the majority of the lines in the copy. Note, concerns about
> +     the entire L3 cache being evicted by the copy are mostly
> +     alleviated by the fact that modern HW detects streaming patterns
> +     and provides proper LRU hints so that the maximum thrashing is
> +     capped at 1/associativity. */
> +  unsigned long int non_temporal_threshold = shared / 4;
Changing from 3/4 of the scaled per-thread amount to 1/4 of the total
amount. Ok.
> +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable
> +     stores run a higher risk of actually thrashing the cache as they
> +     don't have a HW LRU hint. As well, their performance in highly
> +     parallel situations is noticeably worse. */
> + if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> + non_temporal_threshold = shared_per_thread * 3 / 4;
This is the same as the old default, so OK.