From: Noah Goldstein
Date: Mon, 24 Apr 2023 21:05:28 -0500
Subject: Re: [PATCH v2] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
To: "H.J. Lu"
Cc: libc-alpha@sourceware.org, carlos@systemhalted.org
References: <20230424050329.1501348-1-goldstein.w.n@gmail.com> <20230424223045.2066606-1-goldstein.w.n@gmail.com>

On Mon, Apr 24, 2023 at 5:49 PM H.J. Lu wrote: > > On Mon, Apr 24, 2023 at 3:30 PM Noah Goldstein wrote: > > > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 / > > ncores_per_socket'.
This patch updates that value to roughly > > 'sizeof_L3 / 2' > > > > The original value (specifically dividing the `ncores_per_socket`) was > > done to limit the amount of other threads' data a `memcpy`/`memset` > > could evict. > > > > Dividing by 'ncores_per_socket', however, leads to exceedingly low > > non-temporal thresholds and to using non-temporal stores in > > cases where `rep movsb` is multiple times faster. > > > > Furthermore, non-temporal stores are written directly to main memory, so using > > them at a size much smaller than L3 can place soon-to-be-accessed data > > much further away than it otherwise would be. As well, modern machines > > are able to detect streaming patterns (especially if `rep movsb` is > > used) and provide LRU hints to the memory subsystem. This in effect > > caps the total amount of eviction at 1/cache_associativity, far below > > meaningfully thrashing the entire cache. > > > > As best I can tell, the benchmarks that led to this small threshold > > were done comparing non-temporal stores versus standard cacheable > > stores. A better comparison (linked below) is to `rep movsb`, which, > > on the measured systems, is nearly 2x faster than non-temporal stores > > at the low end of the previous threshold, and within 10% for copies over > > 100MB (well past even the current threshold). In cases with a > > low number of threads competing for bandwidth, `rep movsb` is ~2x > > faster up to `sizeof_L3`. > > > > Because there are still valid concerns about the performance of large > > memcpys using cacheable stores (both their direct performance and their impact on the > > rest of the system), this patch also introduces a > > new tunable, `__x86_shared_non_temporal_threshold_no_erms`, that will > > continue to use the old calculation and will be used if no ERMS memcpy is > > supported by the target. > > > > Benchmarks comparing non-temporal stores, rep movsb, and cacheable > > stores were done using: > > https://github.com/goldsteinn/memcpy-nt-benchmarks > > > > Sheets results (also available as a pdf in the GitHub repo): > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml > > --- > > manual/tunables.texi | 16 +++- > > sysdeps/x86/cacheinfo.h | 8 +- > > sysdeps/x86/dl-cacheinfo.h | 85 +++++++++++++------ > > sysdeps/x86/dl-diagnostics-cpu.c | 2 + > > sysdeps/x86/dl-tunables.list | 3 + > > sysdeps/x86/include/cpu-features.h | 4 +- > > .../multiarch/memmove-vec-unaligned-erms.S | 12 ++- > > 7 files changed, 98 insertions(+), 32 deletions(-) > > > > diff --git a/manual/tunables.texi b/manual/tunables.texi > > index 130f94b2bc..8320e724f0 100644 > > --- a/manual/tunables.texi > > +++ b/manual/tunables.texi > > @@ -52,6 +52,7 @@ glibc.elision.skip_lock_busy: 3 (min: 0, max: 2147483647) > > glibc.malloc.top_pad: 0x20000 (min: 0x0, max: 0xffffffffffffffff) > > glibc.cpu.x86_rep_stosb_threshold: 0x800 (min: 0x1, max: 0xffffffffffffffff) > > glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff) > > +glibc.cpu.x86_non_temporal_threshold_no_erms: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff) > > We don't need this. We can use > > if (CPU_FEATURE_USABLE_P (cpu_features, ERMS)) > > to check for ERMS processors. > Ah, makes sense. Does that work for FSRM as well?
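To make sure I'm reading you right, something along these lines in dl_init_cacheinfo (a rough sketch only, reusing the `shared` and `shared_per_thread` values this patch already computes; the FSRM question is whether it needs the same treatment):

    unsigned long int non_temporal_threshold = shared / 2;
    /* Sketch: without ERMS fall back to the old per-thread bound instead of
       exporting a second tunable.  FSRM could be tested the same way via
       CPU_FEATURE_USABLE_P (cpu_features, FSRM) if it matters here.  */
    if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
      non_temporal_threshold = shared_per_thread * 3 / 4;

That would let the new tunable and the `_no_erms` global be dropped entirely.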
> > glibc.cpu.x86_shstk: > > glibc.pthread.stack_cache_size: 0x2800000 (min: 0x0, max: 0xffffffffffffffff) > > glibc.cpu.hwcap_mask: 0x6 (min: 0x0, max: 0xffffffffffffffff) > > @@ -486,7 +487,8 @@ thread stack originally backup by Huge Pages to default pages. > > @cindex shared_cache_size tunables > > @cindex tunables, shared_cache_size > > @cindex non_temporal_threshold tunables > > -@cindex tunables, non_temporal_threshold > > +@cindex non_temporal_threshold_no_erms tunables > > +@cindex tunables, non_temporal_threshold, non_temporal_threshold_no_erms > > > > @deftp {Tunable namespace} glibc.cpu > > Behavior of @theglibc{} can be tuned to assume specific hardware capabilities > > @@ -559,6 +561,18 @@ like memmove and memcpy. > > This tunable is specific to i386 and x86-64. > > @end deftp > > > > +@deftp Tunable glibc.cpu.x86_non_temporal_threshold_no_erms > > +The @code{glibc.cpu.x86_non_temporal_threshold_no_erms} tunable is similar to > > +the above, but is used specifically when the ERMS feature is not > > +available. ERMS functions are often implemented with optimizations for > > +large streaming workloads. This often makes them a better choice than > > +non-temporal stores for a wider range of values. When ERMS is not > > +available, however, non-temporal stores become preferable at a much > > +lower threshold. > > + > > +This tunable is specific to i386 and x86-64. > > +@end deftp > > + > > @deftp Tunable glibc.cpu.x86_rep_movsb_threshold > > The @code{glibc.cpu.x86_rep_movsb_threshold} tunable allows the user to > > set threshold in bytes to start using "rep movsb". The value must be > > diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h > > index ec1bc142c4..1083bd6018 100644 > > --- a/sysdeps/x86/cacheinfo.h > > +++ b/sysdeps/x86/cacheinfo.h > > @@ -35,9 +35,12 @@ long int __x86_data_cache_size attribute_hidden = 32 * 1024; > > long int __x86_shared_cache_size_half attribute_hidden = 1024 * 1024 / 2; > > long int __x86_shared_cache_size attribute_hidden = 1024 * 1024; > > > > -/* Threshold to use non temporal store. */ > > +/* Threshold to use non temporal store if ERMS is available. */ > > long int __x86_shared_non_temporal_threshold attribute_hidden; > > > > +/* Threshold to use non temporal store if ERMS is not available. */ > > +long int __x86_shared_non_temporal_threshold_no_erms attribute_hidden; > > + > > /* Threshold to use Enhanced REP MOVSB. 
*/ > > long int __x86_rep_movsb_threshold attribute_hidden =3D 2048; > > > > @@ -77,6 +80,9 @@ init_cacheinfo (void) > > __x86_shared_non_temporal_threshold > > =3D cpu_features->non_temporal_threshold; > > > > + __x86_shared_non_temporal_threshold_no_erms > > + =3D cpu_features->non_temporal_threshold_no_erms; > > + > > __x86_rep_movsb_threshold =3D cpu_features->rep_movsb_threshold; > > __x86_rep_stosb_threshold =3D cpu_features->rep_stosb_threshold; > > __x86_rep_movsb_stop_threshold =3D cpu_features->rep_movsb_stop_thr= eshold; > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h > > index ec88945b39..94d5c6183a 100644 > > --- a/sysdeps/x86/dl-cacheinfo.h > > +++ b/sysdeps/x86/dl-cacheinfo.h > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name) > > } > > > > static void > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr= , > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thr= ead_ptr, unsigned int *threads_ptr, > > long int core) > > { > > unsigned int eax; > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsign= ed int *threads_ptr, > > unsigned int family =3D cpu_features->basic.family; > > unsigned int model =3D cpu_features->basic.model; > > long int shared =3D *shared_ptr; > > + long int shared_per_thread =3D *shared_per_thread_ptr; > > unsigned int threads =3D *threads_ptr; > > bool inclusive_cache =3D true; > > bool support_count_mask =3D true; > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsign= ed int *threads_ptr, > > /* Try L2 otherwise. */ > > level =3D 2; > > shared =3D core; > > + shared_per_thread =3D core; > > threads_l2 =3D 0; > > threads_l3 =3D -1; > > } > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsi= gned int *threads_ptr, > > } > > else > > { > > -intel_bug_no_cache_info: > > - /* Assume that all logical threads share the highest cache > > - level. */ > > - threads > > - =3D ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 1= 6) > > - & 0xff); > > - } > > - > > - /* Cap usage of highest cache level to the number of supported > > - threads. */ > > - if (shared > 0 && threads > 0) > > - shared /=3D threads; > > + intel_bug_no_cache_info: > > + /* Assume that all logical threads share the highest cache > > + level. */ > > + threads =3D ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx= >> 16) > > + & 0xff); > > + > > + /* Get per-thread size of highest level cache. */ > > + if (shared_per_thread > 0 && threads > 0) > > + shared_per_thread /=3D threads; > > + } > > } > > > > /* Account for non-inclusive L2 and L3 caches. */ > > if (!inclusive_cache) > > { > > if (threads_l2 > 0) > > - core /=3D threads_l2; > > + shared_per_thread +=3D core / threads_l2; > > shared +=3D core; > > } > > > > *shared_ptr =3D shared; > > + *shared_per_thread_ptr =3D shared_per_thread; > > *threads_ptr =3D threads; > > } > > > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_feature= s) > > /* Find out what brand of processor. 
*/ > > long int data =3D -1; > > long int shared =3D -1; > > + long int shared_per_thread =3D -1; > > long int core =3D -1; > > unsigned int threads =3D 0; > > unsigned long int level1_icache_size =3D -1; > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_feature= s) > > data =3D handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features); > > core =3D handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features); > > shared =3D handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features); > > + shared_per_thread =3D shared; > > > > level1_icache_size > > =3D handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features); > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_featu= res) > > level4_cache_size > > =3D handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features); > > > > - get_common_cache_info (&shared, &threads, core); > > + get_common_cache_info (&shared, &shared_per_thread, &threads, co= re); > > } > > else if (cpu_features->basic.kind =3D=3D arch_kind_zhaoxin) > > { > > data =3D handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE); > > core =3D handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE); > > shared =3D handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE); > > + shared_per_thread =3D shared; > > > > level1_icache_size =3D handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE); > > level1_icache_linesize =3D handle_zhaoxin (_SC_LEVEL1_ICACHE_LIN= ESIZE); > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_featu= res) > > level3_cache_assoc =3D handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC); > > level3_cache_linesize =3D handle_zhaoxin (_SC_LEVEL3_CACHE_LINES= IZE); > > > > - get_common_cache_info (&shared, &threads, core); > > + get_common_cache_info (&shared, &shared_per_thread, &threads, co= re); > > } > > else if (cpu_features->basic.kind =3D=3D arch_kind_amd) > > { > > data =3D handle_amd (_SC_LEVEL1_DCACHE_SIZE); > > core =3D handle_amd (_SC_LEVEL2_CACHE_SIZE); > > shared =3D handle_amd (_SC_LEVEL3_CACHE_SIZE); > > + shared_per_thread =3D shared; > > > > level1_icache_size =3D handle_amd (_SC_LEVEL1_ICACHE_SIZE); > > level1_icache_linesize =3D handle_amd (_SC_LEVEL1_ICACHE_LINESIZ= E); > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_feature= s) > > if (shared <=3D 0) > > /* No shared L3 cache. All we have is the L2 cache. */ > > shared =3D core; > > + > > + if (shared_per_thread <=3D 0) > > + shared_per_thread =3D shared; > > } > > > > cpu_features->level1_icache_size =3D level1_icache_size; > > @@ -730,17 +738,24 @@ dl_init_cacheinfo (struct cpu_features *cpu_featu= res) > > cpu_features->level3_cache_linesize =3D level3_cache_linesize; > > cpu_features->level4_cache_size =3D level4_cache_size; > > > > - /* The default setting for the non_temporal threshold is 3/4 of one > > - thread's share of the chip's cache. For most Intel and AMD proces= sors > > - with an initial release date between 2017 and 2020, a thread's ty= pical > > - share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4 > > - threshold leaves 125 KBytes to 500 KBytes of the thread's data > > - in cache after a maximum temporal copy, which will maintain > > - in cache a reasonable portion of the thread's stack and other > > - active data. If the threshold is set higher than one thread's > > - share of the cache, it has a substantial risk of negatively > > - impacting the performance of other threads running on the chip. *= / > > - unsigned long int non_temporal_threshold =3D shared * 3 / 4; > > + /* The default setting for the non_temporal threshold is 1/2 of size > > + of chip's cache. 
For most Intel and AMD processors with an > > + initial release date between 2017 and 2023, a thread's typical > > + share of the cache is from 18-64MB. Using 1/2 of the L3 is meant to > > + estimate the point where non-temporal stores begin outcompeting > > + other methods. As well as the point where the eviction to main > > + memory that non-temporal stores force would already have occurred > > + for the majority of the lines in the copy. Note, concerns about the > > + entire L3 cache being evicted by the copy are mostly alleviated > > + by the fact that modern HW detects streaming patterns and > > + provides proper LRU hints so that the maximum thrashing is > > + capped at 1/associativity. */ > > + unsigned long int non_temporal_threshold = shared / 2; > > + /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run > > + a higher risk of actually thrashing the cache as they don't have a HW LRU > > + hint. As well, their performance in highly parallel situations is > > + noticeably worse. */ > > + unsigned long int non_temporal_threshold_no_erms = shared_per_thread * 3 / 4; > > /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of > > 'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best > > if that operation cannot overflow. Minimum of 0x4040 (16448) because the > > @@ -754,6 +769,11 @@ dl_init_cacheinfo (struct cpu_features *cpu_features) > > else if (non_temporal_threshold > maximum_non_temporal_threshold) > > non_temporal_threshold = maximum_non_temporal_threshold; > > > > + if (non_temporal_threshold_no_erms < minimum_non_temporal_threshold) > > + non_temporal_threshold_no_erms = minimum_non_temporal_threshold; > > + else if (non_temporal_threshold_no_erms > maximum_non_temporal_threshold) > > + non_temporal_threshold_no_erms = maximum_non_temporal_threshold; > > + > > /* NB: The REP MOVSB threshold must be greater than VEC_SIZE * 8. 
*= / > > unsigned int minimum_rep_movsb_threshold; > > /* NB: The default REP MOVSB threshold is 4096 * (VEC_SIZE / 16) for > > @@ -802,6 +822,12 @@ dl_init_cacheinfo (struct cpu_features *cpu_featur= es) > > && tunable_size <=3D maximum_non_temporal_threshold) > > non_temporal_threshold =3D tunable_size; > > > > + tunable_size > > + =3D TUNABLE_GET (x86_non_temporal_threshold_no_erms, long int, N= ULL); > > + if (tunable_size > minimum_non_temporal_threshold > > + && tunable_size <=3D maximum_non_temporal_threshold) > > + non_temporal_threshold_no_erms =3D tunable_size; > > + > > tunable_size =3D TUNABLE_GET (x86_rep_movsb_threshold, long int, NUL= L); > > if (tunable_size > minimum_rep_movsb_threshold) > > rep_movsb_threshold =3D tunable_size; > > @@ -817,6 +843,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_feature= s) > > TUNABLE_SET_WITH_BOUNDS (x86_non_temporal_threshold, non_temporal_th= reshold, > > minimum_non_temporal_threshold, > > maximum_non_temporal_threshold); > > + TUNABLE_SET_WITH_BOUNDS ( > > + x86_non_temporal_threshold_no_erms, non_temporal_threshold_no_er= ms, > > + minimum_non_temporal_threshold, maximum_non_temporal_threshold); > > TUNABLE_SET_WITH_BOUNDS (x86_rep_movsb_threshold, rep_movsb_threshol= d, > > minimum_rep_movsb_threshold, SIZE_MAX); > > TUNABLE_SET_WITH_BOUNDS (x86_rep_stosb_threshold, rep_stosb_threshol= d, 1, > > @@ -837,6 +866,8 @@ dl_init_cacheinfo (struct cpu_features *cpu_feature= s) > > cpu_features->data_cache_size =3D data; > > cpu_features->shared_cache_size =3D shared; > > cpu_features->non_temporal_threshold =3D non_temporal_threshold; > > + cpu_features->non_temporal_threshold_no_erms > > + =3D non_temporal_threshold_no_erms; > > cpu_features->rep_movsb_threshold =3D rep_movsb_threshold; > > cpu_features->rep_stosb_threshold =3D rep_stosb_threshold; > > cpu_features->rep_movsb_stop_threshold =3D rep_movsb_stop_threshold; > > diff --git a/sysdeps/x86/dl-diagnostics-cpu.c b/sysdeps/x86/dl-diagnost= ics-cpu.c > > index a1578e4665..5c09472a10 100644 > > --- a/sysdeps/x86/dl-diagnostics-cpu.c > > +++ b/sysdeps/x86/dl-diagnostics-cpu.c > > @@ -83,6 +83,8 @@ _dl_diagnostics_cpu (void) > > cpu_features->shared_cache_size); > > print_cpu_features_value ("non_temporal_threshold", > > cpu_features->non_temporal_threshold); > > + print_cpu_features_value ("non_temporal_threshold_no_erms", > > + cpu_features->non_temporal_threshold_no_erm= s); > > print_cpu_features_value ("rep_movsb_threshold", > > cpu_features->rep_movsb_threshold); > > print_cpu_features_value ("rep_movsb_stop_threshold", > > diff --git a/sysdeps/x86/dl-tunables.list b/sysdeps/x86/dl-tunables.lis= t > > index feb7004036..aac6341716 100644 > > --- a/sysdeps/x86/dl-tunables.list > > +++ b/sysdeps/x86/dl-tunables.list > > @@ -30,6 +30,9 @@ glibc { > > x86_non_temporal_threshold { > > type: SIZE_T > > } > > + x86_non_temporal_threshold_no_erms { > > + type: SIZE_T > > + } > > x86_rep_movsb_threshold { > > type: SIZE_T > > # Since there is overhead to set up REP MOVSB operation, REP > > diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/c= pu-features.h > > index 40b8129d6a..df6c561eac 100644 > > --- a/sysdeps/x86/include/cpu-features.h > > +++ b/sysdeps/x86/include/cpu-features.h > > @@ -913,8 +913,10 @@ struct cpu_features > > /* Shared cache size for use in memory and string routines, typicall= y > > L2 or L3 size. */ > > unsigned long int shared_cache_size; > > - /* Threshold to use non temporal store. 
 */ > > + /* Threshold to use non temporal store if ERMS is available. */ > > unsigned long int non_temporal_threshold; > > + /* Threshold to use non temporal store if ERMS is not available. */ > > + unsigned long int non_temporal_threshold_no_erms; > > /* Threshold to use "rep movsb". */ > > unsigned long int rep_movsb_threshold; > > /* Threshold to stop using "rep movsb". */ > > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S > > index d1b92785b0..856c3daf3b 100644 > > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S > > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S > > @@ -424,8 +424,16 @@ L(more_8x_vec): > > jb L(more_8x_vec_backward_check_nop) > > /* Check if non-temporal move candidate. */ > > #if (defined USE_MULTIARCH || VEC_SIZE == 16) && IS_IN (libc) > > - /* Check non-temporal store threshold. */ > > - cmp __x86_shared_non_temporal_threshold(%rip), %RDX_LP > > + /* Check non-temporal store threshold if ERMS is not available. > > + NB: This path is only hit if we jumped here from L(more_2x_vec). > > + If we went to L(movsb), then we either enter the forward loop > > + directly or go to the backward loop. > > + > > + WARNING: `__x86_shared_non_temporal_threshold_no_erms` should > > + NEVER be used in a control flow that could come from > > + L(movsb_more_2x_vec) without checking > > + `__x86_rep_movsb_threshold` first. */ > > + cmp __x86_shared_non_temporal_threshold_no_erms(%rip), %RDX_LP > > ja L(large_memcpy_2x) > > #endif > > /* To reach this point there cannot be overlap and dst > src. So > > -- > > 2.34.1 > > > > > -- > H.J.
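To make the size of this change concrete, a hypothetical example (numbers picked purely for illustration, not taken from the benchmark machines): with a 32 MiB shared L3 and 16 cores per socket, the two formulas work out to

    old: 3/4 * 32 MiB / 16 = 1.5 MiB
    new: 32 MiB / 2        = 16 MiB

so the non-temporal cutoff moves up by roughly 10x, which per the linked sheets is the range where `rep movsb` was the faster option.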