From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <20230424050329.1501348-1-goldstein.w.n@gmail.com>
 <20230607181803.4154764-1-goldstein.w.n@gmail.com>
From: Noah Goldstein
Date: Mon, 11 Sep 2023 20:50:06 -0700
Subject: Re: [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
To: libc-alpha@sourceware.org
Cc: hjl.tools@gmail.com, carlos@systemhalted.org, DJ Delorie, "Carlos O'Donell"
Content-Type: text/plain; charset="UTF-8"
List-Id:

On Tue, Sep 5, 2023 at 8:37 AM Noah Goldstein wrote:
>
> On Mon, Aug 28, 2023 at 3:02 PM Noah Goldstein wrote:
> >
> > On Thu, Aug 24, 2023 at 12:06 PM Noah Goldstein wrote:
> > >
> > > On Tue, Aug 22, 2023 at 10:11 AM Noah Goldstein wrote:
> > > >
> > > > On Mon, Aug 14, 2023 at 6:00 PM Noah Goldstein wrote:
> > > > >
> > > > > On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein wrote:
> > > > > >
> > > > > > Current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
> > > > > > ncores_per_socket'.
This patch updates that value to roughly `sizeof_L3 / 4`.
> > > > > >
> > > > > > The original value (specifically dividing by `ncores_per_socket`) was
> > > > > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > > > > could evict.
> > > > > >
> > > > > > Dividing by 'ncores_per_socket', however, leads to exceedingly low
> > > > > > non-temporal thresholds and to using non-temporal stores in
> > > > > > cases where REP MOVSB is multiple times faster.
> > > > > >
> > > > > > Furthermore, non-temporal stores are written directly to main memory,
> > > > > > so using them at a size much smaller than L3 can place soon-to-be
> > > > > > accessed data much further away than it otherwise could be. As well,
> > > > > > modern machines are able to detect streaming patterns (especially if
> > > > > > REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> > > > > > in effect caps the total amount of eviction at 1/cache_associativity,
> > > > > > far below meaningfully thrashing the entire cache.
> > > > > >
> > > > > > As best I can tell, the benchmarks that led to this small threshold
> > > > > > were done comparing non-temporal stores versus standard cacheable
> > > > > > stores. A better comparison (linked below) is to REP MOVSB which,
> > > > > > on the measured systems, is nearly 2x faster than non-temporal stores
> > > > > > at the low end of the previous threshold, and within 10% for over
> > > > > > 100MB copies (well past even the current threshold). In cases with a
> > > > > > low number of threads competing for bandwidth, REP MOVSB is ~2x faster
> > > > > > up to `sizeof_L3`.
> > > > > >
> > > > > > The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> > > > > > seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> > > > > > such as Broadwell prefer something closer to `8`. This patch is meant
> > > > > > to be followed up by another one to make the divisor cpu-specific, but
> > > > > > in the meantime (and for easier backporting), this patch settles on
> > > > > > `4` as a middle ground.
> > > > > >
> > > > > > Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> > > > > > stores were done using:
> > > > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > > > >
> > > > > > Sheets results (also available in pdf on the github):
> > > > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > > > > Reviewed-by: DJ Delorie
> > > > > > Reviewed-by: Carlos O'Donell
> > > > > > ---
> > > > > >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> > > > > >  1 file changed, 43 insertions(+), 27 deletions(-)
> > > > > >
> > > > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > > > index 877e73d700..3bd3b3ec1b 100644
> > > > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > > > > >  }
> > > > > >
> > > > > >  static void
> > > > > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > > > > >                        long int core)
> > > > > >  {
> > > > > >    unsigned int eax;
> > > > > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > > >    unsigned int family = cpu_features->basic.family;
> > > > > >    unsigned int model = cpu_features->basic.model;
> > > > > >    long int shared = *shared_ptr;
> > > > > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > > > > >    unsigned int threads = *threads_ptr;
> > > > > >    bool inclusive_cache = true;
> > > > > >    bool support_count_mask = true;
> > > > > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > > >        /* Try L2 otherwise.  */
> > > > > >        level  = 2;
> > > > > >        shared = core;
> > > > > > +      shared_per_thread = core;
> > > > > >        threads_l2 = 0;
> > > > > >        threads_l3 = -1;
> > > > > >      }
> > > > > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > > >         }
> > > > > >        else
> > > > > >         {
> > > > > > -intel_bug_no_cache_info:
> > > > > > -         /* Assume that all logical threads share the highest cache
> > > > > > -            level.  */
> > > > > > -         threads
> > > > > > -           = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > > > -              & 0xff);
> > > > > > -       }
> > > > > > -
> > > > > > -      /* Cap usage of highest cache level to the number of supported
> > > > > > -        threads.  */
> > > > > > -      if (shared > 0 && threads > 0)
> > > > > > -       shared /= threads;
> > > > > > +       intel_bug_no_cache_info:
> > > > > > +         /* Assume that all logical threads share the highest cache
> > > > > > +            level.  */
> > > > > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > > > +                    & 0xff);
> > > > > > +
> > > > > > +         /* Get per-thread size of highest level cache.  */
> > > > > > +         if (shared_per_thread > 0 && threads > 0)
> > > > > > +           shared_per_thread /= threads;
> > > > > > +       }
> > > > > >      }
> > > > > >
> > > > > >    /* Account for non-inclusive L2 and L3 caches.  */
> > > > > >    if (!inclusive_cache)
> > > > > >      {
> > > > > >        if (threads_l2 > 0)
> > > > > > -       core /= threads_l2;
> > > > > > +       shared_per_thread += core / threads_l2;
> > > > > >        shared += core;
> > > > > >      }
> > > > > >
> > > > > >    *shared_ptr = shared;
> > > > > > +  *shared_per_thread_ptr = shared_per_thread;
> > > > > >    *threads_ptr = threads;
> > > > > >  }
> > > > > >
> > > > > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > > >    /* Find out what brand of processor.  */
> > > > > >    long int data = -1;
> > > > > >    long int shared = -1;
> > > > > > +  long int shared_per_thread = -1;
> > > > > >    long int core = -1;
> > > > > >    unsigned int threads = 0;
> > > > > >    unsigned long int level1_icache_size = -1;
> > > > > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > > > > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > > > > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > > > > +      shared_per_thread = shared;
> > > > > >
> > > > > >        level1_icache_size
> > > > > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > > > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > > >        level4_cache_size
> > > > > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > > > > >
> > > > > > -      get_common_cache_info (&shared, &threads, core);
> > > > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > > > >      }
> > > > > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > > > > >      {
> > > > > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > > > > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > > > > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > > > > +      shared_per_thread = shared;
> > > > > >
> > > > > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > > > > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > > > > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > > > > >
> > > > > > -      get_common_cache_info (&shared, &threads, core);
> > > > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > > > >      }
> > > > > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > > > > >      {
> > > > > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > > > > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > > > > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > > > > +      shared_per_thread = shared;
> > > > > >
> > > > > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > > > > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > > >        if (shared <= 0)
> > > > > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > > > > >         shared = core;
> > > > > > +
> > > > > > +      if (shared_per_thread <= 0)
> > > > > > +       shared_per_thread = shared;
> > > > > >      }
> > > > > >
> > > > > >    cpu_features->level1_icache_size = level1_icache_size;
> > > > > > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > > > > >    cpu_features->level4_cache_size = level4_cache_size;
> > > > > >
> > > > > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > > > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > > > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > > > -     in cache after a maximum temporal copy, which will maintain
> > > > > > -     in cache a reasonable portion of the thread's stack and other
> > > > > > -     active data. If the threshold is set higher than one thread's
> > > > > > -     share of the cache, it has a substantial risk of negatively
> > > > > > -     impacting the performance of other threads running on the chip.  */
> > > > > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > > > > +  /* The default setting for the non_temporal threshold is 1/4 of the size
> > > > > > +     of the chip's cache. For most Intel and AMD processors with an
> > > > > > +     initial release date between 2017 and 2023, a thread's typical
> > > > > > +     share of the cache is from 18-64MB. Using 1/4 of the L3 is meant to
> > > > > > +     estimate the point where non-temporal stores begin out-competing
> > > > > > +     REP MOVSB, as well as the point where the fact that non-temporal
> > > > > > +     stores are forced back to main memory would have already occurred for
> > > > > > +     the majority of the lines in the copy. Note, concerns about the
> > > > > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > > > > +     by the fact that modern HW detects streaming patterns and
> > > > > > +     provides proper LRU hints so that the maximum thrashing is
> > > > > > +     capped at 1/associativity. */
> > > > > > +  unsigned long int non_temporal_threshold = shared / 4;
> > > > > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > > > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > > > > +     hint. As well, their performance in highly parallel situations is
> > > > > > +     noticeably worse. */
> > > > > > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > > > > > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> > > > > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > > > > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > > > >       if that operation cannot overflow.
Minimum of 0x4040 (16448) because the
> > > > > > --
> > > > > > 2.34.1
> > > > > >
> > > > >
> > > > > Hi All,
> > > > >
> > > > > I want to backport this series (minus CPUID codes) to 2.28 - 2.37.
> > > > >
> > > > > The patches I want to backport are:
> > > > >
> > > > > 1/4
> > > > > ```
> > > > > commit af992e7abdc9049714da76cae1e5e18bc4838fb8
> > > > > Author: Noah Goldstein
> > > > > Date:   Wed Jun 7 13:18:01 2023 -0500
> > > > >
> > > > >     x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
> > > > > ```
> > > > >
> > > > > 2/4
> > > > > ```
> > > > > commit 47f747217811db35854ea06741be3685e8bbd44d
> > > > > Author: Noah Goldstein
> > > > > Date:   Mon Jul 17 23:14:33 2023 -0500
> > > > >
> > > > >     x86: Fix slight bug in `shared_per_thread` cache size calculation.
> > > > > ```
> > > > >
> > > > > 3/4
> > > > > ```
> > > > > commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
> > > > > Author: Noah Goldstein
> > > > > Date:   Tue Jul 18 10:27:59 2023 -0500
> > > > >
> > > > >     [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for
> > > > >     NT threshold.
> > > > > ```
> > > > >
> > > > > 4/4
> > > > > ```
> > > > > commit 084fb31bc2c5f95ae0b9e6df4d3cf0ff43471ede (origin/master, origin/HEAD, master)
> > > > > Author: Noah Goldstein
> > > > > Date:   Thu Aug 10 19:28:24 2023 -0500
> > > > >
> > > > >     x86: Fix incorrect scope of setting `shared_per_thread` [BZ #30745]
> > > > > ```
> > > > >
> > > > > The proposed patches are at:
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-28
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-29
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-30
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-31
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-32
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-33
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-34
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-35
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-36
> > > > > https://gitlab.com/x86-glibc/glibc/-/commits/users/goldsteinn/backport-37
> > > > >
> > > > > I know the protocol is not normally to backport optimizations, but I'd argue
> > > > > these are closer to bug fixes for a severe misconfiguration than a proper
> > > > > optimization series. As well, the risk of introducing a correctness-related
> > > > > bug is exceedingly low.
> > > > >
> > > > > The type of optimization patch that is typically discouraged is the kind
> > > > > that actually changes a particular function, i.e. if these fixes were made
> > > > > directly to the memmove implementation. These patches, however, don't touch
> > > > > any of the memmove code itself, and just re-tune a value used by
> > > > > memmove, which seems categorically different.
> > > > >
> > > > > The value also only informs memmove strategy. If these patches turn
> > > > > out to be deeply buggy and set the new threshold incorrectly, the
> > > > > blowback is limited to bad performance (which we already have),
> > > > > and is extremely unlikely to affect correctness in any way.
> > > > >
> > > > > Thoughts?
> > > >
> > > > Ping/Any objections to me backporting?
> > >
> > > I am going to take the continued lack of objections to mean no one
> > > has an issue with me backporting these.
> > >
> > > I will start backporting next week. Will do so piecemeal to give time
> > > for issues to emerge before fully committing.
> >
> > Have done the 2.37 backport. If nothing comes up this week, will proceed
> > to 2.36 and further.
>
> Have seen no issues with the 2.37 backport.
> Going to backport to 2.36/2.35/2.34. If no issues, will complete backporting
> through 2.28 next week.

I have backported these patches to 2.28-2.33. Expect to be done with the issue now.
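[Editorial note: anyone validating these backports can inspect and override the chosen threshold without rebuilding glibc, via the documented `glibc.cpu.x86_non_temporal_threshold` tunable. The loader path and the `./bench-memcpy` binary below are assumptions that vary per system; this is a sketch, not a definitive recipe.]

```shell
# Print the threshold glibc picked at startup (glibc >= 2.33 supports
# --list-tunables; the loader path varies by distro, so guard it).
LOADER=/lib64/ld-linux-x86-64.so.2
if [ -x "$LOADER" ]; then
  "$LOADER" --list-tunables | grep non_temporal_threshold || true
fi

# Override the threshold for one run of a (hypothetical) benchmark binary,
# e.g. force 16 MiB:
THRESHOLD=$((16 * 1024 * 1024))
echo "GLIBC_TUNABLES=glibc.cpu.x86_non_temporal_threshold=$THRESHOLD ./bench-memcpy"
```

Sweeping the tunable across a few sizes around `sizeof_L3 / 4` is a quick way to check whether the divisor of 4 is sensible on a given microarchitecture.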