From: Noah Goldstein
Date: Thu, 20 Apr 2023 15:23:14 -0500
Subject: Re: [PATCH v3] Reversing calculation of __x86_shared_non_temporal_threshold
To: "H.J. Lu"
Cc: GNU C Library
References: <1601072475-22682-1-git-send-email-patrick.mcgehearty@oracle.com>

On Thu, Apr 20, 2023 at 11:18 AM H.J. Lu wrote:
>
> On Wed, Apr 19, 2023 at 5:27 PM Noah Goldstein wrote:
> >
> > On Wed, Apr 19, 2023 at 7:12 PM H.J. Lu wrote:
> > >
> > > On Wed, Apr 19, 2023 at 4:24 PM Noah Goldstein wrote:
> > > >
> > > > On Wed, Apr 19, 2023 at 5:43 PM H.J. Lu wrote:
> > > > >
> > > > > On Wed, Apr 19, 2023 at 3:30 PM Noah Goldstein wrote:
> > > > > >
> > > > > > On Wed, Apr 19, 2023 at 5:26 PM H.J. Lu wrote:
> > > > > > >
> > > > > > > ---------- Forwarded message ---------
> > > > > > > From: Patrick McGehearty via Libc-alpha
> > > > > > > Date: Fri, Sep 25, 2020 at 3:21 PM
> > > > > > > Subject: [PATCH v3] Reversing calculation of __x86_shared_non_temporal_threshold
> > > > > > > To:
> > > > > > >
> > > > > > > The __x86_shared_non_temporal_threshold determines when memcpy on x86
> > > > > > > uses non-temporal stores to avoid pushing other data out of the last
> > > > > > > level cache.
> > > > > > >
> > > > > > > This patch proposes to revert the calculation change made by H.J. Lu's
> > > > > > > patch of June 2, 2017.
> > > > > > >
> > > > > > > H.J. Lu's patch selected a threshold suitable for a single thread
> > > > > > > getting maximum performance. It was tuned using the single-threaded
> > > > > > > large-memcpy micro benchmark on an 8-core processor. That change moved
> > > > > > > the threshold from 3/4 of one thread's share of the cache to 3/4 of
> > > > > > > the entire cache of a multi-threaded system before switching to
> > > > > > > non-temporal stores. Multi-threaded systems with more than a few
> > > > > > > threads are server-class and typically have many active threads. If
> > > > > > > one thread consumes 3/4 of the available cache for all threads, it
> > > > > > > will cause other active threads to have data removed from the cache.
> > > > > > > Two examples show the range of the effect. John McCalpin's widely
> > > > > > > parallel STREAM benchmark, which runs in parallel and fetches data
> > > > > > > sequentially, saw a 20% slowdown with this patch on an internal
> > > > > > > system test of 128 threads. This regression was discovered when
> > > > > > > comparing OL8 performance to OL7. An example that compares normal
> > > > > > > stores to non-temporal stores may be found at
> > > > > > > https://vgatherps.github.io/2018-09-02-nontemporal/. A simple test
> > > > > > > there shows a 4x to 5x slowdown due to a failure to use non-temporal
> > > > > > > stores. These performance losses are most likely to occur when the
> > > > > > > system load is heaviest and good performance is critical.
> > > > > > >
> > > > > > > The tunable x86_non_temporal_threshold can be used to override the
> > > > > > > default for the knowledgeable user who really wants maximum cache
> > > > > > > allocation to a single thread in a multi-threaded system.
> > > > > > > The manual entry for the tunable has been expanded to provide
> > > > > > > more information about its purpose.
> > > > > > >
> > > > > > > modified: sysdeps/x86/cacheinfo.c
> > > > > > > modified: manual/tunables.texi
> > > > > > > ---
> > > > > > >  manual/tunables.texi    |  6 +++++-
> > > > > > >  sysdeps/x86/cacheinfo.c | 16 +++++++++++-----
> > > > > > >  2 files changed, 16 insertions(+), 6 deletions(-)
> > > > > > >
> > > > > > > diff --git a/manual/tunables.texi b/manual/tunables.texi
> > > > > > > index b6bb54d..94d4fbd 100644
> > > > > > > --- a/manual/tunables.texi
> > > > > > > +++ b/manual/tunables.texi
> > > > > > > @@ -364,7 +364,11 @@ set shared cache size in bytes for use in memory
> > > > > > >  and string routines.
> > > > > > >
> > > > > > >  @deftp Tunable glibc.tune.x86_non_temporal_threshold
> > > > > > >  The @code{glibc.tune.x86_non_temporal_threshold} tunable allows the user
> > > > > > > -to set threshold in bytes for non temporal store.
> > > > > > > +to set threshold in bytes for non temporal store. Non temporal stores
> > > > > > > +give a hint to the hardware to move data directly to memory without
> > > > > > > +displacing other data from the cache. This tunable is used by some
> > > > > > > +platforms to determine when to use non temporal stores in operations
> > > > > > > +like memmove and memcpy.
> > > > > > >
> > > > > > >  This tunable is specific to i386 and x86-64.
> > > > > > >  @end deftp
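To make the mechanics concrete: the threshold is consumed as a simple size
cutoff in the memcpy/memmove family, and the tunable (set from the
environment, e.g. `GLIBC_TUNABLES=glibc.tune.x86_non_temporal_threshold=1048576`;
newer glibc releases spell the namespace `glibc.cpu`) simply replaces the
computed default. A rough sketch of the dispatch follows, with invented
helper names; the real logic lives in glibc's assembly memmove
implementations:

```c
#include <stddef.h>

extern long __x86_shared_non_temporal_threshold;

/* Hypothetical helpers standing in for the cacheable and
   non-temporal copy loops.  */
extern void *copy_temporal (void *dst, const void *src, size_t len);
extern void *copy_non_temporal (void *dst, const void *src, size_t len);

/* Schematic dispatch: copies at or above the threshold take the
   non-temporal path so they do not evict the working set.  */
void *
sketch_memcpy (void *dst, const void *src, size_t len)
{
  if (len >= (size_t) __x86_shared_non_temporal_threshold)
    return copy_non_temporal (dst, src, len);
  return copy_temporal (dst, src, len);
}
```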
> > > > > > > diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
> > > > > > > index b9444dd..42b468d 100644
> > > > > > > --- a/sysdeps/x86/cacheinfo.c
> > > > > > > +++ b/sysdeps/x86/cacheinfo.c
> > > > > > > @@ -778,14 +778,20 @@ intel_bug_no_cache_info:
> > > > > > >        __x86_shared_cache_size = shared;
> > > > > > >      }
> > > > > > >
> > > > > > > -  /* The large memcpy micro benchmark in glibc shows that 6 times of
> > > > > > > -     shared cache size is the approximate value above which non-temporal
> > > > > > > -     store becomes faster on a 8-core processor.  This is the 3/4 of the
> > > > > > > -     total shared cache size.  */
> > > > > > > +  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > > > > +     thread's share of the chip's cache. For most Intel and AMD processors
> > > > > > > +     with an initial release date between 2017 and 2020, a thread's typical
> > > > > > > +     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > > > > +     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > > > > +     in cache after a maximum temporal copy, which will maintain
> > > > > > > +     in cache a reasonable portion of the thread's stack and other
> > > > > > > +     active data. If the threshold is set higher than one thread's
> > > > > > > +     share of the cache, it has a substantial risk of negatively
> > > > > > > +     impacting the performance of other threads running on the chip. */
> > > > > > >    __x86_shared_non_temporal_threshold
> > > > > > >      = (cpu_features->non_temporal_threshold != 0
> > > > > > >        ? cpu_features->non_temporal_threshold
> > > > > > > -      : __x86_shared_cache_size * threads * 3 / 4);
> > > > > > > +      : __x86_shared_cache_size * 3 / 4);
> > > > > > >  }
> > > > > > >
> > > > > > >  #endif
> > > > > > > --
> > > > > > > 1.8.3.1
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > H.J.
> > > > > >
> > > > > > I am looking into re-tuning the NT store threshold, which appears to
> > > > > > be too low in many cases.
> > > > > >
> > > > > > I've played around with some micro-benchmarks:
> > > > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > > > >
> > > > > > I am finding that, for the most part, ERMS stays competitive with
> > > > > > NT stores even as core count increases, with heavy read workloads
> > > > > > running on other threads. See:
> > > > > > https://github.com/goldsteinn/memcpy-nt-benchmarks/blob/master/results-skx-pdf/skx-memcpy-4--read.pdf
> > > > > >
> > > > > > I saw https://vgatherps.github.io/2018-09-02-nontemporal/, although
> > > > > > it's not clear how to reproduce the results in the blog. I also see
> > > > > > it was only comparing against standard temporal stores, not ERMS.
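For readers comparing the two strategies discussed here: ERMS is the
enhanced `rep movsb` fast-string copy, while the NT variant uses streaming
stores that bypass the cache hierarchy. A minimal sketch of the two
kernels, loosely modeled on the do_copy_erms/do_copy_nt variants in the
benchmark repo linked above (the implementations there may differ; compile
with -mavx):

```c
#include <immintrin.h>
#include <stddef.h>

/* ERMS copy: a bare `rep movsb`; on CPUs with the ERMS feature the
   microcode turns this into a fast, cache-resident block copy.  */
static void
do_copy_erms (char *dst, const char *src, size_t len)
{
  __asm__ volatile ("rep movsb"
                    : "+D" (dst), "+S" (src), "+c" (len)
                    :
                    : "memory");
}

/* NT copy: 32-byte streaming stores that go around the caches.
   Assumes dst is 32-byte aligned and len is a multiple of 32; a
   real implementation must also handle the unaligned edges.  */
static void
do_copy_nt (char *dst, const char *src, size_t len)
{
  for (size_t i = 0; i < len; i += 32)
    _mm256_stream_si256 ((__m256i *) (dst + i),
                         _mm256_loadu_si256 ((const __m256i *) (src + i)));
  /* NT stores are weakly ordered; fence before the data is reused.  */
  _mm_sfence ();
}
```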
> > > > > >
> > > > > > Does anyone know of benchmarks or an application that can highlight
> > > > > > the L3 clobbering issues brought up in this patch?
> > > > >
> > > > > You can try this:
> > > > >
> > > > > https://github.com/jeffhammond/STREAM
> > > >
> > > > That's the same as a normal memcpy benchmark, no? It's just calling
> > > > something like `tuned_STREAM_Copy()` (memcpy) in a loop, maybe
> > > > scattered with some other reads. Similar to what I was running to get:
> > > > https://github.com/goldsteinn/memcpy-nt-benchmarks/blob/master/results-skx-pdf/skx-memcpy-4--read.pdf
> > >
> > > tuned_STREAM_Copy doesn't use the OpenMP pragma:
> > >
> > > #ifdef TUNED
> > >         tuned_STREAM_Copy();
> > > #else
> > > #pragma omp parallel for
> > >         for (j=0; j<STREAM_ARRAY_SIZE; j++)
> > >             c[j] = a[j];
> > > #endif
> > >
> > > It is single-threaded.
> >
> > ```
> > /* do_copy is either do_copy_erms or do_copy_nt */
> > #define do_copy do_copy_erms
> > void tuned_STREAM_Copy()
> > {
> >     ssize_t j;
> > #pragma omp parallel for
> >     for (j = 0; j < STREAM_ARRAY_SIZE / THREAD_CHUNK_SIZE; j++)
> >         do_copy(c + j * THREAD_CHUNK_SIZE, a + j * THREAD_CHUNK_SIZE,
> >                 THREAD_CHUNK_SIZE);
> > }
> > ```
> >
> > > > Either way, on my ICL using the benchmark:
> > > >
> > > > ```
> > > > ERMS (L3)
> > > > Function    Best Rate MB/s  Avg time     Min time     Max time
> > > > Copy:           323410.5     0.001262     0.001245     0.001285
> > > > Scale:           26367.3     0.017114     0.015271     0.029576
> > > > Add:             29635.9     0.022948     0.020380     0.032384
> > > > Triad:           29401.0     0.021522     0.020543     0.024977
> > > >
> > > > NT (L3)
> > > > Function    Best Rate MB/s  Avg time     Min time     Max time
> > > > Copy:           285375.1     0.001421     0.001411     0.001443
> > > > Scale:           26457.3     0.015358     0.015219     0.015730
> > > > Add:             29753.9     0.020656     0.020299     0.022881
> > > > Triad:           29594.0     0.020732     0.020409     0.022240
> > > >
> > > > ERMS (L3 / 2)
> > > > Function    Best Rate MB/s  Avg time     Min time     Max time
> > > > Copy:           431049.0     0.000620     0.000467     0.001749
> > > > Scale:           27071.0     0.007996     0.007437     0.010018
> > > > Add:             31005.5     0.009864     0.009740     0.010432
> > > > Triad:           30359.7     0.010061     0.009947     0.010434
> > > >
> > > > NT (L3 / 2)
> > > > Function    Best Rate MB/s  Avg time     Min time     Max time
> > > > Copy:           277315.2     0.000746     0.000726     0.000803
> > > > Scale:           27511.1     0.007540     0.007318     0.008739
> > > > Add:             30423.9     0.010116     0.009926     0.011031
> > > > Triad:           30430.5     0.009980     0.009924     0.010097
> > > > ```
> > > >
> > > > Seems to suggest ERMS is favorable.
>
> If we don't have a workload to support the current threshold, should we
> restore the old threshold for processors with ERMS?

How about L3/2?

> --
> H.J.
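For concreteness, the "L3/2 for ERMS processors" idea floated above would
amount to a default selection along these lines (a sketch with invented
names, not actual glibc code):

```c
/* Sketch of the proposed compromise: keep the conservative
   per-thread default, but on ERMS processors allow cacheable
   copies up to half the total shared L3 before switching to NT
   stores.  Names are illustrative, not glibc's.  */
static unsigned long
default_nt_threshold (unsigned long per_thread_share,
                      unsigned long total_shared_cache,
                      int has_erms)
{
  if (has_erms)
    return total_shared_cache / 2;   /* "L3/2" */
  return per_thread_share * 3 / 4;   /* current default */
}
```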