From: "H.J. Lu"
Date: Thu, 20 Apr 2023 09:17:41 -0700
Subject: Re: [PATCH v3] Reversing calculation of __x86_shared_non_temporal_threshold
To: Noah Goldstein
Cc: GNU C Library

On Wed, Apr 19, 2023 at 5:27 PM Noah Goldstein wrote:
>
> On Wed, Apr 19, 2023 at 7:12 PM H.J. Lu wrote:
> >
> > On Wed, Apr 19, 2023 at 4:24 PM Noah Goldstein wrote:
> > >
> > > On Wed, Apr 19, 2023 at 5:43 PM H.J. Lu wrote:
> > > >
> > > > On Wed, Apr 19, 2023 at 3:30 PM Noah Goldstein wrote:
> > > > >
> > > > > On Wed, Apr 19, 2023 at 5:26 PM H.J. Lu wrote:
> > > > > >
> > > > > > ---------- Forwarded message ---------
> > > > > > From: Patrick McGehearty via Libc-alpha
> > > > > > Date: Fri, Sep 25, 2020 at 3:21 PM
> > > > > > Subject: [PATCH v3] Reversing calculation of __x86_shared_non_temporal_threshold
> > > > > > To:
> > > > > >
> > > > > >
> > > > > > The __x86_shared_non_temporal_threshold determines when memcpy on x86
> > > > > > uses non_temporal stores to avoid pushing other data out of the last
> > > > > > level cache.
> > > > > >
> > > > > > This patch proposes to revert the calculation change made by H.J. Lu's
> > > > > > patch of June 2, 2017.
> > > > > >
> > > > > > H.J. Lu's patch selected a threshold suitable for a single thread
> > > > > > getting maximum performance. It was tuned using the single threaded
> > > > > > large memcpy micro benchmark on an 8 core processor. The last change
> > > > > > changes the threshold from using 3/4 of one thread's share of the
> > > > > > cache to using 3/4 of the entire cache of a multi-threaded system
> > > > > > before switching to non-temporal stores. Multi-threaded systems with
> > > > > > more than a few threads are server-class and typically have many
> > > > > > active threads. If one thread consumes 3/4 of the available cache for
> > > > > > all threads, it will cause other active threads to have data removed
> > > > > > from the cache. Two examples show the range of the effect. John
> > > > > > McCalpin's widely parallel Stream benchmark, which runs in parallel
> > > > > > and fetches data sequentially, saw a 20% slowdown with this patch on
> > > > > > an internal system test of 128 threads. This regression was discovered
> > > > > > when comparing OL8 performance to OL7. An example that compares
> > > > > > normal stores to non-temporal stores may be found at
> > > > > > https://vgatherps.github.io/2018-09-02-nontemporal/. A simple test
> > > > > > shows performance loss of 400 to 500% due to a failure to use
> > > > > > nontemporal stores. These performance losses are most likely to occur
> > > > > > when the system load is heaviest and good performance is critical.
> > > > > >
> > > > > > The tunable x86_non_temporal_threshold can be used to override the
> > > > > > default for the knowledgable user who really wants maximum cache
> > > > > > allocation to a single thread in a multi-threaded system.
> > > > > > The manual entry for the tunable has been expanded to provide
> > > > > > more information about its purpose.
> > > > > >
> > > > > >         modified: sysdeps/x86/cacheinfo.c
> > > > > >         modified: manual/tunables.texi
> > > > > > ---
> > > > > >  manual/tunables.texi    |  6 +++++-
> > > > > >  sysdeps/x86/cacheinfo.c | 16 +++++++++++-----
> > > > > >  2 files changed, 16 insertions(+), 6 deletions(-)
> > > > > >
> > > > > > diff --git a/manual/tunables.texi b/manual/tunables.texi
> > > > > > index b6bb54d..94d4fbd 100644
> > > > > > --- a/manual/tunables.texi
> > > > > > +++ b/manual/tunables.texi
> > > > > > @@ -364,7 +364,11 @@ set shared cache size in bytes for use in memory
> > > > > >  and string routines.
> > > > > >
> > > > > >  @deftp Tunable glibc.tune.x86_non_temporal_threshold
> > > > > >  The @code{glibc.tune.x86_non_temporal_threshold} tunable allows the user
> > > > > > -to set threshold in bytes for non temporal store.
> > > > > > +to set threshold in bytes for non temporal store. Non temporal stores
> > > > > > +give a hint to the hardware to move data directly to memory without
> > > > > > +displacing other data from the cache. This tunable is used by some
> > > > > > +platforms to determine when to use non temporal stores in operations
> > > > > > +like memmove and memcpy.
> > > > > >
> > > > > >  This tunable is specific to i386 and x86-64.
> > > > > >  @end deftp
> > > > > > diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
> > > > > > index b9444dd..42b468d 100644
> > > > > > --- a/sysdeps/x86/cacheinfo.c
> > > > > > +++ b/sysdeps/x86/cacheinfo.c
> > > > > > @@ -778,14 +778,20 @@ intel_bug_no_cache_info:
> > > > > >        __x86_shared_cache_size = shared;
> > > > > >      }
> > > > > >
> > > > > > -  /* The large memcpy micro benchmark in glibc shows that 6 times of
> > > > > > -     shared cache size is the approximate value above which non-temporal
> > > > > > -     store becomes faster on a 8-core processor.  This is the 3/4 of the
> > > > > > -     total shared cache size.  */
> > > > > > +  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > > > +     thread's share of the chip's cache. For most Intel and AMD processors
> > > > > > +     with an initial release date between 2017 and 2020, a thread's typical
> > > > > > +     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > > > +     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > > > +     in cache after a maximum temporal copy, which will maintain
> > > > > > +     in cache a reasonable portion of the thread's stack and other
> > > > > > +     active data. If the threshold is set higher than one thread's
> > > > > > +     share of the cache, it has a substantial risk of negatively
> > > > > > +     impacting the performance of other threads running on the chip. */
> > > > > >    __x86_shared_non_temporal_threshold
> > > > > >      = (cpu_features->non_temporal_threshold != 0
> > > > > >        ? cpu_features->non_temporal_threshold
> > > > > > -      : __x86_shared_cache_size * threads * 3 / 4);
> > > > > > +      : __x86_shared_cache_size * 3 / 4);
> > > > > >  }
> > > > > >
> > > > > >  #endif
> > > > > > --
> > > > > > 1.8.3.1
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > H.J.
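To make the size of the change concrete, here is a small standalone sketch of the two default formulas from the cacheinfo.c hunk above. The cache size and thread count are hypothetical example values for illustration, not measurements from any particular part:

```
#include <stdio.h>

int
main (void)
{
  /* Hypothetical example: a 32 MiB shared L3 and 16 hardware threads
     sharing it (both numbers are assumptions for illustration).  */
  unsigned long total_l3 = 32UL * 1024 * 1024;
  unsigned long threads = 16;

  /* cacheinfo.c keeps one thread's share in __x86_shared_cache_size.  */
  unsigned long shared = total_l3 / threads;                     /* 2 MiB   */

  /* Default after this patch: 3/4 of one thread's share.  */
  unsigned long per_thread_default = shared * 3 / 4;             /* 1.5 MiB */

  /* Default before this patch: 3/4 of the entire shared cache.  */
  unsigned long whole_cache_default = shared * threads * 3 / 4;  /* 24 MiB  */

  printf ("one thread's share:        %lu bytes\n", shared);
  printf ("3/4 of one thread's share: %lu bytes\n", per_thread_default);
  printf ("3/4 of the whole cache:    %lu bytes\n", whole_cache_default);
  return 0;
}
```

Either default can still be overridden at process startup through the
glibc.tune.x86_non_temporal_threshold tunable in the GLIBC_TUNABLES
environment variable.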
> > > > >
> > > > > I am looking into re-tuning the NT store threshold which appears to be
> > > > > too low in many cases.
> > > > >
> > > > > I've played around with some micro-benchmarks:
> > > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > > >
> > > > > I am finding that for the most part, ERMS stays competitive with
> > > > > NT-Stores even as core count increases with heavy read workloads going
> > > > > on on other threads.
> > > > > See: https://github.com/goldsteinn/memcpy-nt-benchmarks/blob/master/results-skx-pdf/skx-memcpy-4--read.pdf
> > > > >
> > > > > I saw: https://vgatherps.github.io/2018-09-02-nontemporal/ although
> > > > > it's not clear how to reproduce the results in the blog. I also see it
> > > > > was only comparing vs standard temporal stores, not ERMS.
> > > > >
> > > > > Does anyone know of benchmarks or an application that can highlight
> > > > > the L3 clobbering issues brought up in this patch?
> > > >
> > > > You can try this:
> > > >
> > > > https://github.com/jeffhammond/STREAM
> > >
> > > That's the same as a normal memcpy benchmark no? Its just calling
> > > something like `tuned_STREAM_Copy()` (memcpy) in a loop maybe
> > > scattered with some other reads. Similar to what I was running to get:
> > > https://github.com/goldsteinn/memcpy-nt-benchmarks/blob/master/results-skx-pdf/skx-memcpy-4--read.pdf
> >
> > tuned_STREAM_Copy doesn't use OPENMP pragma:
> >
> > #ifdef TUNED
> >         tuned_STREAM_Copy();
> > #else
> > #pragma omp parallel for
> >       for (j=0; j<STREAM_ARRAY_SIZE; j++)
> >           c[j] = a[j];
> > #endif
> >
> > It is single-threaded.
> >
> ```
> #define do_copy //do_copy_erms / do_copy_nt
> void tuned_STREAM_Copy()
> {
>     ssize_t j;
> #pragma omp parallel for
>     for (j=0; j<...; j++)
>         do_copy(c + j * THREAD_CHUNK_SIZE, a + j *
> THREAD_CHUNK_SIZE, THREAD_CHUNK_SIZE);
> }
> ```
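The do_copy_erms / do_copy_nt variants named in the comment above are not
spelled out in the mail. As a rough sketch of what such kernels usually look
like (illustrative only, not the code from the memcpy-nt-benchmarks
repository, and ignoring alignment and tail handling):

```
#include <stddef.h>
#include <immintrin.h>

/* ERMS copy: a bare "rep movsb"; on CPUs with the ERMS feature the
   microcode turns this into a fast cache-line-granular copy.  */
static void
do_copy_erms (void *dst, const void *src, size_t len)
{
  __asm__ volatile ("rep movsb"
                    : "+D" (dst), "+S" (src), "+c" (len)
                    :
                    : "memory");
}

/* Non-temporal copy: 32-byte streaming stores that bypass the cache.
   Assumes dst is 32-byte aligned and len is a multiple of 32 to keep
   the sketch short; needs AVX (-mavx) to compile.  */
static void
do_copy_nt (void *dst, const void *src, size_t len)
{
  char *d = (char *) dst;
  const char *s = (const char *) src;
  for (size_t i = 0; i < len; i += 32)
    {
      __m256i v = _mm256_loadu_si256 ((const __m256i *) (s + i));
      _mm256_stream_si256 ((__m256i *) (d + i), v);
    }
  _mm_sfence ();  /* Order the weakly-ordered NT stores before later stores.  */
}
```

The relevant difference for this thread is the store side: the ERMS path
leaves the destination lines in cache, while the NT path does not, which is
exactly what the threshold decides between.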
> > > Either way on my ICL using the benchmark:
> > >
> > > ```
> > > ERMS (L3)
> > > Function    Best Rate MB/s  Avg time     Min time     Max time
> > > Copy:          323410.5     0.001262     0.001245     0.001285
> > > Scale:          26367.3     0.017114     0.015271     0.029576
> > > Add:            29635.9     0.022948     0.020380     0.032384
> > > Triad:          29401.0     0.021522     0.020543     0.024977
> > >
> > > NT (L3)
> > > Function    Best Rate MB/s  Avg time     Min time     Max time
> > > Copy:          285375.1     0.001421     0.001411     0.001443
> > > Scale:          26457.3     0.015358     0.015219     0.015730
> > > Add:            29753.9     0.020656     0.020299     0.022881
> > > Triad:          29594.0     0.020732     0.020409     0.022240
> > >
> > >
> > > ERMS (L3 / 2)
> > > Function    Best Rate MB/s  Avg time     Min time     Max time
> > > Copy:          431049.0     0.000620     0.000467     0.001749
> > > Scale:          27071.0     0.007996     0.007437     0.010018
> > > Add:            31005.5     0.009864     0.009740     0.010432
> > > Triad:          30359.7     0.010061     0.009947     0.010434
> > >
> > > NT (L3 / 2)
> > > Function    Best Rate MB/s  Avg time     Min time     Max time
> > > Copy:          277315.2     0.000746     0.000726     0.000803
> > > Scale:          27511.1     0.007540     0.007318     0.008739
> > > Add:            30423.9     0.010116     0.009926     0.011031
> > > Triad:          30430.5     0.009980     0.009924     0.010097
> > > ```
> > > Seems to suggest ERMS is favorable.
> >
>
If we don't have a workload to support the current threshold, should
we restore the old threshold for processors with ERMS?

--
H.J.