From: Noah Goldstein
Date: Wed, 19 Apr 2023 18:24:43 -0500
Subject: Re: [PATCH v3] Reversing calculation of __x86_shared_non_temporal_threshold
To: "H.J. Lu"
Cc: GNU C Library

On Wed, Apr 19, 2023 at 5:43 PM H.J. Lu wrote:
>
> On Wed, Apr 19, 2023 at 3:30 PM Noah Goldstein wrote:
> >
> > On Wed, Apr 19, 2023 at 5:26 PM H.J. Lu wrote:
> > >
> > > ---------- Forwarded message ---------
> > > From: Patrick McGehearty via Libc-alpha
> > > Date: Fri, Sep 25, 2020 at 3:21 PM
> > > Subject: [PATCH v3] Reversing calculation of __x86_shared_non_temporal_threshold
> > > To:
> > >
> > > The __x86_shared_non_temporal_threshold determines when memcpy on x86
> > > uses non-temporal stores to avoid pushing other data out of the last
> > > level cache.
> > >
> > > This patch proposes to revert the calculation change made by H.J. Lu's
> > > patch of June 2, 2017.
> > >
> > > H.J. Lu's patch selected a threshold suitable for a single thread
> > > getting maximum performance.
It was tuned using the single-threaded
> > > large memcpy micro benchmark on an 8-core processor.  That change
> > > moved the threshold from 3/4 of one thread's share of the cache to
> > > 3/4 of the entire cache of a multi-threaded system before switching
> > > to non-temporal stores.  Multi-threaded systems with more than a few
> > > threads are server-class and typically have many active threads.  If
> > > one thread consumes 3/4 of the available cache for all threads, it
> > > will cause other active threads to have data evicted from the cache.
> > >
> > > Two examples show the range of the effect.  John McCalpin's widely
> > > parallel STREAM benchmark, which runs in parallel and fetches data
> > > sequentially, saw a 20% slowdown with this patch on an internal
> > > system test of 128 threads.  This regression was discovered when
> > > comparing OL8 performance to OL7.  An example that compares normal
> > > stores to non-temporal stores may be found at
> > > https://vgatherps.github.io/2018-09-02-nontemporal/.  A simple test
> > > there shows a performance loss of 400% to 500% due to a failure to
> > > use non-temporal stores.  These performance losses are most likely
> > > to occur when the system load is heaviest and good performance is
> > > critical.
> > >
> > > The tunable x86_non_temporal_threshold can be used to override the
> > > default for the knowledgeable user who really wants maximum cache
> > > allocation to a single thread in a multi-threaded system.  The
> > > manual entry for the tunable has been expanded to provide more
> > > information about its purpose.
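[Editorial aside: the override described above is applied through the
GLIBC_TUNABLES environment variable.  A minimal sketch; the 2 MiB value
and the `./your_app` program name are arbitrary placeholders, not values
taken from this thread:]

```shell
# Sketch: force the non-temporal-store threshold to 2 MiB (2097152 bytes)
# for a single run.  Tunable name as spelled in this patch's manual entry.
GLIBC_TUNABLES=glibc.tune.x86_non_temporal_threshold=2097152 ./your_app
```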
> > >
> > > modified:   sysdeps/x86/cacheinfo.c
> > > modified:   manual/tunables.texi
> > > ---
> > >  manual/tunables.texi    |  6 +++++-
> > >  sysdeps/x86/cacheinfo.c | 16 +++++++++++-----
> > >  2 files changed, 16 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/manual/tunables.texi b/manual/tunables.texi
> > > index b6bb54d..94d4fbd 100644
> > > --- a/manual/tunables.texi
> > > +++ b/manual/tunables.texi
> > > @@ -364,7 +364,11 @@ set shared cache size in bytes for use in memory
> > >  and string routines.
> > >
> > >  @deftp Tunable glibc.tune.x86_non_temporal_threshold
> > >  The @code{glibc.tune.x86_non_temporal_threshold} tunable allows the user
> > > -to set threshold in bytes for non temporal store.
> > > +to set threshold in bytes for non temporal store. Non temporal stores
> > > +give a hint to the hardware to move data directly to memory without
> > > +displacing other data from the cache. This tunable is used by some
> > > +platforms to determine when to use non temporal stores in operations
> > > +like memmove and memcpy.
> > >
> > >  This tunable is specific to i386 and x86-64.
> > >  @end deftp
> > > diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
> > > index b9444dd..42b468d 100644
> > > --- a/sysdeps/x86/cacheinfo.c
> > > +++ b/sysdeps/x86/cacheinfo.c
> > > @@ -778,14 +778,20 @@ intel_bug_no_cache_info:
> > >        __x86_shared_cache_size = shared;
> > >      }
> > >
> > > -  /* The large memcpy micro benchmark in glibc shows that 6 times of
> > > -     shared cache size is the approximate value above which non-temporal
> > > -     store becomes faster on a 8-core processor.  This is the 3/4 of the
> > > -     total shared cache size.  */
> > > +  /* The default setting for the non_temporal threshold is 3/4 of one
> > > +     thread's share of the chip's cache. For most Intel and AMD processors
> > > +     with an initial release date between 2017 and 2020, a thread's typical
> > > +     share of the cache is from 500 KBytes to 2 MBytes.
Using the 3/4
> > > +     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > +     in cache after a maximum temporal copy, which will maintain
> > > +     in cache a reasonable portion of the thread's stack and other
> > > +     active data. If the threshold is set higher than one thread's
> > > +     share of the cache, it has a substantial risk of negatively
> > > +     impacting the performance of other threads running on the chip.  */
> > >    __x86_shared_non_temporal_threshold
> > >      = (cpu_features->non_temporal_threshold != 0
> > >        ? cpu_features->non_temporal_threshold
> > > -      : __x86_shared_cache_size * threads * 3 / 4);
> > > +      : __x86_shared_cache_size * 3 / 4);
> > >  }
> > >
> > >  #endif
> > > --
> > > 1.8.3.1
> > >
> > >
> > > --
> > > H.J.
> >
> > I am looking into re-tuning the NT store threshold, which appears to
> > be too low in many cases.
> >
> > I've played around with some micro-benchmarks:
> > https://github.com/goldsteinn/memcpy-nt-benchmarks
> >
> > I am finding that, for the most part, ERMS stays competitive with
> > NT stores even as the core count increases, with heavy read workloads
> > running on other threads.
> > See: https://github.com/goldsteinn/memcpy-nt-benchmarks/blob/master/results-skx-pdf/skx-memcpy-4--read.pdf
> >
> > I saw https://vgatherps.github.io/2018-09-02-nontemporal/, although
> > it's not clear how to reproduce the results in the blog. I also see
> > it was only comparing against standard temporal stores, not ERMS.
> >
> > Does anyone know of benchmarks or an application that can highlight
> > the L3-clobbering issues brought up in this patch?
>
> You can try this:
>
> https://github.com/jeffhammond/STREAM

That's the same as a normal memcpy benchmark, no? It's just calling
something like `tuned_STREAM_Copy()` (memcpy) in a loop, maybe
interleaved with some other reads.
Similar to what I was running to get:
https://github.com/goldsteinn/memcpy-nt-benchmarks/blob/master/results-skx-pdf/skx-memcpy-4--read.pdf

Either way, on my ICL, using that benchmark:

```
ERMS (L3)
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           323410.5     0.001262     0.001245     0.001285
Scale:           26367.3     0.017114     0.015271     0.029576
Add:             29635.9     0.022948     0.020380     0.032384
Triad:           29401.0     0.021522     0.020543     0.024977

NT (L3)
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           285375.1     0.001421     0.001411     0.001443
Scale:           26457.3     0.015358     0.015219     0.015730
Add:             29753.9     0.020656     0.020299     0.022881
Triad:           29594.0     0.020732     0.020409     0.022240

ERMS (L3 / 2)
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           431049.0     0.000620     0.000467     0.001749
Scale:           27071.0     0.007996     0.007437     0.010018
Add:             31005.5     0.009864     0.009740     0.010432
Triad:           30359.7     0.010061     0.009947     0.010434

NT (L3 / 2)
Function    Best Rate MB/s  Avg time     Min time     Max time
Copy:           277315.2     0.000746     0.000726     0.000803
Scale:           27511.1     0.007540     0.007318     0.008739
Add:             30423.9     0.010116     0.009926     0.011031
Triad:           30430.5     0.009980     0.009924     0.010097
```

Seems to suggest ERMS is favorable.

>
> --
> H.J.