From: "H.J. Lu"
Date: Wed, 19 Apr 2023 15:43:16 -0700
Subject: Re: [PATCH v3] Reversing calculation of __x86_shared_non_temporal_threshold
To: Noah Goldstein
Cc: GNU C Library

On Wed, Apr 19, 2023 at 3:30 PM Noah Goldstein wrote:
>
> On Wed, Apr 19, 2023 at 5:26 PM H.J. Lu wrote:
> >
> > ---------- Forwarded message ---------
> > From: Patrick McGehearty via Libc-alpha
> > Date: Fri, Sep 25, 2020 at 3:21 PM
> > Subject: [PATCH v3] Reversing calculation of __x86_shared_non_temporal_threshold
> > To:
> >
> >
> > The __x86_shared_non_temporal_threshold determines when memcpy on x86
> > uses non-temporal stores to avoid pushing other data out of the last
> > level cache.
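As background for the discussion: a non-temporal store writes its data
with a hint that the hardware should not keep the line in cache, so a
very large copy does not evict other threads' working sets. The sketch
below shows the threshold-gated pattern being described; it is
illustrative only, and the threshold constant, the AVX2 path, and the
fallback policy are assumptions for the example, not glibc's actual
memcpy implementation.

#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical threshold; glibc computes the real one at startup.  */
static size_t nt_threshold = 3 * (1UL << 20) / 4;

static void
copy_maybe_nt (void *dst, const void *src, size_t n)
{
  /* Small copies (or misaligned destinations) use ordinary temporal
     stores, which keep the freshly written data in cache.  */
  if (n < nt_threshold || ((uintptr_t) dst & 31) != 0)
    {
      memcpy (dst, src, n);
      return;
    }

  /* Large copies stream 32-byte chunks past the cache hierarchy.  */
  char *d = dst;
  const char *s = src;
  size_t i;
  for (i = 0; i + 32 <= n; i += 32)
    {
      __m256i v = _mm256_loadu_si256 ((const __m256i *) (s + i));
      _mm256_stream_si256 ((__m256i *) (d + i), v); /* NT hint.  */
    }
  memcpy (d + i, s + i, n - i); /* Copy the tail normally.  */
  _mm_sfence (); /* Order the NT stores before later stores.  */
}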
> > This patch proposes to revert the calculation change made by H.J. Lu's
> > patch of June 2, 2017.
> >
> > H.J. Lu's patch selected a threshold suitable for a single thread
> > getting maximum performance. It was tuned using the single-threaded
> > large-memcpy micro benchmark on an 8-core processor. That change moved
> > the threshold from 3/4 of one thread's share of the cache to 3/4 of
> > the entire cache of a multi-threaded system before switching to
> > non-temporal stores. Multi-threaded systems with more than a few
> > threads are server-class and typically have many active threads. If
> > one thread consumes 3/4 of the available cache for all threads, it
> > will cause other active threads to have data removed from the cache.
> > Two examples show the range of the effect. John McCalpin's STREAM
> > benchmark, which runs many threads in parallel, each fetching data
> > sequentially, saw a 20% slowdown from that change in an internal test
> > on a 128-thread system. The regression was discovered when comparing
> > OL8 performance to OL7. An example that compares normal stores to
> > non-temporal stores may be found at
> > https://vgatherps.github.io/2018-09-02-nontemporal/. A simple test
> > there shows a 4x to 5x slowdown when non-temporal stores are not used
> > where they should be. These performance losses are most likely to
> > occur when the system load is heaviest and good performance is
> > critical.
> >
> > The tunable x86_non_temporal_threshold can be used to override the
> > default for the knowledgeable user who really wants maximum cache
> > allocation to a single thread in a multi-threaded system.
> > The manual entry for the tunable has been expanded to provide
> > more information about its purpose.
> >
> > modified: sysdeps/x86/cacheinfo.c
> > modified: manual/tunables.texi
> > ---
> >  manual/tunables.texi    |  6 +++++-
> >  sysdeps/x86/cacheinfo.c | 16 +++++++++++-----
> >  2 files changed, 16 insertions(+), 6 deletions(-)
> >
> > diff --git a/manual/tunables.texi b/manual/tunables.texi
> > index b6bb54d..94d4fbd 100644
> > --- a/manual/tunables.texi
> > +++ b/manual/tunables.texi
> > @@ -364,7 +364,11 @@ set shared cache size in bytes for use in memory
> >  and string routines.
> >
> >  @deftp Tunable glibc.tune.x86_non_temporal_threshold
> >  The @code{glibc.tune.x86_non_temporal_threshold} tunable allows the user
> > -to set threshold in bytes for non temporal store.
> > +to set threshold in bytes for non temporal store. Non temporal stores
> > +give a hint to the hardware to move data directly to memory without
> > +displacing other data from the cache. This tunable is used by some
> > +platforms to determine when to use non temporal stores in operations
> > +like memmove and memcpy.
> >
> >  This tunable is specific to i386 and x86-64.
> >  @end deftp
> > diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
> > index b9444dd..42b468d 100644
> > --- a/sysdeps/x86/cacheinfo.c
> > +++ b/sysdeps/x86/cacheinfo.c
> > @@ -778,14 +778,20 @@ intel_bug_no_cache_info:
> >        __x86_shared_cache_size = shared;
> >      }
> >
> > -  /* The large memcpy micro benchmark in glibc shows that 6 times of
> > -     shared cache size is the approximate value above which non-temporal
> > -     store becomes faster on a 8-core processor.  This is the 3/4 of the
> > -     total shared cache size.  */
> > +  /* The default setting for the non_temporal threshold is 3/4 of one
> > +     thread's share of the chip's cache. For most Intel and AMD processors
> > +     with an initial release date between 2017 and 2020, a thread's typical
> > +     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > +     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > +     in cache after a maximum temporal copy, which will maintain
> > +     in cache a reasonable portion of the thread's stack and other
> > +     active data. If the threshold is set higher than one thread's
> > +     share of the cache, it has a substantial risk of negatively
> > +     impacting the performance of other threads running on the chip. */
> >    __x86_shared_non_temporal_threshold
> >      = (cpu_features->non_temporal_threshold != 0
> >         ? cpu_features->non_temporal_threshold
> > -       : __x86_shared_cache_size * threads * 3 / 4);
> > +       : __x86_shared_cache_size * 3 / 4);
> >  }
> >
> >  #endif
> > --
> > 1.8.3.1
> >
> >
> >
> > --
> > H.J.
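To make the size of the change in the final hunk concrete, here is a
worked example contrasting the two formulas. The machine parameters are
hypothetical (a 2 MiB per-thread share on a 16-thread chip; glibc
derives the real values from CPUID at startup):

#include <stdio.h>

int
main (void)
{
  /* Hypothetical machine: 32 MiB of shared L3 across 16 threads,
     so one thread's share is 2 MiB.  */
  long share_per_thread = 2L * 1024 * 1024;
  long threads = 16;

  /* Pre-patch formula: 3/4 of the entire cache (24 MiB here).  */
  long old_threshold = share_per_thread * threads * 3 / 4;

  /* Post-patch formula: 3/4 of one thread's share (1.5 MiB here).  */
  long new_threshold = share_per_thread * 3 / 4;

  printf ("old: %ld bytes, new: %ld bytes\n", old_threshold,
          new_threshold);
  return 0;
}

On this hypothetical chip, the old formula lets a single memcpy fill 24
of the 32 MiB of L3 with temporal stores before switching; the new
formula switches to non-temporal stores at 1.5 MiB. A user who still
wants the old single-thread behavior can raise the threshold through the
tunable documented above, set via the GLIBC_TUNABLES environment
variable.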
> I am looking into re-tuning the NT store threshold, which appears to
> be too low in many cases.
>
> I've played around with some micro-benchmarks:
> https://github.com/goldsteinn/memcpy-nt-benchmarks
>
> I am finding that for the most part, ERMS stays competitive with
> NT stores as core count increases, even with heavy read workloads
> running on the other threads. See:
> https://github.com/goldsteinn/memcpy-nt-benchmarks/blob/master/results-skx-pdf/skx-memcpy-4--read.pdf
>
> I saw https://vgatherps.github.io/2018-09-02-nontemporal/, although
> it's not clear how to reproduce the results in the blog. I also see it
> was only comparing against standard temporal stores, not ERMS.
>
> Does anyone know of benchmarks or an application that can highlight
> the L3 clobbering issues brought up in this patch?

You can try this:

https://github.com/jeffhammond/STREAM

-- 
H.J.
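For anyone who wants to probe the L3-clobbering effect directly rather
than through STREAM, a rough sketch of the kind of experiment being
asked about follows: thread 0 performs large copies while every other
thread times streaming reads over a small, cache-resident buffer, so
lines evicted by the copier show up as lost read bandwidth. Buffer
sizes and iteration counts are arbitrary placeholders, and this is not
the STREAM benchmark itself; build with something like gcc -O2 -fopenmp.

#include <omp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define COPY_BUF (64L * 1024 * 1024) /* Big copy (placeholder size).  */
#define READ_BUF (1L * 1024 * 1024)  /* Small enough to stay L3-resident.  */

int
main (void)
{
#pragma omp parallel
  {
    if (omp_get_thread_num () == 0)
      {
        /* The copy under test.  Compare runs where the copy stays under
           the NT threshold against runs where it exceeds it.  */
        char *src = malloc (COPY_BUF), *dst = malloc (COPY_BUF);
        memset (src, 1, COPY_BUF);
        for (int i = 0; i < 100; i++)
          memcpy (dst, src, COPY_BUF);
        free (src);
        free (dst);
      }
    else
      {
        /* Readers: touch one byte per 64-byte line.  If the copier uses
           temporal stores, its evictions lower the bandwidth seen here.  */
        volatile char *a = malloc (READ_BUF);
        memset ((char *) a, 2, READ_BUF);
        long sum = 0;
        double t0 = omp_get_wtime ();
        for (int i = 0; i < 2000; i++)
          for (long j = 0; j < READ_BUF; j += 64)
            sum += a[j];
        double t1 = omp_get_wtime ();
        printf ("thread %d: ~%.2f GB/s (sum=%ld)\n",
                omp_get_thread_num (),
                2000.0 * READ_BUF / (t1 - t0) / 1e9, sum);
        free ((char *) a);
      }
  }
  return 0;
}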