From: Noah Goldstein
Date: Mon, 24 Apr 2023 17:30:30 -0500
Subject: Re: [PATCH v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
To: "H.J. Lu"
Cc: libc-alpha@sourceware.org, carlos@systemhalted.org

On Mon, Apr 24, 2023 at 3:44 PM H.J. Lu wrote:
>
> On Mon, Apr 24, 2023 at 11:34 AM Noah Goldstein wrote:
> >
> > On Mon, Apr 24, 2023 at 1:10 PM H.J. Lu wrote:
> > >
> > > On Sun, Apr 23, 2023 at 10:03 PM Noah Goldstein wrote:
> > > >
> > > > Current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
> > > > ncores_per_socket'.  This patch updates that value to roughly
> > > > 'sizeof_L3 / 2'.
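(To put numbers on the change: a minimal standalone sketch of just the
arithmetic, assuming for illustration a 32 MiB L3 shared by 16 cores on
the socket; the real code derives `shared` and `threads` from CPUID
rather than hard-coding anything.)

#include <stdio.h>

int
main (void)
{
  /* Illustrative numbers only: a 32 MiB L3 shared by 16 cores.  */
  unsigned long sizeof_l3 = 32UL * 1024 * 1024;
  unsigned long ncores_per_socket = 16;

  /* Old default: 3/4 of one thread's share of the L3 -> 1.5 MiB here.  */
  unsigned long old_threshold = (sizeof_l3 / ncores_per_socket) * 3 / 4;

  /* New default: half of the whole L3 -> 16 MiB here.  */
  unsigned long new_threshold = sizeof_l3 / 2;

  printf ("old: %lu bytes, new: %lu bytes\n", old_threshold, new_threshold);
  return 0;
}

On a machine like that the cutover to non-temporal stores moves from
~1.5 MiB to 16 MiB, i.e. roughly a 10x higher threshold.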
> > > >
> > > > The original value (specifically dividing the `ncores_per_socket`) was
> > > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > > could evict.
> > > >
> > > > Dividing by 'ncores_per_socket', however, leads to exceedingly low
> > > > non-temporal thresholds and leads to using non-temporal stores in
> > > > cases where `rep movsb` is multiple times faster.
> > > >
> > > > Furthermore, non-temporal stores are written directly to disk so using
> > >
> > > Why is "disk" here?
> > I mean main-memory. Will update in V2.
> >
> > > > it at a size much smaller than L3 can place soon-to-be-accessed data
> > > > much further away than it otherwise could be. As well, modern machines
> > > > are able to detect streaming patterns (especially if `rep movsb` is
> > > > used) and provide LRU hints to the memory subsystem. This in effect
> > > > caps the total amount of eviction at 1/cache_associativity, far below
> > > > meaningfully thrashing the entire cache.
> > > >
> > > > As best I can tell, the benchmarks that led to this small threshold
> > > > were done comparing non-temporal stores versus standard cacheable
> > > > stores. A better comparison (linked below) is to `rep movsb` which,
> > > > on the measured systems, is nearly 2x faster than non-temporal stores
> > > > at the low end of the previous threshold, and within 10% for over
> > > > 100MB copies (well past even the current threshold). In cases with a
> > > > low number of threads competing for bandwidth, `rep movsb` is ~2x
> > > > faster up to `sizeof_L3`.
> > > >
> > >
> > > Should we limit it to processors with ERMS (Enhanced REP MOVSB/STOSB)?
> > >
> > Think that would probably make sense. We see more meaningful regression
> > for larger sizes when using the standard store loop. Think /nthreads is
> > still too small.
> > How about
> > if ERMS: L3/2
> > else: L3 / (2 * sqrt(nthreads)) ?
>
> I think we should leave the non-ERMS case unchanged.

Done
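i.e., for V2 the selection ends up looking roughly like the sketch below
(a sketch only, not the final patch; assuming `shared` is the full L3
size before any per-thread scaling and `threads` the thread count, as
used in dl_init_cacheinfo):

#include <stdbool.h>

/* Sketch only, not the committed code: with ERMS use half the whole L3,
   otherwise keep the existing default of 3/4 of one thread's share.  */
unsigned long int
nt_threshold_sketch (unsigned long int shared, unsigned int threads,
                     bool have_erms)
{
  if (have_erms)
    return shared / 2;

  if (threads > 0)
    shared /= threads;
  return shared * 3 / 4;
}

In the actual patch the ERMS check would presumably be the usual
CPU_FEATURE_USABLE_P (cpu_features, ERMS) test.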
>
> >
> > > > Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> > > > stores were done using:
> > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > >
> > > > Sheets results (also available in pdf on the github):
> > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > > ---
> > > >  sysdeps/x86/dl-cacheinfo.h | 35 ++++++++++++++---------------------
> > > >  1 file changed, 14 insertions(+), 21 deletions(-)
> > > >
> > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > index ec88945b39..f25309dbc8 100644
> > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > @@ -604,20 +604,11 @@ intel_bug_no_cache_info:
> > > >             = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > >                & 0xff);
> > > >         }
> > > > -
> > > > -      /* Cap usage of highest cache level to the number of supported
> > > > -         threads.  */
> > > > -      if (shared > 0 && threads > 0)
> > > > -       shared /= threads;
> > > >     }
> > > >
> > > >   /* Account for non-inclusive L2 and L3 caches.  */
> > > >   if (!inclusive_cache)
> > > > -    {
> > > > -      if (threads_l2 > 0)
> > > > -       core /= threads_l2;
> > > > -      shared += core;
> > > > -    }
> > > > +    shared += core;
> > > >
> > > >   *shared_ptr = shared;
> > > >   *threads_ptr = threads;
> > > > @@ -730,17 +721,19 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >   cpu_features->level3_cache_linesize = level3_cache_linesize;
> > > >   cpu_features->level4_cache_size = level4_cache_size;
> > > >
> > > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > -     in cache after a maximum temporal copy, which will maintain
> > > > -     in cache a reasonable portion of the thread's stack and other
> > > > -     active data. If the threshold is set higher than one thread's
> > > > -     share of the cache, it has a substantial risk of negatively
> > > > -     impacting the performance of other threads running on the chip.  */
> > > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > > > +     of chip's cache. For most Intel and AMD processors with an
> > > > +     initial release date between 2017 and 2023, a thread's typical
> > > > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > > > +     estimate the point where non-temporal stores begin outcompeting
> > > > +     other methods. As well the point where the fact that non-temporal
> > > > +     stores are forced back to disk would have already occurred for the
> > > > +     majority of the lines in the copy. Note, concerns about the
> > > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > > +     by the fact that modern HW detects streaming patterns and
> > > > +     provides proper LRU hints so that the maximum thrashing is
> > > > +     capped at 1/associativity.  */
> > > > +  unsigned long int non_temporal_threshold = shared / 2;
> > > >   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > > >      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > >      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > > --
> > > > 2.34.1
> > > >
> > > >
> > >
> > >
> > > --
> > > H.J.
> >
>
>
> --
> H.J.
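P.S. for anyone skimming: the bounds mentioned in the truncated comment
at the end of the second hunk (SIZE_MAX >> 4 as the maximum, 0x4040 as
the minimum) amount to a clamp along the lines of the sketch below; this
is a sketch of the idea, not the exact dl-cacheinfo.h code.

#include <stddef.h>
#include <stdint.h>

/* Sketch of the bounds described in the quoted comment: keep the
   threshold at least 0x4040 (16448) bytes and at most SIZE_MAX >> 4.  */
size_t
clamp_nt_threshold (size_t threshold)
{
  const size_t min_threshold = 0x4040;
  const size_t max_threshold = SIZE_MAX >> 4;

  if (threshold < min_threshold)
    return min_threshold;
  if (threshold > max_threshold)
    return max_threshold;
  return threshold;
}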