From: Noah Goldstein
Date: Wed, 7 Jun 2023 13:19:24 -0500
Subject: Re: [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
To: libc-alpha@sourceware.org
Cc: hjl.tools@gmail.com, carlos@systemhalted.org, DJ Delorie, "Carlos O'Donell"

On Wed, Jun 7, 2023 at 1:18 PM Noah Goldstein wrote:
>
> The current `non_temporal_threshold` is set to roughly
> '3/4 * sizeof_L3 / ncores_per_socket'. This patch updates that value
> to roughly 'sizeof_L3 / 4'.
>
> The original value (specifically the division by `ncores_per_socket`)
> was chosen to limit the amount of other threads' data a
> `memcpy`/`memset` could evict.
>
> Dividing by 'ncores_per_socket', however, leads to exceedingly low
> non-temporal thresholds and to using non-temporal stores in cases
> where REP MOVSB is multiple times faster.
>
> Furthermore, non-temporal stores are written directly to main memory,
> so using them at sizes much smaller than L3 can place soon-to-be-
> accessed data much further away than it otherwise would be. As well,
> modern machines are able to detect streaming patterns (especially if
> REP MOVSB is used) and provide LRU hints to the memory subsystem. This
> in effect caps the total amount of eviction at 1/cache_associativity,
> far below meaningfully thrashing the entire cache.
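>
> (As a rough worked example, assuming a hypothetical 32MB, 16-way L3:
> once the streaming pattern is detected, even a copy far larger than
> the cache could evict at most about 32MB / 16 = 2MB, i.e. ~6% of the
> cache rather than all of it.)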
>
> As best I can tell, the benchmarks that led to this small threshold
> were done comparing non-temporal stores against standard cacheable
> stores. A better comparison (linked below) is against REP MOVSB,
> which, on the measured systems, is nearly 2x faster than non-temporal
> stores at the low end of the previous threshold, and within 10% for
> copies over 100MB (well past even the current threshold). In cases
> with a low number of threads competing for bandwidth, REP MOVSB is
> ~2x faster up to `sizeof_L3`.
>
> The divisor of `4` is a somewhat arbitrary value. From benchmarks it
> seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
> such as Broadwell prefer something closer to `8`. This patch is meant
> to be followed up by another one to make the divisor cpu-specific, but
> in the meantime (and for easier backporting), this patch settles on
> `4` as a middle ground.
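>
> For reviewers skimming the diff, the net selection logic boils down
> to something like the following sketch (simplified; `shared`,
> `shared_per_thread`, and `has_erms` stand in for the values
> dl_init_cacheinfo computes at startup):
>
>     static unsigned long int
>     nt_threshold (unsigned long int shared,
>                   unsigned long int shared_per_thread, int has_erms)
>     {
>       /* With ERMS, streaming detection caps thrashing, so allow up
>          to 1/4 of the total L3.  Without ERMS, stay within one
>          thread's share of the cache, as before.  */
>       return has_erms ? shared / 4 : shared_per_thread * 3 / 4;
>     }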
>
> Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
> stores were done using:
> https://github.com/goldsteinn/memcpy-nt-benchmarks
>
> Sheets results (also available as pdf on the github):
> https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> Reviewed-by: DJ Delorie
> Reviewed-by: Carlos O'Donell
> ---
>  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
>  1 file changed, 43 insertions(+), 27 deletions(-)
>
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index 877e73d700..3bd3b3ec1b 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
>  }
>
>  static void
> -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> +get_common_cache_info (long int *shared_ptr, long int *shared_per_thread_ptr, unsigned int *threads_ptr,
>                        long int core)
>  {
>    unsigned int eax;
> @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>    unsigned int family = cpu_features->basic.family;
>    unsigned int model = cpu_features->basic.model;
>    long int shared = *shared_ptr;
> +  long int shared_per_thread = *shared_per_thread_ptr;
>    unsigned int threads = *threads_ptr;
>    bool inclusive_cache = true;
>    bool support_count_mask = true;
> @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>        /* Try L2 otherwise.  */
>        level = 2;
>        shared = core;
> +      shared_per_thread = core;
>        threads_l2 = 0;
>        threads_l3 = -1;
>      }
> @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>      }
>    else
>      {
> -intel_bug_no_cache_info:
> -      /* Assume that all logical threads share the highest cache
> -        level.  */
> -      threads
> -       = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> -          & 0xff);
> -    }
> -
> -  /* Cap usage of highest cache level to the number of supported
> -     threads.  */
> -  if (shared > 0 && threads > 0)
> -    shared /= threads;
> +    intel_bug_no_cache_info:
> +      /* Assume that all logical threads share the highest cache
> +        level.  */
> +      threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> +                & 0xff);
> +
> +      /* Get per-thread size of highest level cache.  */
> +      if (shared_per_thread > 0 && threads > 0)
> +       shared_per_thread /= threads;
> +    }
>      }
>
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
>      {
>        if (threads_l2 > 0)
> -       core /= threads_l2;
> +       shared_per_thread += core / threads_l2;
>        shared += core;
>      }
>
>    *shared_ptr = shared;
> +  *shared_per_thread_ptr = shared_per_thread;
>    *threads_ptr = threads;
>  }
>
> @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    /* Find out what brand of processor.  */
>    long int data = -1;
>    long int shared = -1;
> +  long int shared_per_thread = -1;
>    long int core = -1;
>    unsigned int threads = 0;
>    unsigned long int level1_icache_size = -1;
> @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
>        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
>        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> +      shared_per_thread = shared;
>
>        level1_icache_size
>         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level4_cache_size
>         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
>      {
>        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
>        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_amd)
>      {
>        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        if (shared <= 0)
>         /* No shared L3 cache.  All we have is the L2 cache.  */
>         shared = core;
> +
> +      if (shared_per_thread <= 0)
> +       shared_per_thread = shared;
>      }
>
>    cpu_features->level1_icache_size = level1_icache_size;
> @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip.  */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/4 of the
> +     size of the chip's cache. For most Intel and AMD processors with an
> +     initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using 1/4 of the L3 is meant to
> +     estimate the point where non-temporal stores begin out-competing
> +     REP MOVSB, as well as the point where the forced write-back of
> +     non-temporal stores to main memory would have already occurred for
> +     the majority of the lines in the copy. Note, concerns about the
> +     entire L3 cache being evicted by the copy are mostly alleviated
> +     by the fact that modern HW detects streaming patterns and
> +     provides proper LRU hints so that the maximum thrashing is
> +     capped at 1/associativity.  */
> +  unsigned long int non_temporal_threshold = shared / 4;
> +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable
> +     stores run a higher risk of actually thrashing the cache as they
> +     don't have a HW LRU hint. As well, their performance in highly
> +     parallel situations is noticeably worse.  */
> +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> +    non_temporal_threshold = shared_per_thread * 3 / 4;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> --
> 2.34.1
>

Now that Carlos, DJ, and HJ have signed off on this and Carlos + DJ
have reproduced the results, I'm going to push this shortly.

Thank you all for the review!
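
For anyone who wants to experiment with the cutoff on their own
machine, the threshold should still be adjustable at run time through
the existing tunable, along these lines (the value and benchmark name
are illustrative):

    GLIBC_TUNABLES=glibc.cpu.x86_non_temporal_threshold=16777216 ./bench-memcpy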