From: Noah Goldstein
Date: Tue, 25 Apr 2023 16:45:52 -0500
Subject: Re: [PATCH v3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`
To: "H.J. Lu"
Cc: libc-alpha@sourceware.org, carlos@systemhalted.org

On Tue, Apr 25, 2023 at 12:43 PM H.J. Lu wrote:
>
> On Mon, Apr 24, 2023 at 8:43 PM Noah Goldstein wrote:
> >
> > Current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
> > ncores_per_socket'.
> > This patch updates that value to roughly `sizeof_L3 / 2`.
> >
> > The original value (specifically dividing by `ncores_per_socket`) was
> > done to limit the amount of other threads' data a `memcpy`/`memset`
> > could evict.
> >
> > Dividing by 'ncores_per_socket', however, leads to exceedingly low
> > non-temporal thresholds, and to using non-temporal stores in cases
> > where `rep movsb` is multiple times faster.
> >
> > Furthermore, non-temporal stores are written directly to main memory,
> > so using them at a size much smaller than L3 can place soon-to-be
> > accessed data much further away than it otherwise could be. As well,
> > modern machines are able to detect streaming patterns (especially if
> > `rep movsb` is used) and provide LRU hints to the memory subsystem.
> > This in effect caps the total amount of eviction at
> > 1/cache_associativity, far below meaningfully thrashing the entire
> > cache.
> >
> > As best I can tell, the benchmarks that led to this small threshold
> > were done comparing non-temporal stores versus standard cacheable
> > stores. A better comparison (linked below) is against `rep movsb`
> > which, on the measured systems, is nearly 2x faster than non-temporal
> > stores at the low end of the previous threshold, and within 10% for
> > copies over 100MB (well past even the current threshold). In cases
> > with a low number of threads competing for bandwidth, `rep movsb` is
> > ~2x faster up to `sizeof_L3`.
> >
> > Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> > stores were done using:
> > https://github.com/goldsteinn/memcpy-nt-benchmarks
> >
> > Sheets results (also available in pdf on the github):
> > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > ---
> >  sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++---------------
> >  1 file changed, 43 insertions(+), 27 deletions(-)
> >
> > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > index ec88945b39..df75fbd868 100644
> > --- a/sysdeps/x86/dl-cacheinfo.h
> > +++ b/sysdeps/x86/dl-cacheinfo.h
> > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> >  }
> >
> >  static void
> > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> >                         long int core)
> >  {
> >    unsigned int eax;
> > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >    unsigned int family = cpu_features->basic.family;
> >    unsigned int model = cpu_features->basic.model;
> >    long int shared = *shared_ptr;
> > +  long int shared_per_thread = *shared_per_thread_ptr;
> >    unsigned int threads = *threads_ptr;
> >    bool inclusive_cache = true;
> >    bool support_count_mask = true;
> > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >        /* Try L2 otherwise.  */
> >        level = 2;
> >        shared = core;
> > +      shared_per_thread = core;
> >        threads_l2 = 0;
> >        threads_l3 = -1;
> >      }
> > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >          }
> >        else
> >          {
> > -intel_bug_no_cache_info:
> > -          /* Assume that all logical threads share the highest cache
> > -             level.  */
> > -          threads
> > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > -               & 0xff);
> > -        }
> > -
> > -      /* Cap usage of highest cache level to the number of supported
> > -         threads.  */
> > -      if (shared > 0 && threads > 0)
> > -        shared /= threads;
> > +        intel_bug_no_cache_info:
> > +          /* Assume that all logical threads share the highest cache
> > +             level.  */
> > +          threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > +                     & 0xff);
> > +
> > +          /* Get per-thread size of highest level cache.  */
> > +          if (shared_per_thread > 0 && threads > 0)
> > +            shared_per_thread /= threads;
> > +        }
> >      }
> >
> >    /* Account for non-inclusive L2 and L3 caches.  */
> >    if (!inclusive_cache)
> >      {
> >        if (threads_l2 > 0)
> > -        core /= threads_l2;
> > +        shared_per_thread += core / threads_l2;
> >        shared += core;
> >      }
> >
> >    *shared_ptr = shared;
> > +  *shared_per_thread_ptr = shared_per_thread;
> >    *threads_ptr = threads;
> >  }
> >
> > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    /* Find out what brand of processor.  */
> >    long int data = -1;
> >    long int shared = -1;
> > +  long int shared_per_thread = -1;
> >    long int core = -1;
> >    unsigned int threads = 0;
> >    unsigned long int level1_icache_size = -1;
> > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size
> >          = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level4_cache_size
> >          = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> >      {
> >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_amd)
> >      {
> >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        if (shared <= 0)
> >          /* No shared L3 cache.  All we have is the L2 cache.  */
> >          shared = core;
> > +
> > +      if (shared_per_thread <= 0)
> > +        shared_per_thread = shared;
> >      }
> >
> >    cpu_features->level1_icache_size = level1_icache_size;
> > @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> >    cpu_features->level4_cache_size = level4_cache_size;
> >
> > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > -     thread's share of the chip's cache. For most Intel and AMD processors
> > -     with an initial release date between 2017 and 2020, a thread's typical
> > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > -     in cache after a maximum temporal copy, which will maintain
> > -     in cache a reasonable portion of the thread's stack and other
> > -     active data. If the threshold is set higher than one thread's
> > -     share of the cache, it has a substantial risk of negatively
> > -     impacting the performance of other threads running on the chip. */
> > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > +     of chip's cache. For most Intel and AMD processors with an
> > +     initial release date between 2017 and 2023, a thread's typical
> > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > +     estimate the point where non-temporal stores begin outcompeting
> > +     other methods. As well the point where the fact that non-temporal
> I think we should just say REP MOVSB.

done.

> > +     stores are forced back to disk would already occured to the
> ^^^^^ main memory.

done

> > +     majority of the lines in the copy. Note, concerns about the
> > +     entire L3 cache being evicted by the copy are mostly alleviated
> > +     by the fact that modern HW detects streaming patterns and
> > +     provides proper LRU hints so that the the maximum thrashing
> ^^^ Dup.

done

> > +     capped at 1/assosiativity. */
> associativity

Done

> > +  unsigned long int non_temporal_threshold = shared / 2;
> > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > +     hint. As well, there performance in highly parallel situations is
> the I think there?
> > +     noticeably worse. */
> > +  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > +    non_temporal_threshold = shared_per_thread * 3 / 4;
> >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > --
> > 2.34.1
> >
>
> --
> H.J.
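
To make the policy change concrete, here is a rough standalone sketch
of the selection logic. This is not the actual dl-cacheinfo.h code:
`pick_non_temporal_threshold`, `l3_size`, `l3_per_thread`, and
`has_erms` are illustrative stand-ins for `shared`,
`shared_per_thread`, and `CPU_FEATURE_USABLE_P (cpu_features, ERMS)`.

#include <stdbool.h>

/* Sketch of the new non_temporal_threshold selection.  Sizes are in
   bytes; has_erms reports whether Enhanced REP MOVSB is usable.  */
static unsigned long int
pick_non_temporal_threshold (unsigned long int l3_size,
                             unsigned long int l3_per_thread,
                             bool has_erms)
{
  /* With ERMS, rep movsb stays competitive well past a single
     thread's share of L3, so only switch to non-temporal stores at
     half of the total L3 size.  */
  unsigned long int threshold = l3_size / 2;

  /* Without ERMS, cacheable stores lack the streaming LRU hint and
     run a higher risk of thrashing the cache, so keep the more
     conservative per-thread chunking.  */
  if (!has_erms)
    threshold = l3_per_thread * 3 / 4;

  return threshold;
}

For a hypothetical 8-core, SMT-2 part with a 32MB shared L3 (16
logical threads sharing it), the old formula gave 3/4 * 32MB / 16 =
1.5MB, while the new one gives 32MB / 2 = 16MB with ERMS, roughly a
10x larger window for `rep movsb`.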
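
A deliberately simplified view of how a copy routine consumes the
threshold is below. The real dispatch lives in assembly in
memmove-vec-unaligned-erms and handles overlap, alignment, and
vector-width cases; `memcpy_nt` is a hypothetical non-temporal
implementation assumed only for this sketch.

#include <stddef.h>
#include <string.h>

/* Hypothetical non-temporal copy, e.g. built on vmovntdq stores
   followed by an sfence; assumed to exist for this sketch.  */
extern void *memcpy_nt (void *dst, const void *src, size_t n);

void *
copy_dispatch (void *dst, const void *src, size_t n,
               unsigned long int non_temporal_threshold)
{
  /* Below the threshold the destination lines plausibly fit in cache
     and may be re-read soon: use the cacheable path (rep movsb or
     vector loops).  */
  if (n < non_temporal_threshold)
    return memcpy (dst, src, n);

  /* At or above the threshold, bypass the cache so a huge copy does
     not evict the working sets of other threads on the socket.  */
  return memcpy_nt (dst, src, n);
}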
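
The 1/associativity bound mentioned in the commit message can be
sanity-checked with simple arithmetic: if the hardware confines a
detected stream to one way per set, the copy can displace at most
sizeof_L3 / associativity bytes of resident data. A quick check,
assuming a made-up 16-way, 32MB L3:

#include <stdio.h>

int
main (void)
{
  /* Hypothetical L3 geometry; real values come from CPUID.  */
  unsigned long int l3_bytes = 32UL * 1024 * 1024; /* 32MB shared L3 */
  unsigned int ways = 16;                   /* 16-way set associative */

  /* A stream confined to one way per set displaces at most
     l3_bytes / ways of previously resident data.  */
  unsigned long int max_evicted = l3_bytes / ways;

  printf ("worst-case eviction: %lu KB of %lu KB\n",
          max_evicted / 1024, l3_bytes / 1024);
  /* Prints: worst-case eviction: 2048 KB of 32768 KB.  */
  return 0;
}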