From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: by sourceware.org (Postfix, from userid 7844)
	id 5781D3858434; Wed, 19 Jul 2023 03:35:02 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org 5781D3858434
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sourceware.org;
	s=default; t=1689737702;
	bh=G1cztl/uR9Sa2R0ARDNYtorS5lViOrMncnzyPgEUSQQ=;
	h=From:To:Subject:Date:From;
	b=K5ygnkxGrHD3C97hqNsLKl2LMTNlhZvXV2RFsbD6+OVOacyTKFe9nLkI8BJbVVPIM
	 jwBeEfm6P8dkCtrb5HSGDGfGxIxG7x2QCQMX4COs2/Pa9oLTdx3YzcPtRW1jb7NZT8
	 IZLnw6vpdVHRRYrdWF+kDclBM704PKPD5VbEO0vg=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
From: Noah Goldstein
To: glibc-cvs@sourceware.org
Subject: [glibc] [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for NT threshold.
X-Act-Checkin: glibc
X-Git-Author: Noah Goldstein
X-Git-Refname: refs/heads/master
X-Git-Oldrev: 47f747217811db35854ea06741be3685e8bbd44d
X-Git-Newrev: 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
Message-Id: <20230719033502.5781D3858434@sourceware.org>
Date: Wed, 19 Jul 2023 03:35:02 +0000 (GMT)
List-Id: 

https://sourceware.org/git/gitweb.cgi?p=glibc.git;h=8b9a0af8ca012217bf90d1dc0694f85b49ae09da

commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da
Author: Noah Goldstein
Date:   Tue Jul 18 10:27:59 2023 -0500

    [PATCH v1] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for NT
    threshold.
    
    On some machines we end up with incomplete cache information. This can
    make the new calculation of `sizeof(total-L3)/custom-divisor` end up
    lower than intended (and lower than the prior value). So reintroduce
    the old bound as a lower bound to avoid potentially regressing code
    where we don't have complete information to make the decision.
    
    Reviewed-by: DJ Delorie

Diff:
---
 sysdeps/x86/dl-cacheinfo.h | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index 43be2c1229..cd4d0351ae 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -745,8 +745,8 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
 
   /* The default setting for the non_temporal threshold is [1/8, 1/2] of size
      of the chip's cache (depending on `cachesize_non_temporal_divisor` which
-     is microarch specific.  The default is 1/4).  For most Intel and AMD
-     processors with an initial release date between 2017 and 2023, a thread's
+     is microarch specific.  The default is 1/4).  For most Intel processors
+     with an initial release date between 2017 and 2023, a thread's
      typical share of the cache is from 18-64MB.  Using a reasonable size
      fraction of L3 is meant to estimate the point where non-temporal stores
      begin out-competing REP MOVSB.  As well the point where the fact that
@@ -757,12 +757,21 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
      the maximum thrashing capped at 1/associativity. */
   unsigned long int non_temporal_threshold
       = shared / cachesize_non_temporal_divisor;
+
+  /* If the computed non_temporal_threshold <= 3/4 * per-thread L3, we most
+     likely have incorrect/incomplete cache info in which case, default to
+     3/4 * per-thread L3 to avoid regressions. */
+  unsigned long int non_temporal_threshold_lowbound
+      = shared_per_thread * 3 / 4;
+  if (non_temporal_threshold < non_temporal_threshold_lowbound)
+    non_temporal_threshold = non_temporal_threshold_lowbound;
+
   /* If no ERMS, we use the per-thread L3 chunking.  Normal cacheable stores
      run a higher risk of actually thrashing the cache as they don't have a HW
      LRU hint.  As well, their performance in highly parallel situations is
      noticeably worse. */
   if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
-    non_temporal_threshold = shared_per_thread * 3 / 4;
+    non_temporal_threshold = non_temporal_threshold_lowbound;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the