From: "H.J. Lu" <hjl.tools@gmail.com>
To: Sajan Karumanchi <sajan.karumanchi@amd.com>
Cc: Florian Weimer <fweimer@redhat.com>,
GNU C Library <libc-alpha@sourceware.org>,
"Carlos O'Donell" <carlos@redhat.com>,
Premachandra Mallappa <premachandra.mallappa@amd.com>
Subject: Re: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
Date: Mon, 1 Feb 2021 09:05:34 -0800 [thread overview]
Message-ID: <CAMe9rOqc3=5Bdrqvg5PULZs_MOdJC5M4ZBrfxze=RHEa5_NYaQ@mail.gmail.com> (raw)
In-Reply-To: <20210122101850.3028846-1-sajan.karumanchi@amd.com>
On Fri, Jan 22, 2021 at 2:19 AM <sajan.karumanchi@amd.com> wrote:
>
> From: Sajan Karumanchi <sajan.karumanchi@amd.com>
>
> In the process of optimizing memcpy for AMD machines, we found that
> vector move operations outperform Enhanced REP MOVSB for data
> transfers above the L2 cache size on Zen3 architectures.
> To handle this case, we add an upper bound parameter on Enhanced
> REP MOVSB: '__x86_rep_movsb_stop_threshold'.
> Based on large-bench results, we configure this parameter to the
> L2 cache size on AMD machines, applicable from the Zen3 architecture
> onwards, which supports the ERMS feature.
> For architectures other than AMD, it is set to the computed value of
> the non-temporal threshold parameter.
>
> Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
> ---
> sysdeps/x86/cacheinfo.h | 4 ++++
> sysdeps/x86/dl-cacheinfo.h | 15 ++++++++++++++-
> sysdeps/x86/include/cpu-features.h | 2 ++
> .../x86_64/multiarch/memmove-vec-unaligned-erms.S | 7 +++++--
> 4 files changed, 25 insertions(+), 3 deletions(-)
>
> diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> index 68c253542f..0f0ca7c08c 100644
> --- a/sysdeps/x86/cacheinfo.h
> +++ b/sysdeps/x86/cacheinfo.h
> @@ -54,6 +54,9 @@ long int __x86_rep_movsb_threshold attribute_hidden = 2048;
> /* Threshold to use Enhanced REP STOSB. */
> long int __x86_rep_stosb_threshold attribute_hidden = 2048;
>
> +/* Threshold to stop using Enhanced REP MOVSB. */
> +long int __x86_rep_movsb_stop_threshold attribute_hidden;
> +
> static void
> init_cacheinfo (void)
> {
> @@ -79,5 +82,6 @@ init_cacheinfo (void)
>
> __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
> __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
> + __x86_rep_movsb_stop_threshold = cpu_features->rep_movsb_stop_threshold;
> }
> #endif
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index a31fa0783a..374ba82467 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -704,7 +704,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> int max_cpuid_ex;
> long int data = -1;
> long int shared = -1;
> - long int core;
> + long int core = -1;
> unsigned int threads = 0;
> unsigned long int level1_icache_size = -1;
> unsigned long int level1_dcache_size = -1;
> @@ -886,6 +886,18 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> #endif
> }
>
> + unsigned long int rep_movsb_stop_threshold;
> +  /* The ERMS feature is implemented starting with the AMD Zen3
> +     architecture, and it performs poorly for data above the L2 cache
> +     size.  Hence, add an upper bound threshold parameter to limit the
> +     use of Enhanced REP MOVSB and set it to the L2 cache size.  */
> + if (cpu_features->basic.kind == arch_kind_amd)
> + rep_movsb_stop_threshold = core;
> +  /* For architectures other than AMD, set the upper bound of ERMS
> +     to the computed value of the non-temporal threshold.  */
> + else
> + rep_movsb_stop_threshold = non_temporal_threshold;
> +
> /* The default threshold to use Enhanced REP STOSB. */
> unsigned long int rep_stosb_threshold = 2048;
>
> @@ -935,4 +947,5 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> cpu_features->non_temporal_threshold = non_temporal_threshold;
> cpu_features->rep_movsb_threshold = rep_movsb_threshold;
> cpu_features->rep_stosb_threshold = rep_stosb_threshold;
> + cpu_features->rep_movsb_stop_threshold = rep_movsb_stop_threshold;
> }
> diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
> index 624736b40e..475e877294 100644
> --- a/sysdeps/x86/include/cpu-features.h
> +++ b/sysdeps/x86/include/cpu-features.h
> @@ -870,6 +870,8 @@ struct cpu_features
> unsigned long int non_temporal_threshold;
> /* Threshold to use "rep movsb". */
> unsigned long int rep_movsb_threshold;
> + /* Threshold to stop using "rep movsb". */
> + unsigned long int rep_movsb_stop_threshold;
> /* Threshold to use "rep stosb". */
> unsigned long int rep_stosb_threshold;
> /* _SC_LEVEL1_ICACHE_SIZE. */
> diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> index 0980c95378..50bb1fccb2 100644
> --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> @@ -30,7 +30,10 @@
> load and aligned store. Load the last 4 * VEC and first VEC
> before the loop and store them after the loop to support
> overlapping addresses.
> - 6. If size >= __x86_shared_non_temporal_threshold and there is no
> +   6. On machines with the ERMS feature, if size is greater than or
> +      equal to __x86_rep_movsb_threshold and less than
> +      __x86_rep_movsb_stop_threshold, then REP MOVSB will be used.
> + 7. If size >= __x86_shared_non_temporal_threshold and there is no
> overlap between destination and source, use non-temporal store
> instead of aligned store. */
>
> @@ -240,7 +243,7 @@ L(return):
> ret
>
> L(movsb):
> - cmp __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> + cmp __x86_rep_movsb_stop_threshold(%rip), %RDX_LP
> jae L(more_8x_vec)
> cmpq %rsi, %rdi
> jb 1f
> --
> 2.25.1
>
LGTM. OK for 2.34.
Thanks.
--
H.J.
Thread overview: 13+ messages
2021-01-07 16:22 sajan.karumanchi
2021-01-08 14:03 ` Florian Weimer
2021-01-11 10:46 ` Karumanchi, Sajan
2021-01-18 17:07 ` Florian Weimer
2021-01-18 17:10 ` Adhemerval Zanella
2021-01-22 10:18 ` sajan.karumanchi
2021-02-01 17:05 ` H.J. Lu [this message]
2022-04-27 23:38 ` Sunil Pandey
2021-01-11 10:43 sajan.karumanchi
2021-01-11 17:27 ` H.J. Lu
2021-01-12 18:56 ` Karumanchi, Sajan
2021-01-12 20:04 [PATCH 1/1] " H.J. Lu
2021-01-13 15:18 ` [PATCH] " sajan.karumanchi
2021-01-13 15:26 ` H.J. Lu