From: Sunil Pandey <skpgkp2@gmail.com>
To: "H.J. Lu" <hjl.tools@gmail.com>, libc-stable@sourceware.org
Cc: Noah Goldstein <goldstein.w.n@gmail.com>,
GNU C Library <libc-alpha@sourceware.org>
Subject: Re: [PATCH v4 5/5] x86: Double size of ERMS rep_movsb_threshold in dl-cacheinfo.h
Date: Fri, 22 Apr 2022 18:42:55 -0700
Message-ID: <CAMAf5_chbZmEgLvZs_OaiHy7WUNam3BOPOFEa22epT37h02JyQ@mail.gmail.com>
In-Reply-To: <CAMe9rOrV0zMLp-H3D4jUYvZHo_-XvxmtUwHK=UkX-ZiQAN5PWw@mail.gmail.com>
On Sat, Nov 6, 2021 at 12:11 PM H.J. Lu via Libc-alpha
<libc-alpha@sourceware.org> wrote:
>
> On Sat, Nov 6, 2021 at 11:36 AM Noah Goldstein via Libc-alpha
> <libc-alpha@sourceware.org> wrote:
> >
> > No bug.
> >
> > This patch doubles the rep_movsb_threshold when using ERMS. Based on
> > benchmarks, the vector copy loop, especially now that it handles 4k
> > aliasing, is better for these medium-sized copies.
> >
> > On Skylake with ERMS:
> >
> > Size, Align1, Align2, dst>src,(rep movsb) / (vec copy)
> > 4096, 0, 0, 0, 0.975
> > 4096, 0, 0, 1, 0.953
> > 4096, 12, 0, 0, 0.969
> > 4096, 12, 0, 1, 0.872
> > 4096, 44, 0, 0, 0.979
> > 4096, 44, 0, 1, 0.83
> > 4096, 0, 12, 0, 1.006
> > 4096, 0, 12, 1, 0.989
> > 4096, 0, 44, 0, 0.739
> > 4096, 0, 44, 1, 0.942
> > 4096, 12, 12, 0, 1.009
> > 4096, 12, 12, 1, 0.973
> > 4096, 44, 44, 0, 0.791
> > 4096, 44, 44, 1, 0.961
> > 4096, 2048, 0, 0, 0.978
> > 4096, 2048, 0, 1, 0.951
> > 4096, 2060, 0, 0, 0.986
> > 4096, 2060, 0, 1, 0.963
> > 4096, 2048, 12, 0, 0.971
> > 4096, 2048, 12, 1, 0.941
> > 4096, 2060, 12, 0, 0.977
> > 4096, 2060, 12, 1, 0.949
> > 8192, 0, 0, 0, 0.85
> > 8192, 0, 0, 1, 0.845
> > 8192, 13, 0, 0, 0.937
> > 8192, 13, 0, 1, 0.939
> > 8192, 45, 0, 0, 0.932
> > 8192, 45, 0, 1, 0.927
> > 8192, 0, 13, 0, 0.621
> > 8192, 0, 13, 1, 0.62
> > 8192, 0, 45, 0, 0.53
> > 8192, 0, 45, 1, 0.516
> > 8192, 13, 13, 0, 0.664
> > 8192, 13, 13, 1, 0.659
> > 8192, 45, 45, 0, 0.593
> > 8192, 45, 45, 1, 0.575
> > 8192, 2048, 0, 0, 0.854
> > 8192, 2048, 0, 1, 0.834
> > 8192, 2061, 0, 0, 0.863
> > 8192, 2061, 0, 1, 0.857
> > 8192, 2048, 13, 0, 0.63
> > 8192, 2048, 13, 1, 0.629
> > 8192, 2061, 13, 0, 0.627
> > 8192, 2061, 13, 1, 0.62
> > ---
> > sysdeps/x86/dl-cacheinfo.h | 8 +++++---
> > sysdeps/x86/dl-tunables.list | 26 +++++++++++++++-----------
> > 2 files changed, 20 insertions(+), 14 deletions(-)
> >
> > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > index e6c94dfd02..2e43e67e4f 100644
> > --- a/sysdeps/x86/dl-cacheinfo.h
> > +++ b/sysdeps/x86/dl-cacheinfo.h
> > @@ -866,12 +866,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > /* NB: The REP MOVSB threshold must be greater than VEC_SIZE * 8. */
> > unsigned int minimum_rep_movsb_threshold;
> > #endif
> > - /* NB: The default REP MOVSB threshold is 2048 * (VEC_SIZE / 16). */
> > + /* NB: The default REP MOVSB threshold is 4096 * (VEC_SIZE / 16) for
> > + VEC_SIZE == 64 or 32. For VEC_SIZE == 16, the default REP MOVSB
> > + threshold is 2048 * (VEC_SIZE / 16). */
> > unsigned int rep_movsb_threshold;
> > if (CPU_FEATURE_USABLE_P (cpu_features, AVX512F)
> > && !CPU_FEATURE_PREFERRED_P (cpu_features, Prefer_No_AVX512))
> > {
> > - rep_movsb_threshold = 2048 * (64 / 16);
> > + rep_movsb_threshold = 4096 * (64 / 16);
> > #if HAVE_TUNABLES
> > minimum_rep_movsb_threshold = 64 * 8;
> > #endif
> > @@ -879,7 +881,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > else if (CPU_FEATURE_PREFERRED_P (cpu_features,
> > AVX_Fast_Unaligned_Load))
> > {
> > - rep_movsb_threshold = 2048 * (32 / 16);
> > + rep_movsb_threshold = 4096 * (32 / 16);
> > #if HAVE_TUNABLES
> > minimum_rep_movsb_threshold = 32 * 8;
> > #endif
> > diff --git a/sysdeps/x86/dl-tunables.list b/sysdeps/x86/dl-tunables.list
> > index dd6e1d65c9..419313804d 100644
> > --- a/sysdeps/x86/dl-tunables.list
> > +++ b/sysdeps/x86/dl-tunables.list
> > @@ -32,17 +32,21 @@ glibc {
> > }
> > x86_rep_movsb_threshold {
> > type: SIZE_T
> > - # Since there is overhead to set up REP MOVSB operation, REP MOVSB
> > - # isn't faster on short data. The memcpy micro benchmark in glibc
> > - # shows that 2KB is the approximate value above which REP MOVSB
> > - # becomes faster than SSE2 optimization on processors with Enhanced
> > - # REP MOVSB. Since larger register size can move more data with a
> > - # single load and store, the threshold is higher with larger register
> > - # size. Note: Since the REP MOVSB threshold must be greater than 8
> > - # times of vector size and the default value is 2048 * (vector size
> > - # / 16), the default value and the minimum value must be updated at
> > - # run-time. NB: Don't set the default value since we can't tell if
> > - # the tunable value is set by user or not [BZ #27069].
> > + # Since there is overhead to set up REP MOVSB operation, REP
> > + # MOVSB isn't faster on short data. The memcpy micro benchmark
> > + # in glibc shows that 2KB is the approximate value above which
> > + # REP MOVSB becomes faster than SSE2 optimization on processors
> > + # with Enhanced REP MOVSB. Since larger register size can move
> > + # more data with a single load and store, the threshold is
> > + # higher with larger register size. Micro benchmarks show AVX
> > + # REP MOVSB becomes faster approximately at 8KB. The AVX512
> > + # threshold is extrapolated to 16KB. For machines with FSRM the
> > + # threshold is universally set at 2112 bytes. Note: Since the
> > + # REP MOVSB threshold must be greater than 8 times of vector
> > + # size and the default value is 4096 * (vector size / 16), the
> > + # default value and the minimum value must be updated at
> > + # run-time. NB: Don't set the default value since we can't tell
> > + # if the tunable value is set by user or not [BZ #27069].
> > minval: 1
> > }
> > x86_rep_stosb_threshold {
> > --
> > 2.25.1
> >
>
> LGTM.
>
> Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
>
> Thanks.
>
> --
> H.J.
I would like to backport this patch to release branches.
Any comments or objections?
--Sunil
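
For anyone evaluating the backport, the arithmetic behind the new
defaults can be sketched as follows. This is a minimal illustration of
what the dl_init_cacheinfo change computes, not glibc code; the
function names here are hypothetical:

```python
def rep_movsb_threshold(vec_size):
    """Default REP MOVSB threshold in bytes for a vector width (bytes).

    Per the patch: AVX512 (64-byte) and AVX (32-byte) paths use
    4096 * (VEC_SIZE / 16); the SSE2 (16-byte) path keeps the old
    2048 * (VEC_SIZE / 16).
    """
    base = 4096 if vec_size >= 32 else 2048
    return base * (vec_size // 16)

def minimum_rep_movsb_threshold(vec_size):
    """Minimum allowed tunable value: 8 times the vector size."""
    return 8 * vec_size

# AVX512: 16 KB (the "extrapolated" value), AVX: 8 KB, SSE2: 2 KB.
for vec in (64, 32, 16):
    print(vec, rep_movsb_threshold(vec), minimum_rep_movsb_threshold(vec))
```

This matches the figures in the tunables comment: the AVX crossover
observed around 8KB and the 16KB extrapolation for AVX512, while the
SSE2 default stays at 2KB.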