From: Noah Goldstein <goldstein.w.n@gmail.com>
To: "H.J. Lu" <hjl.tools@gmail.com>
Cc: GNU C Library <libc-alpha@sourceware.org>,
"Carlos O'Donell" <carlos@systemhalted.org>
Subject: Re: [PATCH v1 5/5] x86: Double size of ERMS rep_movsb_threshold in dl-cacheinfo.h
Date: Sat, 6 Nov 2021 12:38:09 -0500
Message-ID: <CAFUsyf+QgxYWhR0NZTMos0kft=rcQm0dSyHcc00RfUHqUWa3Jw@mail.gmail.com>
In-Reply-To: <CAMe9rOqQRPu-PPS-agq-9y1BbD41ut+wZ7R05ud+yjEeYtfUDw@mail.gmail.com>
On Sat, Nov 6, 2021 at 7:05 AM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Fri, Nov 5, 2021 at 9:39 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Fri, Nov 5, 2021 at 9:32 PM H.J. Lu <hjl.tools@gmail.com> wrote:
> > >
> > > On Mon, Nov 01, 2021 at 12:49:52AM -0500, Noah Goldstein wrote:
> > > > No bug.
> > > >
> > > > This patch doubles the rep_movsb_threshold when using ERMS. Based on
> > > > benchmarks, the vector copy loop, especially now that it handles 4k
> > > > aliasing, is better for these medium-sized copies.
> > > >
> > > > On Skylake with ERMS:
> > > >
> > > > Size, Align1, Align2, dst>src, (rep movsb) / (vec copy)
> > > > 4096, 0, 0, 0, 0.975
> > > > 4096, 0, 0, 1, 0.953
> > > > 4096, 12, 0, 0, 0.969
> > > > 4096, 12, 0, 1, 0.872
> > > > 4096, 44, 0, 0, 0.979
> > > > 4096, 44, 0, 1, 0.83
> > > > 4096, 0, 12, 0, 1.006
> > > > 4096, 0, 12, 1, 0.989
> > > > 4096, 0, 44, 0, 0.739
> > > > 4096, 0, 44, 1, 0.942
> > > > 4096, 12, 12, 0, 1.009
> > > > 4096, 12, 12, 1, 0.973
> > > > 4096, 44, 44, 0, 0.791
> > > > 4096, 44, 44, 1, 0.961
> > > > 4096, 2048, 0, 0, 0.978
> > > > 4096, 2048, 0, 1, 0.951
> > > > 4096, 2060, 0, 0, 0.986
> > > > 4096, 2060, 0, 1, 0.963
> > > > 4096, 2048, 12, 0, 0.971
> > > > 4096, 2048, 12, 1, 0.941
> > > > 4096, 2060, 12, 0, 0.977
> > > > 4096, 2060, 12, 1, 0.949
> > > > 8192, 0, 0, 0, 0.85
> > > > 8192, 0, 0, 1, 0.845
> > > > 8192, 13, 0, 0, 0.937
> > > > 8192, 13, 0, 1, 0.939
> > > > 8192, 45, 0, 0, 0.932
> > > > 8192, 45, 0, 1, 0.927
> > > > 8192, 0, 13, 0, 0.621
> > > > 8192, 0, 13, 1, 0.62
> > > > 8192, 0, 45, 0, 0.53
> > > > 8192, 0, 45, 1, 0.516
> > > > 8192, 13, 13, 0, 0.664
> > > > 8192, 13, 13, 1, 0.659
> > > > 8192, 45, 45, 0, 0.593
> > > > 8192, 45, 45, 1, 0.575
> > > > 8192, 2048, 0, 0, 0.854
> > > > 8192, 2048, 0, 1, 0.834
> > > > 8192, 2061, 0, 0, 0.863
> > > > 8192, 2061, 0, 1, 0.857
> > > > 8192, 2048, 13, 0, 0.63
> > > > 8192, 2048, 13, 1, 0.629
> > > > 8192, 2061, 13, 0, 0.627
> > > > 8192, 2061, 13, 1, 0.62
> > > > ---
> > > > sysdeps/x86/dl-cacheinfo.h | 9 ++++++---
> > > > 1 file changed, 6 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > index e6c94dfd02..712b7c7fd0 100644
> > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > @@ -871,7 +871,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > if (CPU_FEATURE_USABLE_P (cpu_features, AVX512F)
> > > > && !CPU_FEATURE_PREFERRED_P (cpu_features, Prefer_No_AVX512))
> > > > {
> > > > - rep_movsb_threshold = 2048 * (64 / 16);
> > > > + rep_movsb_threshold = 4096 * (64 / 16);
> > >
> > > Please also update the default of x86_rep_stosb_threshold in
> >
> > Do you know what to set it at?
>
> Oops. I meant
Ah, fixed.
>
> x86_rep_movsb_threshold {
> type: SIZE_T
> # Since there is overhead to set up REP MOVSB operation, REP MOVSB
> # isn't faster on short data. The memcpy micro benchmark in glibc
> # shows that 2KB is the approximate value above which REP MOVSB
> # becomes faster than SSE2 optimization on processors with Enhanced
> # REP MOVSB. Since larger register size can move more data with a
> # single load and store, the threshold is higher with larger register
> # size. Note: Since the REP MOVSB threshold must be greater than 8
> # times of vector size and the default value is 2048 * (vector size
>
> ^^^^^^^
> # / 16), the default value and the minimum value must be updated at
> # run-time. NB: Don't set the default value since we can't tell if
> # the tunable value is set by user or not [BZ #27069].
> minval: 1
> }
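For anyone following along, here is a small sketch (mine, not glibc code) of how the post-patch default and minimum values fall out of the vector size in dl-cacheinfo.h; the function name is made up for illustration:

```python
# Sketch of the dl-cacheinfo.h threshold derivation after this patch.
# The default is 4096 * (VEC_SIZE / 16) (previously 2048 * (VEC_SIZE / 16)),
# and the minimum is 8 * VEC_SIZE, matching the hunks below.
def rep_movsb_defaults(vec_size):
    """vec_size in bytes: 16 (SSE2), 32 (AVX), or 64 (AVX-512)."""
    default = 4096 * (vec_size // 16)  # doubled from 2048 * (vec_size // 16)
    minimum = 8 * vec_size             # threshold must exceed 8 * VEC_SIZE
    return default, minimum

print(rep_movsb_defaults(16))  # (4096, 128)
print(rep_movsb_defaults(32))  # (8192, 256)
print(rep_movsb_defaults(64))  # (16384, 512)
```

Note that the FSRM case below still overrides the result with a flat 2112, so this derivation only applies when FSRM is not usable.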
>
> > I haven't tested recently, but last time I checked, stosb was significantly
> > better than movsb even at smaller sizes. I think that warrants a separate
> > patch, as the numbers in this commit are for movsb and I don't think the
> > two are necessarily 1-1.
> >
> > >
> > > sysdeps/x86/dl-tunables.list
> > >
> > > > #if HAVE_TUNABLES
> > > > minimum_rep_movsb_threshold = 64 * 8;
> > > > #endif
> > > > @@ -879,14 +879,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > else if (CPU_FEATURE_PREFERRED_P (cpu_features,
> > > > AVX_Fast_Unaligned_Load))
> > > > {
> > > > - rep_movsb_threshold = 2048 * (32 / 16);
> > > > + rep_movsb_threshold = 4096 * (32 / 16);
> > > > #if HAVE_TUNABLES
> > > > minimum_rep_movsb_threshold = 32 * 8;
> > > > #endif
> > > > }
> > > > else
> > > > {
> > > > - rep_movsb_threshold = 2048 * (16 / 16);
> > > > + rep_movsb_threshold = 4096 * (16 / 16);
> > > > #if HAVE_TUNABLES
> > > > minimum_rep_movsb_threshold = 16 * 8;
> > > > #endif
> > > > @@ -896,6 +896,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > > if (CPU_FEATURE_USABLE_P (cpu_features, FSRM))
> > > > rep_movsb_threshold = 2112;
> > > >
> > > > +
> > > > +
> > > > +
> > >
> > > Please don't add these blank lines.
> > Fixed.
> >
> >
> > >
> > > > unsigned long int rep_movsb_stop_threshold;
> > > > /* ERMS feature is implemented from AMD Zen3 architecture and it is
> > > > performing poorly for data above L2 cache size. Henceforth, adding
> > > > --
> > > > 2.25.1
> > > >
> > >
> > > Thanks.
> > >
> > > H.J.
>
>
>
> --
> H.J.
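For anyone who wants to experiment with the threshold without rebuilding glibc, it can be overridden per process through the tunables mechanism. A sketch; the value and the benchmark binary are placeholders I picked, not recommendations:

```shell
# Override x86_rep_movsb_threshold for a single run. 1048576 (1 MiB) is
# an arbitrary example value, and ./bench-memcpy stands in for whatever
# program you want to measure; substitute your own.
GLIBC_TUNABLES=glibc.cpu.x86_rep_movsb_threshold=1048576 ./bench-memcpy
```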