public inbox for libc-alpha@sourceware.org
From: sajan.karumanchi@amd.com
To: hjl.tools@gmail.com
Cc: Sajan Karumanchi <sajan.karumanchi@amd.com>,
	libc-alpha@sourceware.org, carlos@redhat.com, fweimer@redhat.com,
	Premachandra Mallappa <premachandra.mallappa@amd.com>
Subject: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
Date: Wed, 13 Jan 2021 20:48:52 +0530
Message-ID: <20210113151852.431112-1-sajan.karumanchi@amd.com>
In-Reply-To: <CAMe9rOq2yFS5k=dgmUX8h=d-6YXKUdjipUqE6C7-Z6T8isSZeg@mail.gmail.com>

From: Sajan Karumanchi <sajan.karumanchi@amd.com>

In the process of optimizing memcpy for AMD machines, we found that
vector move operations outperform Enhanced REP MOVSB for data
transfers above the L2 cache size on Zen3 architectures.
To handle this case, we add an upper-bound parameter for Enhanced
REP MOVSB: '__x86_max_rep_movsb_threshold'.
Based on the large-bench results, we set this parameter to the L2
cache size on AMD machines, starting from the Zen3 architecture,
which is the first to support the ERMS feature.
For architectures other than AMD, it is set to the computed value of
the non-temporal threshold parameter.
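
The selection logic this patch adds to init_cacheinfo boils down to
the following (a minimal C sketch, not the literal glibc code;
'is_amd' and 'has_erms' are illustrative stand-ins for the
cpu_features checks, and 'core' is the detected L2 cache size):

  long int __x86_max_rep_movsb_threshold;

  void
  set_max_rep_movsb_threshold (int is_amd, int has_erms,
                               long int core,
                               long int non_temporal_threshold)
  {
    if (is_amd && has_erms)
      /* On AMD, ERMS first appears with Zen3; cap REP MOVSB at the
         L2 cache size.  */
      __x86_max_rep_movsb_threshold = core;
    else
      /* On other architectures, REP MOVSB remains usable up to the
         non-temporal store threshold, as before.  */
      __x86_max_rep_movsb_threshold = non_temporal_threshold;
  }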

Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
---
 sysdeps/x86/cacheinfo.h                            | 14 ++++++++++++++
 .../x86_64/multiarch/memmove-vec-unaligned-erms.S  |  8 ++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
index 00d2d8a52a..f20b7fea4f 100644
--- a/sysdeps/x86/cacheinfo.h
+++ b/sysdeps/x86/cacheinfo.h
@@ -45,6 +45,9 @@ long int __x86_rep_movsb_threshold attribute_hidden = 2048;
 /* Threshold to use Enhanced REP STOSB.  */
 long int __x86_rep_stosb_threshold attribute_hidden = 2048;
 
+/* Threshold to stop using Enhanced REP MOVSB.  */
+long int __x86_max_rep_movsb_threshold attribute_hidden;
+
 static void
 get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
 		       long int core)
@@ -351,6 +354,11 @@ init_cacheinfo (void)
 	      /* Account for exclusive L2 and L3 caches.  */
 	      shared += core;
             }
+	  /* The ERMS feature is implemented from the Zen3 architecture
+	     onwards and performs poorly for data above the L2 cache size.
+	     Hence, add an upper-bound threshold parameter to limit the use
+	     of Enhanced REP MOVSB and set its value to the L2 cache size.  */
+	  __x86_max_rep_movsb_threshold = core;
       }
     }
 
@@ -423,6 +431,12 @@ init_cacheinfo (void)
   else
     __x86_rep_movsb_threshold = rep_movsb_threshold;
 
+  /* For architectures other than AMD, set the upper bound of ERMS to
+     the computed value of the non-temporal threshold.  */
+  if (cpu_features->basic.kind != arch_kind_amd)
+    __x86_max_rep_movsb_threshold = __x86_shared_non_temporal_threshold;
+
+
 # if HAVE_TUNABLES
   __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
 # endif
diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
index 0980c95378..c7e75bfbda 100644
--- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
@@ -30,7 +30,10 @@
       load and aligned store.  Load the last 4 * VEC and first VEC
       before the loop and store them after the loop to support
       overlapping addresses.
-   6. If size >= __x86_shared_non_temporal_threshold and there is no
+   6. On machines with the ERMS feature, if size is greater than or
+      equal to __x86_rep_movsb_threshold and less than
+      __x86_max_rep_movsb_threshold, then REP MOVSB will be used.
+   7. If size >= __x86_shared_non_temporal_threshold and there is no
       overlap between destination and source, use non-temporal store
       instead of aligned store.  */
 
@@ -240,7 +243,8 @@ L(return):
 	ret
 
 L(movsb):
-	cmp	__x86_shared_non_temporal_threshold(%rip), %RDX_LP
+	/* Avoid REP MOVSB for sizes above max threshold.  */
+	cmp	__x86_max_rep_movsb_threshold(%rip), %RDX_LP
 	jae	L(more_8x_vec)
 	cmpq	%rsi, %rdi
 	jb	1f
-- 
2.25.1

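To make the new control flow easier to follow, the size-regime
dispatch described in steps 6 and 7 of the comment in
memmove-vec-unaligned-erms.S can be sketched in C as below.  The
helper functions and overlap checks are hypothetical names for the
corresponding assembly paths, not real glibc interfaces:

  #include <stddef.h>

  extern long int __x86_rep_movsb_threshold;
  extern long int __x86_max_rep_movsb_threshold;
  extern long int __x86_shared_non_temporal_threshold;

  /* Hypothetical stand-ins for the assembly code paths.  */
  extern void *rep_movsb_copy (void *, const void *, size_t);
  extern void *non_temporal_copy (void *, const void *, size_t);
  extern void *vec_loop_copy (void *, const void *, size_t);
  extern int movsb_would_overlap (const void *, const void *, size_t);
  extern int overlaps (const void *, const void *, size_t);

  void *
  memmove_dispatch_sketch (void *dst, const void *src, size_t size)
  {
    /* Step 6: medium-sized copies use REP MOVSB.  This patch bounds
       them above by __x86_max_rep_movsb_threshold instead of
       __x86_shared_non_temporal_threshold.  */
    if (size >= (size_t) __x86_rep_movsb_threshold
        && size < (size_t) __x86_max_rep_movsb_threshold
        && !movsb_would_overlap (dst, src, size))
      return rep_movsb_copy (dst, src, size);

    /* Step 7: very large, non-overlapping copies use non-temporal
       stores.  */
    if (size >= (size_t) __x86_shared_non_temporal_threshold
        && !overlaps (dst, src, size))
      return non_temporal_copy (dst, src, size);

    /* Everything else takes the aligned vector-move loop.  */
    return vec_loop_copy (dst, src, size);
  }

With this change, a copy on AMD Zen3 that is larger than the L2 cache
falls through from step 6 to the vector loop (or to non-temporal
stores once it crosses that threshold) instead of staying on
REP MOVSB.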


Thread overview: 15+ messages
2021-01-11 10:43 sajan.karumanchi
2021-01-11 17:27 ` H.J. Lu
2021-01-12 18:48   ` [PATCH 1/1] " sajan.karumanchi
2021-01-12 20:04     ` H.J. Lu
2021-01-13 15:18       ` sajan.karumanchi [this message]
2021-01-13 15:26         ` [PATCH] " H.J. Lu
2021-01-12 18:56   ` Karumanchi, Sajan
  -- strict thread matches above, loose matches on Subject: below --
2021-01-07 16:22 sajan.karumanchi
2021-01-08 14:03 ` Florian Weimer
2021-01-11 10:46   ` Karumanchi, Sajan
2021-01-18 17:07     ` Florian Weimer
2021-01-18 17:10       ` Adhemerval Zanella
2021-01-22 10:18       ` sajan.karumanchi
2021-02-01 17:05         ` H.J. Lu
2022-04-27 23:38           ` Sunil Pandey
