From: "H.J. Lu"
Date: Mon, 11 Jan 2021 09:27:13 -0800
Subject: Re: [PATCH] x86: Adding an upper bound for Enhanced REP MOVSB.
To: Sajan Karumanchi
Cc: GNU C Library, "Carlos O'Donell", Florian Weimer,
 "Mallappa, Premachandra"
In-Reply-To: <20210111104301.205094-1-sajan.karumanchi@amd.com>

On Mon, Jan 11, 2021 at 2:43 AM wrote:
>
> From: Sajan Karumanchi
>
> In the process of optimizing memcpy for AMD machines, we have found the
> vector move operations are outperforming enhanced REP MOVSB for data
> transfers above the L2 cache size on Zen3 architectures.
> To handle this use case, we are adding an upper bound parameter on
> enhanced REP MOVSB: '__x86_max_rep_movsb_threshold'.
> As per large-bench results, we are configuring this parameter to the
> L2 cache size for AMD machines and applicable from Zen3 architecture
> supporting the ERMS feature.
> For architectures other than AMD, it is the computed value of
> non-temporal threshold parameter.
>
> Reviewed-by: Premachandra Mallappa
> ---
>  sysdeps/x86/cacheinfo.h                            | 14 ++++++++++++++
>  .../x86_64/multiarch/memmove-vec-unaligned-erms.S  |  2 +-
>  2 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> index 00d2d8a52a..00c3a823f0 100644
> --- a/sysdeps/x86/cacheinfo.h
> +++ b/sysdeps/x86/cacheinfo.h
> @@ -45,6 +45,9 @@ long int __x86_rep_movsb_threshold attribute_hidden = 2048;
>  /* Threshold to use Enhanced REP STOSB.  */
>  long int __x86_rep_stosb_threshold attribute_hidden = 2048;
>
> +/* Threshold to stop using Enhanced REP MOVSB.  */
> +long int __x86_max_rep_movsb_threshold attribute_hidden = 512 * 1024;

The default should be the same as __x86_shared_non_temporal_threshold.

>  static void
>  get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>                         long int core)
> @@ -351,6 +354,11 @@ init_cacheinfo (void)
>           /* Account for exclusive L2 and L3 caches.  */
>           shared += core;
>         }
> +      /* ERMS feature is implemented from Zen3 architecture and it is
> +        performing poorly for data above L2 cache size. Henceforth, adding
> +        an upper bound threshold parameter to limit the usage of Enhanced
> +        REP MOVSB operations and setting its value to L2 cache size.  */
> +      __x86_max_rep_movsb_threshold = core;
>      }
>  }
>
> @@ -423,6 +431,12 @@ init_cacheinfo (void)
>    else
>      __x86_rep_movsb_threshold = rep_movsb_threshold;
>
> +  /* Setting the upper bound of ERMS to the known default value of
> +     non-temporal threshold for architectures other than AMD.  */
> +  if (cpu_features->basic.kind != arch_kind_amd)
> +    __x86_max_rep_movsb_threshold = __x86_shared_non_temporal_threshold;
> +
> +
>  # if HAVE_TUNABLES
>    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
>  # endif
> diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> index 0980c95378..5682e7a9fd 100644
> --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> @@ -240,7 +240,7 @@ L(return):
>         ret
>
>  L(movsb):
> -       cmp     __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> +       cmp     __x86_max_rep_movsb_threshold(%rip), %RDX_LP

Please add some comments here and update the algorithm at the beginning
of the function.

>         jae     L(more_8x_vec)
>         cmpq    %rsi, %rdi
>         jb      1f
> --
> 2.25.1
>

--
H.J.
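
A note for readers of this thread: the strategy choice the patch modifies
can be sketched in C. This is a simplified illustration, not the actual
glibc implementation -- the real dispatch is the assembly quoted above in
memmove-vec-unaligned-erms.S, and the thresholds are the __x86_* globals
computed in init_cacheinfo. The default constants and the helper names
(copy_rep_movsb, use_rep_movsb) below are hypothetical stand-ins.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Illustrative stand-ins for the glibc globals discussed in the patch;
   the real values are computed at startup in init_cacheinfo.  */
static long int rep_movsb_threshold = 2048;           /* __x86_rep_movsb_threshold */
static long int max_rep_movsb_threshold = 512 * 1024; /* __x86_max_rep_movsb_threshold */

/* REP MOVSB copies RCX bytes from (RSI) to (RDI); on ERMS hardware the
   microcode uses wide internal moves.  x86-64 GCC/Clang inline asm.  */
static void
copy_rep_movsb (void *dst, const void *src, size_t n)
{
  __asm__ volatile ("rep movsb"
                    : "+D" (dst), "+S" (src), "+c" (n)
                    :
                    : "memory");
}

/* The size-based choice the patch changes: below rep_movsb_threshold,
   vector moves win; between the two thresholds, REP MOVSB wins; above
   the new upper bound (the L2 size on Zen3), vector and non-temporal
   moves win again, which is why the cmp in L(movsb) now tests
   __x86_max_rep_movsb_threshold instead of the non-temporal threshold.  */
static int
use_rep_movsb (size_t n)
{
  return n >= (size_t) rep_movsb_threshold
         && n < (size_t) max_rep_movsb_threshold;
}

int
main (void)
{
  char src[64] = "enhanced rep movsb demo";
  char dst[64] = { 0 };

  if (use_rep_movsb (sizeof src))
    copy_rep_movsb (dst, src, sizeof src);
  else
    memcpy (dst, src, sizeof src);  /* stand-in for the vector path */

  printf ("%s\n", dst);
  return 0;
}

With these defaults, a 1 MiB copy on a Zen3-like configuration falls
outside the REP MOVSB window (1 MiB > 512 KiB), so the quoted cmp/jae
pair branches to L(more_8x_vec) and the copy is done with vector or
non-temporal moves instead.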