From: "H.J. Lu"
Date: Fri, 12 May 2017 21:21:00 -0000
Subject: Re: memcpy performance regressions 2.19 -> 2.24(5)
To: Erich Elsen
Cc: "Carlos O'Donell", GNU C Library <libc-alpha@sourceware.org>

On Fri, May 12, 2017 at 1:21 PM, H.J. Lu wrote:
> On Fri, May 12, 2017 at 12:43 PM, Erich Elsen wrote:
>> HJ - yes, the benchmark still shows the same behavior.  I did have
>> to modify the build to add -std=c++11.
>
> I updated the hjl/x86/optimize branch with memcpy_benchmark2.cc
> to change its output for easy comparison.  Please take a look to see
> if it is still valid.
>
> H.J.
>
>> Carlos - Maybe the first step is to add a tunable that allows for
>> selection of the non-temporal-store size threshold without changing
>> the implementation that is selected.  I can work on submitting this
>> patch.

There is

/* The large memcpy micro benchmark in glibc shows that 6 times of
   shared cache size is the approximate value above which non-temporal
   store becomes faster.  */
__x86_shared_non_temporal_threshold = __x86_shared_cache_size * 6;

I did the measurement on an 8-core processor: 6 / 8 is 0.75 of the
shared cache per core.  But on processors with 56 cores, 6 / 56 may be
too small.
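To make that arithmetic concrete, here is a minimal standalone sketch
of a core-count-aware scaling.  This is illustration only, not existing
glibc code: the function name is made up, sysconf is only a crude
stand-in for however cacheinfo would count cores, and the 3/4 factor
simply preserves the measured 6 / 8 ratio:

#include <stdio.h>
#include <unistd.h>

/* Hypothetical sketch: keep the measured ratio of 0.75 shared-cache
   sizes per core instead of the fixed factor 6, which matches that
   ratio only on an 8-core part (6 / 8 == 0.75).  */
static unsigned long int
nt_threshold_sketch (unsigned long int shared_cache_size)
{
  /* Crude core count; real code would derive this from CPUID.  */
  long int ncores = sysconf (_SC_NPROCESSORS_ONLN);
  if (ncores <= 0)
    ncores = 1;
  /* 0.75 cache sizes per core: on 8 cores this reproduces the
     current shared_cache_size * 6; on 56 cores it gives * 42
     instead of leaving each core only 6 / 56 of the cache.  */
  return shared_cache_size * 3 / 4 * ncores;
}

int
main (void)
{
  /* Example: an 8 MiB shared cache; on 8 cores this prints 48 MiB.  */
  printf ("threshold = %lu bytes\n", nt_threshold_sketch (8UL << 20));
  return 0;
}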
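And since the right factor is ultimately an empirical question, a
distilled crossover measurement could be quite small.  A rough sketch,
not a real benchtests/ entry (the timing method, the 8 MiB stand-in
cache size, and the iteration count are all placeholder choices):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Placeholder sketch of the crossover measurement: report memcpy
   bandwidth at multiples of the shared cache size, so the point
   where non-temporal stores win shows up directly.  */
static double
copy_secs (char *dst, const char *src, size_t size, int iters)
{
  struct timespec t0, t1;
  clock_gettime (CLOCK_MONOTONIC, &t0);
  for (int i = 0; i < iters; i++)
    memcpy (dst, src, size);
  clock_gettime (CLOCK_MONOTONIC, &t1);
  return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}

int
main (void)
{
  const size_t cache = 8 * 1024 * 1024;   /* stand-in cache size */
  const int iters = 16;
  char *src = malloc (cache * 8);
  char *dst = malloc (cache * 8);
  if (src == NULL || dst == NULL)
    return 1;
  memset (src, 1, cache * 8);
  for (int mult = 1; mult <= 8; mult++)
    {
      size_t size = cache * mult;
      double secs = copy_secs (dst, src, size, iters);
      printf ("%dx cache: %6.2f GB/s\n", mult,
              (double) size * iters / secs / 1e9);
    }
  free (src);
  free (dst);
  return 0;
}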
H.J.

>> On Wed, May 10, 2017 at 7:17 PM, Carlos O'Donell wrote:
>>>
>>> On 05/10/2017 01:33 PM, H.J. Lu wrote:
>>> > On Tue, May 9, 2017 at 4:48 PM, Erich Elsen wrote:
>>> >> store is a net win even though it causes a 2-3x decrease in
>>> >> single-threaded performance for some processors?  Or how else is
>>> >> the decision about the threshold made?
>>> >
>>> > There is no perfect number to make everyone happy.  I am open
>>> > to suggestions to improve the compromise.
>>> >
>>> > H.J.
>>>
>>> I agree with H.J., there is a compromise to be made here.  Having a
>>> single process thrash the box by taking all of the memory bandwidth
>>> might be sensible for a microservice, but glibc has to default to
>>> something that works well on average.
>>>
>>> With the new tunables infrastructure we can start talking about ways
>>> in which a tunable could influence IFUNC selection, allowing users
>>> some choice in tweaking for single-threaded or multi-threaded,
>>> single-user or multi-user use, etc.
>>>
>>> What I would like to see as the output of any discussion is a set of
>>> microbenchmarks (benchtests/) added to glibc that are the
>>> distillation of whatever workload we're talking about here.  This is
>>> crucial to the community having a way to test from release to
>>> release that we don't regress performance.
>>>
>>> Unless you want to sign up to test your workload at every release,
>>> we need this kind of microbenchmark addition.  And microbenchmarks
>>> are dead easy to integrate with glibc, so most people should have
>>> no excuse.
>>>
>>> The hardware vendors and distros who want particular performance
>>> tests are putting such tests in place (representative of their
>>> users), and direct end-users who want particular performance are
>>> also adding tests.
>>>
>>> --
>>> Cheers,
>>> Carlos.
>>
>
> --
> H.J.

--
H.J.