From: Ondřej Bílka
To: Carlos O'Donell
Cc: Will Newton, libc-ports@sourceware.org, Patch Tracking, Siddhesh Poyarekar
Subject: Re: [PATCH] sysdeps/arm/armv7/multiarch/memcpy_impl.S: Improve performance.
Date: Tue, 03 Sep 2013 18:57:00 -0000
Message-ID: <20130903185721.GA3876@domone.kolej.mff.cuni.cz>
In-Reply-To: <522621E2.6020903@redhat.com>
References: <520894D5.7060207@linaro.org> <5220D30B.9080306@redhat.com>
 <5220F1F0.80501@redhat.com> <52260BD0.6090805@redhat.com>
 <20130903173710.GA2028@domone.kolej.mff.cuni.cz> <522621E2.6020903@redhat.com>

On Tue, Sep 03, 2013 at 01:52:34PM -0400, Carlos O'Donell wrote:
> On 09/03/2013 01:37 PM, Ondřej Bílka wrote:
> >> We have one, it's the glibc microbenchmark, and we want to expand it,
> >> otherwise when ACME comes with their patch for ARM and breaks performance
> >> for targets that Linaro cares about I have no way to reject the patch
> >> objectively :-)
> >>
> > Carlos, you are asking for the impossible. When you publish a benchmark,
> > people will try to maximize the benchmark number. After a certain point
> > that becomes possible only by shady accounting: moving part of the time
> > to a place where the benchmark will not measure it (for example by
> > making a function 4kB large; in the benchmark it fits into the
> > instruction cache, but that does not happen in reality).
>
> What is it that I'm asking that is impossible?
>
Having a static set of benchmarks that can tell whether an implementation
is an improvement. We are shooting at a moving target: architectures
change, the code we write reaches users with considerable delay, and the
factors we based our decision on will have changed in the meantime. Once
implementations reach a certain quality, which one is better depends on
the program using them. Until we can choose from profile feedback, we
will lose a few percent by having to use a single implementation.
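To make the cache example above concrete, here is a rough sketch of the
effect. Nothing in it comes from glibc's benchtests: the sizes, the
iteration counts and the evict() helper are arbitrary choices made for
the example, and it only evicts the data caches (evicting the
instruction cache portably from C is awkward), but the principle is the
same: a tight benchmark loop keeps everything hot, a real program does
not.

/* Rough illustration only, not glibc's benchtests.  Sizes, iteration
   counts and the evict() helper are arbitrary assumptions made for
   this example.  */
#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <string.h>
#include <time.h>

#define COPY_SIZE   192                 /* size of each copy */
#define HOT_ITERS   1000000
#define COLD_ITERS  2000
#define EVICT_SIZE  (8 * 1024 * 1024)   /* larger than a typical L2 */

static unsigned char src[COPY_SIZE], dst[COPY_SIZE];
static unsigned char evict_buf[EVICT_SIZE];

/* Call memcpy through a volatile function pointer so the compiler
   cannot recognise and simplify the calls in the timing loops.  */
static void *(*volatile do_copy) (void *, const void *, size_t) = memcpy;

static double
now_ns (void)
{
  struct timespec ts;
  clock_gettime (CLOCK_MONOTONIC, &ts);
  return ts.tv_sec * 1e9 + ts.tv_nsec;
}

/* Walk a large buffer so src/dst (and much else) fall out of the
   data caches before the next copy.  */
static void
evict (void)
{
  size_t i;
  for (i = 0; i < EVICT_SIZE; i += 64)
    evict_buf[i]++;
}

int
main (void)
{
  double t0, hot, cold = 0.0;
  int i;

  /* "Hot": back-to-back calls, the usual microbenchmark pattern.  */
  t0 = now_ns ();
  for (i = 0; i < HOT_ITERS; i++)
    do_copy (dst, src, COPY_SIZE);
  hot = (now_ns () - t0) / HOT_ITERS;

  /* "Cold": evict between calls and time each call on its own.  The
     per-call timing includes clock_gettime overhead, so treat the
     absolute numbers loosely; what matters is the relative gap.  */
  for (i = 0; i < COLD_ITERS; i++)
    {
      evict ();
      t0 = now_ns ();
      do_copy (dst, src, COPY_SIZE);
      cold += now_ns () - t0;
    }
  cold /= COLD_ITERS;

  printf ("hot:  %6.1f ns/copy\n", hot);
  printf ("cold: %6.1f ns/copy\n", cold);
  return 0;
}

The same memcpy gets two very different numbers, and two implementations
can easily rank differently depending on which column you look at. (On
older glibc, link with -lrt for clock_gettime.)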
> > Taking care of the common factors that can cause that is about ten
> > times more complex than whole-system benchmarking, and the analysis
> > will be quite difficult, as you will get twenty numbers and you will
> > need to decide which ones could make a real impact and which won't.
>
> Sorry, could you clarify this a bit more, exactly what is ten times
> more complex?
>
Having a benchmark suite that catches all the relevant factors that can
affect performance. Some are hard to quantify; for those we need to know
how an average program stresses the resources. Take instruction cache
usage: a function occupies cache lines, and we can accurately measure the
probability and cost of cache misses inside the function. What is hard to
estimate is how it affects the rest of the program. For that we would
need to know the average probability that a cache line will be referenced
again in the future.

> If we have N tests and they produce N numbers, for a given target,
> for a given device, for a given workload, there is a set of importance
> weights on N that should give you some kind of relevance.
>
You are jumping ahead to the case where we already have these weights.
The problematic part is getting them (a rough sketch of what I mean is at
the end of this mail).

> We should be able to come up with some kind of framework from which
> we can clearly say "this patch is better than this other patch", even
> if not automated, it should be possible to reason from the results,
> and that reasoning recorded as a discussion on this list.
>
What is possible is to say that a patch is significantly worse by some
criterion. There is a lot of gray area where the decision is unclear.

> >>> The key advantage of the cortex-strings framework is that it allows
> >>> graphing the results of benchmarks. Often changes to string function
> >>> performance can only really be analysed graphically as otherwise you
> >>> end up with a huge soup of numbers, some going up, some going down and
> >>> it is very hard to separate the signal from the noise.
> >>
> >> I disagree strongly. You *must* come up with a measurable answer and
> >> looking at a graph is never a solution I'm going to accept.
> >>
> > You can have that opinion.
> > Looking at performance graphs is the most powerful technique for
> > understanding performance. I got most of my improvements from
> > analyzing them.
>
> That is a different use for the graphs. I do not disagree that graphing
> is a *powerful* way to display information and using that information to
> produce a new routine is useful. What I disagree with is using such graphs
> to argue qualitatively that your patch is better than the existing
> implementation.
>
> There is always a quantitative way to say X is better than Y, but it
> requires breaking down your expectations and documenting them e.g.
> should be faster with X alignment on sizes from N bytes to M bytes, and
> then ranking based on those criteria.
>
> >> You need to statistically analyze the numbers, assign weights to ranges,
> >> and come up with some kind of number that evaluates the results based
> >> on *some* formula. That is the only way we are going to keep moving
> >> performance forward (against some kind of criteria).
> >>
> > Accurately assigning those weights is best done by taking a program,
> > running it and measuring the time. Without taking that into account the
> > weights will not tell us much, as you will likely just optimize cold
> > code at the expense of hot code.
>
> I don't disagree with you here.
>
> Cheers,
> Carlos.
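As a footnote to the "importance weights" part of this exchange, here is
a rough sketch of the kind of formula Carlos is asking for, and of the
failure mode I am worried about. Every number in it (the size classes,
the weights, the timings) is an invented placeholder; obtaining real
weights means profiling real programs, which is the hard part.

/* Rough illustration only.  One possible formula for collapsing N
   benchmark numbers into a single score: weight each size/alignment
   class by the share of memcpy time a profiled workload spends in it,
   and compare the weighted new/old ratio.  Every number below is an
   invented placeholder.  */
#include <stdio.h>

struct bench_class
{
  const char *name;   /* size/alignment class of the copy       */
  double weight;      /* share of memcpy time in the workload    */
  double old_ns;      /* ns/call, current implementation         */
  double new_ns;      /* ns/call, proposed implementation        */
};

static const struct bench_class classes[] =
{
  { "1-16B, aligned",    0.60,    6.0,    7.0 },
  { "17-64B, aligned",   0.25,    9.0,    9.5 },
  { "65-512B, aligned",  0.10,   30.0,   26.0 },
  { "513B-4KB",          0.04,  120.0,   95.0 },
  { ">=64KB",            0.01, 9000.0, 6500.0 },
};

int
main (void)
{
  size_t n = sizeof classes / sizeof classes[0];
  double naive = 0.0, weighted = 0.0;
  size_t i;

  for (i = 0; i < n; i++)
    {
      double ratio = classes[i].new_ns / classes[i].old_ns;
      naive += ratio / n;                     /* plain average of ratios */
      weighted += classes[i].weight * ratio;  /* workload-weighted       */
    }

  /* With these placeholder numbers the naive average says the new
     implementation is faster, while the workload-weighted score says
     it is slower: the "optimized cold code at the expense of hot code"
     outcome.  Below 1.0 means faster than the old code.  */
  printf ("naive average ratio:     %.3f\n", naive);
  printf ("workload-weighted ratio: %.3f\n", weighted);
  return 0;
}

Weighting by the share of time the workload spends in each class means
the score approximates the change in the workload's overall memcpy time,
rather than treating every benchmark cell as equally important.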