Date: Wed, 04 Sep 2013 07:27:00 -0000
From: Siddhesh Poyarekar
To: "Carlos O'Donell"
Cc: Ondřej Bílka, Will Newton, "libc-ports@sourceware.org", Patch Tracking
Subject: Re: [PATCH] sysdeps/arm/armv7/multiarch/memcpy_impl.S: Improve performance.
Message-ID: <20130904073008.GA4306@spoyarek.pnq.redhat.com>
References: <5220D30B.9080306@redhat.com> <5220F1F0.80501@redhat.com> <52260BD0.6090805@redhat.com> <20130903173710.GA2028@domone.kolej.mff.cuni.cz> <522621E2.6020903@redhat.com> <20130903185721.GA3876@domone.kolej.mff.cuni.cz> <5226354D.8000006@redhat.com>
In-Reply-To: <5226354D.8000006@redhat.com>

On Tue, Sep 03, 2013 at 03:15:25PM -0400, Carlos O'Donell wrote:
> I agree. The eventual goal of the project is to have some kind of
> whole system benchmarking that allows users to feed in their profiles
> and allow us as developers to see what users are doing with our library.
>
> Just like CPU designers feed in a whole distribution of applications
> and look at the probability of instruction selection and tweak instruction
> to microcode mappings.
>
> I am willing to accept a certain error in the process as long as I know
> we are headed in the right direction. If we all disagree about the
> direction we are going in then we should talk about it.
>
> I see:
>
> microbenchmarks -> whole system benchmarks -> profile driven optimizations

I've mentioned this before: microbenchmarks are not a stepping stone to
whole system benchmarks, and they don't replace them. We need to work on
both in parallel because they have different goals.

A microbenchmark would have parameters such as alignment, size and cache
pressure to determine how an implementation scales. These are generic
numbers (i.e. they're not tied to specific high level workloads) that a
developer can use to design their programs. Whole system benchmarks,
however, work at a different level. They would give an average case
number that describes how a specific recipe impacts the performance of a
set of programs. An administrator would use these to tweak the system
for the workload.
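A minimal sketch of such a parameter sweep, assuming an arbitrary buffer
size, iteration count and a hypothetical bench_one helper (this is not
the glibc benchtests harness), could look like this:

/* Illustrative sketch only: sweep copy size and alignment and report a
   rough per-call cost.  Buffer size, iteration count and the timing
   method are arbitrary choices made for the example.  */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE (1 << 20)
#define ITERS 10000

static double
bench_one (char *dst, const char *src, size_t size)
{
  struct timespec start, end;
  clock_gettime (CLOCK_MONOTONIC, &start);
  for (int i = 0; i < ITERS; i++)
    memcpy (dst, src, size);
  clock_gettime (CLOCK_MONOTONIC, &end);
  return (end.tv_sec - start.tv_sec) * 1e9
         + (end.tv_nsec - start.tv_nsec);
}

int
main (void)
{
  char *src = malloc (BUF_SIZE);
  char *dst = malloc (BUF_SIZE);
  if (src == NULL || dst == NULL)
    return 1;
  memset (src, 1, BUF_SIZE);

  /* Vary size and alignment independently so the aligned and unaligned
     cases can be compared for each size.  */
  for (size_t size = 16; size <= 65536; size *= 2)
    for (size_t align = 0; align < 16; align += 4)
      {
        double ns = bench_one (dst + align, src + align, size);
        printf ("size=%6zu align=%2zu: %.1f ns/call\n",
                size, align, ns / ITERS);
      }

  free (src);
  free (dst);
  return 0;
}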
> I would be happy to accept a patch that does:
> * Shows the benchmark numbers.
> * Explains relevant factors not caught by the benchmark that affect
> performance, what they are, and why the patch should go in.
>
> My goal is to increase the quality of the written rationales for
> performance related submissions.

Agreed. In fact, this should go in as a large comment in the
implementation itself. Someone mentioned in the past (was it Torvald?)
that every assembly implementation we write should have comments as
verbose as possible, so that there is no ambiguity about the rationale
for selecting specific instruction sequences over others.

> >> If we have N tests and they produce N numbers, for a given target,
> >> for a given device, for a given workload, there is a set of importance
> >> weights on N that should give you some kind of relevance.
> >>
> > You are jumping to case when we will have these weights. Problematic
> > part is getting those.
>
> I agree.
>
> It's hard to know the weights without having an intuitive understanding
> of the applications you're running on your system and what's relevant
> for their performance.

1. Assume aligned input. Nothing should take (any noticeable)
   performance away from aligned copies/moves.
2. Scale with size.
3. Provide acceptable performance for unaligned sizes without penalizing
   the aligned case.
4. Measure the effect of dcache pressure on function performance.
5. Measure the effect of icache pressure on function performance.

Depending on the actual cost of cache misses on different processors,
the icache/dcache miss cost would carry either a higher or a lower
weight, but for 1-3 I'd go in that order of priority, with little
concern for the unaligned cases.

Siddhesh
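A minimal sketch of how point 4 (dcache pressure) could be approximated,
assuming an arbitrary THRASH_SIZE chosen to exceed the last-level cache
and a hypothetical time_copies helper; the pressured run deliberately
includes the eviction loop's overhead, so only relative differences
between implementations would be meaningful:

/* Illustrative sketch only, not an agreed-upon methodology: time memcpy
   with a warm cache, then again after touching a large buffer between
   calls so the source and destination are evicted.  */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define COPY_SIZE 4096
#define THRASH_SIZE (8 << 20)   /* assumed larger than the last-level cache */
#define ITERS 2000

static char src[COPY_SIZE], dst[COPY_SIZE];
static volatile char sink;      /* keeps the eviction loop from being optimized out */

static double
time_copies (const char *thrash)
{
  struct timespec t0, t1;
  clock_gettime (CLOCK_MONOTONIC, &t0);
  for (int i = 0; i < ITERS; i++)
    {
      if (thrash)
        /* Touch one byte per cache line of the large buffer so src/dst
           are pushed out before each copy.  */
        for (size_t j = 0; j < THRASH_SIZE; j += 64)
          sink = thrash[j];
      memcpy (dst, src, COPY_SIZE);
    }
  clock_gettime (CLOCK_MONOTONIC, &t1);
  return (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
}

int
main (void)
{
  char *thrash = malloc (THRASH_SIZE);
  if (thrash == NULL)
    return 1;
  memset (thrash, 1, THRASH_SIZE);

  double warm = time_copies (NULL);
  double cold = time_copies (thrash);
  printf ("warm cache: %.1f ns/copy\n", warm / ITERS);
  printf ("under dcache pressure (includes eviction overhead): %.1f ns/copy\n",
          cold / ITERS);

  free (thrash);
  return 0;
}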