Date: Fri, 21 Dec 2012 10:36:00 -0000
From: Jan Hubicka
To: Richard Biener
Cc: Jan Hubicka, Andrew Pinski, Rong Xu, GCC Patches, David Li, reply@codereview.appspotmail.com
Subject: Re: [google 4.7] atomic update of profile counters (issue6965050)
Message-ID: <20121221103628.GB7055@kam.mff.cuni.cz>
References: <20121219200828.73DB9106927@rong.mtv.corp.google.com>
 <20121220162054.GA26643@atrey.karlin.mff.cuni.cz>
 <20121221091338.GC15548@kam.mff.cuni.cz>

> On Fri, Dec 21, 2012 at 10:13 AM, Jan Hubicka wrote:
> >> On Thu, Dec 20, 2012 at 8:20 AM, Jan Hubicka wrote:
> >> >> On Wed, Dec 19, 2012 at 4:29 PM, Andrew Pinski wrote:
> >> >> >
> >> >> > On Wed, Dec 19, 2012 at 12:08 PM, Rong Xu wrote:
> >> >> > > Hi,
> >> >> > >
> >> >> > > This patch adds support for atomic update of the profile counters.
> >> >> > > Tested with google internal benchmarks and fdo kernel build.
> >> >> >
> >> >> > I think you should use the __atomic_ functions instead of __sync_
> >> >> > functions as they allow better performance for simple counters as you
> >> >> > can use __ATOMIC_RELAXED.
> >> >>
> >> >> You are right. I think __ATOMIC_RELAXED should be OK here.
> >> >> Thanks for the suggestion.
> >> >>
> >> >> >
> >> >> > And this would be useful for the trunk also. I was going to implement
> >> >> > this exact thing this week but some other important stuff came up.
> >> >>
> >> >> I'll post a trunk patch later.
> >> >
> >> > Yes, I like that patch, too. Even if the costs are quite high (and this is why
> >> > atomic updates were sort of voted down in the past), the alternative of using TLS
> >> > has problems with too much per-thread memory.
> >>
> >> Actually sometimes (on some processors) atomic increments are cheaper
> >> than doing a regular increment, mainly because there is an
> >> instruction which can handle it in the L2 cache rather than populating
> >> the L1. Octeon is one such processor where this is true.
> >
> > One reason for large divergence may be the fact that we optimize the counter
> > update code. Perhaps declaring counters volatile will prevent load/store motion
> > and reduce the racing, too.
>
> Well, that will make it slower, too. The best benchmark to check is tramp3d
> for all this stuff. I remember that ICC, when it had a function call for each
> counter update, was about 100000x slower instrumented than w/o instrumentation
> (that is, I never waited long enough to make it finish even one iteration ...).
>
> Thus, it's very important that counter updates are subject to loop
> invariant / store motion (and SCEV const-prop)!
> GCC does a wonderful job here at the moment, please do not regress here.

Well, this feature is enabled by a user switch. I do not think we should change
the default behaviour...

Which makes me ask: the patch is very isolated (i.e. enabled by command line
only) and has obvious value for the end user. Would it be fine for stage3?

Honza
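
For readers following the thread, here is a minimal sketch of the two builtin
styles under discussion. The counter variable and wrapper functions below are
illustrative, not taken from the patch; only the __sync_/__atomic_ builtins
themselves are the documented GCC interfaces.

#include <stdint.h>

static int64_t counter;   /* stands in for a gcov profile counter */

/* Older style: __sync_ builtin, which implies full memory-barrier
   semantics -- stronger (and usually slower) than a counter needs.  */
void count_event_sync (void)
{
  __sync_fetch_and_add (&counter, 1);
}

/* Suggested style: __atomic_ builtin with relaxed ordering.  Still a
   single atomic read-modify-write, but with no ordering constraints,
   which is all a profile counter needs.  */
void count_event_relaxed (void)
{
  __atomic_fetch_add (&counter, 1, __ATOMIC_RELAXED);
}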
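
And a sketch of the optimization Richard is worried about losing: with a plain
(non-atomic, non-volatile) counter, loop invariant motion, store motion and
SCEV const-prop can replace the per-iteration update with a single addition of
the trip count, while an atomic or volatile update has to touch memory every
iteration. The loop and counter are made up for illustration.

#include <stdint.h>

static int64_t counter;   /* stands in for a gcov arc counter */

void hot_loop (int n)
{
  for (int i = 0; i < n; i++)
    counter++;            /* plain update: GCC can optimize this away ...   */

  /* ... to roughly:  if (n > 0) counter += n;  -- one store instead of n.
     With __atomic_fetch_add (&counter, 1, __ATOMIC_RELAXED), or with a
     volatile counter, no such hoisting is possible.  */
}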