Date: Wed, 21 Jan 2015 09:02:00 -0000
From: Jakub Jelinek
To: Dmitry Vyukov
Cc: Mike Stump, Bernd Edlinger, "gcc-patches@gcc.gnu.org"
Subject: Re: [PATCH] Fix sporadic failure in g++.dg/tsan/aligned_vs_unaligned_race.C
Message-ID: <20150121083410.GL1746@tucnak.redhat.com>
References: <6E94E5C6-78C3-41E8-9B1B-ADF20347412B@comcast.net>
 <20150108192916.GM1405@tucnak.redhat.com>
 <20150108212726.GO1405@tucnak.redhat.com>
 <20150109153621.GC1405@tucnak.redhat.com>
 <3A94EBAB-74E7-4281-811C-3ADA0C4CE413@comcast.net>

On Wed, Jan 21, 2015 at 12:23:34PM +0400, Dmitry Vyukov wrote:
> Hi Mike,
>
> Yes, I can quantify the cost.  It is very high.
>
> Here is the patch that I used:
>
> --- rtl/tsan_rtl.cc	(revision 226644)
> +++ rtl/tsan_rtl.cc	(working copy)
> @@ -709,7 +709,11 @@
>  ALWAYS_INLINE USED
>  void MemoryAccess(ThreadState *thr, uptr pc, uptr addr,
>      int kAccessSizeLog, bool kAccessIsWrite, bool kIsAtomic) {
>    u64 *shadow_mem = (u64*)MemToShadow(addr);
> +
> +  atomic_fetch_add((atomic_uint64_t*)shadow_mem, 0, memory_order_acq_rel);

And what is the cost of adding that atomic_fetch_add guarded by
if (__builtin_expect (someCondition, 0)) ?
If that doesn't slow down the non-deterministic default case too much,
it would allow users to choose what they prefer: a much faster but
unreliable mode, or a slower but deterministic one.  Then for the gcc
testsuite we could opt for the latter.

> +
>
> On the standard tsan benchmark that does 8-byte writes:
> before:
> [ OK ] DISABLED_BENCH.Mop8Write (1161 ms)
> after:
> [ OK ] DISABLED_BENCH.Mop8Write (5085 ms)
>
> So that's a 338% slowdown.

	Jakub