From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 02 Sep 2015 13:17:00 -0000
From: Jonathan Wakely
To: Dmitry Vyukov
Cc: GCC Patches, libstdc++@gcc.gnu.org, Alexander Potapenko, Kostya Serebryany, Torvald Riegel
Subject: Re: [Patch, libstdc++] Fix data races in basic_string implementation
Message-ID: <20150902131752.GJ2631@redhat.com>
References: <20150901142713.GG2631@redhat.com> <20150901150847.GH2631@redhat.com>
X-Clacks-Overhead: GNU Terry Pratchett
User-Agent: Mutt/1.5.23 (2014-03-12)
X-SW-Source: 2015-09/txt/msg00149.txt.bz2

On 01/09/15 17:42 +0200, Dmitry Vyukov wrote:
>On Tue, Sep 1, 2015 at 5:08 PM, Jonathan Wakely wrote:
>> On 01/09/15 16:56 +0200, Dmitry Vyukov wrote:
>>>
>>> I don't understand how a new gcc may not support __atomic builtins on
>>> ints. How is that even possible? That's a portable API provided by
>>> recent gcc's...
>>
>>
>> The built-in function is always defined, but it might expand to a call
>> to an external function in libatomic, and it would be a regression for
>> code using std::string to start requiring libatomic (although maybe it
>> would be necessary if it's the only way to make the code correct).
>>
>> I don't know if there are any targets that define __GTHREADS and also
>> don't support __atomic_load_n(int*, ...) without libatomic. If such
>> targets exist then adding a new configure check that only depends on
>> __atomic_load_n(int*, ...) would mean we keep supporting those targets.
>>
>> Another option would be to simply do:
>>
>>       bool
>>       _M_is_shared() const _GLIBCXX_NOEXCEPT
>> #if defined(__GTHREADS)
>> +     { return __atomic_load_n(&this->_M_refcount, __ATOMIC_ACQUIRE) > 0; }
>> +#else
>>       { return this->_M_refcount > 0; }
>> +#endif
>>
>> and see if anyone complains!
>
>I like this option!
>If a platform uses multithreading and has non-inlined atomic loads,
>then the way to fix this is to provide inlined atomic loads rather
>than to fix all call sites.
>
>Attaching new patch. Please take another look.

This looks good. Torvald suggested that it would be useful to add a
similar comment to the release operation in _M_dispose, so that both
sides of the release-acquire pair are similarly documented. Could you
add that and provide a suitable ChangeLog entry? Thanks!

>Index: include/bits/basic_string.h
>===================================================================
>--- include/bits/basic_string.h	(revision 227363)
>+++ include/bits/basic_string.h	(working copy)
>@@ -2601,11 +2601,32 @@
> 
>       bool
>       _M_is_leaked() const _GLIBCXX_NOEXCEPT
>-      { return this->_M_refcount < 0; }
>+      {
>+#if defined(__GTHREADS)
>+	// _M_refcount is mutated concurrently by _M_refcopy/_M_dispose,
>+	// so we need to use an atomic load.
>+	// However, the _M_is_leaked predicate does not change
>+	// concurrently (i.e. the string is either leaked or not), so a
>+	// relaxed load is enough.
>+	return __atomic_load_n(&this->_M_refcount, __ATOMIC_RELAXED) < 0;
>+#else
>+	return this->_M_refcount < 0;
>+#endif
>+      }
> 
>       bool
>       _M_is_shared() const _GLIBCXX_NOEXCEPT
>-      { return this->_M_refcount > 0; }
>+      {
>+#if defined(__GTHREADS)
>+	// _M_refcount is mutated concurrently by _M_refcopy/_M_dispose,
>+	// so we need to use an atomic load. Another thread can drop the
>+	// last-but-one reference concurrently with this check, so we
>+	// need this load to be acquire to synchronize with the release
>+	// fetch_and_add in _M_dispose.
>+	return __atomic_load_n(&this->_M_refcount, __ATOMIC_ACQUIRE) > 0;
>+#else
>+	return this->_M_refcount > 0;
>+#endif
>+      }
> 
>       void
>       _M_set_leaked() _GLIBCXX_NOEXCEPT