Subject: Re: [PATCH] Avoid atomic for guard acquire when that is expensive
To: Bernd Edlinger, "Richard Earnshaw (lists)", gcc-patches@gcc.gnu.org, Ramana Radhakrishnan, Nathan Sidwell, Christophe Lyon
References: <8383d817-8622-4d1f-9564-8c10131db664@arm.com>
From: Jason Merrill
Message-ID: <37023468-e7b3-0c38-265d-1065637e953e@redhat.com>
Date: Tue, 24 Nov 2020 17:10:13 -0500

On 11/22/20 3:05 AM, Bernd Edlinger wrote:
> Hi,
>
> this avoids the need to use -fno-threadsafe-statics on
> arm-none-eabi, or to work around that problem by supplying
> a dummy __sync_synchronize function, which might just
> lead to silent code failure of the worst kind
> (non-reproducible, racy) at runtime, as was pointed out
> in previous discussions here.
>
> When the atomic access involves a call to __sync_synchronize,
> it is better to call __cxa_guard_acquire unconditionally,
> since it handles the atomics too, or is a non-threaded
> implementation when there is no gthread support for this target.
>
> This also fixes a bug for the ARM EABI big-endian target:
> previously the wrong bit was checked.

Instead of a new target macro, can't you follow
fold_builtin_atomic_always_lock_free/can_atomic_load_p?

Jason
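
For context, below is a hand-written approximation of the guarded-initialization
sequence being discussed, under the generic Itanium C++ ABI.  It is only a
sketch: Widget, get_widget and the exact shape of the inline fast path are
illustrative, not GCC's literal output.  It shows where the __sync_synchronize
call that bare-metal arm-none-eabi lacks would come from (the acquire load in
the fast path), and why skipping that inline check in favor of an unconditional
__cxa_guard_acquire sidesteps the problem.

// Hand-written approximation of what the front end emits for
//   Widget &get_widget () { static Widget w; return w; }
// under the generic Itanium ABI.  Names and fast-path shape are
// illustrative only.

#include <cstdint>
#include <new>

extern "C" int  __cxa_guard_acquire (std::uint64_t *);
extern "C" void __cxa_guard_release (std::uint64_t *);

struct Widget { int value; Widget () : value (42) {} };

Widget &
get_widget ()
{
  // The compiler-emitted guard variable and object storage.
  static std::uint64_t guard;
  alignas (Widget) static unsigned char storage[sizeof (Widget)];

  // Inline fast path: acquire-load the "initialized" byte of the guard.
  // On a target without native atomics this load is what ends up calling
  // __sync_synchronize, which arm-none-eabi does not provide.
  if (__atomic_load_n ((unsigned char *) &guard, __ATOMIC_ACQUIRE) == 0)
    {
      // Slow path: the runtime serializes construction (or is a trivial
      // single-threaded stub when there is no gthread support).
      if (__cxa_guard_acquire (&guard))
        {
          new (storage) Widget;
          __cxa_guard_release (&guard);
        }
    }
  return *reinterpret_cast<Widget *> (storage);
}

int
main ()
{
  return get_widget ().value == 42 ? 0 : 1;
}

The big-endian issue fits the same picture: the ARM EABI defines the guard as a
32-bit value of which only bit 0 is meaningful, and on a big-endian target that
bit does not live in the lowest-addressed byte that a generic byte-sized check
would load, which is presumably the "wrong bit" Bernd refers to.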