From: Jason Merrill
To: Ramana Radhakrishnan, gcc-patches@gcc.gnu.org
CC: David Edelsohn, wilson@tuliptree.org, Steve Ellcey, Richard Henderson
Subject: Re: [RFC / CFT] PR c++/66192 - Remove TARGET_RELAXED_ORDERING and use load acquires.
Date: Fri, 22 May 2015 13:49:00 -0000
Message-ID: <555F31CF.6060201@redhat.com>
In-Reply-To: <555F11B6.1070001@foss.arm.com>
References: <555F1143.4070606@foss.arm.com> <555F11B6.1070001@foss.arm.com>

On 05/22/2015 07:23 AM, Ramana Radhakrishnan wrote:
> +  /* Load the guard value only through an atomic acquire load.  */
> +  guard = build_atomic_load (guard, MEMMODEL_ACQUIRE);
> +
>    /* Check to see if the GUARD is zero.  */
>    guard = get_guard_bits (guard);

I wonder if these calls should be reversed, to express that we're only
trying to atomically load a byte (on non-ARM targets)?

> +  tree orig_src = src;
> +  tree t, addr, val;
> +  unsigned int size;
> +  int fncode;
> +
> +  size = tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (src)));
> +
> +  fncode = BUILT_IN_ATOMIC_LOAD_N + exact_log2 (size) + 1;
> +  t = builtin_decl_implicit ((enum built_in_function) fncode);
> +
> +  addr = build1 (ADDR_EXPR, ptr_type, src);
> +  val = build_call_expr (t, 2, addr, mem_model);
> +
> +  /* First reinterpret the loaded bits in the original type of the load,
> +     then convert to the expected result type.  */
> +  t = fold_build1 (VIEW_CONVERT_EXPR, TREE_TYPE (src), val);
> +  return convert (TREE_TYPE (orig_src), t);

I don't see anything that changes src here.

Jason
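
A self-contained C sketch of the guard protocol the first quoted hunk
touches; the guard layout, the constant 42 and the inlined release store
are simplifications (a real compiler routes the slow path through
__cxa_guard_acquire / __cxa_guard_release), so only the acquire/release
pairing should be read as the point:

#include <stdio.h>

static long long guard;   /* 64-bit guard variable for a local static */
static int value;         /* stands in for the static being initialized */

static int
get_value (void)
{
  /* What the hunk builds: an acquire load of the whole guard, followed
     by get_guard_bits (); the real check only tests the guard's first
     byte, and the question above is whether only that byte should be
     loaded atomically on non-ARM targets.  */
  long long g = __atomic_load_n (&guard, __ATOMIC_ACQUIRE);
  if (g == 0)
    {
      /* Slow path, simplified: no mutual exclusion here, whereas real
         code calls __cxa_guard_acquire first.  The release store pairs
         with the acquire load above, so a thread that observes the guard
         as set also observes the initialized value.  */
      value = 42;
      __atomic_store_n (&guard, 1LL, __ATOMIC_RELEASE);
    }
  return value;
}

int
main (void)
{
  printf ("%d\n", get_value ());
  return 0;
}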
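
And a rough C analogue of the second hunk: for an 8-byte operand,
exact_log2 (8) + 1 offsets BUILT_IN_ATOMIC_LOAD_N to the size-8 atomic
load builtin, and the VIEW_CONVERT_EXPR then reinterprets the returned
integer's bits in the operand's own type.  The struct, the function name
and the use of memcpy for the reinterpret step are illustration only,
not taken from the patch, and the cast assumes a suitably aligned,
8-byte object:

#include <string.h>

struct two_ints { int a, b; };   /* 8 bytes on typical targets */

static struct two_ints
load_acquire_two_ints (struct two_ints *src)
{
  /* The ADDR_EXPR + build_call_expr step: call the size-8 atomic load
     builtin on the object's address; it returns the bits as an integer.  */
  unsigned long long bits
    = __atomic_load_n ((unsigned long long *) src, __ATOMIC_ACQUIRE);

  /* The VIEW_CONVERT_EXPR step: put the loaded bits back into the
     original type; memcpy is the portable C spelling of that.  */
  struct two_ints result;
  memcpy (&result, &bits, sizeof result);
  return result;
}

Paired with a release store on the writer's side, the acquire load gives
the reader the usual happens-before guarantee for the whole 8-byte object.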