From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (qmail 10000 invoked by alias); 12 Dec 2013 18:07:05 -0000
Mailing-List: contact gcc-bugs-help@gcc.gnu.org; run by ezmlm
Precedence: bulk
List-Id:
List-Archive:
List-Post:
List-Help:
Sender: gcc-bugs-owner@gcc.gnu.org
Received: (qmail 9964 invoked by uid 48); 12 Dec 2013 18:07:02 -0000
From: "algrant at acm dot org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug middle-end/59448] Code generation doesn't respect C11 address-dependency
Date: Thu, 12 Dec 2013 18:07:00 -0000
X-Bugzilla-Reason: CC
X-Bugzilla-Type: changed
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: gcc
X-Bugzilla-Component: middle-end
X-Bugzilla-Version: unknown
X-Bugzilla-Keywords:
X-Bugzilla-Severity: normal
X-Bugzilla-Who: algrant at acm dot org
X-Bugzilla-Status: UNCONFIRMED
X-Bugzilla-Priority: P3
X-Bugzilla-Assigned-To: unassigned at gcc dot gnu.org
X-Bugzilla-Target-Milestone: ---
X-Bugzilla-Flags:
X-Bugzilla-Changed-Fields:
Message-ID:
In-Reply-To:
References:
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-Bugzilla-URL: http://gcc.gnu.org/bugzilla/
Auto-Submitted: auto-generated
MIME-Version: 1.0
X-SW-Source: 2013-12/txt/msg01083.txt.bz2

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=59448

--- Comment #4 from algrant at acm dot org ---
So using g++,

  #include <atomic>

  int f1(std::atomic<int> const *p, std::atomic<int> const *q)
  {
      int flag = p->load(std::memory_order_consume);
      return flag ? (q + flag - flag)->load(std::memory_order_relaxed) : 0;
  }

demonstrates the same lack of ordering.  You suggest that this might be a
problem with the atomic built-ins - and yes, if this had been a load-acquire,
it would be a problem with the built-in not introducing a barrier or using a
load-acquire instruction.  But for a load-consume on this architecture, no
barrier is necessary to separate the load-consume from a load that is
address-dependent on it.  The programmer wrote a dependency, but the compiler
lost track of it.

It's not necessary to demonstrate failure - there is an architectural race
condition here.  Even if it doesn't fail now, there is no guarantee it will
never fail on future, more aggressively reordering cores.
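
For comparison, the same address-dependency pattern can be written against the
C11 <stdatomic.h> interface.  This is only an illustrative sketch (the function
name and comments are not from the bug report); the point is the same: the
second load's address depends on the value returned by the consume load, so
the compiler must not fold the dependency away.

  /* Illustrative C11 analogue of the C++ reproducer above. */
  #include <stdatomic.h>

  int f1_c11(const atomic_int *p, const atomic_int *q)
  {
      /* Load-consume: loads whose address depends on 'flag' are ordered
         after it on architectures that honour address dependencies,
         with no explicit barrier required. */
      int flag = atomic_load_explicit(p, memory_order_consume);

      /* (q + flag - flag) carries the address dependency on 'flag';
         if the compiler simplifies this back to plain 'q', the
         dependency the programmer wrote is lost. */
      return flag ? atomic_load_explicit(q + flag - flag, memory_order_relaxed)
                  : 0;
  }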