From mboxrd@z Thu Jan  1 00:00:00 1970
From: "pinskia at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug libstdc++/110016] Possible miscodegen when inlining std::condition_variable::wait predicate causes deadlock
Date: Mon, 29 May 2023 15:15:13 +0000
X-Bugzilla-Reason: CC
X-Bugzilla-Type: changed
X-Bugzilla-Watch-Reason: None
X-Bugzilla-Product: gcc
X-Bugzilla-Component: libstdc++
X-Bugzilla-Version: 12.2.1
X-Bugzilla-Severity: normal
X-Bugzilla-Who: pinskia at gcc dot gnu.org
X-Bugzilla-Status: UNCONFIRMED
X-Bugzilla-Priority: P3
X-Bugzilla-Assigned-To: unassigned at gcc dot gnu.org
X-Bugzilla-Target-Milestone: ---
X-Bugzilla-URL: http://gcc.gnu.org/bugzilla/

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110016

--- Comment #13 from Andrew Pinski ---
I suspect that if you change the lambda/call in
substrate::threadPool_t::waitWork to be:

inline std::pair> waitWork() noexcept
{
	std::unique_lock lock{workMutex};
	++waitingThreads;
	// wait, but protect ourselves from accidental wake-ups..
	auto b = [this]() noexcept -> bool
	{
		bool t = finished;
		// busy-wait to widen the window between the load of
		// finished and the moment the thread actually blocks
		for (volatile int i = 0; i < 10000; ++i)
			;
		return t || !work.empty();
	};
	//if (!b())
	haveWork.wait(lock, b);
	--waitingThreads;
	if (!work.empty())
		return {true, work.pop()};
	return {false, {}};
}

you might hit the issue with more C++ implementations. This should simulate
the issue I was mentioning: it adds a slight delay between the load of
finished and the call to wait(lock).
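
Not part of the comment above: a minimal, self-contained sketch of the
lost-wakeup window being described, under the assumption that the notifying
side in the original test sets the flag and calls notify without holding the
mutex. Every name in it (m, cv, finished, waiter) is illustrative and
hypothetical, not taken from the substrate sources; the program is
intentionally racy (the unsynchronised store to finished is a data race) and
timing-dependent, so it may or may not hang on any particular run.

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool finished = false;   // written by the notifier without holding m (the bug)

void waiter()
{
    std::unique_lock<std::mutex> lock{m};
    cv.wait(lock, []() noexcept {
        const bool t = finished;                     // load of the flag happens here...
        for (volatile int i = 0; i < 10000000; ++i)  // ...artificial delay, like the one in
            ;                                        //    the modified waitWork() above
        return t;                                    // ...stale value decides whether we block
    });
    std::cout << "waiter woke up\n";
}

int main()
{
    std::thread t{waiter};
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    // Racy notifier: no lock around the store, so it can land inside the
    // delay above -- after finished was read as false but before the
    // waiter actually blocks.  The notification is then lost.
    finished = true;
    cv.notify_all();
    t.join();            // never returns on a run that loses the wake-up
}

Taking the mutex around the store to finished closes the window entirely
(the waiter either sees the new value when the predicate runs, or is already
blocked and gets woken), which is why the added delay only exposes the
problem when the notifying side skips the lock.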