Date: Wed, 01 Aug 2018 15:41:00 -0000
From: Jonathan Wakely
To: Mike Crowe
Cc: libstdc++@gcc.gnu.org, gcc-patches@gcc.gnu.org
Subject: Re: [PATCH 1/2] condition_variable: Report early wakeup of wait_until as no_timeout
Message-ID: <20180801154104.GX14057@redhat.com>

On 20/07/18 17:49 +0100, Mike Crowe wrote:
>As currently implemented, condition_variable always ultimately waits
>against std::chrono::system_clock. This clock can be changed in
>arbitrary ways by the user, which may result in us waking up too early
>or too late when measured against the caller-supplied clock.
>
>We can't (yet) do much about waking up too late [1], but if we wake up
>too early we must return cv_status::no_timeout to indicate a spurious
>wakeup rather than incorrectly returning cv_status::timeout.
>
>[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=41861

Committed to trunk, thanks very much. I reformatted it slightly to keep
the lines below 80 columns. The version I committed is attached
(patch.txt).
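As an aside for readers of the archive, here is a minimal sketch of the
caller-side pattern this guarantee matters for. It is illustrative only
(the names wait_for_ready, ready and m are not from the patch); it shows
why returning a premature cv_status::timeout would be wrong when the
wait is measured against steady_clock:

#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool ready = false;               // illustrative condition

// Wait for 'ready' with a steady_clock deadline. A predicate loop
// like this relies on wait_until returning cv_status::timeout only
// once the deadline has really passed on the caller's clock, even
// though the wait is implemented against system_clock internally.
bool wait_for_ready()
{
  auto const deadline = std::chrono::steady_clock::now()
                        + std::chrono::seconds(2);
  std::unique_lock<std::mutex> lock(m);
  while (!ready)
    if (cv.wait_until(lock, deadline) == std::cv_status::timeout)
      return ready;               // genuine timeout on steady_clock
  return true;
}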
commit 79a8b4c1d70287ac0e668c4f2335d70d97c3002e
Author: redi
Date:   Wed Aug 1 15:39:45 2018 +0000

    Report early wakeup of condition_variable::wait_until as no_timeout

    As currently implemented, condition_variable always ultimately waits
    against std::chrono::system_clock. This clock can be changed in
    arbitrary ways by the user, which may result in us waking up too
    early or too late when measured against the caller-supplied clock.

    We can't (yet) do much about waking up too late (PR 41861), but if
    we wake up too early we must return cv_status::no_timeout to
    indicate a spurious wakeup rather than incorrectly returning
    cv_status::timeout.

    2018-08-01  Mike Crowe

    	* include/std/condition_variable (wait_until): Only report
    	timeout if we really have timed out when measured against the
    	caller-supplied clock.
    	* testsuite/30_threads/condition_variable/members/2.cc: Add
    	test case to confirm above behaviour.

    git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@263224 138bc75d-0d04-0410-961f-82ee72b054a4

diff --git a/libstdc++-v3/include/std/condition_variable b/libstdc++-v3/include/std/condition_variable
index 3f690c81799..c00afa2b7ae 100644
--- a/libstdc++-v3/include/std/condition_variable
+++ b/libstdc++-v3/include/std/condition_variable
@@ -117,7 +117,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	const auto __delta = __atime - __c_entry;
 	const auto __s_atime = __s_entry + __delta;
-	return __wait_until_impl(__lock, __s_atime);
+	if (__wait_until_impl(__lock, __s_atime) == cv_status::no_timeout)
+	  return cv_status::no_timeout;
+	// We got a timeout when measured against __clock_t but
+	// we need to check against the caller-supplied clock
+	// to tell whether we should return a timeout.
+	if (_Clock::now() < __atime)
+	  return cv_status::no_timeout;
+	return cv_status::timeout;
       }
 
     template<typename _Clock, typename _Duration, typename _Predicate>
diff --git a/libstdc++-v3/testsuite/30_threads/condition_variable/members/2.cc b/libstdc++-v3/testsuite/30_threads/condition_variable/members/2.cc
index 0a44c4fa7cf..09a717801e1 100644
--- a/libstdc++-v3/testsuite/30_threads/condition_variable/members/2.cc
+++ b/libstdc++-v3/testsuite/30_threads/condition_variable/members/2.cc
@@ -51,8 +51,60 @@ void test01()
   }
 }
 
+struct slow_clock
+{
+  using rep = std::chrono::system_clock::rep;
+  using period = std::chrono::system_clock::period;
+  using duration = std::chrono::system_clock::duration;
+  using time_point = std::chrono::time_point<slow_clock>;
+  static constexpr bool is_steady = false;
+
+  static time_point now()
+  {
+    auto real = std::chrono::system_clock::now();
+    return time_point{real.time_since_epoch() / 3};
+  }
+};
+
+
+void test01_alternate_clock()
+{
+  try
+    {
+      std::condition_variable c1;
+      std::mutex m;
+      std::unique_lock<std::mutex> l(m);
+      auto const expire = slow_clock::now() + std::chrono::seconds(1);
+
+      while (slow_clock::now() < expire)
+	{
+	  auto const result = c1.wait_until(l, expire);
+
+	  // If wait_until returns before the timeout has expired when
+	  // measured against the supplied clock, then wait_until must
+	  // return no_timeout.
+	  if (slow_clock::now() < expire)
+	    VERIFY(result == std::cv_status::no_timeout);
+
+	  // If wait_until returns timeout then the timeout must have
+	  // expired.
+	  if (result == std::cv_status::timeout)
+	    VERIFY(slow_clock::now() >= expire);
+	}
+    }
+  catch (const std::system_error& e)
+    {
+      VERIFY( false );
+    }
+  catch (...)
+    {
+      VERIFY( false );
+    }
+}
+
 int main()
 {
   test01();
+  test01_alternate_clock();
   return 0;
 }
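For what it's worth, the same technique can be written as a
free-standing helper. This is only a sketch (the name
wait_until_checked is hypothetical, not libstdc++ code); it restates
the logic the patch adds inside wait_until: translate the caller's
deadline to system_clock, wait, and re-check the caller's clock before
reporting a timeout.

#include <chrono>
#include <condition_variable>
#include <mutex>

template<typename Clock, typename Duration>
std::cv_status
wait_until_checked(std::condition_variable& cv,
                   std::unique_lock<std::mutex>& lock,
                   const std::chrono::time_point<Clock, Duration>& atime)
{
  // Sync the unknown clock to system_clock (DR 887): sample both
  // clocks now and carry the remaining duration across.
  auto const c_entry = Clock::now();
  auto const s_entry = std::chrono::system_clock::now();
  auto const s_atime = s_entry + (atime - c_entry);

  if (cv.wait_until(lock, s_atime) == std::cv_status::no_timeout)
    return std::cv_status::no_timeout;

  // Timed out against system_clock, but only report a timeout if the
  // caller-supplied clock agrees; otherwise it was an early wakeup.
  return Clock::now() < atime ? std::cv_status::no_timeout
                              : std::cv_status::timeout;
}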