public inbox for libstdc++-cvs@sourceware.org
* [gcc r11-3653] libstdc++: Use correct duration for atomic_futex wait on custom clock [PR 91486]
From: Jonathan Wakely @ 2020-10-05 10:32 UTC
  To: gcc-cvs, libstdc++-cvs

https://gcc.gnu.org/g:f33a43f9f7eab7482837662821abb7fd02cb4350

commit r11-3653-gf33a43f9f7eab7482837662821abb7fd02cb4350
Author: Mike Crowe <mac@mcrowe.com>
Date:   Mon Oct 5 11:12:38 2020 +0100

    libstdc++: Use correct duration for atomic_futex wait on custom clock [PR 91486]
    
    As Jonathan Wakely pointed out[1], my change in commit
    f9ddb696a289cc48d24d3d23c0b324cb88de9573 should have rounded to the
    target clock's duration type rather than the input clock's duration type
    in __atomic_futex_unsigned::_M_load_when_equal_until, just as (for
    example) condition_variable does.
    
    As well as fixing this, let's create a rather contrived test that fails
    with the previous code, but unfortunately only when run on a machine
    with an uptime of over 208.5 days, and even then not always.
    
    [1] https://gcc.gnu.org/pipermail/libstdc++/2020-September/051004.html
    
    libstdc++-v3/ChangeLog:
    
            PR libstdc++/91486
            * include/bits/atomic_futex.h
            (__atomic_futex_unsigned::_M_load_when_equal_until): Use target
            clock duration type when rounding.
            * testsuite/30_threads/async/async.cc (test_pr91486_wait_for):
            Rename from test_pr91486.
            (float_steady_clock): New class for test.
            (test_pr91486_wait_until): New test.
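
As a rough illustration of the rounding described above (not part of the
commit): a minimal sketch of the conversion pattern, written against C++17's
public std::chrono::ceil rather than the internal chrono::__detail::ceil
helper. The function name and structure here are illustrative only, not the
code in the patch.

  #include <chrono>

  // Convert a deadline expressed on an arbitrary user-supplied clock into a
  // deadline on steady_clock.  Rounding the delta up to the *target* clock's
  // duration (an integer tick count on typical implementations) keeps the
  // final addition in integer arithmetic, instead of performing it in the
  // user clock's possibly floating-point representation.
  template<typename UserClock, typename Dur>
    std::chrono::steady_clock::time_point
    to_steady_deadline(const std::chrono::time_point<UserClock, Dur>& atime)
    {
      const auto c_entry = UserClock::now();
      const auto s_entry = std::chrono::steady_clock::now();
      const auto delta = atime - c_entry;
      return s_entry
        + std::chrono::ceil<std::chrono::steady_clock::duration>(delta);
    }

The buggy variant differs only in rounding to Dur instead, so the addition is
carried out in Dur's (possibly low-precision, floating-point) representation.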

Diff:
---
 libstdc++-v3/include/bits/atomic_futex.h         |  2 +-
 libstdc++-v3/testsuite/30_threads/async/async.cc | 62 +++++++++++++++++++++++-
 2 files changed, 61 insertions(+), 3 deletions(-)

diff --git a/libstdc++-v3/include/bits/atomic_futex.h b/libstdc++-v3/include/bits/atomic_futex.h
index aa137a7b64e..6093be0fbc7 100644
--- a/libstdc++-v3/include/bits/atomic_futex.h
+++ b/libstdc++-v3/include/bits/atomic_futex.h
@@ -235,7 +235,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
 	  const __clock_t::time_point __s_entry = __clock_t::now();
 	  const auto __delta = __atime - __c_entry;
 	  const auto __s_atime = __s_entry +
-	      chrono::__detail::ceil<_Duration>(__delta);
+	      chrono::__detail::ceil<__clock_t::duration>(__delta);
 	  if (_M_load_when_equal_until(__val, __mo, __s_atime))
 	    return true;
 	  __c_entry = _Clock::now();
diff --git a/libstdc++-v3/testsuite/30_threads/async/async.cc b/libstdc++-v3/testsuite/30_threads/async/async.cc
index 46f8d2f327d..1c779bfbcad 100644
--- a/libstdc++-v3/testsuite/30_threads/async/async.cc
+++ b/libstdc++-v3/testsuite/30_threads/async/async.cc
@@ -157,7 +157,7 @@ void test04()
   }
 }
 
-void test_pr91486()
+void test_pr91486_wait_for()
 {
   future<void> f1 = async(launch::async, []() {
       std::this_thread::sleep_for(std::chrono::seconds(1));
@@ -171,6 +171,63 @@ void test_pr91486()
   VERIFY( elapsed_steady >= std::chrono::seconds(1) );
 }
 
+// This is a clock with a very recent epoch which ensures that the difference
+// between now() and one second in the future is representable in a float so
+// that when the generic clock version of
+// __atomic_futex_unsigned::_M_load_when_equal_until calculates the delta it
+// gets a duration of 1.0f.  When chrono::steady_clock has moved sufficiently
+// far from its epoch (about 208.5 days in my testing - about 2^54ns because
+// there's a promotion to double happening too somewhere) adding 1.0f to the
+// current time has no effect.  Using this clock ensures that
+// __atomic_futex_unsigned::_M_load_when_equal_until is using
+// chrono::__detail::ceil correctly so that the function actually sleeps rather
+// than spinning.
+struct float_steady_clock
+{
+  using duration = std::chrono::duration<float>;
+  using rep = typename duration::rep;
+  using period = typename duration::period;
+  using time_point = std::chrono::time_point<float_steady_clock, duration>;
+  static constexpr bool is_steady = true;
+
+  static chrono::steady_clock::time_point epoch;
+  static int call_count;
+
+  static time_point now()
+  {
+    ++call_count;
+    auto real = std::chrono::steady_clock::now();
+    return time_point{real - epoch};
+  }
+};
+
+chrono::steady_clock::time_point float_steady_clock::epoch = chrono::steady_clock::now();
+int float_steady_clock::call_count = 0;
+
+void test_pr91486_wait_until()
+{
+  future<void> f1 = async(launch::async, []() {
+      std::this_thread::sleep_for(std::chrono::seconds(1));
+    });
+
+  float_steady_clock::time_point const now = float_steady_clock::now();
+
+  std::chrono::duration<float> const wait_time = std::chrono::seconds(1);
+  float_steady_clock::time_point const expire = now + wait_time;
+  VERIFY( expire > now );
+
+  auto const start_steady = chrono::steady_clock::now();
+  auto status = f1.wait_until(expire);
+  auto const elapsed_steady = chrono::steady_clock::now() - start_steady;
+
+  // This checks that we didn't come back too soon
+  VERIFY( elapsed_steady >= std::chrono::seconds(1) );
+
+  // This checks that wait_until didn't busy wait checking the clock more times
+  // than necessary.
+  VERIFY( float_steady_clock::call_count <= 3 );
+}
+
 int main()
 {
   test01();
@@ -179,6 +236,7 @@ int main()
   test03<std::chrono::steady_clock>();
   test03<steady_clock_copy>();
   test04();
-  test_pr91486();
+  test_pr91486_wait_for();
+  test_pr91486_wait_until();
   return 0;
 }
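
The float_steady_clock comment in the test above attributes the failure to
floating-point precision loss far from the clock's epoch. A standalone
snippet, not taken from the patch and deliberately simplified (the 208.5-day
figure in the comment also involves a promotion to double), shows the
underlying effect with a plain chrono::duration<float>:

  #include <cassert>
  #include <chrono>

  int main()
  {
    using fsec = std::chrono::duration<float>;   // float count of seconds
    const fsec far_from_epoch{1.0e8f};           // roughly 3.2 years of seconds
    const fsec one_second{1.0f};
    // At a magnitude of 1e8 the spacing between adjacent float values is
    // 8 seconds, so adding one second rounds back to the same value and a
    // deadline computed this way never moves into the future.
    assert(far_from_epoch + one_second == far_from_epoch);
    return 0;
  }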

