From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 08 Apr 2015 19:12:00 -0000
From: Jonathan Wakely
To: Torvald Riegel
Cc: libstdc++@gcc.gnu.org, gcc-patches@gcc.gnu.org
Subject: Re: [patch] Fix shared_timed_mutex::try_lock_until() et al
Message-ID: <20150408191159.GQ9755@redhat.com>
References: <20150407142830.GG9755@redhat.com> <1428512373.924.26.camel@triegel.csb>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="bW7Nw90iaNQhVLQB"
Content-Disposition: inline
In-Reply-To: <1428512373.924.26.camel@triegel.csb>
User-Agent: Mutt/1.5.23 (2014-03-12)

--bW7Nw90iaNQhVLQB
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline
Content-length: 6089

On 08/04/15 18:59 +0200, Torvald Riegel wrote:
>> + // set or the maximum number of reader locks
is held, then increment the
>> + // reader lock count.
>> + // To release decrement the count, then if the write-entered flag is set
>> + // and the count is zero then signal gate2 to wake a queued writer,
>> + // otherwise if the maximum number of reader locks was held signal gate1
>> + // to wake a reader.
>> + //
>> + // To take a writer lock block on gate1 while the write-entered flag is
>> + // set, then set the write-entered flag to start queueing, then block on
>> + // gate2 while the number of reader locks is non-zero.
>> + // To release unset the write-entered flag and signal gate1 to wake all
>> + // blocked readers and writers.
>
>Perhaps it would also be useful to have a sentence on how writers and
>readers are prioritized (e.g., writers preferred if readers already hold
>the lock, all are equal when writer unlocks, ...).

How about:

  // This means that when no reader locks are held readers and writers get
  // equal priority. When one or more reader locks is held a writer gets
  // priority and no more reader locks can be taken while the writer is
  // queued.

>> + static constexpr unsigned _S_n_readers = ~_S_write_entered;
>
>Rename this to _S_max_readers or such?

Done.

>> template
>> bool
>> try_lock_until(const chrono::time_point<_Clock, _Duration>& __abs_time)
>> {
>> - unique_lock<_Mutex> __lk(_M_mut, __abs_time);
>> - if (__lk.owns_lock() && _M_state == 0)
>> + if (!_M_mut.try_lock_until(__abs_time))
>> + return false;
>
>I think a non-timed acquisition for the internal lock should be fine.
>The internal lock is supposed to protect critical sections of finite
>(and short) length, and the condvars also don't do acquisition with
>timeouts.

Good point about the condvars re-acquiring the mutex. We can get rid of
the _Mutex type then, and just use std::mutex, and that also means we
can provide the timed locking functions even when
!defined(_GTHREAD_USE_MUTEX_TIMEDLOCK).
And so maybe we should use this fallback implementation instead of the
pthread_rwlock_t one when !defined(_GTHREAD_USE_MUTEX_TIMEDLOCK), so
that they have a complete std::shared_timed_mutex (this applies to at
least Darwin, not sure which other targets). Maybe we should also
provide a fallback std::timed_mutex based on a condvar; I'll put that
on the TODO list for stage 1.

>> + unique_lock __lk(_M_mut, adopt_lock);
>> + if (!_M_gate1.wait_until(__lk, __abs_time,
>> + [=]{ return !_M_write_entered(); }))
>> {
>> - _M_state = _S_write_entered;
>> - return true;
>> + return false;
>> }
>> - return false;
>> + _M_state |= _S_write_entered;
>> + if (!_M_gate2.wait_until(__lk, __abs_time,
>> + [=]{ return _M_readers() == 0; }))
>> + {
>
>I'd add a comment saying that you have to mimic a full write unlock, so
>that's why those steps are necessary.

OK, like this:

  // Wake all threads blocked while the write-entered flag was set.

>> + _M_state ^= _S_write_entered;
>> + _M_gate1.notify_all();
>> + return false;
>> + }
>> + return true;
>> }
>> #endif
>>
>> @@ -364,7 +399,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
>> unlock()
>> {
>> {
>> - lock_guard<_Mutex> __lk(_M_mut);
>> + lock_guard __lk(_M_mut);
>> _M_state = 0;
>> }
>> _M_gate1.notify_all();
>
>The notification must be *inside* of the critical section. Otherwise,
>you're violating the mutex destruction requirements (see 30.4.1.3.1p3):
>After you set _M_state to 0 and release _M_mut, another thread can go
>in, acquire, assume it's the last one to own the mutex if that's what
>the program ensures, and destroy; then, destruction would be concurrent
>with the call to notify_all. (Perhaps check other wake-ups / unlocks as
>well.)

Thanks. I'll add this comment too:

  // call notify_all() while mutex is held so that another thread can't
  // lock and unlock the mutex then destroy *this before we make the call.
>> void
>> unlock_shared()
>> {
>> - lock_guard<_Mutex> __lk(_M_mut);
>> - unsigned __num_readers = (_M_state & _M_n_readers) - 1;
>> - _M_state &= ~_M_n_readers;
>> - _M_state |= __num_readers;
>> - if (_M_state & _S_write_entered)
>> + lock_guard __lk(_M_mut);
>> + auto __prev = _M_state--;
>> + if (_M_write_entered())
>> {
>> - if (__num_readers == 0)
>> + if (_M_readers() == 0)
>> _M_gate2.notify_one();
>> }
>> else
>> {
>> - if (__num_readers == _M_n_readers - 1)
>> + if (__prev == _S_n_readers)
>> _M_gate1.notify_one();
>> }
>
>I think this needs documentation why we can't miss a wake-up in case
>_M_write_entered is true and we have a reader overflow. I think it's
>not quite obvious why this works.
>If there is a writer we don't notify gate1 in unlock_shared(), so there
>is potential to "lose" that information in interleavings with a writer.
>If the writer acquires the lock eventually, things will work because
>then, the writer will do the gate1 notification in its unlock path. If
>the writer times out, it still does the gate1 notification regardless of
>the number of readers registered (because if there is no reader, it
>will never time out but simply acquire the lock). Thus, when there is a
>writer, the writer becomes responsible for gate1 notification, and
>things work.

Is this sufficient?

  // No need to notify gate1 because we give priority to the queued
  // writer, and that writer will eventually notify gate1 after it
  // clears the write-entered flag.

I've attached a diff showing just the changes since the last patch, as
well as the complete patch against the trunk.

--bW7Nw90iaNQhVLQB
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="incremental.txt"
Content-length: 8396

diff --git a/libstdc++-v3/ChangeLog b/libstdc++-v3/ChangeLog index 5871716..39d6866 100644 --- a/libstdc++-v3/ChangeLog +++ b/libstdc++-v3/ChangeLog @@ -2,7 +2,8 @@ * include/std/shared_mutex (shared_timed_mutex): Add comments to explain the logic.
- (_M_n_readers): Rename to _S_n_readers. + (_Mutex): Remove redundant type. + (_M_n_readers): Rename to _S_max_readers. (_M_write_entered, _M_readers): New convenience functions. (lock, lock_shared, try_lock_shared, unlock_shared): Use convenience functions. Use predicates with condition variables. Simplify bitwise @@ -11,7 +12,8 @@ and call try_lock_until or try_shared_lock_until respectively. (try_lock_until, try_shared_lock_until): Wait on the condition variables until the specified time passes. - (unlock, unlock_shared): Add Debug Mode assertions. + (unlock): Add Debug Mode assertion. + (unlock_shared): Add Debug Mode assertion. * testsuite/30_threads/shared_timed_mutex/try_lock/3.cc: New. 2015-03-30 Jonathan Wakely diff --git a/libstdc++-v3/include/std/shared_mutex b/libstdc++-v3/include/std/shared_mutex index feb5022..7f26465 100644 --- a/libstdc++-v3/include/std/shared_mutex +++ b/libstdc++-v3/include/std/shared_mutex @@ -271,47 +271,33 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION // Must use the same clock as condition_variable typedef chrono::system_clock __clock_t; -#if _GTHREAD_USE_MUTEX_TIMEDLOCK - // Can't use std::timed_mutex with std::condition_variable, so define - // a new timed mutex type that derives from std::mutex. - struct _Mutex : mutex, __timed_mutex_impl<_Mutex> - { - template - bool - try_lock_for(const chrono::duration<_Rep, _Period>& __rtime) - { return _M_try_lock_for(__rtime); } - - template - bool - try_lock_until(const chrono::time_point<_Clock, _Duration>& __atime) - { return _M_try_lock_until(__atime); } - }; -#else - typedef mutex _Mutex; -#endif - // Based on Howard Hinnant's reference implementation from N2406. // The high bit of _M_state is the write-entered flag which is set to // indicate a writer has taken the lock or is queuing to take the lock. // The remaining bits are the count of reader locks. 
// - // To take a reader lock block on gate1 while the write-entered flag is + // To take a reader lock, block on gate1 while the write-entered flag is // set or the maximum number of reader locks is held, then increment the // reader lock count. - // To release decrement the count, then if the write-entered flag is set + // To release, decrement the count, then if the write-entered flag is set // and the count is zero then signal gate2 to wake a queued writer, // otherwise if the maximum number of reader locks was held signal gate1 // to wake a reader. // - // To take a writer lock block on gate1 while the write-entered flag is + // To take a writer lock, block on gate1 while the write-entered flag is // set, then set the write-entered flag to start queueing, then block on // gate2 while the number of reader locks is non-zero. - // To release unset the write-entered flag and signal gate1 to wake all + // To release, unset the write-entered flag and signal gate1 to wake all // blocked readers and writers. + // + // This means that when no reader locks are held readers and writers get + // equal priority. When one or more reader locks is held a writer gets + // priority and no more reader locks can be taken while the writer is + // queued. // Only locked when accessing _M_state or waiting on condition variables. - _Mutex _M_mut; + mutex _M_mut; // Used to block while write-entered is set or reader count at maximum. condition_variable _M_gate1; // Used to block queued writers while reader count is non-zero. @@ -321,13 +307,13 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION static constexpr unsigned _S_write_entered = 1U << (sizeof(unsigned)*__CHAR_BIT__ - 1); - static constexpr unsigned _S_n_readers = ~_S_write_entered; + static constexpr unsigned _S_max_readers = ~_S_write_entered; // Test whether the write-entered flag is set. _M_mut must be locked. bool _M_write_entered() const { return _M_state & _S_write_entered; } // The number of reader locks currently held. 
_M_mut must be locked. - unsigned _M_readers() const { return _M_state & _S_n_readers; } + unsigned _M_readers() const { return _M_state & _S_max_readers; } public: shared_timed_mutex() : _M_state(0) {} @@ -346,8 +332,10 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION lock() { unique_lock __lk(_M_mut); + // Wait until we can set the write-entered flag. _M_gate1.wait(__lk, [=]{ return !_M_write_entered(); }); _M_state |= _S_write_entered; + // Then wait until there are no more readers. _M_gate2.wait(__lk, [=]{ return _M_readers() == 0; }); } @@ -363,7 +351,6 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION return false; } -#if _GTHREAD_USE_MUTEX_TIMEDLOCK template bool try_lock_for(const chrono::duration<_Rep, _Period>& __rel_time) @@ -375,9 +362,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION bool try_lock_until(const chrono::time_point<_Clock, _Duration>& __abs_time) { - if (!_M_mut.try_lock_until(__abs_time)) - return false; - unique_lock __lk(_M_mut, adopt_lock); + unique_lock __lk(_M_mut); if (!_M_gate1.wait_until(__lk, __abs_time, [=]{ return !_M_write_entered(); })) { @@ -388,21 +373,21 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION [=]{ return _M_readers() == 0; })) { _M_state ^= _S_write_entered; + // Wake all threads blocked while the write-entered flag was set. _M_gate1.notify_all(); return false; } return true; } -#endif void unlock() { - { - lock_guard __lk(_M_mut); - _GLIBCXX_DEBUG_ASSERT( _M_write_entered() ); - _M_state = 0; - } + lock_guard __lk(_M_mut); + _GLIBCXX_DEBUG_ASSERT( _M_write_entered() ); + _M_state = 0; + // call notify_all() while mutex is held so that another thread can't + // lock and unlock the mutex then destroy *this before we make the call. 
_M_gate1.notify_all(); } @@ -412,7 +397,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION lock_shared() { unique_lock __lk(_M_mut); - _M_gate1.wait(__lk, [=]{ return _M_state < _S_n_readers; }); + _M_gate1.wait(__lk, [=]{ return _M_state < _S_max_readers; }); ++_M_state; } @@ -422,7 +407,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION unique_lock __lk(_M_mut, try_to_lock); if (!__lk.owns_lock()) return false; - if (_M_state < _S_n_readers) + if (_M_state < _S_max_readers) { ++_M_state; return true; @@ -430,7 +415,6 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION return false; } -#if _GTHREAD_USE_MUTEX_TIMEDLOCK template bool try_lock_shared_for(const chrono::duration<_Rep, _Period>& __rel_time) @@ -443,18 +427,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION try_lock_shared_until(const chrono::time_point<_Clock, _Duration>& __abs_time) { - if (!_M_mut.try_lock_until(__abs_time)) - return false; - unique_lock __lk(_M_mut, adopt_lock); + unique_lock __lk(_M_mut); if (!_M_gate1.wait_until(__lk, __abs_time, - [=]{ return _M_state < _S_n_readers; })) + [=]{ return _M_state < _S_max_readers; })) { return false; } ++_M_state; return true; } -#endif void unlock_shared() @@ -464,12 +445,17 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION auto __prev = _M_state--; if (_M_write_entered()) { + // Wake the queued writer if there are no more readers. if (_M_readers() == 0) _M_gate2.notify_one(); + // No need to notify gate1 because we give priority to the queued + // writer, and that writer will eventually notify gate1 after it + // clears the write-entered flag. } else { - if (__prev == _S_n_readers) + // Wake any thread that was blocked on reader overflow. 
+ if (__prev == _S_max_readers) _M_gate1.notify_one(); } } --bW7Nw90iaNQhVLQB Content-Type: text/x-patch; charset=us-ascii Content-Disposition: attachment; filename="patch.txt" Content-length: 12423 commit 11422fedbc04016a519e58e4c965fadbaef7851e Author: Jonathan Wakely Date: Thu Apr 2 19:50:50 2015 +0100 * include/std/shared_mutex (shared_timed_mutex): Add comments to explain the logic. (_Mutex): Remove redundant type. (_M_n_readers): Rename to _S_max_readers. (_M_write_entered, _M_readers): New convenience functions. (lock, lock_shared, try_lock_shared, unlock_shared): Use convenience functions. Use predicates with condition variables. Simplify bitwise operations. (try_lock_for, try_shared_lock_for): Convert duration to time_point and call try_lock_until or try_shared_lock_until respectively. (try_lock_until, try_shared_lock_until): Wait on the condition variables until the specified time passes. (unlock): Add Debug Mode assertion. (unlock_shared): Add Debug Mode assertion. * testsuite/30_threads/shared_timed_mutex/try_lock/3.cc: New. diff --git a/libstdc++-v3/include/std/shared_mutex b/libstdc++-v3/include/std/shared_mutex index ab1b45b..7f26465 100644 --- a/libstdc++-v3/include/std/shared_mutex +++ b/libstdc++-v3/include/std/shared_mutex @@ -268,33 +268,52 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION #else // ! _GLIBCXX_USE_PTHREAD_RWLOCK_T -#if _GTHREAD_USE_MUTEX_TIMEDLOCK - struct _Mutex : mutex, __timed_mutex_impl<_Mutex> - { - template - bool - try_lock_for(const chrono::duration<_Rep, _Period>& __rtime) - { return _M_try_lock_for(__rtime); } - - template - bool - try_lock_until(const chrono::time_point<_Clock, _Duration>& __atime) - { return _M_try_lock_until(__atime); } - }; -#else - typedef mutex _Mutex; -#endif - - // Based on Howard Hinnant's reference implementation from N2406 + // Must use the same clock as condition_variable + typedef chrono::system_clock __clock_t; - _Mutex _M_mut; + // Based on Howard Hinnant's reference implementation from N2406. 
+ + // The high bit of _M_state is the write-entered flag which is set to + // indicate a writer has taken the lock or is queuing to take the lock. + // The remaining bits are the count of reader locks. + // + // To take a reader lock, block on gate1 while the write-entered flag is + // set or the maximum number of reader locks is held, then increment the + // reader lock count. + // To release, decrement the count, then if the write-entered flag is set + // and the count is zero then signal gate2 to wake a queued writer, + // otherwise if the maximum number of reader locks was held signal gate1 + // to wake a reader. + // + // To take a writer lock, block on gate1 while the write-entered flag is + // set, then set the write-entered flag to start queueing, then block on + // gate2 while the number of reader locks is non-zero. + // To release, unset the write-entered flag and signal gate1 to wake all + // blocked readers and writers. + // + // This means that when no reader locks are held readers and writers get + // equal priority. When one or more reader locks is held a writer gets + // priority and no more reader locks can be taken while the writer is + // queued. + + // Only locked when accessing _M_state or waiting on condition variables. + mutex _M_mut; + // Used to block while write-entered is set or reader count at maximum. condition_variable _M_gate1; + // Used to block queued writers while reader count is non-zero. condition_variable _M_gate2; + // The write-entered flag and reader count. unsigned _M_state; static constexpr unsigned _S_write_entered = 1U << (sizeof(unsigned)*__CHAR_BIT__ - 1); - static constexpr unsigned _M_n_readers = ~_S_write_entered; + static constexpr unsigned _S_max_readers = ~_S_write_entered; + + // Test whether the write-entered flag is set. _M_mut must be locked. + bool _M_write_entered() const { return _M_state & _S_write_entered; } + + // The number of reader locks currently held. _M_mut must be locked. 
+ unsigned _M_readers() const { return _M_state & _S_max_readers; } public: shared_timed_mutex() : _M_state(0) {} @@ -313,11 +332,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION lock() { unique_lock __lk(_M_mut); - while (_M_state & _S_write_entered) - _M_gate1.wait(__lk); + // Wait until we can set the write-entered flag. + _M_gate1.wait(__lk, [=]{ return !_M_write_entered(); }); _M_state |= _S_write_entered; - while (_M_state & _M_n_readers) - _M_gate2.wait(__lk); + // Then wait until there are no more readers. + _M_gate2.wait(__lk, [=]{ return _M_readers() == 0; }); } bool @@ -332,41 +351,43 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION return false; } -#if _GTHREAD_USE_MUTEX_TIMEDLOCK template bool try_lock_for(const chrono::duration<_Rep, _Period>& __rel_time) { - unique_lock<_Mutex> __lk(_M_mut, __rel_time); - if (__lk.owns_lock() && _M_state == 0) - { - _M_state = _S_write_entered; - return true; - } - return false; + return try_lock_until(__clock_t::now() + __rel_time); } template bool try_lock_until(const chrono::time_point<_Clock, _Duration>& __abs_time) { - unique_lock<_Mutex> __lk(_M_mut, __abs_time); - if (__lk.owns_lock() && _M_state == 0) + unique_lock __lk(_M_mut); + if (!_M_gate1.wait_until(__lk, __abs_time, + [=]{ return !_M_write_entered(); })) { - _M_state = _S_write_entered; - return true; + return false; } - return false; + _M_state |= _S_write_entered; + if (!_M_gate2.wait_until(__lk, __abs_time, + [=]{ return _M_readers() == 0; })) + { + _M_state ^= _S_write_entered; + // Wake all threads blocked while the write-entered flag was set. + _M_gate1.notify_all(); + return false; + } + return true; } -#endif void unlock() { - { - lock_guard<_Mutex> __lk(_M_mut); - _M_state = 0; - } + lock_guard __lk(_M_mut); + _GLIBCXX_DEBUG_ASSERT( _M_write_entered() ); + _M_state = 0; + // call notify_all() while mutex is held so that another thread can't + // lock and unlock the mutex then destroy *this before we make the call. 
_M_gate1.notify_all(); } @@ -376,51 +397,29 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION lock_shared() { unique_lock __lk(_M_mut); - while ((_M_state & _S_write_entered) - || (_M_state & _M_n_readers) == _M_n_readers) - { - _M_gate1.wait(__lk); - } - unsigned __num_readers = (_M_state & _M_n_readers) + 1; - _M_state &= ~_M_n_readers; - _M_state |= __num_readers; + _M_gate1.wait(__lk, [=]{ return _M_state < _S_max_readers; }); + ++_M_state; } bool try_lock_shared() { - unique_lock<_Mutex> __lk(_M_mut, try_to_lock); - unsigned __num_readers = _M_state & _M_n_readers; - if (__lk.owns_lock() && !(_M_state & _S_write_entered) - && __num_readers != _M_n_readers) + unique_lock __lk(_M_mut, try_to_lock); + if (!__lk.owns_lock()) + return false; + if (_M_state < _S_max_readers) { - ++__num_readers; - _M_state &= ~_M_n_readers; - _M_state |= __num_readers; + ++_M_state; return true; } return false; } -#if _GTHREAD_USE_MUTEX_TIMEDLOCK template bool try_lock_shared_for(const chrono::duration<_Rep, _Period>& __rel_time) { - unique_lock<_Mutex> __lk(_M_mut, __rel_time); - if (__lk.owns_lock()) - { - unsigned __num_readers = _M_state & _M_n_readers; - if (!(_M_state & _S_write_entered) - && __num_readers != _M_n_readers) - { - ++__num_readers; - _M_state &= ~_M_n_readers; - _M_state |= __num_readers; - return true; - } - } - return false; + return try_lock_shared_until(__clock_t::now() + __rel_time); } template @@ -428,38 +427,35 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION try_lock_shared_until(const chrono::time_point<_Clock, _Duration>& __abs_time) { - unique_lock<_Mutex> __lk(_M_mut, __abs_time); - if (__lk.owns_lock()) + unique_lock __lk(_M_mut); + if (!_M_gate1.wait_until(__lk, __abs_time, + [=]{ return _M_state < _S_max_readers; })) { - unsigned __num_readers = _M_state & _M_n_readers; - if (!(_M_state & _S_write_entered) - && __num_readers != _M_n_readers) - { - ++__num_readers; - _M_state &= ~_M_n_readers; - _M_state |= __num_readers; - return true; - } + return false; } - return false; + 
++_M_state; + return true; } -#endif void unlock_shared() { - lock_guard<_Mutex> __lk(_M_mut); - unsigned __num_readers = (_M_state & _M_n_readers) - 1; - _M_state &= ~_M_n_readers; - _M_state |= __num_readers; - if (_M_state & _S_write_entered) + lock_guard __lk(_M_mut); + _GLIBCXX_DEBUG_ASSERT( _M_readers() > 0 ); + auto __prev = _M_state--; + if (_M_write_entered()) { - if (__num_readers == 0) + // Wake the queued writer if there are no more readers. + if (_M_readers() == 0) _M_gate2.notify_one(); + // No need to notify gate1 because we give priority to the queued + // writer, and that writer will eventually notify gate1 after it + // clears the write-entered flag. } else { - if (__num_readers == _M_n_readers - 1) + // Wake any thread that was blocked on reader overflow. + if (__prev == _S_max_readers) _M_gate1.notify_one(); } } diff --git a/libstdc++-v3/testsuite/30_threads/shared_timed_mutex/try_lock/3.cc b/libstdc++-v3/testsuite/30_threads/shared_timed_mutex/try_lock/3.cc new file mode 100644 index 0000000..e9f728e --- /dev/null +++ b/libstdc++-v3/testsuite/30_threads/shared_timed_mutex/try_lock/3.cc @@ -0,0 +1,75 @@ +// { dg-do run { target *-*-freebsd* *-*-dragonfly* *-*-netbsd* *-*-linux* *-*-solaris* *-*-cygwin *-*-darwin* powerpc-ibm-aix* } } +// { dg-options " -std=gnu++14 -pthread" { target *-*-freebsd* *-*-dragonfly* *-*-netbsd* *-*-linux* powerpc-ibm-aix* } } +// { dg-options " -std=gnu++14 -pthreads" { target *-*-solaris* } } +// { dg-options " -std=gnu++14 " { target *-*-cygwin *-*-darwin* } } +// { dg-require-cstdint "" } +// { dg-require-gthreads "" } + +// Copyright (C) 2013-2015 Free Software Foundation, Inc. +// +// This file is part of the GNU ISO C++ Library. This library is free +// software; you can redistribute it and/or modify it under the +// terms of the GNU General Public License as published by the +// Free Software Foundation; either version 3, or (at your option) +// any later version. 
+ +// This library is distributed in the hope that it will be useful, +// but WITHOUT ANY WARRANTY; without even the implied warranty of +// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +// GNU General Public License for more details. + +// You should have received a copy of the GNU General Public License along +// with this library; see the file COPYING3. If not see +// . + + +#include +#include +#include +#include + +int main() +{ + bool test __attribute__((unused)) = true; + typedef std::shared_timed_mutex mutex_type; + + try + { + mutex_type m; + m.lock(); + bool b; + + std::thread t([&] { + try + { + using namespace std::chrono; + auto timeout = 100ms; + auto start = system_clock::now(); + b = m.try_lock_for(timeout); + auto t = system_clock::now() - start; + VERIFY( !b ); + VERIFY( t >= timeout ); + + start = system_clock::now(); + b = m.try_lock_until(start + timeout); + t = system_clock::now() - start; + VERIFY( !b ); + VERIFY( t >= timeout ); + } + catch (const std::system_error& e) + { + VERIFY( false ); + } + }); + t.join(); + m.unlock(); + } + catch (const std::system_error& e) + { + VERIFY( false ); + } + catch (...) + { + VERIFY( false ); + } +} --bW7Nw90iaNQhVLQB--