public inbox for pthreads-win32@sourceware.org
From: Ross Johnson <Ross.Johnson@homemail.com.au>
To: Gottlob Frege <gottlobfrege@gmail.com>
Cc: pthreads-win32@sources.redhat.com
Subject: Re: starvation in pthread_once?
Date: Thu, 17 Sep 2009 01:28:00 -0000
Message-ID: <4AB190C6.1000500@homemail.com.au>
In-Reply-To: <97ffb310909160838v6671abccv3b230ec213e66057@mail.gmail.com>

Hi Gottlob,

Vladimir Kliatchko reimplemented pthread_once to use his 
implementation of MCS (Mellor-Crummey/Scott) locks.

From ptw32_MCS_lock.c:

 * About MCS locks:
 *
 * MCS locks are queue-based locks, where the queue nodes are local to the
 * thread. The 'lock' is nothing more than a global pointer that points to
 * the last node in the queue, or is NULL if the queue is empty.
 * 
 * Originally designed for use as spin locks requiring no kernel resources
 * for synchronisation or blocking, the implementation below has adapted
 * the MCS spin lock for use as a general mutex that will suspend threads
 * when there is lock contention.
 *
 * Because the queue nodes are thread-local, most of the memory read/write
 * operations required to add or remove nodes from the queue do not trigger
 * cache-coherence updates.
 *
 * Like 'named' mutexes, MCS locks consume system resources transiently -
 * they are able to acquire and free resources automatically - but MCS
 * locks do not require any unique 'name' to identify the lock to all
 * threads using it.

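To make the shape of the algorithm concrete, here is a minimal MCS
spin-lock sketch in portable C11 atomics. This is an illustration only,
not the pthreads-win32 code: ptw32_MCS_lock.c suspends waiting threads
on a kernel event under contention, whereas this sketch simply spins,
and the names (mcs_lock, mcs_node, mcs_acquire, mcs_release) are
hypothetical.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
    _Atomic(struct mcs_node *) next;   /* successor in the queue, if any */
    atomic_bool locked;                /* true while this thread must wait */
};

/* The 'lock' is just a pointer to the tail of the queue; NULL means free. */
typedef _Atomic(struct mcs_node *) mcs_lock;

void mcs_acquire(mcs_lock *lock, struct mcs_node *me)
{
    atomic_store(&me->next, (struct mcs_node *)NULL);
    atomic_store(&me->locked, true);

    /* Atomically make ourselves the new tail; the old tail, if any,
       is our predecessor. */
    struct mcs_node *pred = atomic_exchange(lock, me);
    if (pred != NULL) {
        /* Queue was non-empty: link behind the predecessor and wait.
           We spin only on our own (thread-local) node, so waiting
           causes no cache-coherence traffic on a shared location. */
        atomic_store(&pred->next, me);
        while (atomic_load(&me->locked))
            ;  /* the real ptw32 code blocks the thread here instead */
    }
}

void mcs_release(mcs_lock *lock, struct mcs_node *me)
{
    struct mcs_node *succ = atomic_load(&me->next);
    if (succ == NULL) {
        /* No visible successor: if we are still the tail, swing the
           tail back to NULL and the lock is free. */
        struct mcs_node *expected = me;
        if (atomic_compare_exchange_strong(lock, &expected, NULL))
            return;
        /* Another thread swapped itself in as tail but has not yet
           linked behind us: wait for the link to appear. */
        while ((succ = atomic_load(&me->next)) == NULL)
            ;
    }
    atomic_store(&succ->locked, false);  /* hand the lock to the successor */
}

The key property is that each waiter spins (or, in pthreads-win32,
blocks) on its own node's 'locked' flag, which is why contention does
not hammer a single shared cache line.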


Gottlob Frege wrote:
> Blast from the past - whatever happened to changing call_once() to not
> create the named mutex when it wasn't needed?
>
>
> On Tue, Mar 22, 2005 at 1:26 PM, Gottlob Frege <gottlobfrege@gmail.com> wrote:
>   
>> If it comes down to it, I might vote for daisy-chaining over
>> busy-looping (assuming the busy-looping is endless).  Remember, this
>> all started because the original implementation was polling/sleeping
>> on 'initted' - and if the busy-looping thread is high-priority, then
>> we are locked forever...
>>
>>
>> On Tue, 22 Mar 2005 15:14:07 +1100, Ross Johnson
>> <rpj@callisto.canberra.edu.au> wrote:
>>     
>>> On Mon, 2005-03-21 at 11:07 -0500, Gottlob Frege wrote:
>>>
>>>       
>>>> So, it doesn't seem to be getting any easier!  *Almost* to the point
>>>> where a big named mutex becomes tempting - there is a lot to be said
>>>> for simplicity.  However, my/the goal is still to at least minimize
>>>> the overhead of the non-contention simple init case...
>>>>         
>>> I'm less and less tempted to use a named mutex. Perhaps there's a
>>> standard technique, but AFAICS it's impossible to guarantee that the
>>> name is unique across the system (and all Windows variants).
>>>
>>> And I agree, minimum overhead for the uncontended case is the top
>>> priority (after correct behaviour). I'm not concerned at all about speed
>>> in the cancellation case.
>>>
>>>       
>>>> And the event is still an auto-reset, although I no longer think it
>>>> really matters - I really haven't had the tenacity to think this stuff
>>>> through.  If it doesn't matter, manual-reset would be better, I think:
>>>> I don't like having one thread rely on another thread to wake it up,
>>>> for cases where that thread is killed, has a strange priority, etc.
>>>>         
>>> It all looks to me like it will work. I don't recall, in the version
>>> that's in pthreads-win32 now, why I included eventUsers (++/--) in what
>>> you have as the __lock() sections. Maybe to save additional Atomic calls
>>> (bus locks). But now I realise [in that version - not yours] that waking
>>> threads can block unnecessarily when leaving the wait section.
>>>
>>> It probably doesn't matter if cancel_event is auto or manual. I think
>>> there will be at most one thread waiting on it. And, for 'event', like
>>> you I'm uncomfortable with daisy-chaining SetEvent() calls.
>>>
>>> The only problem with the alternative of using a manual-reset event is
>>> that some thread/s may busy-loop for a bit until an explicit reset
>>> occurs. It seems untidy, but it's probably more robust than daisy-
>>> chained SetEvents given the issues you've identified above.
>>>
>>> So I'm tempted to leave both events as manual-reset events. I'm also
>>> guessing that this busy-looping will be extremely rare - perhaps only
>>> when a new thread sneaks in to become initter, then suspends just inside
>>> while the first waiter is waking and heading back to the loop start.
>>>
>>> I'll run your design and let you know the results.
>>>
>>> Thanks.
>>> Ross
>>>
>>>
>>>       

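For readers joining the thread late: the "polling/sleeping on 'initted'"
scheme Gottlob refers to above looked roughly like the sketch below.
This is a reconstruction from the description in the thread, not the
historical pthreads-win32 source; the names (naive_once, initting,
initted) are made up for illustration.

#include <windows.h>

/* Rough reconstruction of the polling pthread_once() being criticised:
 * one thread runs the init routine while latecomers sleep-poll on
 * 'initted'.  With fixed priorities, Sleep(0) yields only to threads
 * of equal or higher priority, so on a single CPU a high-priority
 * waiter can starve the lower-priority initialiser indefinitely -
 * the "locked forever" scenario described above. */
static volatile LONG initting = 0;
static volatile LONG initted  = 0;

void naive_once(void (*init_routine)(void))
{
    if (!initted) {
        if (InterlockedExchange(&initting, 1) == 0) {
            init_routine();          /* we won the race: run the init */
            initted = 1;
        } else {
            while (!initted)         /* we lost: poll until it's done */
                Sleep(0);            /* busy-ish wait; starvation-prone */
        }
    }
}

The MCS-based rewrite avoids this entirely: waiters queue up and block,
so no thread ever makes progress contingent on out-spinning another.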

Thread overview: 15+ messages
2005-03-03  9:32 Gottlob Frege
2005-03-04 23:18 ` Ross Johnson
2005-03-08  2:22   ` Ross Johnson
2005-03-08  3:00     ` Gottlob Frege
2005-03-08  6:11       ` Ross Johnson
2005-03-08  9:49         ` Alexander Terekhov
2005-03-08  9:56           ` Alexander Terekhov
2005-03-08  9:58             ` Alexander Terekhov
2005-03-08 16:11               ` Gottlob Frege
2005-03-08 17:14                 ` Alexander Terekhov
2005-03-08 18:28                   ` Gottlob Frege
2005-03-14  2:47                 ` Ross Johnson
     [not found]                   ` <97ffb310503140832401faa2b@mail.gmail.com>
     [not found]                     ` <1110842168.21321.78.camel@desk.home>
     [not found]                       ` <97ffb3105031415473a3ee169@mail.gmail.com>
     [not found]                         ` <1110855601.21321.203.camel@desk.home>
     [not found]                           ` <97ffb31050321080747aa5a7c@mail.gmail.com>
     [not found]                             ` <1111464847.8363.91.camel@desk.home>
     [not found]                               ` <97ffb3105032209269b0f44e@mail.gmail.com>
2009-09-16 15:38                                 ` Gottlob Frege
2009-09-17  1:28                                   ` Ross Johnson [this message]
2005-03-08 16:05       ` Gottlob Frege
