Date: Thu, 17 Sep 2009 01:28:00 -0000
From: Ross Johnson
To: Gottlob Frege
CC: pthreads-win32@sources.redhat.com
Subject: Re: starvation in pthread_once?

Hi Gottlob,

Vladimir Kliatchko reimplemented pthread_once to use his implementation
of MCS (Mellor-Crummey/Scott) locks. From ptw32_MCS_lock.c:-

 * About MCS locks:
 *
 * MCS locks are queue-based locks, where the queue nodes are local to the
 * thread. The 'lock' is nothing more than a global pointer that points to
 * the last node in the queue, or is NULL if the queue is empty.
 *
 * Originally designed for use as spin locks requiring no kernel resources
 * for synchronisation or blocking, the implementation below has adapted
 * the MCS spin lock for use as a general mutex that will suspend threads
 * when there is lock contention.
 *
 * Because the queue nodes are thread-local, most of the memory read/write
 * operations required to add or remove nodes from the queue do not trigger
 * cache-coherence updates.
 *
 * Like 'named' mutexes, MCS locks consume system resources transiently -
 * they are able to acquire and free resources automatically - but MCS
 * locks do not require any unique 'name' to identify the lock to all
 * threads using it.
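To make the queue structure concrete, here is a minimal spin-only sketch
of the MCS idea described above. The names (mcs_node_t, mcs_acquire,
mcs_release) are illustrative, not the actual ptw32_MCS_lock.c
interface, and where this sketch spins, the real implementation suspends
the waiting thread on a kernel event instead:

#include <windows.h>

/* One queue node per waiting thread; it lives on that thread's stack. */
typedef struct mcs_node
{
    struct mcs_node *volatile next;   /* successor in the queue, if any */
    volatile LONG             locked; /* 1 while this thread must wait  */
} mcs_node_t;

/* The lock itself is just a pointer to the tail node; NULL means free. */
typedef mcs_node_t *volatile mcs_lock_t;

void mcs_acquire(mcs_lock_t *lock, mcs_node_t *self)
{
    mcs_node_t *pred;

    self->next = NULL;
    self->locked = 1;

    /* Atomically make our node the new tail of the queue. */
    pred = (mcs_node_t *)
        InterlockedExchangePointer((PVOID volatile *)lock, (PVOID)self);

    if (pred != NULL)
    {
        /* Queue was non-empty: link in behind the predecessor and wait
         * for it to hand the lock over by clearing our 'locked' flag.
         * Note that we spin only on our own node. */
        pred->next = self;
        while (self->locked)
            YieldProcessor();
    }
}

void mcs_release(mcs_lock_t *lock, mcs_node_t *self)
{
    if (self->next == NULL)
    {
        /* No visible successor: if we are still the tail, swing the
         * lock pointer back to NULL and the lock is free. */
        if (InterlockedCompareExchangePointer((PVOID volatile *)lock,
                                              NULL, (PVOID)self)
            == (PVOID)self)
            return;

        /* Another thread has swapped itself in as the tail but has not
         * linked in behind us yet; wait for the link to appear. */
        while (self->next == NULL)
            YieldProcessor();
    }
    self->next->locked = 0; /* hand the lock to the successor */
}

(The sketch leans on the Interlocked calls being full memory barriers; a
production version needs more care about ordering on non-x86 targets.)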
Gottlob Frege wrote:
> Blast from the past - whatever happened to changing call_once() to not
> create the named mutex when it wasn't needed?
>
>
> On Tue, Mar 22, 2005 at 1:26 PM, Gottlob Frege wrote:
>
>> If it comes down to it, I might vote for daisy-chaining over
>> busy-looping (assuming the busy-looping is endless). Remember, this
>> all started because the original implementation was polling/sleeping
>> on 'initted' - and if the busy-looping thread is high-priority, then
>> we are locked forever...
>>
>>
>> On Tue, 22 Mar 2005 15:14:07 +1100, Ross Johnson wrote:
>>
>>> On Mon, 2005-03-21 at 11:07 -0500, Gottlob Frege wrote:
>>>
>>>> So, it doesn't seem to be getting any easier! *Almost* to the point
>>>> where a big named mutex becomes tempting - there is a lot to be said
>>>> for simplicity. However, my/the goal is still to at least minimize
>>>> the non-contention simple init case...
>>>
>>> I'm less and less tempted to use a named mutex. Perhaps there's a
>>> standard technique, but AFAICS it's impossible to guarantee that the
>>> name is unique across the system (and all Windows variants).
>>>
>>> And I agree, minimum overhead for the uncontended case is the top
>>> priority (after correct behaviour). I'm not concerned at all about
>>> speed in the cancellation case.
>>>
>>>> And the event is still an auto-reset, although I no longer think it
>>>> really matters - I really haven't had the tenacity to think this
>>>> stuff through. If it doesn't matter, manual-reset would be better, I
>>>> think - I don't like having one thread relying on another thread
>>>> waking it up, for cases where the thread is killed, or strange
>>>> thread priorities, etc.
>>>
>>> It all looks to me like it will work. I don't recall, in the version
>>> that's in pthreads-win32 now, why I included eventUsers (++/--) in
>>> what you have as the __lock() sections. Maybe to save additional
>>> Atomic calls (bus locks). But now I realise [in that version - not
>>> yours] that waking threads can block unnecessarily when leaving the
>>> wait section.
>>>
>>> It probably doesn't matter if cancel_event is auto or manual. I think
>>> there will be at most one thread waiting on it. And, for 'event',
>>> like you I'm uncomfortable with daisy-chaining SetEvent() calls.
>>>
>>> The only problem with the alternative of using a manual-reset event
>>> is that some thread/s may busy-loop for a bit until an explicit reset
>>> occurs. It seems untidy, but it's probably more robust than
>>> daisy-chained SetEvents given the issues you've identified above.
>>>
>>> So I'm tempted to leave both events as manual-reset events. I'm also
>>> guessing that this busy-looping will be extremely rare - perhaps only
>>> when a new thread sneaks in to become initter, then suspends just
>>> inside while the first waiter is waking and heading back to the loop
>>> start.
>>>
>>> I'll run your design and let you know the results.
>>>
>>> Thanks.
>>> Ross
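For reference, a pthread_once layered over an MCS lock has roughly the
following shape. Again a sketch only, reusing the illustrative mcs_*
names from the first sketch rather than the actual pthread_once.c
source, and omitting the handling of an initter cancelled inside
init_routine, which the real implementation must cope with:

/* Hypothetical once-control; not the pthreads-win32 pthread_once_t. */
typedef struct
{
    volatile LONG done;  /* set once init_routine has completed       */
    mcs_lock_t    lock;  /* serialises competing initialising threads */
} once_control_t;

int once(once_control_t *once_control, void (*init_routine)(void))
{
    if (!once_control->done)  /* fast path: one read, no kernel object */
    {
        mcs_node_t node;      /* queue node on this thread's stack */

        mcs_acquire(&once_control->lock, &node);
        if (!once_control->done)  /* re-check now that we hold the lock */
        {
            (*init_routine)();
            once_control->done = 1;
        }
        mcs_release(&once_control->lock, &node);
    }
    return 0;
}

Note what disappears relative to the older designs discussed above: no
named mutex (so no system-wide unique name to guarantee), and no thread
polling/sleeping on 'initted' - a waiter blocks until its predecessor in
the queue explicitly hands the lock over.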