Subject: RE: New pthread_once implementation
From: Ross Johnson
To: Vladimir Kliatchko
Cc: Gottlob Frege, Pthreads-Win32 list
Date: Mon, 30 May 2005 14:48:00 -0000

Hi Vlad,

The nice thing about your implementation using semaphores was that, even
though you could release just one waiter on cancellation, all waiting
threads could be released in one call to the kernel when exiting normally.
In your MCS version, the dequeueing involves sequential calls to SetEvent,
which could be much slower in comparison. That's my only concern with it.
The threat of an async cancellation leaving waiters stranded was a concern
at one point, but none of the previous implementations of this routine has
been safe against it either.

Still pondering your previous version (and not yet convinced that it's
fatally flawed), I've tried another variation. In this variation, the
cancellation handler doesn't reset state to INIT, but to a new state ==
CANCELLED, so that any newly arriving threads, plus the awoken waiter, are
prevented from becoming the new initter until state can be reset to INIT
in the wait path [by one of those threads], at the point where the
semaphore is guaranteed to be valid. I think this removes any races
between semaphore closure and other operations on it. [NB. in the test
before the WaitForSingleObject call, the == is now >=]

This variation passes repeated runs of once4.c (aggressive cancellation
with varying priority threads hitting the once_control) on my
uniprocessor. I also went as far as adding Sleep(1); after every semicolon
and left-curly brace to try to break it.

PS. I'm also perhaps too conscious of 'spamming' the list with endless
versions of this stubborn little routine, but this is the purpose of the
list, so I'm not personally going to worry about it. I'm sure anyone who
finds it irritating will filter it or something.
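To be concrete about the wake-up cost point above: with the semaphore, all
waiters can be released with one trip into the kernel, whereas an event per
queued waiter means one kernel call per waiter. A rough sketch of the two
release paths (the node layout here is only illustrative, not your actual
MCS code):

#include <windows.h>
#include <limits.h>

/* Illustrative waiter node, one per blocked thread (made-up layout). */
typedef struct waiter_node
{
  HANDLE               event;   /* event this waiter blocks on      */
  struct waiter_node * next;    /* next waiter in the queue         */
} waiter_node;

/* Semaphore path: every waiter is released with a single kernel call,
 * no matter how many threads are blocked. */
static void
release_all_semaphore (HANDLE sema, LONG numWaiters)
{
  if (numWaiters > 0)
    {
      ReleaseSemaphore(sema, numWaiters, NULL);
    }
}

/* Event-per-waiter path: one SetEvent (kernel call) per waiter, so the
 * cost grows with the length of the queue. */
static void
release_all_events (waiter_node * head)
{
  while (head != NULL)
    {
      waiter_node * next = head->next; /* read before waking; the waiter
                                          may free its node */
      SetEvent(head->event);
      head = next;
    }
}

int
main (void)
{
  /* No waiters here; this just shows the shape of the two paths. */
  HANDLE sema = CreateSemaphore(NULL, 0, INT_MAX, NULL);
  release_all_semaphore(sema, 0);
  release_all_events(NULL);
  CloseHandle(sema);
  return 0;
}

The variation itself follows.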
#define PTHREAD_ONCE_INIT       {0, 0, 0, 0}

enum ptw32_once_state
{
  PTW32_ONCE_INIT      = 0x0,
  PTW32_ONCE_DONE      = 0x1,
  PTW32_ONCE_STARTED   = 0x2,
  PTW32_ONCE_CANCELLED = 0x3
};

struct pthread_once_t_
{
  int    state;
  int    reserved;
  int    numSemaphoreUsers;
  HANDLE semaphore;
};

static void PTW32_CDECL
ptw32_once_init_routine_cleanup (void * arg)
{
  pthread_once_t * once_control = (pthread_once_t *) arg;

  /*
   * Continue to direct new threads into the wait path until the waiter
   * that we release or a new thread can reset state to INIT.
   */
  (void) PTW32_INTERLOCKED_EXCHANGE((LPLONG)&once_control->state,
                                    (LONG)PTW32_ONCE_CANCELLED);

  if (InterlockedExchangeAdd((LPLONG)&once_control->semaphore, 0L)) /* MBR fence */
    {
      ReleaseSemaphore(once_control->semaphore, 1, NULL);
    }
}

int
pthread_once (pthread_once_t * once_control, void (*init_routine) (void))
{
  int result;
  int state;
  HANDLE sema;

  if (once_control == NULL || init_routine == NULL)
    {
      result = EINVAL;
      goto FAIL0;
    }
  else
    {
      result = 0;
    }

  while ((state = (int)
          PTW32_INTERLOCKED_COMPARE_EXCHANGE((PTW32_INTERLOCKED_LPLONG)&once_control->state,
                                             (PTW32_INTERLOCKED_LONG)PTW32_ONCE_STARTED,
                                             (PTW32_INTERLOCKED_LONG)PTW32_ONCE_INIT))
         != PTW32_ONCE_DONE)
    {
      if (PTW32_ONCE_INIT == state)
        {
#ifdef _MSC_VER
#pragma inline_depth(0)
#endif

          pthread_cleanup_push(ptw32_once_init_routine_cleanup, (void *) once_control);
          (*init_routine)();
          pthread_cleanup_pop(0);

#ifdef _MSC_VER
#pragma inline_depth()
#endif

          (void) PTW32_INTERLOCKED_EXCHANGE((LPLONG)&once_control->state,
                                            (LONG)PTW32_ONCE_DONE);

          /*
           * we didn't create the semaphore.
           * it is only there if there is someone waiting.
           */
          if (InterlockedExchangeAdd((LPLONG)&once_control->semaphore, 0L)) /* MBR fence */
            {
              ReleaseSemaphore(once_control->semaphore,
                               once_control->numSemaphoreUsers, NULL);
            }
        }
      else
        {
          if (1 == InterlockedIncrement((LPLONG)&once_control->numSemaphoreUsers))
            {
              sema = CreateSemaphore(NULL, 0, INT_MAX, NULL);

              if (PTW32_INTERLOCKED_COMPARE_EXCHANGE((PTW32_INTERLOCKED_LPLONG)&once_control->semaphore,
                                                     (PTW32_INTERLOCKED_LONG)sema,
                                                     (PTW32_INTERLOCKED_LONG)0))
                {
                  CloseHandle(sema);
                }
            }

          /*
           * If initter was cancelled then state is CANCELLED.
           * Until state is reset to INIT, all new threads will enter the wait path.
           * The woken waiter, if it exists, will also re-enter the wait path, but
           * either it or a new thread will reset state = INIT here, continue around
           * the Wait, and become the new initter. Any thread that is suspended in
           * the wait path before this point will hit this check. Any thread
           * suspended between this check and the Wait will wait on a valid
           * semaphore, and possibly continue through it if the cancellation
           * handler has incremented (released) it and there were no waiters.
           */
          (void) PTW32_INTERLOCKED_COMPARE_EXCHANGE((PTW32_INTERLOCKED_LPLONG)&once_control->state,
                                                    (PTW32_INTERLOCKED_LONG)PTW32_ONCE_INIT,
                                                    (PTW32_INTERLOCKED_LONG)PTW32_ONCE_CANCELLED);

          /*
           * Check 'state' again in case the initting thread has finished
           * and left before seeing that there was a semaphore.
           */
          if (InterlockedExchangeAdd((LPLONG)&once_control->state, 0L) >= PTW32_ONCE_STARTED)
            {
              WaitForSingleObject(once_control->semaphore, INFINITE);
            }

          if (0 == InterlockedDecrement((LPLONG)&once_control->numSemaphoreUsers))
            {
              /* we were last */
              if ((sema = (HANDLE) PTW32_INTERLOCKED_EXCHANGE((LPLONG)&once_control->semaphore,
                                                              (LONG)0)))
                {
                  CloseHandle(sema);
                }
            }
        }
    }

  /*
   * ------------
   * Failure Code
   * ------------
   */
FAIL0:
  return (result);

}                               /* pthread_once */
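For anyone who wants to poke at it, the scenario once4.c exercises looks
roughly like the following (this is not the actual test, just an
illustration; the thread function and counter names are made up):

#include <pthread.h>
#include <stdio.h>

static pthread_once_t once = PTHREAD_ONCE_INIT;
static int init_count = 0;

static void
init_routine (void)
{
  /*
   * pthread_testcancel is a cancellation point, so a cancel pending on
   * the initter can be acted on here, which runs the
   * ptw32_once_init_routine_cleanup handler above.
   */
  pthread_testcancel();
  init_count++;
}

static void *
worker (void * arg)
{
  (void) arg;
  pthread_once(&once, init_routine);
  return NULL;
}

int
main (void)
{
  pthread_t t[4];
  int i;

  for (i = 0; i < 4; i++)
    {
      pthread_create(&t[i], NULL, worker, NULL);
    }

  /*
   * Cancel one thread. Whether the cancel lands while that thread is the
   * initter or a waiter is timing dependent, but the invariant is the
   * same either way.
   */
  pthread_cancel(t[0]);

  for (i = 0; i < 4; i++)
    {
      pthread_join(t[i], NULL);
    }

  /*
   * init_routine must have run to completion exactly once, and every
   * pthread_once call must have returned (no stranded waiters).
   */
  printf("init_count = %d (expected 1)\n", init_count);

  return 0;
}

The real test piles aggressive cancellation and threads of varying priority
on top of this, but the invariant being checked is the same: exactly one
completed run of the init routine and no stranded waiters.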