From: Ross Johnson <ross.johnson@homemail.com.au>
To: Vladimir Kliatchko <vladimir@kliatchko.com>
Cc: Gottlob Frege <gottlobfrege@gmail.com>,
	Pthreads-Win32 list <pthreads-win32@sources.redhat.com>
Subject: RE: New pthread_once implementation
Date: Mon, 30 May 2005 14:48:00 -0000
Message-ID: <1117446923.7427.5.camel@desk.home>
In-Reply-To: <0IH700C30EV237@mta8.srv.hcvlny.cv.net>

Hi Vlad,

The nice thing about your semaphore-based implementation was that, even
though you could release just one waiter on cancellation, all waiting
threads could be released in a single kernel call on a normal exit. In
your MCS version, the dequeueing involves sequential calls to SetEvent,
which could be much slower by comparison. That's my only concern with
it. The threat of an async cancellation leaving waiters stranded was a
concern at one point, but none of the previous implementations of this
routine has been safe against it either.
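
To put the cost difference concretely, here is a minimal sketch (the
helper names and parameters are hypothetical, not from either patch):

#include <windows.h>

/* Semaphore path: all N waiters released in one kernel call. */
static void wake_all_sem(HANDLE sem, LONG nWaiters)
{
  if (nWaiters > 0)
    (void) ReleaseSemaphore(sem, nWaiters, NULL);
}

/* Event-queue (MCS-style) path: one per-waiter event, hence one
 * SetEvent kernel call per queued thread. */
static void wake_all_events(HANDLE *events, int nWaiters)
{
  int i;
  for (i = 0; i < nWaiters; i++)
    (void) SetEvent(events[i]);
}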

Still pondering your previous version (and not yet convinced that it's
fatally flawed), I've tried another variation.

In this variation, the cancellation handler doesn't reset state to INIT
but to a new CANCELLED state, so that any newly arriving threads, plus
the awoken waiter, are prevented from becoming the new initter until
state can be reset to INIT in the wait path [by one of those threads],
at a point where the semaphore is guaranteed to be valid. I think this
removes any races between semaphore closure and semaphore operations.
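
Sketching the state transitions this gives (this just restates the
code below):

  INIT      -- CAS by the first arriving thread ------------> STARTED
  STARTED   -- init_routine returns normally ---------------> DONE
  STARTED   -- initter cancelled (cleanup handler) ---------> CANCELLED
  CANCELLED -- reset in the wait path, by the woken waiter
               or a newly arriving thread ------------------> INIT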

[NB. in the test before the WaitForSingleObject call, the == is now >=]

This variation passes repeated runs of once4.c (aggressive cancellation
with varying-priority threads hitting the once_control) on my
uniprocessor machine. I also went as far as adding Sleep(1); after
every semicolon and left curly brace to try to break it.

PS. I'm also perhaps too conscious of 'spamming' the list with endless
versions of this stubborn little routine, but that is what the list is
for, so I'm not going to worry about it. I'm sure anyone who finds it
irritating will filter it or something.


#define PTHREAD_ONCE_INIT       {0, 0, 0, 0}

enum ptw32_once_state {
  PTW32_ONCE_INIT      = 0x0,   /* init_routine has not yet been run */
  PTW32_ONCE_DONE      = 0x1,   /* init_routine has completed */
  PTW32_ONCE_STARTED   = 0x2,   /* an initter is running init_routine */
  PTW32_ONCE_CANCELLED = 0x3    /* initter was cancelled; awaiting reset to INIT */
};

struct pthread_once_t_
{
  int          state;             /* one of ptw32_once_state */
  int          reserved;
  int          numSemaphoreUsers; /* count of threads in the wait path */
  HANDLE       semaphore;         /* created lazily by the first waiter */
};

static void PTW32_CDECL
ptw32_once_init_routine_cleanup(void * arg)
{
  pthread_once_t * once_control = (pthread_once_t *) arg;

  /*
   * Continue to direct new threads into the wait path until the waiter that we
   * release or a new thread can reset state to INIT.
   */
  (void) PTW32_INTERLOCKED_EXCHANGE((LPLONG)&once_control->state, (LONG)PTW32_ONCE_CANCELLED);

  if (InterlockedExchangeAdd((LPLONG)&once_control->semaphore, 0L)) /* MBR fence: atomic read with full memory barrier */
    {
      ReleaseSemaphore(once_control->semaphore, 1, NULL);
    }
}

int
pthread_once (pthread_once_t * once_control, void (*init_routine) (void))
{
  int result;
  int state;
  HANDLE sema;

  if (once_control == NULL || init_routine == NULL)
    {
      result = EINVAL;
      goto FAIL0;
    }
  else
    {
      result = 0;
    }

  while ((state = (int)
	  PTW32_INTERLOCKED_COMPARE_EXCHANGE((PTW32_INTERLOCKED_LPLONG)&once_control->state,
					     (PTW32_INTERLOCKED_LONG)PTW32_ONCE_STARTED,
					     (PTW32_INTERLOCKED_LONG)PTW32_ONCE_INIT))
	 != PTW32_ONCE_DONE)
    {
      if (PTW32_ONCE_INIT == state)
        {

#ifdef _MSC_VER
#pragma inline_depth(0) /* prevent the optimiser inlining across the cleanup push/pop */
#endif

          pthread_cleanup_push(ptw32_once_init_routine_cleanup, (void *) once_control);
          (*init_routine)();
          pthread_cleanup_pop(0);

#ifdef _MSC_VER
#pragma inline_depth()
#endif

          (void) PTW32_INTERLOCKED_EXCHANGE((LPLONG)&once_control->state, 
                                            (LONG)PTW32_ONCE_DONE);

          /*
           * We didn't create the semaphore.
           * It is only there if someone is waiting.
           */
          if (InterlockedExchangeAdd((LPLONG)&once_control->semaphore, 0L)) /* MBR fence */
            {
              ReleaseSemaphore(once_control->semaphore, 
                               once_control->numSemaphoreUsers, NULL);
            }
        }
      else
        {
          /*
           * The first thread into the wait path creates the semaphore.
           * If another thread has already installed one, discard ours.
           */
          if (1 == InterlockedIncrement((LPLONG)&once_control->numSemaphoreUsers))
            {
              sema = CreateSemaphore(NULL, 0, INT_MAX, NULL);

              if (PTW32_INTERLOCKED_COMPARE_EXCHANGE((PTW32_INTERLOCKED_LPLONG)&once_control->semaphore,
						     (PTW32_INTERLOCKED_LONG)sema,
						     (PTW32_INTERLOCKED_LONG)0))
                {
                  CloseHandle(sema);
                }
            }

	  /*
	   * If initter was cancelled then state is CANCELLED.
	   * Until state is reset to INIT, all new threads will enter the wait path.
	   * The woken waiter, if it exists, will also re-enter the wait path, but
	   * either it or a new thread will reset state = INIT here, continue around the Wait,
	   * and become the new initter. Any thread that is suspended in the wait path before
	   * this point will hit this check. Any thread suspended between this check and
	   * the Wait will wait on a valid semaphore, and possibly continue through it
	   * if the cancellation handler has incremented (released) it and there were
	   * no waiters.
	   */
	  (void) PTW32_INTERLOCKED_COMPARE_EXCHANGE((PTW32_INTERLOCKED_LPLONG)&once_control->state,
						    (PTW32_INTERLOCKED_LONG)PTW32_ONCE_INIT,
						    (PTW32_INTERLOCKED_LONG)PTW32_ONCE_CANCELLED);

	  /*
	   * Check 'state' again in case the initting thread has finished
	   * and left before seeing that there was a semaphore.
	   */
          if (InterlockedExchangeAdd((LPLONG)&once_control->state, 0L) >= PTW32_ONCE_STARTED)
            {
              WaitForSingleObject(once_control->semaphore, INFINITE);
            }

          if (0 == InterlockedDecrement((LPLONG)&once_control->numSemaphoreUsers))
            {
              /* we were the last waiter out: close and detach the semaphore */
              if ((sema =
		   (HANDLE) PTW32_INTERLOCKED_EXCHANGE((LPLONG)&once_control->semaphore, (LONG)0)))
                {
                  CloseHandle(sema);
                }
            }
        }
    }

  /*
   * ------------
   * Exit point: success falls through; the EINVAL check jumps here.
   * ------------
   */
FAIL0:
  return (result);
}                               /* pthread_once */
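
For completeness, the calling pattern this routine has to serve; a
minimal usage sketch (the function names here are hypothetical):

#include <pthread.h>

static pthread_once_t once = PTHREAD_ONCE_INIT;

static void init_me(void)
{
  /* one-time setup; may contain cancellation points */
}

void do_init(void)
{
  (void) pthread_once(&once, init_me);
  /* init_me() has run to completion exactly once by this point */
}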

