public inbox for pthreads-win32@sourceware.org
From: Alex Kotliarov <Alex@taquote.com>
To: "'pthreads-win32@sources.redhat.com'"
	<pthreads-win32@sources.redhat.com>
Subject: pthread_cond_broadcast(...) leads to a deadlock
Date: Thu, 18 Nov 2004 16:26:00 -0000	[thread overview]
Message-ID: <715D092C2A9AD411B4DD006097C6EE8401482C8F@EXCHSVR> (raw)

Hi,

- My application, which uses one "producer" thread and N "consumer"
threads, where N > 2, locks up, and it looks like there is a problem in
the implementation of the condition variable.

- The app locks up if I use pthread_cond_broadcast(...) to unblock the
waiting "consumers".

- The app does not lock up if pthread_cond_signal(...) is used instead.
(A minimal sketch of the producer/consumer pattern involved follows
below.)
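
The sketch below is only a hypothetical reconstruction of that pattern -
the counter-based queue and all names are invented for illustration, not
taken from the real application:

	#include <pthread.h>

	#define NCONSUMERS 4

	static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;
	static int queued = 0;            /* number of work items available */

	static void *consumer (void *arg)
	{
	  for (;;)
	    {
	      pthread_mutex_lock (&qlock);
	      while (queued == 0)         /* queue empty: wait on the CV */
	        pthread_cond_wait (&qcond, &qlock);
	      --queued;                   /* take one item */
	      pthread_mutex_unlock (&qlock);
	      /* ... process the item ... */
	    }
	  return NULL;
	}

	static void *producer (void *arg)
	{
	  for (;;)
	    {
	      pthread_mutex_lock (&qlock);
	      queued += NCONSUMERS;       /* enqueue a batch of work */
	      /* Waking all consumers at once is where the lock-up shows
	       * up; using pthread_cond_signal() here instead avoids it. */
	      pthread_cond_broadcast (&qcond);
	      pthread_mutex_unlock (&qlock);
	    }
	  return NULL;
	}

	int main (void)
	{
	  pthread_t p, c[NCONSUMERS];
	  int i;

	  pthread_create (&p, NULL, producer, NULL);
	  for (i = 0; i < NCONSUMERS; i++)
	    pthread_create (&c[i], NULL, consumer, NULL);

	  pthread_join (p, NULL);         /* runs until killed */
	  return 0;
	}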

- Code that causes the deadlock:
	procedure: ptw32_cond_wait_cleanup(...)
		- The CV's external mutex gets locked immediately upon
		  entering the procedure.
		- It must be locked before exiting the procedure, after
		  "semBlockLock" (a binary semaphore) has been posted.

- Let's say that N "consumer" threads are waiting on the CV and the
"producer" thread broadcasts on that CV to wake up all consumers.

	given:
		semBlockLock semaphore's count == 0 (it is decremented in
		ptw32_cond_unblock(...))

	1. All "consumers" wake up and enter ptw32_cond_wait_cleanup(...).
	2. One "consumer" - ALPHA - acquires the CV's external mutex,
	   executes the cleanup code, returns from pthread_cond_wait(),
	   and releases the CV's external mutex.
	3. Another "consumer" acquires the CV's external mutex and cleans
	   up, etc.
	4. The ALPHA "consumer" sees that the "producer"'s work queue is
	   empty, decides to wait on the CV again, acquires the CV's
	   mutex, and enters pthread_cond_wait(...).
	5. There are still "consumers" to be unblocked - nWaitersToUnblock
	   != 0 - and they are not going anywhere, because the ALPHA
	   "consumer" holds the CV's external mutex.
	6. The ALPHA consumer executes sem_wait( semBlockLock ) and we get
	   a deadlock: nWaitersToUnblock will never reach 0, and the
	   semBlockLock semaphore will never get incremented (a toy model
	   of this cycle is sketched below).
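
To make the cycle in steps 4-6 concrete, here is a toy model of it.
This is NOT pthreads-win32 code - the two threads are invented stand-ins
for ALPHA and for a consumer still inside ptw32_cond_wait_cleanup(...),
with 'external' standing in for the CV's external mutex and 'block_lock'
for semBlockLock:

	#include <pthread.h>
	#include <semaphore.h>
	#include <stdio.h>

	static pthread_mutex_t external = PTHREAD_MUTEX_INITIALIZER;
	static sem_t block_lock;          /* count == 0, like semBlockLock */

	/* Stand-in for a consumer still in ptw32_cond_wait_cleanup(). */
	static void *cleanup_thread (void *arg)
	{
	  pthread_mutex_lock (&external); /* step 5: blocks, ALPHA holds it */
	  puts ("cleanup: got the external mutex");      /* never reached */
	  sem_post (&block_lock);         /* the post ALPHA is waiting for */
	  pthread_mutex_unlock (&external);
	  return NULL;
	}

	int main (void)
	{
	  pthread_t t;

	  sem_init (&block_lock, 0, 0);
	  pthread_mutex_lock (&external); /* step 4: ALPHA re-acquires the mutex */
	  pthread_create (&t, NULL, cleanup_thread, NULL);
	  puts ("alpha: holding the mutex, waiting on block_lock (step 6)");
	  sem_wait (&block_lock);         /* never returns: both threads hang */
	  pthread_mutex_unlock (&external);
	  pthread_join (t, NULL);
	  return 0;
	}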

- Solution:
	Move these lines:

		if ((result = pthread_mutex_lock (cleanup_args->mutexPtr)) != 0)
		  {
		    *resultPtr = result;
		    return;
		  }

	to the end of the ptw32_cond_wait_cleanup procedure:

		static void PTW32_CDECL
		ptw32_cond_wait_cleanup (void *args)
		{
		  .....
		  .....
		  .....
		  if (1 == nSignalsWasLeft)
		    {
		      if (sem_post (&(cv->semBlockLock)) != 0)
		        {
		          *resultPtr = errno;
		          return;
		        }
		    }

		  /*
		   * XSH: Upon successful return, the mutex has been locked
		   * and is owned by the calling thread. This must be done
		   * before any cancelation cleanup handlers are run.
		   */
		  if ((result = pthread_mutex_lock (cleanup_args->mutexPtr)) != 0)
		    {
		      *resultPtr = result;
		      return;
		    }
		}

   - Is there any reason why pthread_mutex_lock (cleanup_args->mutexPtr)
     was moved to the top?  Algorithm 8A has this line at the bottom of
     ptw32_cond_wait_cleanup().
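
   As an aside on the XSH comment quoted in the code above: that
   requirement is visible to callers, because a thread cancelled while
   blocked in pthread_cond_wait() re-acquires the mutex before its
   cancellation cleanup handlers run.  A small stand-alone illustration
   (generic POSIX, all names invented, not pthreads-win32 test code):

	#include <pthread.h>
	#include <stdio.h>
	#include <unistd.h>

	static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
	static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;

	/* Runs with 'm' already held (per XSH), so unlocking is legal. */
	static void unlock_on_cancel (void *arg)
	{
	  pthread_mutex_unlock ((pthread_mutex_t *) arg);
	}

	static void *waiter (void *arg)
	{
	  pthread_mutex_lock (&m);
	  pthread_cleanup_push (unlock_on_cancel, &m);
	  for (;;)
	    pthread_cond_wait (&c, &m);  /* cancellation point */
	  pthread_cleanup_pop (1);       /* not reached; balances the push */
	  return NULL;
	}

	int main (void)
	{
	  pthread_t t;

	  pthread_create (&t, NULL, waiter, NULL);
	  sleep (1);                     /* let the waiter block on the CV */
	  pthread_cancel (t);            /* handler runs with 'm' held */
	  pthread_join (t, NULL);
	  puts ("waiter cancelled; mutex was held when the handler ran");
	  return 0;
	}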


   Thanks,

   Alexander Kotliarov.

Thread overview: 2+ messages
2004-11-18 16:26 Alex Kotliarov [this message]
2004-11-19  7:47 ` Ross Johnson
