From: Jakub Jelinek <jakub@redhat.com>
To: Ulrich Drepper <drepper@redhat.com>
Cc: Glibc hackers <libc-hacker@sources.redhat.com>
Subject: [PATCH] Avoid pthread_cond_destroy blocking for too long
Date: Thu, 02 Sep 2004 21:50:00 -0000
Message-ID: <20040902193259.GN30497@sunsite.ms.mff.cuni.cz>

Hi!

This patch adds a futex wakeup on the associated mutex if there are
still waiters at pthread_cond_destroy time.
The reason is to make sure pthread_cond_destroy won't block for too long.
If some threads are blocked on the pthread_mutex_t's __lock (e.g. because
a broadcast requeued them there), it is under application control when
(if ever) they will be woken up, and pthread_cond_destroy would block for
that whole time.
By waking all mutex waiters first, pthread_cond_destroy only has to wait
until the scheduler gives each thread enough timeslice to acquire the
condvar's internal lock, hop through the short critical section and
release the lock (the last thread through then wakes pthread_cond_destroy).
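
To make the failure mode concrete, here is an (untested, illustrative)
reproducer; NWAITERS, waiter and ready are made-up names for the example:

#include <pthread.h>
#include <unistd.h>

#define NWAITERS 4

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t c = PTHREAD_COND_INITIALIZER;
static int ready;

static void *
waiter (void *arg)
{
  pthread_mutex_lock (&m);
  while (!ready)
    pthread_cond_wait (&c, &m);
  pthread_mutex_unlock (&m);
  return NULL;
}

int
main (void)
{
  pthread_t th[NWAITERS];
  int i;

  for (i = 0; i < NWAITERS; ++i)
    pthread_create (&th[i], NULL, waiter, NULL);
  sleep (1);			/* Let the waiters block on the condvar.  */

  pthread_mutex_lock (&m);
  ready = 1;
  /* With NPTL's broadcast, one waiter is woken and the rest are
     requeued to m's __lock; the requeued threads stay asleep as long
     as we hold m.  */
  pthread_cond_broadcast (&c);
  /* POSIX allows destroying a condvar right after broadcasting it.
     Without the patch this destroy can block forever: we hold m, so
     the requeued waiters never leave pthread_cond_wait.  With the
     patch it only waits for each waiter to hop through the condvar's
     internal lock.  */
  pthread_cond_destroy (&c);
  pthread_mutex_unlock (&m);

  for (i = 0; i < NWAITERS; ++i)
    pthread_join (th[i], NULL);
  return 0;
}

Note that more than one waiter is needed to hit the case: with a single
waiter, broadcast wakes it directly instead of requeueing it.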

In most programs nwaiters will be < (1 << COND_CLOCK_BITS) at
pthread_cond_destroy time (the low COND_CLOCK_BITS bits of __nwaiters
hold the clock ID, so any value below that threshold means no waiters
are left), and thus this patch shouldn't cause performance regressions.
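
For reference, a toy illustration of that encoding; COND_CLOCK_BITS is
hardcoded to 1 here just for the example, the real value comes from
glibc's internal headers:

#include <stdio.h>

#define COND_CLOCK_BITS 1	/* assumed value, for illustration only */

int
main (void)
{
  /* Low COND_CLOCK_BITS bits: clock ID; remaining bits: waiter count.  */
  unsigned int nwaiters = (3u << COND_CLOCK_BITS) | 1u;

  printf ("waiters: %u, clock: %u\n",
	  nwaiters >> COND_CLOCK_BITS,
	  nwaiters & ((1u << COND_CLOCK_BITS) - 1));
  /* The destroy-time check from the patch: any value below
     (1 << COND_CLOCK_BITS) means the waiter count is zero.  */
  printf ("waiters left? %s\n",
	  nwaiters >= (1u << COND_CLOCK_BITS) ? "yes" : "no");
  return 0;
}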

2004-09-02  Jakub Jelinek  <jakub@redhat.com>

	* pthread_cond_destroy.c (__pthread_cond_destroy): If there are
	waiters, wake all waiters on the associated mutex.

--- libc/nptl/pthread_cond_destroy.c.jj	2004-09-02 22:27:56.000000000 +0200
+++ libc/nptl/pthread_cond_destroy.c	2004-09-02 23:33:43.278763736 +0200
@@ -44,15 +44,35 @@ __pthread_cond_destroy (cond)
      broadcasted, but still are using the pthread_cond_t structure,
      pthread_cond_destroy needs to wait for them.  */
   unsigned int nwaiters = cond->__data.__nwaiters;
-  while (nwaiters >= (1 << COND_CLOCK_BITS))
+
+  if (nwaiters >= (1 << COND_CLOCK_BITS))
     {
-      lll_mutex_unlock (cond->__data.__lock);
+      /* Wake everybody on the associated mutex in case there are
+         threads that have been requeued to it.
+         Without this, pthread_cond_destroy could block potentially
+         for a long time or forever, as it would depend on other
+         threads' using the mutex.
+         When all threads waiting on the mutex are woken up,
+         pthread_cond_destroy only waits for threads to acquire and
+         release the internal condvar lock.  */
+      if (cond->__data.__mutex != NULL
+	  && cond->__data.__mutex != (void *) ~0l)
+	{
+	  pthread_mutex_t *mut = (pthread_mutex_t *) cond->__data.__mutex;
+	  lll_futex_wake (&mut->__data.__lock, INT_MAX);
+	}
+
+      do
+	{
+	  lll_mutex_unlock (cond->__data.__lock);
 
-      lll_futex_wait (&cond->__data.__nwaiters, nwaiters);
+	  lll_futex_wait (&cond->__data.__nwaiters, nwaiters);
 
-      lll_mutex_lock (cond->__data.__lock);
+	  lll_mutex_lock (cond->__data.__lock);
 
-      nwaiters = cond->__data.__nwaiters;
+	  nwaiters = cond->__data.__nwaiters;
+	}
+      while (nwaiters >= (1 << COND_CLOCK_BITS));
     }
 
   return 0;
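
In case the lll_ macros aren't fresh in anyone's mind: the
lll_futex_wake (&mut->__data.__lock, INT_MAX) call above boils down
(modulo per-arch details) to a single FUTEX_WAKE syscall, roughly like
the sketch below.  The wrapper name is made up, only the syscall is real:

#define _GNU_SOURCE
#include <limits.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Wake up to INT_MAX threads sleeping on *addr; returns the number of
   threads actually woken, or -1 on error.  */
static long
futex_wake_all (unsigned int *addr)
{
  return syscall (SYS_futex, addr, FUTEX_WAKE, INT_MAX, NULL, NULL, 0);
}

int
main (void)
{
  unsigned int dummy = 0;
  /* Nobody sleeps on dummy, so this just returns 0.  */
  return futex_wake_all (&dummy) == 0 ? 0 : 1;
}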

	Jakub
