public inbox for libc-alpha@sourceware.org
From: Torvald Riegel <triegel@redhat.com>
To: Malte Skarupke <malteskarupke@web.de>, libc-alpha@sourceware.org
Cc: malteskarupke@fastmail.fm
Subject: Re: [PATCH 1/5] nptl: Fix pthread_cond_signal missing a sleeper (#BZ 25847)
Date: Mon, 18 Jan 2021 23:43:06 +0100	[thread overview]
Message-ID: <52eb6c8ce3fcd0cbc00667a946cf7191a68dcfd1.camel@redhat.com> (raw)
In-Reply-To: <20210116204950.16434-1-malteskarupke@web.de>

On Sat, 2021-01-16 at 15:49 -0500, Malte Skarupke wrote:
> This change is the minimal amount of changes necessary to fix the
> bug.
> This leads to slightly slower performance, but the next two patches
> in this series will undo most of that damage.

Is this based on experiments, or an assumption based on reasoning?

Which kinds of workloads are you talking about (e.g., high vs. low
contention, ...).

> ---
>  nptl/pthread_cond_wait.c | 29 +++++++++++------------------
>  1 file changed, 11 insertions(+), 18 deletions(-)
> 
> diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c
> index 02d11c61db..0f50048c0b 100644
> --- a/nptl/pthread_cond_wait.c
> +++ b/nptl/pthread_cond_wait.c
> @@ -405,6 +405,10 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
>    unsigned int g = wseq & 1;
>    uint64_t seq = wseq >> 1;
> 
> +  /* Acquire a group reference and use acquire MO for that so that we
> +     synchronize with the dummy read-modify-write in
> +     __condvar_quiesce_and_switch_g1 if we read from that.  */
> +  atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2);

Please explain the choice of MO properly in comments, unless it's
obvious.  In this example, you only state that you want the
synchronizes-with relation, but not why it is needed.

The comment you broke up has a second part ("In turn, ...") that
explains why we want the relation.  But have you checked that moving
the reference acquisition earlier is still correct, in particular
regarding MOs?  Your model didn't include MOs, IIRC, so this needs
reasoning.  We also want to make sure that our future selves can still
reconstruct this understanding without having to start from scratch,
so we really need good comments.

>    /* Increase the waiter reference count.  Relaxed MO is sufficient because
>       we only need to synchronize when decrementing the reference count.  */
>    unsigned int flags = atomic_fetch_add_relaxed (&cond->__data.__wrefs, 8);
> @@ -422,6 +426,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
>      {
>        __condvar_cancel_waiting (cond, seq, g, private);
>        __condvar_confirm_wakeup (cond, private);
> +      __condvar_dec_grefs (cond, g, private);
>        return err;
>      }
> 
> @@ -471,24 +476,14 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
>  	    break;
> 
>  	  /* No signals available after spinning, so prepare to block.
> -	     We first acquire a group reference and use acquire MO for that so
> -	     that we synchronize with the dummy read-modify-write in
> -	     __condvar_quiesce_and_switch_g1 if we read from that.  In turn,
> -	     in this case this will make us see the closed flag on __g_signals
> -	     that designates a concurrent attempt to reuse the group's slot.
> -	     We use acquire MO for the __g_signals check to make the
> -	     __g1_start check work (see spinning above).
> -	     Note that the group reference acquisition will not mask the
> -	     release MO when decrementing the reference count because we use
> -	     an atomic read-modify-write operation and thus extend the release
> -	     sequence.  */

You lose this last sentence, but it matters when explaining this.

> -	  atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2);
> +	     First check the closed flag on __g_signals that designates a
> +	     concurrent attempt to reuse the group's slot.  We use acquire MO for
> +	     the __g_signals check to make the __g1_start check work (see
> +	     spinning above).  */

See above.

>  	  if (((atomic_load_acquire (cond->__data.__g_signals + g) & 1) != 0)
>  	      || (seq < (__condvar_load_g1_start_relaxed (cond) >> 1)))
>  	    {
> -	      /* Our group is closed.  Wake up any signalers that might be
> -		 waiting.  */
> -	      __condvar_dec_grefs (cond, g, private);
> +	      /* Our group is closed.  */
>  	      goto done;
>  	    }
> 
> @@ -508,7 +503,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
> 
>  	  if (__glibc_unlikely (err == ETIMEDOUT || err == EOVERFLOW))
>  	    {
> -	      __condvar_dec_grefs (cond, g, private);
>  	      /* If we timed out, we effectively cancel waiting.  Note that
>  		 we have decremented __g_refs before cancellation, so that a
>  		 deadlock between waiting for quiescence of our group in
> @@ -518,8 +512,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
>  	      result = err;
>  	      goto done;
>  	    }
> -	  else
> -	    __condvar_dec_grefs (cond, g, private);
> 
>  	  /* Reload signals.  See above for MO.  */
>  	  signals = atomic_load_acquire (cond->__data.__g_signals + g);
> @@ -602,6 +594,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
>       to allow for execution of pthread_cond_destroy while having acquired the
>       mutex.  */
>    __condvar_confirm_wakeup (cond, private);
> +  __condvar_dec_grefs (cond, g, private);

This is the wrong order.  After confirming the wakeup for
cond_destroy, the thread must not touch the condvar memory anymore
(including reading it, because destruction could unmap it, for
example; futex calls on it are fine).


Thread overview: 8+ messages
2021-01-16 20:49 Malte Skarupke
2021-01-16 20:49 ` [PATCH 2/5] nptl: Remove the signal-stealing code. It is no longer needed Malte Skarupke
2021-01-16 20:49 ` [PATCH 3/5] nptl: Optimization by not incrementing wrefs in pthread_cond_wait Malte Skarupke
2021-01-18 23:41   ` Torvald Riegel
2021-01-16 20:49 ` [PATCH 4/5] nptl: Make test-cond-printers check the number of waiters Malte Skarupke
2021-01-16 20:49 ` [PATCH 5/5] nptl: Rename __wrefs to __flags because its meaning has changed Malte Skarupke
2021-01-18 23:47   ` Torvald Riegel
2021-01-18 22:43 ` Torvald Riegel [this message]
