From: "bugdal at aerifal dot cx" <sourceware-bugzilla@sourceware.org>
To: glibc-bugs@sourceware.org
Subject: [Bug nptl/13690] pthread_mutex_unlock potentially cause invalid access
Date: Tue, 12 Aug 2014 02:29:00 -0000	[thread overview]
Message-ID: <bug-13690-131-0MXRXrPOFG@http.sourceware.org/bugzilla/> (raw)
In-Reply-To: <bug-13690-131@http.sourceware.org/bugzilla/>

https://sourceware.org/bugzilla/show_bug.cgi?id=13690

--- Comment #48 from Rich Felker <bugdal at aerifal dot cx> ---
> This would increase the unlock latency whenever there is any waiter (because
> we let the kernel do it, and after it has found and acquired the futex lock). 
> I don't have numbers for this increase, but if there's a non-neglible increase
> in latency, then I wouldn't want to see this in glibc.

Torvald, I agree you have a legitimate concern (unlock latency), but while I
don't have evidence to back this up (just high-level reasoning), I think the
difference in when the atomic store takes place actually works in favor of
performance with FUTEX_WAKE_OP. I'll try to explain:

In the case where there is no waiter at the time of unlock, no wake occurs,
neither by FUTEX_WAKE nor FUTEX_WAKE_OP. There's only an atomic operation (CAS,
if we want to fix the bug this whole issue tracker thread is about). So for the
sake of comparing performance, we need to consider the case where there is at
least one waiter.
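
To make the comparison concrete, here is a minimal sketch of the kind of
three-state futex lock I have in mind (0 = unlocked, 1 = locked with no
waiters, 2 = locked with waiters). The names are made up for illustration,
and it uses an unconditional atomic exchange rather than a CAS, but the
property that matters for the bug is the same: everything the unlocking
thread needs after the releasing operation comes from that operation's
return value, so the mutex memory is never touched again afterwards; only
its address is passed to the kernel, which at worst produces a spurious wake
on whatever later occupies that address.

#define _GNU_SOURCE
#include <stdatomic.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/futex.h>

/* Toy three-state lock word: 0 = unlocked, 1 = locked, no waiters,
   2 = locked, waiters present. */
struct toy_mutex { atomic_int state; };

static void toy_unlock(struct toy_mutex *m)
{
    /* The releasing RMW also reports whether waiters existed.  After
       this line no field of *m is read or written; only the address
       is handed to the kernel for the wake. */
    if (atomic_exchange_explicit(&m->state, 0, memory_order_release) == 2)
        syscall(SYS_futex, &m->state, FUTEX_WAKE, 1, NULL, NULL, 0);
}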

Right now (with FUTEX_WAKE), there's a great deal of latency between the atomic
operation that releases the lock and the FUTEX_WAKE being dispatched, due to
kernel entry overhead, futex hash overhead, etc. During that window, a thread
which is not a waiter can race for the lock and acquire it first, despite there
being waiters. This acquisition inherently happens with very low latency, but I
think it's actually likely to be bad for performance:

If the thread which "stole" the lock has not released it by the time the thread
woken by FUTEX_WAKE gets scheduled, the latter thread will uselessly contend
for the lock again, imposing additional cache synchronization overhead and an
additional syscall to wait on the futex again. It will also wrongly get moved
to the end of the wait queue.

If, on the other hand, the thread which "stole" the lock releases it
immediately, before the woken thread gets scheduled, my understanding is that
it will see that there are waiters and issue an additional FUTEX_WAKE at
unlock time. At the very least this is a wasted syscall. If there actually are
two or more waiters, it's a lot more expensive, since an extra thread wakes up
only to contend for the lock and re-wait.
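
The lock side of the same toy mutex shows where the "stealing" happens and
why the woken thread pays for it. This is the classic Drepper-style scheme,
shown only to illustrate the window (it reuses the toy_mutex type and headers
from the sketch above): the fast path is a bare CAS on the lock word, so any
thread running between the releasing store and the FUTEX_WAKE can win it, and
a woken thread that loses that race re-marks the word as contended and goes
straight back into FUTEX_WAIT, which is the extra syscall and cache traffic
described above.

static void toy_lock(struct toy_mutex *m)
{
    int c = 0;
    /* Fast path: 0 -> 1.  A non-waiter running here during the
       unlock/FUTEX_WAKE window is exactly the "stealing" case. */
    if (atomic_compare_exchange_strong_explicit(&m->state, &c, 1,
            memory_order_acquire, memory_order_relaxed))
        return;

    /* Contended path: mark the word "locked, waiters present" and
       sleep.  A woken thread that finds the lock already taken lands
       back in FUTEX_WAIT. */
    if (c != 2)
        c = atomic_exchange_explicit(&m->state, 2, memory_order_acquire);
    while (c != 0) {
        syscall(SYS_futex, &m->state, FUTEX_WAIT, 2, NULL, NULL, 0);
        c = atomic_exchange_explicit(&m->state, 2, memory_order_acquire);
    }
}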

As both of these situations seem undesirable to me, I think the optimal
behavior should be to minimize the latency between the atomic-release operation
that makes the lock available to other threads and the futex wake. And the only
way to make this latency small is to perform the atomic release in kernel
space.
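
Concretely, the unlock I have in mind would look something like the sketch
below (continuing the toy mutex above; toy_unlock_wake_op is a made-up name,
and I'm assuming the kernel accepts the same word as both futex addresses for
FUTEX_WAKE_OP, which I believe it does). The no-waiter fast path is still a
single user-space CAS with no syscall; only when the lock word says waiters
exist do we ask the kernel to perform the releasing store and the wake back
to back, which is what shrinks the stealing window:

static void toy_unlock_wake_op(struct toy_mutex *m)
{
    int expected = 1;
    /* No waiters: release entirely in user space, no syscall. */
    if (atomic_compare_exchange_strong_explicit(&m->state, &expected, 0,
            memory_order_release, memory_order_relaxed))
        return;

    /* state == 2: the kernel atomically does "state = 0" and then wakes
       one waiter on the same word, with no user-space gap in between.
       The comparison is chosen so that the conditional second wake
       (nr_wake2, passed as 0 here) can never fire. */
    syscall(SYS_futex, &m->state, FUTEX_WAKE_OP, 1, 0, &m->state,
            FUTEX_OP(FUTEX_OP_SET, 0, FUTEX_OP_CMP_EQ, 0));
}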

> I still think that the only thing we need to fix is to make sure that no
> program can interpret a spurious wake-up (by a pending futex_wake) as a real
> wake-up.

As I understand it, all of the current code treats a futex wake the way a
return from a POSIX condition variable wait is treated: as an indication to
re-check an external predicate rather than as the bearer of any information
about state in itself. If that were not the case, things would already be a
lot more broken than they are now with regard to this issue.
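
In other words, every waiter boils down to a loop of the following shape (a
generic sketch, same headers as above): the wake only means "look at the
shared word again", so a stray or pending wake that lands on memory reused
for another futex costs one extra pass around the loop and nothing more.

/* Wait until *word becomes zero, tolerating spurious wakes. */
static void wait_until_zero(atomic_int *word)
{
    for (;;) {
        int v = atomic_load_explicit(word, memory_order_acquire);
        if (v == 0)
            return;             /* the predicate, not the wake, decides */
        /* Sleep only while the word still holds the value we saw;
           any wake, genuine or spurious, simply re-runs the check. */
        syscall(SYS_futex, word, FUTEX_WAIT, v, NULL, NULL, 0);
    }
}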

On the other hand, if you eliminate all sources of spurious wakes, I think it's
possible to achieve better behavior; in particular I think it may be possible
to prevent "stealing" of locks entirely and ensure that the next futex waiter
always gets the lock on unlock. Whether this behavior is desirable for glibc or
not, I'm not sure. I'm going to do research on it as a future possibility for
musl.


Thread overview: 65+ messages
2012-02-14 14:28 [Bug nptl/13690] New: " anemo at mba dot ocn.ne.jp
2012-02-14 14:29 ` [Bug nptl/13690] " anemo at mba dot ocn.ne.jp
2012-02-14 15:39 ` carlos at systemhalted dot org
2012-02-14 15:41 ` carlos at systemhalted dot org
2012-02-14 15:42 ` carlos at systemhalted dot org
2012-02-15  6:47 ` ppluzhnikov at google dot com
2012-02-15 13:18 ` anemo at mba dot ocn.ne.jp
2012-02-15 14:35 ` carlos at systemhalted dot org
2012-02-16  5:09 ` bugdal at aerifal dot cx
2012-02-16 14:43 ` anemo at mba dot ocn.ne.jp
2012-02-16 14:47 ` anemo at mba dot ocn.ne.jp
2012-02-16 15:37 ` carlos at systemhalted dot org
2012-02-16 15:41 ` carlos at systemhalted dot org
2012-02-16 16:22 ` bugdal at aerifal dot cx
2012-02-16 16:35 ` carlos at systemhalted dot org
2012-02-17  5:11 ` bugdal at aerifal dot cx
2012-02-17 13:27 ` anemo at mba dot ocn.ne.jp
2012-02-17 16:18 ` carlos at systemhalted dot org
2012-02-17 16:37 ` carlos at systemhalted dot org
2012-02-20 11:42 ` anemo at mba dot ocn.ne.jp
2012-02-22 14:57 ` carlos at systemhalted dot org
2012-02-29 16:54 ` carlos at systemhalted dot org
2012-03-07 10:30 ` drepper.fsp at gmail dot com
2012-03-07 17:53 ` bugdal at aerifal dot cx
2012-03-08  3:23 ` carlos at systemhalted dot org
2012-03-08  5:13 ` bugdal at aerifal dot cx
2012-04-28  9:57 ` coolhair24 at verizon dot net
2012-06-27 22:32 ` jsm28 at gcc dot gnu.org
2012-11-29 15:55 ` carlos_odonell at mentor dot com
2012-12-01 16:43 ` aj at suse dot de
2012-12-03 23:57 ` carlos at systemhalted dot org
2013-10-09 20:14 ` neleai at seznam dot cz
2013-12-18 20:13 ` triegel at redhat dot com
2013-12-18 20:33 ` bugdal at aerifal dot cx
2013-12-18 20:49 ` bugdal at aerifal dot cx
2013-12-20 19:08 ` lopresti at gmail dot com
2013-12-20 19:38 ` carlos at redhat dot com
2013-12-20 20:25 ` triegel at redhat dot com
2013-12-20 22:51 ` bugdal at aerifal dot cx
2014-01-03  9:10 ` kevin.dempsey at aculab dot com
2014-01-06 16:58 ` triegel at redhat dot com
2014-01-06 17:46 ` lopresti at gmail dot com
2014-01-06 20:38 ` triegel at redhat dot com
2014-01-06 20:47 ` bugdal at aerifal dot cx
2014-01-06 21:20 ` triegel at redhat dot com
2014-01-06 21:24 ` bugdal at aerifal dot cx
2014-03-28  1:27 ` dancol at dancol dot org
2014-03-28 20:07 ` tudorb at gmail dot com
2014-06-20 12:23 ` kevin.dempsey at aculab dot com
2014-06-20 18:29 ` triegel at redhat dot com
2014-06-20 19:02 ` bugdal at aerifal dot cx
2014-06-20 19:10 ` bugdal at aerifal dot cx
2014-06-23  3:06 ` bugdal at aerifal dot cx
2014-06-25 14:34 ` triegel at redhat dot com
2014-06-25 16:01 ` bugdal at aerifal dot cx
2014-06-25 17:40 ` triegel at redhat dot com
2014-06-25 18:03 ` bugdal at aerifal dot cx
2014-06-27  7:26 ` fweimer at redhat dot com
2014-08-09 20:38 ` triegel at redhat dot com
2014-08-12  2:29 ` bugdal at aerifal dot cx [this message]
2015-01-15  8:45 ` mtk.manpages at gmail dot com
2015-05-30 18:25 ` dancol at dancol dot org
2015-06-03  4:08 ` carlos at redhat dot com
2015-06-03  4:09 ` carlos at redhat dot com
2015-07-14 20:23 ` triegel at redhat dot com
