public inbox for glibc-bugs@sourceware.org
From: "bugdal at aerifal dot cx" <sourceware-bugzilla@sourceware.org>
To: glibc-bugs@sourceware.org
Subject: [Bug libc/14958] Concurrent reader deadlock in pthread_rwlock_rdlock()
Date: Sat, 15 Dec 2012 05:16:00 -0000
Message-ID: <bug-14958-131-cPtG1IW5Ey@http.sourceware.org/bugzilla/>
In-Reply-To: <bug-14958-131@http.sourceware.org/bugzilla/>

http://sourceware.org/bugzilla/show_bug.cgi?id=14958

Rich Felker <bugdal at aerifal dot cx> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |bugdal at aerifal dot cx

--- Comment #2 from Rich Felker <bugdal at aerifal dot cx> 2012-12-15 05:15:58 UTC ---
Confirmed. I don't see a need for more debugging/trace data. What's going on is
very clear upon reading the original report and test case. I see two potential
fixes:

1. The easy way is to just always use broadcast wakes and let readers and
writers duke it out for who gets the lock next. NPTL's synchronization
primitives are riddled with bugs from trying to be too smart (avoiding spurious
wakes, etc.) and missing important corner cases, and I would not be sad to see
all this ripped out in favor of something that's definitively robust; however,
I recognize I'm probably in the minority here.

2. The way that's harder to get right but that should give better performance
is for the reader that "steals" the lock from waiting writers to detect this
situation and broadcast wake the futex so that all the other waiting readers
also wake up and get a chance to obtain read locks. It seems highly nontrivial
to get all the corner cases right here, especially since waiting writers can be
performing a timedlock operation and could be gone by the time the reader
"steals" the lock from them.

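To make the idea in #2 concrete, here is a rough standalone sketch of a
futex-based rwlock whose rdlock path broadcast-wakes when it acquires the
lock while writers are queued. This is not the NPTL code; the type, field
names, helpers, and state encoding are all invented for illustration, and
the writer-side code that maintains wr_queued plus the unlock paths are
omitted.

#include <limits.h>
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Hypothetical toy rwlock, not the glibc/NPTL implementation.
   state > 0: that many readers hold the lock; state == -1: a writer
   holds it; state == 0: unlocked.  wr_queued counts writers currently
   blocked on the futex.  */
struct toy_rwlock {
    atomic_int state;
    atomic_int wr_queued;
};

static void futex_wait(atomic_int *addr, int val)
{
    syscall(SYS_futex, addr, FUTEX_WAIT, val, NULL, NULL, 0);
}

static void futex_wake(atomic_int *addr, int n)
{
    syscall(SYS_futex, addr, FUTEX_WAKE, n, NULL, NULL, 0);
}

void toy_rdlock(struct toy_rwlock *l)
{
    for (;;) {
        int s = atomic_load(&l->state);
        if (s >= 0) {
            if (atomic_compare_exchange_weak(&l->state, &s, s + 1)) {
                /* Read lock acquired.  If writers are queued, we have
                   effectively stolen the lock from them, so other readers
                   may still be asleep behind those writers: broadcast so
                   they wake up and can take read locks too.  */
                if (atomic_load(&l->wr_queued) > 0)
                    futex_wake(&l->state, INT_MAX);
                return;
            }
            continue;  /* CAS raced with someone else; re-read the state */
        }
        /* A writer holds the lock; sleep until the state word changes.  */
        futex_wait(&l->state, s);
    }
}

This glosses over exactly the hard parts mentioned above (writers that time
out and vanish after bumping wr_queued, memory ordering, and so on); it only
shows where the extra broadcast would go.
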
I also confirmed that we don't have a corresponding bug in musl (which uses
approach #1 above).
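
For comparison, a correspondingly rough sketch of approach #1, continuing
the same hypothetical file and reusing the toy_rwlock type and futex helpers
from the sketch above (again invented for illustration, not the musl or
glibc source): the unlock path simply wakes every waiter and lets readers
and writers race to reacquire.

/* Toy sketch of approach #1, not real library code: every "last" unlock
   broadcast-wakes all waiters with no bookkeeping about who goes next.  */
void toy_unlock(struct toy_rwlock *l)
{
    int s = atomic_load(&l->state);
    if (s < 0) {
        /* Writer unlock: drop straight to "unlocked".  */
        atomic_store(&l->state, 0);
        s = 0;
    } else {
        /* Reader unlock: decrement the reader count.  */
        s = atomic_fetch_sub(&l->state, 1) - 1;
    }
    if (s == 0)
        futex_wake(&l->state, INT_MAX);  /* wake all readers and writers
                                            and let them race for the lock */
}

The cost is a thundering herd on contended unlocks, but there is no
wake-count accounting to get wrong, which is the robustness argument made
above.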




Thread overview: 13+ messages
2012-12-13 20:55 [Bug libc/14958] New: " daniel.stodden at gmail dot com
2012-12-13 23:22 ` [Bug libc/14958] " daniel.stodden at gmail dot com
2012-12-13 23:59 ` daniel.stodden at gmail dot com
2012-12-15  5:16 ` bugdal at aerifal dot cx [this message]
2012-12-16  9:12 ` daniel.stodden at gmail dot com
2012-12-16 17:33 ` bugdal at aerifal dot cx
2012-12-17  9:20 ` daniel.stodden at gmail dot com
2013-10-20 19:22 ` neleai at seznam dot cz
2014-02-07  3:17 ` [Bug nptl/14958] " jsm28 at gcc dot gnu.org
2014-06-14  5:35 ` fweimer at redhat dot com
2015-04-28 21:39 ` triegel at redhat dot com
2015-06-04 16:03 ` cvs-commit at gcc dot gnu.org
2015-06-04 16:07 ` triegel at redhat dot com
