public inbox for libc-ports@sourceware.org
From: Rich Felker <dalias@aerifal.cx>
To: Torvald Riegel <triegel@redhat.com>
Cc: GLIBC Devel <libc-alpha@sourceware.org>,
	libc-ports <libc-ports@sourceware.org>
Subject: Re: [PATCH] Unify pthread_once (bug 15215)
Date: Thu, 09 May 2013 14:02:00 -0000	[thread overview]
Message-ID: <20130509140245.GI20323@brightrain.aerifal.cx> (raw)
In-Reply-To: <1368088765.7774.1571.camel@triegel.csb>

On Thu, May 09, 2013 at 10:39:25AM +0200, Torvald Riegel wrote:
> > However, the idea is that pthread_once only runs
> > init routines a small finite number of times, so even if you had to do
> > some horrible hack that makes the synchronization on return 1000x
> > slower (e.g. a syscall), it would still be better than incurring the
> > cost of a full acquire barrier in each subsequent call, which ideally
> > should have the same cost as a call to an empty function.
> 
> That would be true if non-first calls appear
> 1000*(syscall_overhead/acquire_mbar_overhead) times.  But do they?

In theory they might. Imagine a math function that might be called
millions or billions of times, but which depends on a precomputed
table. Personally, my view of best-practices is that you should use
'static const' for such tables, even if they're huge, rather than
runtime generation, but unfortunately I think my view is still a
minority one...
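
To make the contrast concrete, here's a rough sketch of the two
approaches (hypothetical code, not from any real math function; the
table contents are placeholders):

#include <pthread.h>

/* Precomputed version: the table is baked in at compile time and can
   be read from any thread with no synchronization at all. */
static const double table_const[256] = { 0.0 /* ...precomputed... */ };

double lookup_const(int i)
{
	/* No pthread_once, no barrier: just a load. */
	return table_const[i & 255];
}

/* Runtime-generation version: every call goes through pthread_once,
   so the per-call cost of pthread_once matters enormously here. */
static double table_runtime[256];
static pthread_once_t table_once = PTHREAD_ONCE_INIT;

static void init_table(void)
{
	int i;
	for (i = 0; i < 256; i++)
		table_runtime[i] = i / 256.0;	/* placeholder computation */
}

double lookup_runtime(int i)
{
	pthread_once(&table_once, init_table);	/* paid on every call */
	return table_runtime[i & 255];
}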

Also, keep in mind that even large overhead on the first call to
pthread_once is likely to be small in comparison to the time spent in
the initialization function, while even small overhead is huge in
comparison to a call to pthread_once that doesn't call the
initialization function.
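
For reference, the fast path in question looks roughly like this (a
minimal sketch using C11 atomics, not glibc's actual code; the state
encoding and the omitted contention handling are simplifications):

#include <stdatomic.h>

#define ONCE_DONE 2	/* hypothetical "initialization finished" state */

int sketch_once(atomic_int *once_control, void (*init_routine)(void))
{
	/* Every non-first call takes only this branch.  Whether this
	   load needs acquire semantics (a memory barrier on weakly
	   ordered machines) is exactly the overhead under discussion. */
	if (atomic_load_explicit(once_control, memory_order_acquire)
	    == ONCE_DONE)
		return 0;

	/* Slow path: real code would use a lock or futex to serialize
	   concurrent first calls; omitted here.  Its cost is dwarfed by
	   init_routine itself. */
	init_routine();
	atomic_store_explicit(once_control, ONCE_DONE, memory_order_release);
	return 0;
}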

> I think the way forward here is to:
> 1) Fix the implementation (ie, add the mbars).
> 2) Let the arch maintainers of the affected archs with weak memory models
> (or people interested in this) look at this and come up with some
> measurements for how much overhead the mbars actually present in real
> code.
> 3) Decide whether this overhead justifies adding optimizations.
> 
> This patch is step 1.  I don't think we need to wait for step 3 before merging it.

I think this is a reasonable approach.

> > > > Since it's impossible to track whether a call is the first call in a
> > > > given thread
> > > 
> > > Are you sure about this? :)
> > 
> > It's impossible with bounded memory requirements, and thus impossible
> > in general (allocating memory for the tracking might fail).
> 
> I believe you think about needing to track more than you actually need
> to know.  All you need is knowing whether a thread established a
> happens-before with whoever initialized the once_control in the past.
> So you do need per-thread state, and per-once_control state, but not
> necessarily more.  If in doubt, you can still do the acquire barrier.

The number of threads and the number of once controls are both
unbounded. You might be able to solve the problem with serial numbers if
there were room to store a sufficiently large one in the once control,
but the once control is 32-bit and the serial numbers could (in a
pathological but valid application) easily overflow 32 bits.
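
Just to spell out what I mean, the serial-number scheme would need
something like the following (purely hypothetical names and layout,
with the ordering of the serial field itself hand-waved; note the
64-bit field that has nowhere to live in the existing 32-bit once
control):

#include <stdatomic.h>
#include <stdint.h>

/* Bumped once per completed initialization, program-wide. */
static atomic_uint_fast64_t global_init_serial;

/* Highest initialization serial this thread has synchronized with. */
static _Thread_local uint64_t seen_serial;

/* Hypothetical "fat" once control; pthread_once_t has no room for
   the serial member. */
struct fat_once {
	atomic_int state;
	uint64_t serial;
};

/* The initializing call would record which generation it belongs to. */
static void record_init(struct fat_once *oc)
{
	oc->serial = atomic_fetch_add(&global_init_serial, 1) + 1;
}

/* The acquire barrier could be skipped only if this thread has already
   synchronized with an initialization at least as recent as the one
   recorded in the once control. */
static int needs_acquire(const struct fat_once *oc)
{
	return seen_serial < oc->serial;
}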

> > I think my confusion is merely that POSIX does not define the phrase
> > "synchronize memory", and in the absence of a definition, "full memory
> > barrier" (both release and acquire semantics) is the only reasonable
> > interpretation I can find. In other words, it seems like a
> > pathological conforming program could attempt to use the language in
> > the specification to use pthread_once as a release barrier. I'm not
> > sure if there are ways this could be meaningfully arranged (i.e. with
> > well-defined ordering); off-hand, I would think tricks with cancelling
> > an in-progress invocation of pthread_once might make it possible.
> 
> I agree that the absence of a proper memory model makes reasoning about
> some of this hard.  I guess it would be best if POSIX would just endorse
> C11's memory model, and specify the intended semantics in relation to
> this model where needed.

Agreed, and I suspect this is what they'll do. I can raise the issue,
but perhaps you'd be better at expressing it. Let me know if you'd
rather I do it.
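
Coming back to the "synchronize memory" wording: stated in C11 terms,
the guarantee I think everyone agrees pthread_once must give is just
acquire/release pairing on the once control, i.e. (hypothetical
example) the following is well-defined with no extra synchronization;
the open question is whether the POSIX wording promises anything
stronger:

#include <pthread.h>

static pthread_once_t once = PTHREAD_ONCE_INIT;
static int table[16];

static void init(void)
{
	int i;
	for (i = 0; i < 16; i++)
		table[i] = i * i;	/* plain, non-atomic writes */
}

int get(int i)
{
	pthread_once(&once, init);
	/* Safe in every thread: the initializing call's release and this
	   call's acquire order init()'s writes before this read.  Using
	   pthread_once as a general release barrier for unrelated data is
	   the pathological reading questioned above. */
	return table[i & 15];
}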

Rich


Thread overview: 21+ messages
2013-05-08 14:44 Torvald Riegel
2013-05-08 17:51 ` Rich Felker
2013-05-08 20:47   ` Torvald Riegel
2013-05-08 21:25     ` Rich Felker
2013-05-09  8:39       ` Torvald Riegel
2013-05-09 14:02         ` Rich Felker [this message]
2013-05-09 15:14           ` Torvald Riegel
2013-05-09 15:56             ` Rich Felker
2013-05-10  8:31               ` Torvald Riegel
2013-05-10 13:22                 ` Rich Felker
2013-05-23  4:15 ` Carlos O'Donell
2013-08-26 12:50   ` Ondřej Bílka
2013-08-26 16:45     ` Rich Felker
2013-08-26 18:41       ` Ondřej Bílka
2013-08-27  2:29         ` Rich Felker
2013-10-06  0:20   ` Torvald Riegel
2013-10-06 21:41     ` Torvald Riegel
2013-10-07 16:04     ` Joseph S. Myers
2013-10-07 21:53       ` Torvald Riegel
2014-03-31 11:44         ` Will Newton
2014-03-31 20:09           ` Torvald Riegel
