From: Chris Metcalf <cmetcalf@tilera.com>
To: Roland McGrath <roland@hack.frob.com>
Cc: Maxim Kuvyrkov <maxim@codesourcery.com>,
	Andrew Haley <aph@redhat.com>,	David Miller <davem@davemloft.net>,
	<joseph@codesourcery.com>,	<rdsandiford@googlemail.com>,
	<libc-ports@sourceware.org>,	<libc-alpha@sourceware.org>
Subject: Re: [PATCH] Unify pthread_spin_[try]lock implementations.
Date: Wed, 25 Jul 2012 20:29:00 -0000
Message-ID: <501056FC.80104@tilera.com>
In-Reply-To: <20120725202226.560582C0DA@topped-with-meat.com>

On 7/25/2012 4:22 PM, Roland McGrath wrote:
>> The tile architecture is unlikely to use this generic version no matter
>> what; see http://sourceware.org/ml/libc-ports/2012-07/msg00030.html for the
>> details, but the primary point is that in a mesh-based architecture it's a
>> bad idea to ever end up in a situation where all the cores can be spinning,
>> issuing loads or cmpxchg as fast as they can, so some kind of backoff is
>> necessary.
> I had read that before but only noticed the explanation that the plain
> reads were bad.  (Hence in my suggestion you'd share the code but with a
> #define that means the loop of plain reads would be elided entirely at
> compile time by constant folding.)  What kind of "backoff" do you mean?
> It's probably appropriate on every machine to use "atomic_delay ();" inside
> such loops.
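
For concreteness, the shared-code idea might look roughly like the sketch
below; the SPIN_READS_BETWEEN_CMPXCHG tunable and the atomic_delay ()
placeholder are illustrative assumptions rather than actual glibc code, but
they show how defining the tunable to 0 lets the compiler fold the read loop
away entirely:

/* Sketch only: both names below are assumptions, not glibc's.  */
#define SPIN_READS_BETWEEN_CMPXCHG 1000   /* per-arch tunable; 0 elides loop */
#define atomic_delay() __asm__ __volatile__ ("")  /* stand-in delay hook */

static int
generic_spin_lock (volatile int *lock)
{
  int expected = 0;
  while (!__atomic_compare_exchange_n (lock, &expected, 1, 0,
                                       __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
    {
      expected = 0;  /* the failed CAS wrote the observed value here */
      /* With the tunable defined to 0, this loop of plain reads is
         removed at compile time by constant folding.  */
      for (int i = 0; i < SPIN_READS_BETWEEN_CMPXCHG && *lock != 0; i++)
        atomic_delay ();
    }
  return 0;
}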

Some work we did a while back found that bounded exponential delay tended
to work best for any kind of spinlock.  So if we fail to acquire the lock
the first time around, we wait a few cycles and try again, then keep
doubling the wait time up to some ceiling (around 1000 cycles or so); see
ports/sysdeps/tile/nptl/pthread_spin_lock.c.
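
In outline, the acquire path looks something like the sketch below; the
constants and the busy-wait loop are illustrative, not the actual Tile code:

static int
backoff_spin_lock (volatile int *lock)
{
  unsigned int delay = 16;                   /* initial wait, roughly cycles */
  while (__atomic_exchange_n (lock, 1, __ATOMIC_ACQUIRE) != 0)
    {
      /* Stay off the memory network for the whole backoff interval.  */
      for (unsigned int i = 0; i < delay; i++)
        __asm__ __volatile__ ("");           /* burn roughly one cycle */
      if (delay < 1024)                      /* ceiling near 1000 cycles */
        delay *= 2;                          /* bounded exponential growth */
    }
  return 0;
}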

This way, in the worst-case situation where you have (say) 100 cores all
trying to acquire the lock at once, you don't hose the memory network with
traffic.  Backing off also helps somewhat to avoid the unfairness where
closer cores would otherwise have a dramatically better chance of acquiring
the lock, due to how wormhole routing allocates links in the mesh to memory
messages.

Of course, the real answer tends to be "don't use simple spinlocks", so in
the kernel, for example, we use ticket locks instead.  But with pthread
spinlocks that's not a great option: if any thread waiting for the lock gets
scheduled out for a while, no later thread can acquire the lock either.
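
A ticket lock in sketch form (illustrative, not our actual kernel code) shows
why: the FIFO handoff that makes it fair is exactly what makes a descheduled
waiter block everyone behind it.

typedef struct { unsigned int next; unsigned int owner; } ticket_lock_t;

static void
ticket_lock (ticket_lock_t *l)
{
  /* Take a ticket; arrival order becomes service order.  */
  unsigned int me = __atomic_fetch_add (&l->next, 1, __ATOMIC_RELAXED);
  /* Spin on a plain one-word load until our number comes up.  A waiter
     that is scheduled out here stalls every later ticket holder too.  */
  while (__atomic_load_n (&l->owner, __ATOMIC_ACQUIRE) != me)
    continue;
}

static void
ticket_unlock (ticket_lock_t *l)
{
  __atomic_store_n (&l->owner, l->owner + 1, __ATOMIC_RELEASE);
}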

-- 
Chris Metcalf, Tilera Corp.
http://www.tilera.com
