From: Torvald Riegel <triegel@redhat.com>
To: Rich Felker <dalias@libc.org>, "H.J. Lu" <hjl.tools@gmail.com>
Cc: Ma Ling <ling.ma.program@gmail.com>,
	GNU C Library <libc-alpha@sourceware.org>,
	"Lu, Hongjiu" <hongjiu.lu@intel.com>,
	"ling.ma" <ling.ml@antfin.com>, Wei Xiao <wei3.xiao@intel.com>
Subject: Re: [PATCH] NUMA spinlock [BZ #23962]
Date: Mon, 14 Jan 2019 23:18:00 -0000
Message-ID: <5c2bf8859a412759aba26a21b317ea98f6ff8eaf.camel@redhat.com>
In-Reply-To: <20190103212113.GV23599@brightrain.aerifal.cx>

On Thu, 2019-01-03 at 16:21 -0500, Rich Felker wrote:
> On Thu, Jan 03, 2019 at 12:54:18PM -0800, H.J. Lu wrote:
> > On Thu, Jan 3, 2019 at 12:43 PM Rich Felker <dalias@libc.org> wrote:
> > > 
> > > On Wed, Dec 26, 2018 at 10:50:19AM +0800, Ma Ling wrote:
> > > > From: "ling.ma" <ling.ml@antfin.com>
> > > > 
> > > > On multi-socket systems, memory is shared across the entire system.
> > > > Data access to the local socket is much faster than the remote socket
> > > > and data access to the local core is faster than sibling cores on the
> > > > same socket.  For serialized workloads with a conventional spinlock,
> > > > when there is high spinlock contention between threads, lock ping-pong
> > > > among sockets becomes the bottleneck and threads spend the majority of
> > > > their time in spinlock overhead.
> > > > 
> > > > On multi-socket systems, the keys to our NUMA spinlock performance
> > > > are to minimize cross-socket traffic as well as localize the serialized
> > > > workload to one core for execution.  The basic principles of the NUMA
> > > > spinlock consist mainly of the following approaches, which reduce data
> > > > movement and accelerate the critical section, ultimately giving a
> > > > significant performance improvement.
> > > 
> > > I question whether this belongs in glibc. It seems highly application-
> > > and kernel-specific. Is there a reason you wouldn't prefer to
> > > implement and maintain it in a library for use in the kind of
> > > application that needs it?
> > 
> > This is a good question.  On the other hand,  the current spinlock
> > in glibc hasn't been changed for many years.  It doesn't scale for
> > today's hardware.  Having a scalable spinlock in glibc is desirable.
> 
> "Scalable spinlock" is something of an oxymoron.

No, that's not true at all.  Most high-performance shared-memory
synchronization constructs (on typical HW we have today) will do some kind
of spinning (and back-off), and there's nothing wrong with that.  This can
scale very well.
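
To make the spinning-with-back-off point concrete, here is a minimal,
hypothetical sketch in C11 atomics (not glibc's spinlock; the pause
instruction and the delay bound are illustrative assumptions): a
test-and-test-and-set lock that backs off exponentially while the lock
stays held.

/* Hypothetical sketch, not glibc's spinlock: test-and-test-and-set
   with bounded exponential back-off.  The constants are placeholders.  */
#include <stdatomic.h>

typedef struct { atomic_int locked; } backoff_spinlock_t;

static inline void
cpu_relax (void)
{
#if defined __x86_64__ || defined __i386__
  __builtin_ia32_pause ();
#endif
}

static void
backoff_spin_lock (backoff_spinlock_t *l)
{
  unsigned delay = 1;
  for (;;)
    {
      /* Read-only test first so waiters don't bounce the cache line.  */
      while (atomic_load_explicit (&l->locked, memory_order_relaxed) != 0)
        {
          for (unsigned i = 0; i < delay; i++)
            cpu_relax ();
          if (delay < 1024)     /* Cap the back-off (placeholder bound).  */
            delay *= 2;
        }
      if (atomic_exchange_explicit (&l->locked, 1, memory_order_acquire) == 0)
        return;                 /* Acquired.  */
    }
}

static void
backoff_spin_unlock (backoff_spinlock_t *l)
{
  atomic_store_explicit (&l->locked, 0, memory_order_release);
}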

> Spinlocks are for
> situations where contention is extremely rare,

No, the question is rather whether the program needs blocking through the
OS (for performance, or for semantics such as PI) or not.  Energy may be
another factor.  For example, glibc's current mutexes don't scale well on
short critical sections because there's not enough spinning being done.

In particular, in cases where there aren't more threads than cores (i.e.,
what lots of high-performance parallel applications will ensure), it's
better to just spin (and back off) than to eagerly block using the OS.
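
As an illustration of that trade-off, a hypothetical spin-then-block lock
might look roughly like this (again just a sketch, not nptl's algorithm;
MAX_SPIN is an arbitrary tuning knob, and the futex usage follows the
common 0/1/2-state scheme):

/* Hypothetical sketch: spin with a bound, then block via futex.
   State: 0 = free, 1 = locked, 2 = locked with (possible) waiters.  */
#define _GNU_SOURCE
#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

#define MAX_SPIN 1000   /* Placeholder spin bound.  */

typedef struct { atomic_int state; } stb_lock_t;

static void
stb_lock (stb_lock_t *l)
{
  /* Spin phase: cheap when critical sections are short and there are
     no more threads than cores.  */
  for (int i = 0; i < MAX_SPIN; i++)
    {
      int expected = 0;
      if (atomic_compare_exchange_weak_explicit (&l->state, &expected, 1,
                                                 memory_order_acquire,
                                                 memory_order_relaxed))
        return;
#if defined __x86_64__ || defined __i386__
      __builtin_ia32_pause ();
#endif
    }
  /* Blocking phase: advertise a waiter and sleep in the kernel until
     the lock is released.  */
  while (atomic_exchange_explicit (&l->state, 2, memory_order_acquire) != 0)
    syscall (SYS_futex, &l->state, FUTEX_WAIT_PRIVATE, 2, NULL, NULL, 0);
}

static void
stb_unlock (stb_lock_t *l)
{
  if (atomic_exchange_explicit (&l->state, 0, memory_order_release) == 2)
    syscall (SYS_futex, &l->state, FUTEX_WAKE_PRIVATE, 1, NULL, NULL, 0);
}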

> since they inherently
> blow up badly under contention.

Did I mention back-off before? ;)

> If this is happening it means you
> wanted a mutex not a spinlock.

It wouldn't make much sense to have different interfaces for those if it
weren't for pthreads mutexes being dynamically typed, unfortunately.

I believe that a high-performance default lock (C++11 or C11 semantics,
non-process-shared) would beat both the spinlock and the mutex
implementations that we have today.

We could tune the C11 mutex implementation, but the number of users would
still be small in the foreseeable future, I guess.  Tuning libstdc++'s
mutex implementation (so that it doesn't just use nptl mutexes but does
something that's closer to the state of the art) would reach more users.
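
Coming back to the NUMA angle from the quoted patch description: the
generic idea of keeping cross-socket traffic down can be illustrated with
a much-simplified, hypothetical two-level lock (this is not the patch's
algorithm, just the generic structure), where at most one thread per
socket ever contends on the global lock:

/* Hypothetical sketch of a two-level, NUMA-aware lock: threads first
   serialize within their socket, so only one thread per socket
   contends on the global lock.  The node mapping and bounds are
   placeholders; real code would query the actual topology.  */
#define _GNU_SOURCE
#include <stdatomic.h>
#include <sched.h>

#define MAX_NODES 8             /* Placeholder upper bound on sockets.  */

typedef struct { atomic_int locked; } tas_lock_t;

static void
tas_lock (tas_lock_t *l)
{
  while (atomic_exchange_explicit (&l->locked, 1, memory_order_acquire) != 0)
    while (atomic_load_explicit (&l->locked, memory_order_relaxed) != 0)
      ;  /* Back-off elided for brevity.  */
}

static void
tas_unlock (tas_lock_t *l)
{
  atomic_store_explicit (&l->locked, 0, memory_order_release);
}

typedef struct
{
  tas_lock_t global;            /* Cross-socket traffic, one thread per node.  */
  tas_lock_t local[MAX_NODES];  /* Same-socket traffic only.  */
} numa_lock_t;

/* Placeholder CPU-to-node mapping; a real implementation would use the
   NUMA topology (e.g. via libnuma) instead.  */
static int
my_node (void)
{
  return sched_getcpu () % MAX_NODES;
}

static void
numa_lock_acquire (numa_lock_t *l, int *node)
{
  *node = my_node ();
  tas_lock (&l->local[*node]);
  tas_lock (&l->global);
}

static void
numa_lock_release (numa_lock_t *l, int node)
{
  tas_unlock (&l->global);
  tas_unlock (&l->local[node]);
}

A real cohort-style lock would additionally hand the global lock over
within a socket for a bounded number of acquisitions instead of releasing
it each time, which is where most of the benefit comes from.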

