From: Noah Goldstein <goldstein.w.n@gmail.com>
To: "Guo, Wangyang" <wangyang.guo@intel.com>
Cc: GNU C Library <libc-alpha@sourceware.org>
Subject: Re: [PATCH v2] nptl: Add backoff mechanism to spinlock loop
Date: Wed, 30 Mar 2022 14:39:40 -0500	[thread overview]
Message-ID: <CAFUsyfJHYSRhfhVngSfNEdAKs64V2REVCi4854MfKFmg6KJ-ig@mail.gmail.com> (raw)
In-Reply-To: <f2ac1eb3-4693-422c-3613-398d2df238ed@intel.com>

On Wed, Mar 30, 2022 at 6:45 AM Guo, Wangyang <wangyang.guo@intel.com> wrote:
>
> On 3/29/2022 12:41 AM, Noah Goldstein via Libc-alpha wrote:
> > On Mon, Mar 28, 2022 at 3:47 AM Wangyang Guo <wangyang.guo@intel.com> wrote:
> >>
> >> When multiple threads wait for the lock at the same time, then once the
> >> lock owner releases it, all waiters see the lock become available and
> >> try to acquire it at once, which may cause an expensive CAS storm.
> >>
> >> Binary exponential backoff with random jitter is introduced. As the
> >> number of try-lock attempts grows, it becomes more likely that a large
> >> number of threads are competing for the adaptive mutex, so the wait time
> >> is increased exponentially. Random jitter is also added so that waiters
> >> do not retry the lock in lockstep.
> >>
> >> v2: Remove read-check before try-lock for performance.
> >>
> >> Signed-off-by: Wangyang Guo <wangyang.guo@intel.com>
> >> ---
> >>   nptl/pthread_mutex_lock.c | 25 ++++++++++++++++---------
> >>   1 file changed, 16 insertions(+), 9 deletions(-)
> >>
> >> diff --git a/nptl/pthread_mutex_lock.c b/nptl/pthread_mutex_lock.c
> >> index d2e652d151..7e75ec1cba 100644
> >> --- a/nptl/pthread_mutex_lock.c
> >> +++ b/nptl/pthread_mutex_lock.c
> >> @@ -26,6 +26,7 @@
> >>   #include <futex-internal.h>
> >>   #include <stap-probe.h>
> >>   #include <shlib-compat.h>
> >> +#include <random-bits.h>
> >>
> >>   /* Some of the following definitions differ when pthread_mutex_cond_lock.c
> >>      includes this file.  */
> >> @@ -64,11 +65,6 @@ lll_mutex_lock_optimized (pthread_mutex_t *mutex)
> >>   # define PTHREAD_MUTEX_VERSIONS 1
> >>   #endif
> >>
> >> -#ifndef LLL_MUTEX_READ_LOCK
> >> -# define LLL_MUTEX_READ_LOCK(mutex) \
> >> -  atomic_load_relaxed (&(mutex)->__data.__lock)
> >> -#endif
> >> -
> >>   static int __pthread_mutex_lock_full (pthread_mutex_t *mutex)
> >>        __attribute_noinline__;
> >>
> >> @@ -138,17 +134,28 @@ PTHREAD_MUTEX_LOCK (pthread_mutex_t *mutex)
> >>            int cnt = 0;
> >>            int max_cnt = MIN (max_adaptive_count (),
> >>                               mutex->__data.__spins * 2 + 10);
> >> +         int spin_count, exp_backoff = 1;
> >> +         unsigned int jitter = random_bits ();
> >>            do
> >>              {
> >> -             if (cnt++ >= max_cnt)
> >> +             /* In each loop, spin count is exponential backoff plus
> >> +                random jitter, random range is [0, exp_backoff-1].  */
> >> +             spin_count = exp_backoff + (jitter & (exp_backoff - 1));
> >> +             cnt += spin_count;
> >> +             if (cnt >= max_cnt)
> >>                  {
> >> +                 /* If cnt exceeds max spin count, just go to wait
> >> +                    queue.  */
> >>                    LLL_MUTEX_LOCK (mutex);
> >>                    break;
> >>                  }
> >> -             atomic_spin_nop ();
> >> +             do
> >> +                 atomic_spin_nop ();
> >> +             while (--spin_count > 0);
> >> +             /* Binary exponential backoff, prepare for next loop.  */
> >> +             exp_backoff <<= 1;
> >>              }
> >> -         while (LLL_MUTEX_READ_LOCK (mutex) != 0
> >> -                || LLL_MUTEX_TRYLOCK (mutex) != 0);
> >> +         while (LLL_MUTEX_TRYLOCK (mutex) != 0);
> >
> > Does it perform better without the read?
> >
> > In general, can you post some benchmarks varying the number of threads and
> > the size of the critical section? It would also be nice if you could
> > collect some stats on the average number of failed CAS attempts
> > before/after.
>
> I worked out a mutex benchmark:
> https://gist.github.com/guowangy/459085e5be6bfeda101c591d2d2304c5
> It can test with different thread counts and critical-section sizes (the
> critical-section length is determined by the number of pause instructions).

Can you add a benchmark to glibc? Also, you are compiling without
optimizations.
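
For illustration, a minimal sketch of that kind of benchmark (hypothetical,
not your gist -- the names and structure are made up, it assumes an x86
target for the pause instruction, and it uses a fixed measurement window):

#define _GNU_SOURCE
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static pthread_mutex_t lock;
static atomic_int stop;
static int npause;

static void *
worker (void *arg)
{
  long ops = 0;
  while (!atomic_load_explicit (&stop, memory_order_relaxed))
    {
      pthread_mutex_lock (&lock);
      /* Critical-section length: npause pause instructions.  */
      for (int i = 0; i < npause; i++)
        __builtin_ia32_pause ();
      pthread_mutex_unlock (&lock);
      ops++;
    }
  return (void *) ops;
}

int
main (int argc, char **argv)
{
  int nthreads = argc > 1 ? atoi (argv[1]) : 4;
  npause = argc > 2 ? atoi (argv[2]) : 0;

  pthread_mutexattr_t attr;
  pthread_mutexattr_init (&attr);
  pthread_mutexattr_settype (&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
  pthread_mutex_init (&lock, &attr);

  pthread_t tids[nthreads];
  for (int i = 0; i < nthreads; i++)
    pthread_create (&tids[i], NULL, worker, NULL);

  /* Fixed 3-second measurement window.  */
  sleep (3);
  atomic_store (&stop, 1);

  long total = 0;
  for (int i = 0; i < nthreads; i++)
    {
      void *ret;
      pthread_join (tids[i], &ret);
      total += (long) ret;
    }
  printf ("[pause: %3d, thread: %3d] -- Throughput: %ld ops/s\n",
          npause, nthreads, total / 3);
  return 0;
}

Built with something like `gcc -O2 -pthread`, that could later be adapted
to live under benchtests/.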

>
> Here are the results of v1 against upstream:
> (Tested on a Xeon 8380)
>
> npause  1       4       16      64      128   <== thread num
> --------------------------------------------
> 0       1.00    1.70    1.94    1.70    1.76
> 1       0.99    1.57    1.75    1.54    1.58
> 4       1.00    1.28    1.42    1.38    1.41
> 16      1.00    1.02    1.11    1.18    1.19
> 64      1.00    1.00    0.95    0.99    0.97
> 128     1.00    0.92    0.92    0.87    0.87

I was able to essentially reproduce this.

Maybe in light of these changes `max_cnt` needs a new threshold.
Also, maybe instead of `cnt += spin_count` it should stay `cnt++`, so we don't
hit the futex so eagerly.
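
Untested, but concretely I mean something like the following against your
loop (only the counting changes; everything else stays as in v2):

              spin_count = exp_backoff + (jitter & (exp_backoff - 1));
              if (cnt++ >= max_cnt)
                {
                  /* Too many backoff rounds: block on the futex.  */
                  LLL_MUTEX_LOCK (mutex);
                  break;
                }

That way max_cnt bounds the number of try-lock rounds rather than the
total number of pause iterations.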


>
> The first row header is the number of threads.
> The first column is the number of pause instructions, which represents
> different critical-section sizes.
> The values are the ratio of v1 to upstream throughput (higher is better).
> RSD varies by thread count: threads (4, 16) < 2%, threads (1, 64, 128) < 1%.
>
> Thread=1 is the baseline and should be the same for both.
> Backoff works well when the critical section is small.
> But it causes some regression when the critical section is very large,
> because upstream attempts the lock more frequently, which keeps latency low.
> Since adaptive mutexes are aimed at 'quick' locks, I think it's not a big
> problem here.
>
> The next comparison is v2 against v1:
> npause  1       4       16      64      128   <== thread num
> --------------------------------------------
> 0       1.00    0.99    0.95    1.02    1.02
> 1       1.00    1.08    0.92    1.04    1.04
> 4       1.00    0.94    0.94    1.05    1.04
> 16      1.00    1.01    0.98    1.03    1.03
> 64      1.00    1.01    1.00    1.00    1.01
> 128     1.00    1.02    1.00    1.00    1.00
>
> v2 without the read check shows a benefit at large thread counts, but a
> drawback at small thread counts.
>
I am seeing a pretty significant benefit in keeping the read check
on my Tigerlake (only measuring up to 16 threads as it's an 8-core
machine).

W/ read check:
[pause:   0, thread:   1] -- Throughput:   76528393 ops/s RSD: 4.55%
[pause:   0, thread:   4] -- Throughput:   28065221 ops/s RSD: 1.58%
[pause:   0, thread:   8] -- Throughput:   14757408 ops/s RSD: 1.86%
[pause:   0, thread:  16] -- Throughput:   15501579 ops/s RSD: 1.06%

w/o read check:
[pause:   0, thread:   1] -- Throughput:   71303250 ops/s RSD: 4.98%
[pause:   0, thread:   4] -- Throughput:   11308910 ops/s RSD: 8.96%
[pause:   0, thread:   8] -- Throughput:    7380217 ops/s RSD: 5.52%
[pause:   0, thread:  16] -- Throughput:    8253073 ops/s RSD: 2.19%
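
For anyone following along: the "read check" here is the relaxed load
that v2 removes, i.e. the classic test-and-test-and-set idea.  A generic
illustration (not the glibc code itself):

#include <stdatomic.h>

static atomic_int lock_word;  /* 0 = free, 1 = held */

static void
spin_lock_with_read_check (void)
{
  for (;;)
    {
      /* Read check: waiters spin on a plain load, so the cache line
         stays shared and no write traffic is generated while the lock
         is held.  */
      while (atomic_load_explicit (&lock_word, memory_order_relaxed) != 0)
        ;  /* back off / pause here */
      /* Only attempt the CAS once the lock has been observed free;
         this is what limits the CAS storm under contention.  */
      int expected = 0;
      if (atomic_compare_exchange_weak_explicit (&lock_word, &expected, 1,
                                                 memory_order_acquire,
                                                 memory_order_relaxed))
        return;
    }
}

The numbers above are measuring the cost of dropping that extra load.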


> I also have a version with the random jitter removed (v3); here are the
> results of v3 against v1:
> npause  1       4       16      64      128   <== thread num
> --------------------------------------------
> 0       1.00    0.95    1.04    1.02    1.03
> 1       1.01    1.09    1.04    1.02    1.03
> 4       1.00    1.02    1.05    1.02    1.02
> 16      1.00    1.01    1.01    1.02    1.02
> 64      1.00    1.04    1.02    1.01    1.01
> 128     1.00    0.98    0.99    0.99    0.98
>
> The results show that the version without random jitter is a better solution.
>
> >>
> >>            mutex->__data.__spins += (cnt - mutex->__data.__spins) / 8;
> >>          }
> >> --
> >> 2.35.1
> >>
> >
>


Thread overview: 28+ messages
2022-03-28  8:47 Wangyang Guo
2022-03-28 16:41 ` Noah Goldstein
2022-03-30 11:44   ` Guo, Wangyang
2022-03-30 19:39     ` Noah Goldstein [this message]
2022-03-30 11:53 ` Adhemerval Zanella
2022-03-30 17:07   ` Noah Goldstein
2022-03-30 17:21     ` Adhemerval Zanella
2022-04-12 11:53       ` Florian Weimer
2022-04-22 13:30         ` Yann Droneaud
2022-04-22 13:32           ` Florian Weimer
2022-04-22 13:35             ` Cristian Rodríguez
2022-04-22 15:25               ` Noah Goldstein
2022-04-26 12:25                 ` Florian Weimer
2022-04-26 12:42                   ` Yann Droneaud
2022-05-04  2:50 ` [PATCH v3] " Wangyang Guo
2022-05-04  2:58 ` Wangyang Guo
2022-05-04  3:17   ` [PATCH v4] " Wangyang Guo
2022-05-05  1:56     ` H.J. Lu
2022-05-05  2:52       ` Noah Goldstein
2022-05-05  2:59       ` Guo, Wangyang
2022-05-05 22:44         ` H.J. Lu
2022-05-06  1:52           ` Guo, Wangyang
2022-05-06  1:50     ` [PATCH v5] " Wangyang Guo
2022-05-06  3:06       ` H.J. Lu
2022-09-11 20:29         ` Sunil Pandey
2022-09-14  1:26           ` Noah Goldstein
2022-09-29  0:12           ` Noah Goldstein
