public inbox for libc-stable@sourceware.org
* Re: [PATCH v5] nptl: Add backoff mechanism to spinlock loop
       [not found]   ` <CAMe9rOpOuQfuJbARLLqSLUc3AzA6pXpj-+zR=0dZG-fT7T3eiw@mail.gmail.com>
@ 2022-09-11 20:29     ` Sunil Pandey
  2022-09-14  1:26       ` Noah Goldstein
  2022-09-29  0:12       ` Noah Goldstein
  0 siblings, 2 replies; 4+ messages in thread
From: Sunil Pandey @ 2022-09-11 20:29 UTC (permalink / raw)
  To: H.J. Lu, Libc-stable Mailing List, Florian Weimer
  Cc: Wangyang Guo, GNU C Library

On Thu, May 5, 2022 at 8:07 PM H.J. Lu via Libc-alpha
<libc-alpha@sourceware.org> wrote:
>
> On Thu, May 5, 2022 at 6:50 PM Wangyang Guo <wangyang.guo@intel.com> wrote:
> >
> > When multiple threads are waiting for a lock at the same time, once the
> > lock owner releases it, all waiters see the lock become available and
> > try to take it at once, which may cause an expensive CAS storm.
> >
> > Binary exponential backoff with random jitter is introduced.  As the
> > number of try-lock attempts increases, it becomes more likely that many
> > threads are competing for the adaptive mutex, so the wait time grows
> > exponentially.  Random jitter is also added so that threads do not
> > retry the lock in lockstep.
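
For illustration, the scheme described above can be sketched as a minimal
standalone CAS-based spin lock using C11 atomics.  This is only a sketch of
the idea, not the glibc code (the actual change is in the diff below):

  #include <stdatomic.h>

  /* Illustrative only: a CAS-based spin lock with binary exponential
     backoff and jitter, mirroring the scheme added to the adaptive
     mutex spin loop.  */
  static void
  lock_with_backoff (atomic_int *lock, unsigned int jitter)
  {
    int backoff = 1;
    for (;;)
      {
        int expected = 0;
        /* Try to take the lock with a single CAS (0 -> 1).  */
        if (atomic_compare_exchange_weak (lock, &expected, 1))
          return;
        /* Spin for backoff iterations plus up to backoff - 1 extra
           jitter iterations, so threads that failed together do not
           retry in lockstep.  */
        int spins = backoff + (int) (jitter & (unsigned int) (backoff - 1));
        while (spins-- > 0)
          ;  /* Real code would issue a pause/relax hint here.  */
        /* Double the spin budget, capped (MAX_BACKOFF is 16 below).  */
        if (backoff < 16)
          backoff <<= 1;
      }
  }
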
> >
> > v2: Remove read-check before try-lock for performance.
> >
> > v3:
> > 1. Restore the read check since it works well on some platforms.
> > 2. Make backoff arch dependent, and enable it for x86_64.
> > 3. Limit the maximum backoff to reduce latency in large critical sections.
> >
> > v4: Fix strict-prototypes error in sysdeps/nptl/pthread_mutex_backoff.h
> >
> > v5: Commit log updated to describe the regression with large critical sections.
> >
> > Results of the pthread-mutex-locks benchmark
> >
> > Test Platform: Xeon 8280L (2 sockets, 112 CPUs in total)
> > First Row: thread count
> > First Column: critical-section length
> > Values: backoff vs. upstream elapsed time (ratio); lower is better
> >
> > non-critical-length: 1
> >         1       2       4       8       16      32      64      112     140
> > 0       0.99    0.58    0.52    0.49    0.43    0.44    0.46    0.52    0.54
> > 1       0.98    0.43    0.56    0.50    0.44    0.45    0.50    0.56    0.57
> > 2       0.99    0.41    0.57    0.51    0.45    0.47    0.48    0.60    0.61
> > 4       0.99    0.45    0.59    0.53    0.48    0.49    0.52    0.64    0.65
> > 8       1.00    0.66    0.71    0.63    0.56    0.59    0.66    0.72    0.71
> > 16      0.97    0.78    0.91    0.73    0.67    0.70    0.79    0.80    0.80
> > 32      0.95    1.17    0.98    0.87    0.82    0.86    0.89    0.90    0.90
> > 64      0.96    0.95    1.01    1.01    0.98    1.00    1.03    0.99    0.99
> > 128     0.99    1.01    1.01    1.17    1.08    1.12    1.02    0.97    1.02
> >
> > non-critical-length: 32
> >         1       2       4       8       16      32      64      112     140
> > 0       1.03    0.97    0.75    0.65    0.58    0.58    0.56    0.70    0.70
> > 1       0.94    0.95    0.76    0.65    0.58    0.58    0.61    0.71    0.72
> > 2       0.97    0.96    0.77    0.66    0.58    0.59    0.62    0.74    0.74
> > 4       0.99    0.96    0.78    0.66    0.60    0.61    0.66    0.76    0.77
> > 8       0.99    0.99    0.84    0.70    0.64    0.66    0.71    0.80    0.80
> > 16      0.98    0.97    0.95    0.76    0.70    0.73    0.81    0.85    0.84
> > 32      1.04    1.12    1.04    0.89    0.82    0.86    0.93    0.91    0.91
> > 64      0.99    1.15    1.07    1.00    0.99    1.01    1.05    0.99    0.99
> > 128     1.00    1.21    1.20    1.22    1.25    1.31    1.12    1.10    0.99
> >
> > non-critical-length: 128
> >         1       2       4       8       16      32      64      112     140
> > 0       1.02    1.00    0.99    0.67    0.61    0.61    0.61    0.74    0.73
> > 1       0.95    0.99    1.00    0.68    0.61    0.60    0.60    0.74    0.74
> > 2       1.00    1.04    1.00    0.68    0.59    0.61    0.65    0.76    0.76
> > 4       1.00    0.96    0.98    0.70    0.63    0.63    0.67    0.78    0.77
> > 8       1.01    1.02    0.89    0.73    0.65    0.67    0.71    0.81    0.80
> > 16      0.99    0.96    0.96    0.79    0.71    0.73    0.80    0.84    0.84
> > 32      0.99    0.95    1.05    0.89    0.84    0.85    0.94    0.92    0.91
> > 64      1.00    0.99    1.16    1.04    1.00    1.02    1.06    0.99    0.99
> > 128     1.00    1.06    0.98    1.14    1.39    1.26    1.08    1.02    0.98
> >
> > There is a regression for large critical sections, but the adaptive
> > mutex is aimed at "quick" locks.  Small critical sections are more
> > common when users choose an adaptive pthread_mutex.
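
For context, this adaptive spin path is only used for mutexes that an
application explicitly creates with the GNU-specific adaptive type.  A
minimal initialization sketch (the helper name is illustrative):

  #define _GNU_SOURCE
  #include <pthread.h>

  /* Initialize *MUTEX with the adaptive type, which uses the
     spin-then-wait path that this patch modifies.  */
  static int
  init_adaptive_mutex (pthread_mutex_t *mutex)
  {
    pthread_mutexattr_t attr;
    int err = pthread_mutexattr_init (&attr);
    if (err != 0)
      return err;
    err = pthread_mutexattr_settype (&attr, PTHREAD_MUTEX_ADAPTIVE_NP);
    if (err == 0)
      err = pthread_mutex_init (mutex, &attr);
    pthread_mutexattr_destroy (&attr);
    return err;
  }
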
> >
> > Signed-off-by: Wangyang Guo <wangyang.guo@intel.com>
> > ---
> >  nptl/pthread_mutex_lock.c                   | 16 +++++++--
> >  sysdeps/nptl/pthreadP.h                     |  1 +
> >  sysdeps/nptl/pthread_mutex_backoff.h        | 35 ++++++++++++++++++
> >  sysdeps/x86_64/nptl/pthread_mutex_backoff.h | 39 +++++++++++++++++++++
> >  4 files changed, 89 insertions(+), 2 deletions(-)
> >  create mode 100644 sysdeps/nptl/pthread_mutex_backoff.h
> >  create mode 100644 sysdeps/x86_64/nptl/pthread_mutex_backoff.h
> >
> > diff --git a/nptl/pthread_mutex_lock.c b/nptl/pthread_mutex_lock.c
> > index d2e652d151..6e767a8724 100644
> > --- a/nptl/pthread_mutex_lock.c
> > +++ b/nptl/pthread_mutex_lock.c
> > @@ -138,14 +138,26 @@ PTHREAD_MUTEX_LOCK (pthread_mutex_t *mutex)
> >           int cnt = 0;
> >           int max_cnt = MIN (max_adaptive_count (),
> >                              mutex->__data.__spins * 2 + 10);
> > +         int spin_count, exp_backoff = 1;
> > +         unsigned int jitter = get_jitter ();
> >           do
> >             {
> > -             if (cnt++ >= max_cnt)
> > +             /* In each loop, spin count is exponential backoff plus
> > +                random jitter, random range is [0, exp_backoff-1].  */
> > +             spin_count = exp_backoff + (jitter & (exp_backoff - 1));
> > +             cnt += spin_count;
> > +             if (cnt >= max_cnt)
> >                 {
> > +                 /* If cnt exceeds max spin count, just go to wait
> > +                    queue.  */
> >                   LLL_MUTEX_LOCK (mutex);
> >                   break;
> >                 }
> > -             atomic_spin_nop ();
> > +             do
> > +               atomic_spin_nop ();
> > +             while (--spin_count > 0);
> > +             /* Prepare for next loop.  */
> > +             exp_backoff = get_next_backoff (exp_backoff);
> >             }
> >           while (LLL_MUTEX_READ_LOCK (mutex) != 0
> >                  || LLL_MUTEX_TRYLOCK (mutex) != 0);
> > diff --git a/sysdeps/nptl/pthreadP.h b/sysdeps/nptl/pthreadP.h
> > index 601db4ff2b..39af275c25 100644
> > --- a/sysdeps/nptl/pthreadP.h
> > +++ b/sysdeps/nptl/pthreadP.h
> > @@ -33,6 +33,7 @@
> >  #include <kernel-features.h>
> >  #include <errno.h>
> >  #include <internal-signals.h>
> > +#include <pthread_mutex_backoff.h>
> >  #include "pthread_mutex_conf.h"
> >
> >
> > diff --git a/sysdeps/nptl/pthread_mutex_backoff.h b/sysdeps/nptl/pthread_mutex_backoff.h
> > new file mode 100644
> > index 0000000000..5b26c22ac7
> > --- /dev/null
> > +++ b/sysdeps/nptl/pthread_mutex_backoff.h
> > @@ -0,0 +1,35 @@
> > +/* Pthread mutex backoff configuration.
> > +   Copyright (C) 2022 Free Software Foundation, Inc.
> > +   This file is part of the GNU C Library.
> > +
> > +   The GNU C Library is free software; you can redistribute it and/or
> > +   modify it under the terms of the GNU Lesser General Public
> > +   License as published by the Free Software Foundation; either
> > +   version 2.1 of the License, or (at your option) any later version.
> > +
> > +   The GNU C Library is distributed in the hope that it will be useful,
> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > +   Lesser General Public License for more details.
> > +
> > +   You should have received a copy of the GNU Lesser General Public
> > +   License along with the GNU C Library; if not, see
> > +   <https://www.gnu.org/licenses/>.  */
> > +#ifndef _PTHREAD_MUTEX_BACKOFF_H
> > +#define _PTHREAD_MUTEX_BACKOFF_H 1
> > +
> > +static inline unsigned int
> > +get_jitter (void)
> > +{
> > +  /* Arch dependent random jitter, return 0 disables random.  */
> > +  return 0;
> > +}
> > +
> > +static inline int
> > +get_next_backoff (int backoff)
> > +{
> > +  /* Next backoff, return 1 disables mutex backoff.  */
> > +  return 1;
> > +}
> > +
> > +#endif
> > diff --git a/sysdeps/x86_64/nptl/pthread_mutex_backoff.h b/sysdeps/x86_64/nptl/pthread_mutex_backoff.h
> > new file mode 100644
> > index 0000000000..ec74c3d9db
> > --- /dev/null
> > +++ b/sysdeps/x86_64/nptl/pthread_mutex_backoff.h
> > @@ -0,0 +1,39 @@
> > +/* Pthread mutex backoff configuration.
> > +   Copyright (C) 2022 Free Software Foundation, Inc.
> > +   This file is part of the GNU C Library.
> > +
> > +   The GNU C Library is free software; you can redistribute it and/or
> > +   modify it under the terms of the GNU Lesser General Public
> > +   License as published by the Free Software Foundation; either
> > +   version 2.1 of the License, or (at your option) any later version.
> > +
> > +   The GNU C Library is distributed in the hope that it will be useful,
> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > +   Lesser General Public License for more details.
> > +
> > +   You should have received a copy of the GNU Lesser General Public
> > +   License along with the GNU C Library; if not, see
> > +   <https://www.gnu.org/licenses/>.  */
> > +#ifndef _PTHREAD_MUTEX_BACKOFF_H
> > +#define _PTHREAD_MUTEX_BACKOFF_H 1
> > +
> > +#include <fast-jitter.h>
> > +
> > +static inline unsigned int
> > +get_jitter (void)
> > +{
> > +  return get_fast_jitter ();
> > +}
> > +
> > +#define MAX_BACKOFF 16
> > +
> > +static inline int
> > +get_next_backoff (int backoff)
> > +{
> > +  /* Binary exponential backoff.  Limiting the max backoff
> > +     can reduce latency in large critical sections.  */
> > +  return (backoff < MAX_BACKOFF) ? backoff << 1 : backoff;
> > +}
> > +
> > +#endif
> > --
> > 2.35.1
> >
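
To get a concrete feel for the numbers in the x86_64 version: with
MAX_BACKOFF of 16 the per-round spin budget doubles 1, 2, 4, 8, 16 and then
stays capped at 16, and the jitter adds at most backoff - 1 extra spins per
round.  A small standalone simulation of that schedule (the jitter value 5
is just an arbitrary example):

  #include <stdio.h>

  int
  main (void)
  {
    const int max_backoff = 16;         /* MAX_BACKOFF in the patch.  */
    const unsigned int jitter = 5;      /* Arbitrary example value.  */
    int backoff = 1, total = 0;

    for (int round = 0; round < 8; round++)
      {
        /* Same per-round formula as the patch.  */
        int spins = backoff + (int) (jitter & (unsigned int) (backoff - 1));
        total += spins;
        printf ("round %d: spin %d (cumulative %d)\n", round, spins, total);
        backoff = backoff < max_backoff ? backoff << 1 : backoff;
      }
    return 0;
  }
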
>
> LGTM.
>
> Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
>
> Please wait until next Monday for more comments.
>
> Thanks.
>
> --
> H.J.

I would like to backport this patch to release branches 2.33, 2.34, and 2.35.

Any comments, suggestions, or objections?

commit 8162147872491bb5b48e91543b19c49a29ae6b6d
Author: Wangyang Guo <wangyang.guo@intel.com>
Date:   Fri May 6 01:50:10 2022 +0000

    nptl: Add backoff mechanism to spinlock loop

    When multiple threads are waiting for a lock at the same time, once the
    lock owner releases it, all waiters see the lock become available and
    try to take it at once, which may cause an expensive CAS storm.

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH v5] nptl: Add backoff mechanism to spinlock loop
  2022-09-11 20:29     ` [PATCH v5] nptl: Add backoff mechanism to spinlock loop Sunil Pandey
@ 2022-09-14  1:26       ` Noah Goldstein
  2022-09-29  0:12       ` Noah Goldstein
  1 sibling, 0 replies; 4+ messages in thread
From: Noah Goldstein @ 2022-09-14  1:26 UTC (permalink / raw)
  To: Sunil Pandey
  Cc: H.J. Lu, Libc-stable Mailing List, Florian Weimer, Wangyang Guo,
	GNU C Library

On Sun, Sep 11, 2022 at 1:30 PM Sunil Pandey via Libc-stable
<libc-stable@sourceware.org> wrote:
>
> I would like to backport this patch to release branches 2.33, 2.34, and 2.35.
>
> Any comments, suggestions, or objections?

Fine by me.
>
> commit 8162147872491bb5b48e91543b19c49a29ae6b6d
> Author: Wangyang Guo <wangyang.guo@intel.com>
> Date:   Fri May 6 01:50:10 2022 +0000
>
>     nptl: Add backoff mechanism to spinlock loop

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH v5] nptl: Add backoff mechanism to spinlock loop
  2022-09-11 20:29     ` [PATCH v5] nptl: Add backoff mechanism to spinlock loop Sunil Pandey
  2022-09-14  1:26       ` Noah Goldstein
@ 2022-09-29  0:12       ` Noah Goldstein
  2022-09-30 13:18         ` FUCKETY FUCK FUCK, PLEASE REMOVE ME FROM THESE EMAILS Darren Tristano
  1 sibling, 1 reply; 4+ messages in thread
From: Noah Goldstein @ 2022-09-29  0:12 UTC (permalink / raw)
  To: Sunil Pandey
  Cc: H.J. Lu, Libc-stable Mailing List, Florian Weimer, Wangyang Guo,
	GNU C Library

On Sun, Sep 11, 2022 at 4:30 PM Sunil Pandey via Libc-stable
<libc-stable@sourceware.org> wrote:
>
> I would like to backport this patch to release branches 2.33, 2.34, and 2.35.

Fine by me.
>
> Any comments, suggestions, or objections?
>
> commit 8162147872491bb5b48e91543b19c49a29ae6b6d
> Author: Wangyang Guo <wangyang.guo@intel.com>
> Date:   Fri May 6 01:50:10 2022 +0000
>
>     nptl: Add backoff mechanism to spinlock loop

^ permalink raw reply	[flat|nested] 4+ messages in thread

* FUCKETY FUCK FUCK, PLEASE REMOVE ME FROM THESE EMAILS
  2022-09-29  0:12       ` Noah Goldstein
@ 2022-09-30 13:18         ` Darren Tristano
  0 siblings, 0 replies; 4+ messages in thread
From: Darren Tristano @ 2022-09-30 13:18 UTC (permalink / raw)
  To: Sunil Pandey, Noah Goldstein
  Cc: Florian Weimer, GNU C Library, Wangyang Guo, Libc-stable Mailing List

[-- Attachment #1: Type: text/plain, Size: 11755 bytes --]

FUCK FUCK FUCK

________________________________
From: Libc-stable <libc-stable-bounces+darren=darrentristano.com@sourceware.org> on behalf of Noah Goldstein via Libc-stable <libc-stable@sourceware.org>
Sent: Wednesday, September 28, 2022 7:12 PM
To: Sunil Pandey <skpgkp2@gmail.com>
Cc: Florian Weimer <fweimer@redhat.com>; GNU C Library <libc-alpha@sourceware.org>; Wangyang Guo <wangyang.guo@intel.com>; Libc-stable Mailing List <libc-stable@sourceware.org>
Subject: Re: [PATCH v5] nptl: Add backoff mechanism to spinlock loop


^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2022-09-30 13:18 UTC | newest]

Thread overview: 4+ messages
     [not found] <20220504031710.60677-1-wangyang.guo@intel.com>
     [not found] ` <20220506015010.362894-1-wangyang.guo@intel.com>
     [not found]   ` <CAMe9rOpOuQfuJbARLLqSLUc3AzA6pXpj-+zR=0dZG-fT7T3eiw@mail.gmail.com>
2022-09-11 20:29     ` [PATCH v5] nptl: Add backoff mechanism to spinlock loop Sunil Pandey
2022-09-14  1:26       ` Noah Goldstein
2022-09-29  0:12       ` Noah Goldstein
2022-09-30 13:18         ` FUCKETY FUCK FUCK, PLEASE REMOVE ME FROM THESE EMAILS Darren Tristano
