From: "H.J. Lu"
Date: Thu, 5 May 2022 20:06:13 -0700
Subject: Re: [PATCH v5] nptl: Add backoff mechanism to spinlock loop
To: Wangyang Guo
Cc: GNU C Library, Noah Goldstein
In-Reply-To: <20220506015010.362894-1-wangyang.guo@intel.com>
References: <20220504031710.60677-1-wangyang.guo@intel.com>
 <20220506015010.362894-1-wangyang.guo@intel.com>
List-Id: Libc-alpha mailing list

On Thu, May 5, 2022 at 6:50 PM Wangyang Guo wrote:
>
> When multiple threads are waiting for a lock at the same time, then once
> the lock owner releases it, all the waiters see the lock become available
> and try to take it at once, which can cause an expensive CAS storm.
>
> Binary exponential backoff with random jitter is introduced.  As the
> number of try-lock attempts grows, it becomes more likely that a large
> number of threads are competing for the adaptive mutex, so the wait time
> is increased exponentially.  A random jitter is also added to avoid
> synchronized try-lock attempts from other threads.
>
> v2: Remove read-check before try-lock for performance.
>
> v3:
> 1. Restore read-check since it works well on some platforms.
> 2. Make backoff arch dependent, and enable it for x86_64.
> 3. Limit max backoff to reduce latency in large critical sections.
>
> v4: Fix strict-prototypes error in sysdeps/nptl/pthread_mutex_backoff.h
>
> v5: Commit log updated for regression in large critical section.
>
> Result of pthread-mutex-locks bench
>
> Test Platform: Xeon 8280L (2 sockets, 112 CPUs in total)
> First Row: thread number
> First Col: critical-section length
> Values: backoff vs upstream, time based, lower is better
>
> non-critical-length: 1
>        1     2     4     8     16    32    64    112   140
> 0      0.99  0.58  0.52  0.49  0.43  0.44  0.46  0.52  0.54
> 1      0.98  0.43  0.56  0.50  0.44  0.45  0.50  0.56  0.57
> 2      0.99  0.41  0.57  0.51  0.45  0.47  0.48  0.60  0.61
> 4      0.99  0.45  0.59  0.53  0.48  0.49  0.52  0.64  0.65
> 8      1.00  0.66  0.71  0.63  0.56  0.59  0.66  0.72  0.71
> 16     0.97  0.78  0.91  0.73  0.67  0.70  0.79  0.80  0.80
> 32     0.95  1.17  0.98  0.87  0.82  0.86  0.89  0.90  0.90
> 64     0.96  0.95  1.01  1.01  0.98  1.00  1.03  0.99  0.99
> 128    0.99  1.01  1.01  1.17  1.08  1.12  1.02  0.97  1.02
>
> non-critical-length: 32
>        1     2     4     8     16    32    64    112   140
> 0      1.03  0.97  0.75  0.65  0.58  0.58  0.56  0.70  0.70
> 1      0.94  0.95  0.76  0.65  0.58  0.58  0.61  0.71  0.72
> 2      0.97  0.96  0.77  0.66  0.58  0.59  0.62  0.74  0.74
> 4      0.99  0.96  0.78  0.66  0.60  0.61  0.66  0.76  0.77
> 8      0.99  0.99  0.84  0.70  0.64  0.66  0.71  0.80  0.80
> 16     0.98  0.97  0.95  0.76  0.70  0.73  0.81  0.85  0.84
> 32     1.04  1.12  1.04  0.89  0.82  0.86  0.93  0.91  0.91
> 64     0.99  1.15  1.07  1.00  0.99  1.01  1.05  0.99  0.99
> 128    1.00  1.21  1.20  1.22  1.25  1.31  1.12  1.10  0.99
>
> non-critical-length: 128
>        1     2     4     8     16    32    64    112   140
> 0      1.02  1.00  0.99  0.67  0.61  0.61  0.61  0.74  0.73
> 1      0.95  0.99  1.00  0.68  0.61  0.60  0.60  0.74  0.74
> 2      1.00  1.04  1.00  0.68  0.59  0.61  0.65  0.76  0.76
> 4      1.00  0.96  0.98  0.70  0.63  0.63  0.67  0.78  0.77
> 8      1.01  1.02  0.89  0.73  0.65  0.67  0.71  0.81  0.80
> 16     0.99  0.96  0.96  0.79  0.71  0.73  0.80  0.84  0.84
> 32     0.99  0.95  1.05  0.89  0.84  0.85  0.94  0.92  0.91
> 64     1.00  0.99  1.16  1.04  1.00  1.02  1.06  0.99  0.99
> 128    1.00  1.06  0.98  1.14  1.39  1.26  1.08  1.02  0.98
>
> There is a regression for large critical sections, but the adaptive
> mutex is aimed at "quick" locks; small critical sections are the more
> common case when users choose an adaptive pthread_mutex.
>
> Signed-off-by: Wangyang Guo
> ---
>  nptl/pthread_mutex_lock.c                   | 16 +++++++--
>  sysdeps/nptl/pthreadP.h                     |  1 +
>  sysdeps/nptl/pthread_mutex_backoff.h        | 35 ++++++++++++++++++
>  sysdeps/x86_64/nptl/pthread_mutex_backoff.h | 39 +++++++++++++++++++++
>  4 files changed, 89 insertions(+), 2 deletions(-)
>  create mode 100644 sysdeps/nptl/pthread_mutex_backoff.h
>  create mode 100644 sysdeps/x86_64/nptl/pthread_mutex_backoff.h
>
> diff --git a/nptl/pthread_mutex_lock.c b/nptl/pthread_mutex_lock.c
> index d2e652d151..6e767a8724 100644
> --- a/nptl/pthread_mutex_lock.c
> +++ b/nptl/pthread_mutex_lock.c
> @@ -138,14 +138,26 @@ PTHREAD_MUTEX_LOCK (pthread_mutex_t *mutex)
>        int cnt = 0;
>        int max_cnt = MIN (max_adaptive_count (),
>                           mutex->__data.__spins * 2 + 10);
> +      int spin_count, exp_backoff = 1;
> +      unsigned int jitter = get_jitter ();
>        do
>          {
> -           if (cnt++ >= max_cnt)
> +           /* In each loop, the spin count is the exponential backoff plus
> +              random jitter; the jitter range is [0, exp_backoff-1].  */
> +           spin_count = exp_backoff + (jitter & (exp_backoff - 1));
> +           cnt += spin_count;
> +           if (cnt >= max_cnt)
>              {
> +              /* If cnt exceeds the max spin count, just go to the wait
> +                 queue.  */
>                LLL_MUTEX_LOCK (mutex);
>                break;
>              }
> -           atomic_spin_nop ();
> +           do
> +             atomic_spin_nop ();
> +           while (--spin_count > 0);
> +           /* Prepare for the next loop.  */
> +           exp_backoff = get_next_backoff (exp_backoff);
>          }
>        while (LLL_MUTEX_READ_LOCK (mutex) != 0
>               || LLL_MUTEX_TRYLOCK (mutex) != 0);
> diff --git a/sysdeps/nptl/pthreadP.h b/sysdeps/nptl/pthreadP.h
> index 601db4ff2b..39af275c25 100644
> --- a/sysdeps/nptl/pthreadP.h
> +++ b/sysdeps/nptl/pthreadP.h
> @@ -33,6 +33,7 @@
>  #include
>  #include
>  #include
> +#include <pthread_mutex_backoff.h>
>  #include "pthread_mutex_conf.h"
>
>
> diff --git a/sysdeps/nptl/pthread_mutex_backoff.h b/sysdeps/nptl/pthread_mutex_backoff.h
> new file mode 100644
> index 0000000000..5b26c22ac7
> --- /dev/null
> +++ b/sysdeps/nptl/pthread_mutex_backoff.h
> @@ -0,0 +1,35 @@
> +/* Pthread mutex backoff configuration.
> +   Copyright (C) 2022 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <https://www.gnu.org/licenses/>.  */
> +#ifndef _PTHREAD_MUTEX_BACKOFF_H
> +#define _PTHREAD_MUTEX_BACKOFF_H 1
> +
> +static inline unsigned int
> +get_jitter (void)
> +{
> +  /* Arch-dependent random jitter; returning 0 disables randomization.  */
> +  return 0;
> +}
> +
> +static inline int
> +get_next_backoff (int backoff)
> +{
> +  /* Next backoff; always returning 1 disables mutex backoff.  */
> +  return 1;
> +}
> +
> +#endif
> diff --git a/sysdeps/x86_64/nptl/pthread_mutex_backoff.h b/sysdeps/x86_64/nptl/pthread_mutex_backoff.h
> new file mode 100644
> index 0000000000..ec74c3d9db
> --- /dev/null
> +++ b/sysdeps/x86_64/nptl/pthread_mutex_backoff.h
> @@ -0,0 +1,39 @@
> +/* Pthread mutex backoff configuration.
> +   Copyright (C) 2022 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <https://www.gnu.org/licenses/>.  */
> +#ifndef _PTHREAD_MUTEX_BACKOFF_H
> +#define _PTHREAD_MUTEX_BACKOFF_H 1
> +
> +#include <fast-jitter.h>
> +
> +static inline unsigned int
> +get_jitter (void)
> +{
> +  return get_fast_jitter ();
> +}
> +
> +#define MAX_BACKOFF 16
> +
> +static inline int
> +get_next_backoff (int backoff)
> +{
> +  /* Binary exponential backoff.  Limiting the max backoff
> +     can reduce latency in large critical sections.  */
> +  return (backoff < MAX_BACKOFF) ? backoff << 1 : backoff;
> +}
> +
> +#endif
> --
> 2.35.1
>

LGTM.

Reviewed-by: H.J. Lu

Please wait until next Monday for more comments.

Thanks.

-- 
H.J.
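
[Editorial note appended to the archived thread: below is a minimal, standalone
sketch of the same read-check plus exponential-backoff-with-jitter pattern the
patch applies, for experimenting with the idea outside the glibc tree.  It is
not the glibc implementation: spinlock_t, spin_lock_backoff, spin_unlock_backoff,
cheap_jitter, cpu_relax and MAX_BACKOFF are hypothetical names, and the xorshift
generator merely stands in for glibc's get_fast_jitter.]

/* Sketch of spinning with binary exponential backoff plus random jitter
   before retrying a lock acquisition.  Assumes C11 atomics and GCC/Clang.  */
#include <stdatomic.h>

typedef struct { atomic_int locked; } spinlock_t;

#define MAX_BACKOFF 16  /* Capping the backoff bounds worst-case wait latency.  */

static unsigned int
cheap_jitter (void)
{
  /* Per-thread xorshift state: a cheap source of per-thread noise.  */
  static _Thread_local unsigned int state = 0x9e3779b9u;
  state ^= state << 13;
  state ^= state >> 17;
  state ^= state << 5;
  return state;
}

static void
cpu_relax (void)
{
#if defined __x86_64__ || defined __i386__
  __builtin_ia32_pause ();  /* Same role as atomic_spin_nop in the patch.  */
#else
  __asm__ __volatile__ ("" ::: "memory");  /* Fallback: compiler barrier only.  */
#endif
}

static void
spin_lock_backoff (spinlock_t *lock)
{
  int backoff = 1;
  for (;;)
    {
      /* Read-check first: spinning on a plain load avoids hammering the
         cache line with failed CAS attempts.  */
      if (atomic_load_explicit (&lock->locked, memory_order_relaxed) == 0)
        {
          int expected = 0;
          if (atomic_compare_exchange_weak_explicit
                (&lock->locked, &expected, 1,
                 memory_order_acquire, memory_order_relaxed))
            return;
        }
      /* Spin for backoff iterations plus jitter in [0, backoff-1], then
         double the backoff, saturating at MAX_BACKOFF.  */
      int spins = backoff + (int) (cheap_jitter () & (unsigned int) (backoff - 1));
      do
        cpu_relax ();
      while (--spins > 0);
      if (backoff < MAX_BACKOFF)
        backoff <<= 1;
    }
}

static void
spin_unlock_backoff (spinlock_t *lock)
{
  atomic_store_explicit (&lock->locked, 0, memory_order_release);
}

[To try it, one could zero-initialize a spinlock_t, hammer spin_lock_backoff /
spin_unlock_backoff from several threads, and compare against a plain CAS loop
under contention; the capped exponential backoff trades a little wake-up latency
for far fewer contended CAS attempts.]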