From: Noah Goldstein
Date: Wed, 4 May 2022 21:52:06 -0500
Subject: Re: [PATCH v4] nptl: Add backoff mechanism to spinlock loop
To: "H.J. Lu"
Cc: Wangyang Guo, GNU C Library <libc-alpha@sourceware.org>
References: <20220504025849.13449-1-wangyang.guo@intel.com> <20220504031710.60677-1-wangyang.guo@intel.com>
List-Id: Libc-alpha mailing list

On Wed, May 4, 2022 at 8:57 PM H.J. Lu wrote:
>
> On Tue, May 3, 2022 at 8:17 PM Wangyang Guo wrote:
> >
> > When multiple threads are waiting for a lock at the same time, once
> > the lock owner releases it, the waiters all see the lock become
> > available and try to lock at once, which may cause an expensive CAS
> > storm.
> >
> > Binary exponential backoff with random jitter is introduced.  As the
> > number of try-lock attempts increases, it is more likely that a large
> > number of threads are competing for the adaptive mutex lock, so the
> > wait time is increased exponentially.  A random jitter is also added
> > to avoid synchronized try-lock attempts from other threads.
> >
> > v2: Remove read-check before try-lock for performance.
> >
> > v3:
> > 1. Restore read-check since it works well on some platforms.
> > 2. Make backoff arch dependent, and enable it for x86_64.
> > 3. Limit max backoff to reduce latency in large critical sections.
> >
> > v4: Fix strict-prototypes error in sysdeps/nptl/pthread_mutex_backoff.h
> >
> > Result of pthread-mutex-locks bench
> >
> > Test Platform: Xeon 8280L (2 socket, 112 CPUs in total)
> > First Row: thread number
> > First Col: critical section length
> > Values: backoff vs upstream, time based, low is better
> >
> > non-critical-length: 1
> >       1     2     4     8     16    32    64    112   140
> > 0     0.99  0.58  0.52  0.49  0.43  0.44  0.46  0.52  0.54
> > 1     0.98  0.43  0.56  0.50  0.44  0.45  0.50  0.56  0.57
> > 2     0.99  0.41  0.57  0.51  0.45  0.47  0.48  0.60  0.61
> > 4     0.99  0.45  0.59  0.53  0.48  0.49  0.52  0.64  0.65
> > 8     1.00  0.66  0.71  0.63  0.56  0.59  0.66  0.72  0.71
> > 16    0.97  0.78  0.91  0.73  0.67  0.70  0.79  0.80  0.80
> > 32    0.95  1.17  0.98  0.87  0.82  0.86  0.89  0.90  0.90
> > 64    0.96  0.95  1.01  1.01  0.98  1.00  1.03  0.99  0.99
> > 128   0.99  1.01  1.01  1.17  1.08  1.12  1.02  0.97  1.02
> >
> > non-critical-length: 32
> >       1     2     4     8     16    32    64    112   140
> > 0     1.03  0.97  0.75  0.65  0.58  0.58  0.56  0.70  0.70
> > 1     0.94  0.95  0.76  0.65  0.58  0.58  0.61  0.71  0.72
> > 2     0.97  0.96  0.77  0.66  0.58  0.59  0.62  0.74  0.74
> > 4     0.99  0.96  0.78  0.66  0.60  0.61  0.66  0.76  0.77
> > 8     0.99  0.99  0.84  0.70  0.64  0.66  0.71  0.80  0.80
> > 16    0.98  0.97  0.95  0.76  0.70  0.73  0.81  0.85  0.84
> > 32    1.04  1.12  1.04  0.89  0.82  0.86  0.93  0.91  0.91
> > 64    0.99  1.15  1.07  1.00  0.99  1.01  1.05  0.99  0.99
> > 128   1.00  1.21  1.20  1.22  1.25  1.31  1.12  1.10  0.99
> >
> > non-critical-length: 128
> >       1     2     4     8     16    32    64    112   140
> > 0     1.02  1.00  0.99  0.67  0.61  0.61  0.61  0.74  0.73
> > 1     0.95  0.99  1.00  0.68  0.61  0.60  0.60  0.74  0.74
> > 2     1.00  1.04  1.00  0.68  0.59  0.61  0.65  0.76  0.76
> > 4     1.00  0.96  0.98  0.70  0.63  0.63  0.67  0.78  0.77
> > 8     1.01  1.02  0.89  0.73  0.65  0.67  0.71  0.81  0.80
> > 16    0.99  0.96  0.96  0.79  0.71  0.73  0.80  0.84  0.84
> > 32    0.99  0.95  1.05  0.89  0.84  0.85  0.94  0.92  0.91
> > 64    1.00  0.99  1.16  1.04  1.00  1.02  1.06  0.99  0.99
> > 128   1.00  1.06  0.98  1.14  1.39  1.26  1.08  1.02  0.98
>
> For large critical section lengths, the new code can be up to 39% slower.
> Is it true?
Are small critical section lengths more common? Only when a VERY high
number of threads are competing for the lock, which is probably uncommon
outside of benchmarks.

>
> > Signed-off-by: Wangyang Guo
> > ---
> >  nptl/pthread_mutex_lock.c                   | 16 +++++++--
> >  sysdeps/nptl/pthreadP.h                     |  1 +
> >  sysdeps/nptl/pthread_mutex_backoff.h        | 35 ++++++++++++++++++
> >  sysdeps/x86_64/nptl/pthread_mutex_backoff.h | 39 +++++++++++++++++++++
> >  4 files changed, 89 insertions(+), 2 deletions(-)
> >  create mode 100644 sysdeps/nptl/pthread_mutex_backoff.h
> >  create mode 100644 sysdeps/x86_64/nptl/pthread_mutex_backoff.h
> >
> > diff --git a/nptl/pthread_mutex_lock.c b/nptl/pthread_mutex_lock.c
> > index d2e652d151..6e767a8724 100644
> > --- a/nptl/pthread_mutex_lock.c
> > +++ b/nptl/pthread_mutex_lock.c
> > @@ -138,14 +138,26 @@ PTHREAD_MUTEX_LOCK (pthread_mutex_t *mutex)
> >        int cnt = 0;
> >        int max_cnt = MIN (max_adaptive_count (),
> >                           mutex->__data.__spins * 2 + 10);
> > +      int spin_count, exp_backoff = 1;
> > +      unsigned int jitter = get_jitter ();
> >        do
> >          {
> > -          if (cnt++ >= max_cnt)
> > +          /* In each loop, spin count is exponential backoff plus
> > +             random jitter, random range is [0, exp_backoff-1].  */
> > +          spin_count = exp_backoff + (jitter & (exp_backoff - 1));
> > +          cnt += spin_count;
> > +          if (cnt >= max_cnt)
> >              {
> > +              /* If cnt exceeds max spin count, just go to wait
> > +                 queue.  */
> >                LLL_MUTEX_LOCK (mutex);
> >                break;
> >              }
> > -          atomic_spin_nop ();
> > +          do
> > +            atomic_spin_nop ();
> > +          while (--spin_count > 0);
> > +          /* Prepare for next loop.
> > +             */
> > +          exp_backoff = get_next_backoff (exp_backoff);
> >          }
> >        while (LLL_MUTEX_READ_LOCK (mutex) != 0
> >               || LLL_MUTEX_TRYLOCK (mutex) != 0);
> >
> > diff --git a/sysdeps/nptl/pthreadP.h b/sysdeps/nptl/pthreadP.h
> > index 601db4ff2b..39af275c25 100644
> > --- a/sysdeps/nptl/pthreadP.h
> > +++ b/sysdeps/nptl/pthreadP.h
> > @@ -33,6 +33,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >  #include "pthread_mutex_conf.h"
> >
> >
> > diff --git a/sysdeps/nptl/pthread_mutex_backoff.h b/sysdeps/nptl/pthread_mutex_backoff.h
> > new file mode 100644
> > index 0000000000..5b26c22ac7
> > --- /dev/null
> > +++ b/sysdeps/nptl/pthread_mutex_backoff.h
> > @@ -0,0 +1,35 @@
> > +/* Pthread mutex backoff configuration.
> > +   Copyright (C) 2022 Free Software Foundation, Inc.
> > +   This file is part of the GNU C Library.
> > +
> > +   The GNU C Library is free software; you can redistribute it and/or
> > +   modify it under the terms of the GNU Lesser General Public
> > +   License as published by the Free Software Foundation; either
> > +   version 2.1 of the License, or (at your option) any later version.
> > +
> > +   The GNU C Library is distributed in the hope that it will be useful,
> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > +   Lesser General Public License for more details.
> > +
> > +   You should have received a copy of the GNU Lesser General Public
> > +   License along with the GNU C Library; if not, see
> > +   <https://www.gnu.org/licenses/>.  */
> > +#ifndef _PTHREAD_MUTEX_BACKOFF_H
> > +#define _PTHREAD_MUTEX_BACKOFF_H 1
> > +
> > +static inline unsigned int
> > +get_jitter (void)
> > +{
> > +  /* Arch dependent random jitter; returning 0 disables random jitter.  */
> > +  return 0;
> > +}
> > +
> > +static inline int
> > +get_next_backoff (int backoff)
> > +{
> > +  /* Next backoff; returning 1 disables mutex backoff.
> > +     */
> > +  return 1;
> > +}
> > +
> > +#endif
> > diff --git a/sysdeps/x86_64/nptl/pthread_mutex_backoff.h b/sysdeps/x86_64/nptl/pthread_mutex_backoff.h
> > new file mode 100644
> > index 0000000000..ec74c3d9db
> > --- /dev/null
> > +++ b/sysdeps/x86_64/nptl/pthread_mutex_backoff.h
> > @@ -0,0 +1,39 @@
> > +/* Pthread mutex backoff configuration.
> > +   Copyright (C) 2022 Free Software Foundation, Inc.
> > +   This file is part of the GNU C Library.
> > +
> > +   The GNU C Library is free software; you can redistribute it and/or
> > +   modify it under the terms of the GNU Lesser General Public
> > +   License as published by the Free Software Foundation; either
> > +   version 2.1 of the License, or (at your option) any later version.
> > +
> > +   The GNU C Library is distributed in the hope that it will be useful,
> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > +   Lesser General Public License for more details.
> > +
> > +   You should have received a copy of the GNU Lesser General Public
> > +   License along with the GNU C Library; if not, see
> > +   <https://www.gnu.org/licenses/>.  */
> > +#ifndef _PTHREAD_MUTEX_BACKOFF_H
> > +#define _PTHREAD_MUTEX_BACKOFF_H 1
> > +
> > +#include
> > +
> > +static inline unsigned int
> > +get_jitter (void)
> > +{
> > +  return get_fast_jitter ();
> > +}
> > +
> > +#define MAX_BACKOFF 16
> > +
> > +static inline int
> > +get_next_backoff (int backoff)
> > +{
> > +  /* Binary exponential backoff.  Limiting max backoff
> > +     can reduce latency in large critical sections.  */
> > +  return (backoff < MAX_BACKOFF) ? backoff << 1 : backoff;
> > +}
> > +
> > +#endif
> > --
> > 2.35.1
> >
>
>
> --
> H.J.