From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sunil Pandey
To: glibc-cvs@sourceware.org
Subject: [glibc/release/2.34/master] nptl: Add backoff mechanism to spinlock loop
X-Act-Checkin: glibc
X-Git-Author: Wangyang Guo
X-Git-Refname: refs/heads/release/2.34/master
X-Git-Oldrev: 04efdcfac405723c23b25d124817bcfc1697e2d8
X-Git-Newrev: ea69248445fb9b80da02ee0c7261cba4b1a5532e
Message-Id: <20220928160624.C91CA3857832@sourceware.org>
Date: Wed, 28 Sep 2022 16:06:24 +0000 (GMT)
List-Id:

https://sourceware.org/git/gitweb.cgi?p=glibc.git;h=ea69248445fb9b80da02ee0c7261cba4b1a5532e

commit ea69248445fb9b80da02ee0c7261cba4b1a5532e
Author: Wangyang Guo
Date:   Fri May 6 01:50:10 2022 +0000

    nptl: Add backoff mechanism to spinlock loop

    When multiple threads are waiting for a lock at the same time, then as
    soon as the lock owner releases it, all the waiters see the lock become
    available and try to acquire it at once, which can cause an expensive
    CAS storm.

    Binary exponential backoff with random jitter is introduced to mitigate
    this.  As the number of failed try-lock attempts grows, it becomes more
    likely that a larger number of threads are competing for the adaptive
    mutex, so the wait time is increased exponentially.  Random jitter is
    also added so that waiters do not retry their try-lock in lockstep.

    v2: Remove read-check before try-lock for performance.

    v3:
    1. Restore read-check since it works well on some platforms.
    2. Make backoff arch-dependent, and enable it for x86_64.
    3. Limit max backoff to reduce latency in large critical sections.

    v4: Fix strict-prototypes error in sysdeps/nptl/pthread_mutex_backoff.h.

    v5: Commit log updated for regression in large critical sections.
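For illustration, the spin-count computation described above can be modeled by the
following standalone C sketch.  It is not part of the patch: sim_jitter,
SIM_MAX_BACKOFF and the max_cnt value are invented stand-ins for the glibc helpers
(get_jitter, MAX_BACKOFF, max_adaptive_count).

#include <stdio.h>

static unsigned int
sim_jitter (void)
{
  /* Invented stand-in for the arch hook get_jitter ().  Any value works
     for the illustration; real code wants something cheap and
     per-thread random.  */
  return 0x6b43a9b5u;
}

#define SIM_MAX_BACKOFF 16   /* Mirrors MAX_BACKOFF in the x86_64 header.  */

int
main (void)
{
  int exp_backoff = 1;
  int cnt = 0, max_cnt = 100;   /* Invented spin budget.  */
  /* As in the patch, the jitter value is fetched once; as the backoff
     grows, more of its low bits are used.  */
  unsigned int jitter = sim_jitter ();

  while (cnt < max_cnt)
    {
      /* Spin count is the current backoff plus jitter in
         [0, exp_backoff - 1]; exp_backoff is a power of two, so the
         mask extracts exactly that range.  */
      int spin_count = exp_backoff + (jitter & (exp_backoff - 1));
      cnt += spin_count;
      printf ("backoff %2d  spin_count %2d  total spins %3d\n",
              exp_backoff, spin_count, cnt);
      /* Binary exponential growth, capped to bound the added latency.  */
      if (exp_backoff < SIM_MAX_BACKOFF)
        exp_backoff <<= 1;
    }
  return 0;
}

Once the backoff saturates at 16, each pass spins between 16 and 31 times before
re-checking the lock, which bounds the extra latency the backoff can add inside a
large critical section.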
    Result of pthread-mutex-locks bench

    Test Platform: Xeon 8280L (2 sockets, 112 CPUs in total)
    First row: number of threads
    First column: critical-section length
    Values: backoff time relative to upstream (time based, lower is better)

    non-critical-length: 1
          1     2     4     8     16    32    64    112   140
    0     0.99  0.58  0.52  0.49  0.43  0.44  0.46  0.52  0.54
    1     0.98  0.43  0.56  0.50  0.44  0.45  0.50  0.56  0.57
    2     0.99  0.41  0.57  0.51  0.45  0.47  0.48  0.60  0.61
    4     0.99  0.45  0.59  0.53  0.48  0.49  0.52  0.64  0.65
    8     1.00  0.66  0.71  0.63  0.56  0.59  0.66  0.72  0.71
    16    0.97  0.78  0.91  0.73  0.67  0.70  0.79  0.80  0.80
    32    0.95  1.17  0.98  0.87  0.82  0.86  0.89  0.90  0.90
    64    0.96  0.95  1.01  1.01  0.98  1.00  1.03  0.99  0.99
    128   0.99  1.01  1.01  1.17  1.08  1.12  1.02  0.97  1.02

    non-critical-length: 32
          1     2     4     8     16    32    64    112   140
    0     1.03  0.97  0.75  0.65  0.58  0.58  0.56  0.70  0.70
    1     0.94  0.95  0.76  0.65  0.58  0.58  0.61  0.71  0.72
    2     0.97  0.96  0.77  0.66  0.58  0.59  0.62  0.74  0.74
    4     0.99  0.96  0.78  0.66  0.60  0.61  0.66  0.76  0.77
    8     0.99  0.99  0.84  0.70  0.64  0.66  0.71  0.80  0.80
    16    0.98  0.97  0.95  0.76  0.70  0.73  0.81  0.85  0.84
    32    1.04  1.12  1.04  0.89  0.82  0.86  0.93  0.91  0.91
    64    0.99  1.15  1.07  1.00  0.99  1.01  1.05  0.99  0.99
    128   1.00  1.21  1.20  1.22  1.25  1.31  1.12  1.10  0.99

    non-critical-length: 128
          1     2     4     8     16    32    64    112   140
    0     1.02  1.00  0.99  0.67  0.61  0.61  0.61  0.74  0.73
    1     0.95  0.99  1.00  0.68  0.61  0.60  0.60  0.74  0.74
    2     1.00  1.04  1.00  0.68  0.59  0.61  0.65  0.76  0.76
    4     1.00  0.96  0.98  0.70  0.63  0.63  0.67  0.78  0.77
    8     1.01  1.02  0.89  0.73  0.65  0.67  0.71  0.81  0.80
    16    0.99  0.96  0.96  0.79  0.71  0.73  0.80  0.84  0.84
    32    0.99  0.95  1.05  0.89  0.84  0.85  0.94  0.92  0.91
    64    1.00  0.99  1.16  1.04  1.00  1.02  1.06  0.99  0.99
    128   1.00  1.06  0.98  1.14  1.39  1.26  1.08  1.02  0.98

    There is a regression for large critical sections.  But the adaptive
    mutex is aimed at "quick" locks: small critical sections are the
    common case when users choose an adaptive pthread_mutex.

    Signed-off-by: Wangyang Guo
    Reviewed-by: H.J. Lu

    (cherry picked from commit 8162147872491bb5b48e91543b19c49a29ae6b6d)

Diff:
---
 nptl/pthread_mutex_lock.c                   | 16 ++++++++++--
 sysdeps/nptl/pthreadP.h                     |  1 +
 sysdeps/nptl/pthread_mutex_backoff.h        | 35 ++++++++++++++++++++++++++
 sysdeps/x86_64/nptl/pthread_mutex_backoff.h | 39 +++++++++++++++++++++++++++++
 4 files changed, 89 insertions(+), 2 deletions(-)

diff --git a/nptl/pthread_mutex_lock.c b/nptl/pthread_mutex_lock.c
index bbe754a272..8f3f687f2a 100644
--- a/nptl/pthread_mutex_lock.c
+++ b/nptl/pthread_mutex_lock.c
@@ -139,14 +139,26 @@ PTHREAD_MUTEX_LOCK (pthread_mutex_t *mutex)
           int cnt = 0;
           int max_cnt = MIN (max_adaptive_count (),
                              mutex->__data.__spins * 2 + 10);
+          int spin_count, exp_backoff = 1;
+          unsigned int jitter = get_jitter ();
           do
             {
-              if (cnt++ >= max_cnt)
+              /* In each loop, spin count is exponential backoff plus
+                 random jitter, random range is [0, exp_backoff-1].  */
+              spin_count = exp_backoff + (jitter & (exp_backoff - 1));
+              cnt += spin_count;
+              if (cnt >= max_cnt)
                 {
+                  /* If cnt exceeds max spin count, just go to wait
+                     queue.  */
                   LLL_MUTEX_LOCK (mutex);
                   break;
                 }
-              atomic_spin_nop ();
+              do
+                atomic_spin_nop ();
+              while (--spin_count > 0);
+              /* Prepare for next loop.  */
+              exp_backoff = get_next_backoff (exp_backoff);
             }
           while (LLL_MUTEX_READ_LOCK (mutex) != 0
                  || LLL_MUTEX_TRYLOCK (mutex) != 0);
diff --git a/sysdeps/nptl/pthreadP.h b/sysdeps/nptl/pthreadP.h
index b968afc4c6..ed186ce3df 100644
--- a/sysdeps/nptl/pthreadP.h
+++ b/sysdeps/nptl/pthreadP.h
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include <pthread_mutex_backoff.h>
 #include "pthread_mutex_conf.h"
diff --git a/sysdeps/nptl/pthread_mutex_backoff.h b/sysdeps/nptl/pthread_mutex_backoff.h
new file mode 100644
index 0000000000..5b26c22ac7
--- /dev/null
+++ b/sysdeps/nptl/pthread_mutex_backoff.h
@@ -0,0 +1,35 @@
+/* Pthread mutex backoff configuration.
+   Copyright (C) 2022 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+#ifndef _PTHREAD_MUTEX_BACKOFF_H
+#define _PTHREAD_MUTEX_BACKOFF_H 1
+
+static inline unsigned int
+get_jitter (void)
+{
+  /* Arch dependent random jitter, return 0 disables random.  */
+  return 0;
+}
+
+static inline int
+get_next_backoff (int backoff)
+{
+  /* Next backoff, return 1 disables mutex backoff.  */
+  return 1;
+}
+
+#endif
diff --git a/sysdeps/x86_64/nptl/pthread_mutex_backoff.h b/sysdeps/x86_64/nptl/pthread_mutex_backoff.h
new file mode 100644
index 0000000000..ec74c3d9db
--- /dev/null
+++ b/sysdeps/x86_64/nptl/pthread_mutex_backoff.h
@@ -0,0 +1,39 @@
+/* Pthread mutex backoff configuration.
+   Copyright (C) 2022 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+#ifndef _PTHREAD_MUTEX_BACKOFF_H
+#define _PTHREAD_MUTEX_BACKOFF_H 1
+
+#include <fast-jitter.h>
+
+static inline unsigned int
+get_jitter (void)
+{
+  return get_fast_jitter ();
+}
+
+#define MAX_BACKOFF 16
+
+static inline int
+get_next_backoff (int backoff)
+{
+  /* Binary exponential backoff.  Limiting max backoff
+     can reduce latency in large critical section.  */
+  return (backoff < MAX_BACKOFF) ? backoff << 1 : backoff;
+}
+
+#endif
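To show how hooks of this shape compose with the spin loop outside of glibc, here
is a hedged standalone sketch using C11 atomics and an ordinary atomic int as the
lock.  demo_jitter, demo_next_backoff and demo_adaptive_lock are invented names;
the atomic_exchange fallback and the signal fence only approximate what
LLL_MUTEX_LOCK and atomic_spin_nop () do in the patched code.

#include <stdatomic.h>

/* Invented stand-ins for the generic/arch hooks shown above.  */
static unsigned int demo_jitter (void) { return 0x5bd1e995u; }
static int demo_next_backoff (int b) { return b < 16 ? b << 1 : b; }

/* Acquire a simple spinlock (0 = free, 1 = held) with the same
   backoff-and-recheck structure as the patched adaptive path.  */
static void
demo_adaptive_lock (atomic_int *lock, int max_cnt)
{
  int cnt = 0, exp_backoff = 1;
  unsigned int jitter = demo_jitter ();

  do
    {
      int spin_count = exp_backoff + (jitter & (exp_backoff - 1));
      cnt += spin_count;
      if (cnt >= max_cnt)
        {
          /* Spin budget exhausted: fall back to the slow path (the
             patch calls LLL_MUTEX_LOCK here, which can block).  */
          while (atomic_exchange_explicit (lock, 1, memory_order_acquire) != 0)
            ;
          return;
        }
      /* Busy-wait for the backed-off spin count; the fence is only a
         placeholder for the pause hint atomic_spin_nop () provides.  */
      while (spin_count-- > 0)
        atomic_signal_fence (memory_order_relaxed);
      exp_backoff = demo_next_backoff (exp_backoff);
    }
  /* Cheap relaxed read before the expensive exchange
     (test-and-test-and-set), as in the patched loop condition.  */
  while (atomic_load_explicit (lock, memory_order_relaxed) != 0
         || atomic_exchange_explicit (lock, 1, memory_order_acquire) != 0);
}

int
main (void)
{
  atomic_int lock = 0;
  demo_adaptive_lock (&lock, 100);   /* Uncontended: acquires immediately.  */
  atomic_store (&lock, 0);           /* Release.  */
  return 0;
}

The cheap read before the exchange mirrors the LLL_MUTEX_READ_LOCK read-check that
v3 of the patch restored: waiters avoid hammering the lock's cache line with failed
atomic read-modify-write operations while the lock is still held.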