From mboxrd@z Thu Jan  1 00:00:00 1970
From: Florian Weimer
To: libc-alpha@sourceware.org
Subject: [PATCH v4 32/37] nptl: pthread_mutex_lock, pthread_mutex_unlock single-threaded optimization
Date: Fri, 16 Apr 2021 11:24:00 +0200
Message-Id: <85feebd8764b7b1ad0332ee0cc422765d38fe6c8.1618564630.git.fweimer@redhat.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.2 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain

This optimization is similar in spirit to the SINGLE_THREAD_P check in
the malloc implementation.  Doing this in generic code allows us to
prioritize those cases which are likely to occur in single-threaded
programs (normal and recursive mutexes).

Reviewed-by: Adhemerval Zanella
---
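Not part of the commit message or the patch: a minimal illustration of
the case this fast path targets (the file name and build line are made
up for the example).  A PTHREAD_MUTEX_INITIALIZER mutex is
process-private, and a program that never creates a thread keeps
SINGLE_THREAD_P true, so each lock/unlock below can skip the atomic
lll_lock/lll_unlock and simply store to __data.__lock (absent lock
elision).

/* single-thread-demo.c (illustrative only):
   gcc -O2 -pthread single-thread-demo.c  */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

int
main (void)
{
  for (int i = 0; i < 1000000; ++i)
    {
      /* Private mutex, no threads, __lock == 0: the lock just stores 1.  */
      pthread_mutex_lock (&counter_lock);
      ++counter;
      /* Private and still single-threaded: the unlock just stores 0.  */
      pthread_mutex_unlock (&counter_lock);
    }
  printf ("counter = %ld\n", counter);
  return 0;
}
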
 nptl/pthread_mutex_cond_lock.c |  1 +
 nptl/pthread_mutex_lock.c      | 25 ++++++++++++++++++++++---
 nptl/pthread_mutex_unlock.c    | 17 ++++++++++++++++-
 3 files changed, 39 insertions(+), 4 deletions(-)

diff --git a/nptl/pthread_mutex_cond_lock.c b/nptl/pthread_mutex_cond_lock.c
index 2f0771302f..3386bd689b 100644
--- a/nptl/pthread_mutex_cond_lock.c
+++ b/nptl/pthread_mutex_cond_lock.c
@@ -2,6 +2,7 @@
 
 #define LLL_MUTEX_LOCK(mutex) \
   lll_cond_lock ((mutex)->__data.__lock, PTHREAD_MUTEX_PSHARED (mutex))
+#define LLL_MUTEX_LOCK_OPTIMIZED(mutex) LLL_MUTEX_LOCK (mutex)
 
 /* Not actually elided so far.  Needed?  */
 #define LLL_MUTEX_LOCK_ELISION(mutex) \
diff --git a/nptl/pthread_mutex_lock.c b/nptl/pthread_mutex_lock.c
index f0de7b7fd6..8649a92ffb 100644
--- a/nptl/pthread_mutex_lock.c
+++ b/nptl/pthread_mutex_lock.c
@@ -30,8 +30,27 @@
 /* Some of the following definitions differ when pthread_mutex_cond_lock.c
    includes this file.  */
 #ifndef LLL_MUTEX_LOCK
-# define LLL_MUTEX_LOCK(mutex) \
+/* lll_lock with single-thread optimization.  */
+static inline void
+lll_mutex_lock_optimized (pthread_mutex_t *mutex)
+{
+  /* The single-threaded optimization is only valid for private
+     mutexes.  For process-shared mutexes, the mutex could be in a
+     shared mapping, so synchronization with another process is needed
+     even without any threads.  If the lock is already marked as
+     acquired, POSIX requires that pthread_mutex_lock deadlocks for
+     normal mutexes, so skip the optimization in that case as
+     well.  */
+  int private = PTHREAD_MUTEX_PSHARED (mutex);
+  if (private == LLL_PRIVATE && SINGLE_THREAD_P && mutex->__data.__lock == 0)
+    mutex->__data.__lock = 1;
+  else
+    lll_lock (mutex->__data.__lock, private);
+}
+
+# define LLL_MUTEX_LOCK(mutex) \
   lll_lock ((mutex)->__data.__lock, PTHREAD_MUTEX_PSHARED (mutex))
+# define LLL_MUTEX_LOCK_OPTIMIZED(mutex) lll_mutex_lock_optimized (mutex)
 # define LLL_MUTEX_TRYLOCK(mutex) \
   lll_trylock ((mutex)->__data.__lock)
 # define LLL_ROBUST_MUTEX_LOCK_MODIFIER 0
@@ -64,7 +83,7 @@ __pthread_mutex_lock (pthread_mutex_t *mutex)
       FORCE_ELISION (mutex, goto elision);
     simple:
       /* Normal mutex.  */
-      LLL_MUTEX_LOCK (mutex);
+      LLL_MUTEX_LOCK_OPTIMIZED (mutex);
       assert (mutex->__data.__owner == 0);
     }
 #if ENABLE_ELISION_SUPPORT
@@ -99,7 +118,7 @@ __pthread_mutex_lock (pthread_mutex_t *mutex)
 	}
 
       /* We have to get the mutex.  */
-      LLL_MUTEX_LOCK (mutex);
+      LLL_MUTEX_LOCK_OPTIMIZED (mutex);
 
       assert (mutex->__data.__owner == 0);
       mutex->__data.__count = 1;
diff --git a/nptl/pthread_mutex_unlock.c b/nptl/pthread_mutex_unlock.c
index 3b5ccdacf9..655093ee9a 100644
--- a/nptl/pthread_mutex_unlock.c
+++ b/nptl/pthread_mutex_unlock.c
@@ -28,6 +28,21 @@
 static int
 __pthread_mutex_unlock_full (pthread_mutex_t *mutex, int decr)
      __attribute_noinline__;
+/* lll_lock with single-thread optimization.  */
+static inline void
+lll_mutex_unlock_optimized (pthread_mutex_t *mutex)
+{
+  /* The single-threaded optimization is only valid for private
+     mutexes.  For process-shared mutexes, the mutex could be in a
+     shared mapping, so synchronization with another process is needed
+     even without any threads.  */
+  int private = PTHREAD_MUTEX_PSHARED (mutex);
+  if (private == LLL_PRIVATE && SINGLE_THREAD_P)
+    mutex->__data.__lock = 0;
+  else
+    lll_unlock (mutex->__data.__lock, private);
+}
+
 int
 attribute_hidden
 __pthread_mutex_unlock_usercnt (pthread_mutex_t *mutex, int decr)
@@ -51,7 +66,7 @@ __pthread_mutex_unlock_usercnt (pthread_mutex_t *mutex, int decr)
 	--mutex->__data.__nusers;
 
       /* Unlock.  */
-      lll_unlock (mutex->__data.__lock, PTHREAD_MUTEX_PSHARED (mutex));
+      lll_mutex_unlock_optimized (mutex);
 
       LIBC_PROBE (mutex_release, 1, mutex);
 
-- 
2.30.2
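Also not part of the patch: the converse case, where the optimization is
skipped even in a single-threaded process.  A process-shared mutex can
sit in a mapping visible to another process, so the full atomic
lll_lock/lll_unlock path is still required; the mmap setup below is just
one way to create that situation (file name and build line are made up).

/* pshared-demo.c (illustrative only): gcc -O2 -pthread pshared-demo.c  */
#include <pthread.h>
#include <stdio.h>
#include <sys/mman.h>

int
main (void)
{
  /* Put the mutex where a forked child or another process could map it.  */
  pthread_mutex_t *m = mmap (NULL, sizeof (pthread_mutex_t),
                             PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
  if (m == MAP_FAILED)
    return 1;

  pthread_mutexattr_t attr;
  pthread_mutexattr_init (&attr);
  pthread_mutexattr_setpshared (&attr, PTHREAD_PROCESS_SHARED);
  pthread_mutex_init (m, &attr);
  pthread_mutexattr_destroy (&attr);

  /* PTHREAD_MUTEX_PSHARED (mutex) is not LLL_PRIVATE here, so both calls
     keep using lll_lock/lll_unlock even though SINGLE_THREAD_P is true.  */
  pthread_mutex_lock (m);
  puts ("locked process-shared mutex");
  pthread_mutex_unlock (m);

  pthread_mutex_destroy (m);
  munmap (m, sizeof (pthread_mutex_t));
  return 0;
}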