From mboxrd@z Thu Jan 1 00:00:00 1970
From: malteskarupke@fastmail.fm
To: libc-alpha@sourceware.org
Cc: Malte Skarupke
Subject: [PATCH v2 9/9] nptl: Use all of g1_start and g_signals
Date: Thu, 4 May 2023 00:55:03 -0400
Message-Id: <20230504045503.83276-10-malteskarupke@fastmail.fm>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230504045503.83276-1-malteskarupke@fastmail.fm>
References: <20230504045503.83276-1-malteskarupke@fastmail.fm>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Malte Skarupke

The LSB of g_signals was unused.  The LSB of g1_start was used to indicate
which group is G2, and that bit was used to make pthread_cond_wait always
go to sleep if a waiter is in G2.  A comment earlier in the file says that
this is not correct to do:

"Waiters cannot determine whether they are currently in G2 or G1 -- but they
do not have to because all they are interested in is whether there are
available signals"

I would either have had to update the comment or get rid of the check, and I
chose to get rid of the check.  In fact I don't quite know why it was there:
there will never be available signals for group G2, so the special case was
not needed, and even if there were, the result would just be a spurious
wake.  The check might have caught some cases where the count has wrapped
around, but it would not do so reliably (and even if it did, why force a
sleep in that case?), and we don't support that many concurrent waiters
anyway.

Getting rid of the check allows us to use one more bit, making the code more
robust against wraparound.
---
 nptl/pthread_cond_broadcast.c |  4 ++--
 nptl/pthread_cond_common.c    | 26 ++++++++++----------------
 nptl/pthread_cond_signal.c    |  2 +-
 nptl/pthread_cond_wait.c      | 14 +++++---------
 4 files changed, 18 insertions(+), 28 deletions(-)

diff --git a/nptl/pthread_cond_broadcast.c b/nptl/pthread_cond_broadcast.c
index a07435589a..ef0943cdc5 100644
--- a/nptl/pthread_cond_broadcast.c
+++ b/nptl/pthread_cond_broadcast.c
@@ -57,7 +57,7 @@ ___pthread_cond_broadcast (pthread_cond_t *cond)
     {
       /* Add as many signals as the remaining size of the group.  */
       atomic_fetch_add_relaxed (cond->__data.__g_signals + g1,
-				cond->__data.__g_size[g1] << 1);
+				cond->__data.__g_size[g1]);
       cond->__data.__g_size[g1] = 0;
 
       /* We need to wake G1 waiters before we switch G1 below.  */
@@ -73,7 +73,7 @@ ___pthread_cond_broadcast (pthread_cond_t *cond)
     {
       /* Step (3): Send signals to all waiters in the old G2 / new G1.  */
       atomic_fetch_add_relaxed (cond->__data.__g_signals + g1,
-				cond->__data.__g_size[g1] << 1);
+				cond->__data.__g_size[g1]);
       cond->__data.__g_size[g1] = 0;
       /* TODO Only set it if there are indeed futex waiters.  */
       do_futex_wake = true;
diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c
index 3baac4dabc..e48f914321 100644
--- a/nptl/pthread_cond_common.c
+++ b/nptl/pthread_cond_common.c
@@ -208,9 +208,9 @@ __condvar_switch_g1 (pthread_cond_t *cond, uint64_t wseq,
      behavior.
      Note that this works correctly for a zero-initialized condvar too.  */
   unsigned int old_orig_size = __condvar_get_orig_size (cond);
-  uint64_t old_g1_start = __condvar_load_g1_start_relaxed (cond) >> 1;
-  if (((unsigned) (wseq - old_g1_start - old_orig_size)
-       + cond->__data.__g_size[g1 ^ 1]) == 0)
+  uint64_t old_g1_start = __condvar_load_g1_start_relaxed (cond);
+  uint64_t new_g1_start = old_g1_start + old_orig_size;
+  if (((unsigned) (wseq - new_g1_start) + cond->__data.__g_size[g1 ^ 1]) == 0)
 	return false;
 
   /* We have to consider the following kinds of waiters:
@@ -221,16 +221,10 @@ __condvar_switch_g1 (pthread_cond_t *cond, uint64_t wseq,
        are not affected.
      * Waiters in G1 have already received a signal and been woken.  */
 
-  /* Update __g1_start, which closes this group.  The value we add will never
-     be negative because old_orig_size can only be zero when we switch groups
-     the first time after a condvar was initialized, in which case G1 will be
-     at index 1 and we will add a value of 1.  Relaxed MO is fine because the
-     change comes with no additional constraints that others would have to
-     observe.  */
-  __condvar_add_g1_start_relaxed (cond,
-      (old_orig_size << 1) + (g1 == 1 ? 1 : - 1));
-
-  unsigned int lowseq = ((old_g1_start + old_orig_size) << 1) & ~1U;
+  /* Update __g1_start, which closes this group.  Relaxed MO is fine because
+     the change comes with no additional constraints that others would have
+     to observe.  */
+  __condvar_add_g1_start_relaxed (cond, old_orig_size);
 
   /* At this point, the old G1 is now a valid new G2 (but not in use yet).
      No old waiter can neither grab a signal nor acquire a reference without
@@ -242,13 +236,13 @@ __condvar_switch_g1 (pthread_cond_t *cond, uint64_t wseq,
   g1 ^= 1;
   *g1index ^= 1;
 
-  /* Now advance the new G1 g_signals to the new lowseq, giving it
+  /* Now advance the new G1 g_signals to the new g1_start, giving it
      an effective signal count of 0 to start.  */
-  atomic_store_release (cond->__data.__g_signals + g1, lowseq);
+  atomic_store_release (cond->__data.__g_signals + g1, (unsigned)new_g1_start);
 
   /* These values are just observed by signalers, and thus protected by the
      lock.  */
-  unsigned int orig_size = wseq - (old_g1_start + old_orig_size);
+  unsigned int orig_size = wseq - new_g1_start;
   __condvar_set_orig_size (cond, orig_size);
   /* Use and addition to not loose track of cancellations in what was
      previously G2.  */
diff --git a/nptl/pthread_cond_signal.c b/nptl/pthread_cond_signal.c
index a9bc10dcca..07427369aa 100644
--- a/nptl/pthread_cond_signal.c
+++ b/nptl/pthread_cond_signal.c
@@ -80,7 +80,7 @@ ___pthread_cond_signal (pthread_cond_t *cond)
	 release-MO store when initializing a group in __condvar_switch_g1
	 because we use an atomic read-modify-write and thus extend that
	 store's release sequence.  */
-      atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, 2);
+      atomic_fetch_add_relaxed (cond->__data.__g_signals + g1, 1);
       cond->__data.__g_size[g1]--;
       /* TODO Only set it if there are indeed futex waiters.  */
       do_futex_wake = true;
diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c
index 346880c5a7..1b6c983150 100644
--- a/nptl/pthread_cond_wait.c
+++ b/nptl/pthread_cond_wait.c
@@ -84,7 +84,7 @@ __condvar_cancel_waiting (pthread_cond_t *cond, uint64_t seq, unsigned int g,
      not hold a reference on the group.  */
   __condvar_acquire_lock (cond, private);
 
-  uint64_t g1_start = __condvar_load_g1_start_relaxed (cond) >> 1;
+  uint64_t g1_start = __condvar_load_g1_start_relaxed (cond);
   if (g1_start > seq)
     {
       /* Our group is closed, so someone provided enough signals for it.
@@ -259,7 +259,6 @@ __condvar_cleanup_waiting (void *arg)
     * Waiters fetch-add while having acquire the mutex associated with the
       condvar.  Signalers load it and fetch-xor it concurrently.
   __g1_start: Starting position of G1 (inclusive)
-     * LSB is index of current G2.
     * Modified by signalers while having acquired the condvar-internal lock
       and observed concurrently by waiters.
   __g1_orig_size: Initial size of G1
@@ -280,11 +279,9 @@ __condvar_cleanup_waiting (void *arg)
     * Reference count used by waiters concurrently with signalers that have
      acquired the condvar-internal lock.
   __g_signals: The number of signals that can still be consumed, relative to
-     the current g1_start.  (i.e. bits 31 to 1 of __g_signals are bits
-     31 to 1 of g1_start with the signal count added)
+     the current g1_start.  (i.e. g1_start with the signal count added)
     * Used as a futex word by waiters.  Used concurrently by waiters and
      signalers.
-     * LSB is currently reserved and 0.
   __g_size: Waiters remaining in this group (i.e., which have not been
     signaled yet.
     * Accessed by signalers and waiters that cancel waiting (both do so only
@@ -391,9 +388,8 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
	 too.  */
       unsigned int signals = atomic_load_acquire (cond->__data.__g_signals + g);
       uint64_t g1_start = __condvar_load_g1_start_relaxed (cond);
-      unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
 
-      if (seq < (g1_start >> 1))
+      if (seq < g1_start)
	{
	  /* If the group is closed already, then this waiter originally had
	     enough extra signals to
@@ -406,13 +402,13 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
	     by now, perhaps in the process of switching back to an older
	     G2, but in either case we're allowed to consume the available
	     signal and should not block anymore.  */
-	  if ((int)(signals - lowseq) >= 2)
+	  if ((int)(signals - (unsigned int)g1_start) > 0)
	    {
	      /* Try to grab a signal.  See above for MO.  (if we do another loop
		 iteration we need to see the correct value of g1_start)  */
	      if (atomic_compare_exchange_weak_acquire (
			cond->__data.__g_signals + g,
-			&signals, signals - 2))
+			&signals, signals - 1))
		break;
	      else
		continue;
-- 
2.34.1
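
As a rough illustration of the encoding change the commit message describes,
here is a minimal standalone C sketch.  The plain variables below merely
stand in for the condvar fields __g1_start and __g_signals; this is not
glibc code, only the bit arithmetic before and after the patch, with made-up
example values:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  /* Illustrative values, standing in for the real condvar state.  */
  uint64_t g1_start = 40;     /* waiter sequence number at which G1 starts */
  unsigned int signals = 3;   /* signals posted to G1 so far */
  unsigned int g2_index = 1;  /* which of the two groups is currently G2 */

  /* Old encoding: the LSB of __g1_start holds the G2 index and the LSB of
     __g_signals is reserved as 0, so the payload lives in the upper bits
     and the signal count is added in steps of 2.  */
  uint64_t old_g1_start_field = (g1_start << 1) | g2_index;
  unsigned int old_g_signals_field =
    ((unsigned int) g1_start << 1) + 2 * signals;
  assert ((old_g1_start_field >> 1) == g1_start);
  assert ((old_g_signals_field >> 1) == (unsigned int) g1_start + signals);

  /* New encoding: both fields use all of their bits.  __g_signals holds the
     low 32 bits of g1_start plus the signal count, so the effective count
     is a plain subtraction and one extra bit of headroom is gained before
     the values wrap around.  */
  uint64_t new_g1_start_field = g1_start;
  unsigned int new_g_signals_field = (unsigned int) g1_start + signals;
  assert ((int) (new_g_signals_field - (unsigned int) new_g1_start_field)
	  == (int) signals);

  puts ("both encodings recover the same signal count");
  return 0;
}

In these terms, the waiter-side test changes from the old
(int)(signals - lowseq) >= 2, i.e. "at least one signal, counted in steps of
two", to (int)(signals - (unsigned int)g1_start) > 0 under the new encoding.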