From: malteskarupke@fastmail.fm
To: libc-alpha@sourceware.org
Cc: Malte Skarupke
Subject: [PATCH v4 2/9] nptl: Update comments and indentation for new condvar implementation
Date: Tue, 9 May 2023 13:55:51 -0400
Message-Id: <20230509175558.10014-3-malteskarupke@fastmail.fm>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230509175558.10014-1-malteskarupke@fastmail.fm>
References: <20230509175558.10014-1-malteskarupke@fastmail.fm>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: Malte Skarupke

Some comments were no longer accurate after the most recent commit. This
fixes them. Also fix indentation where spaces were used instead of tabs.

Signed-off-by: Malte Skarupke
---
 nptl/pthread_cond_common.c |  5 +++--
 nptl/pthread_cond_wait.c   | 39 +++++++++++++++++++-------------------
 2 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c
index a55eee3e6b..350a16fab2 100644
--- a/nptl/pthread_cond_common.c
+++ b/nptl/pthread_cond_common.c
@@ -221,8 +221,9 @@ __condvar_quiesce_and_switch_g1 (pthread_cond_t *cond, uint64_t wseq,
    * New waiters arriving concurrently with the group switching will all go
      into G2 until we atomically make the switch.  Waiters existing in G2
      are not affected.
-   * Waiters in G1 will be closed out immediately by the advancing of
-     __g_signals to the next "lowseq" (low 31 bits of the new g1_start),
+   * Waiters in G1 have already received a signal and been woken.  If they
+     haven't woken yet, they will be closed out immediately by the advancing
+     of __g_signals to the next "lowseq" (low 31 bits of the new g1_start),
      which will prevent waiters from blocking using a futex on
      __g_signals since it provides enough signals for all possible
      remaining waiters.  As a result, they can each consume a signal
diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c
index 1cb3dbf7b0..cee1968756 100644
--- a/nptl/pthread_cond_wait.c
+++ b/nptl/pthread_cond_wait.c
@@ -249,7 +249,7 @@ __condvar_cleanup_waiting (void *arg)
    figure out whether they are in a group that has already been completely
    signaled (i.e., if the current G1 starts at a later position that the
    waiter's position).  Waiters cannot determine whether they are currently
-   in G2 or G1 -- but they do not have too because all they are interested in
+   in G2 or G1 -- but they do not have to because all they are interested in
    is whether there are available signals, and they always start in G2 (whose
    group slot they know because of the bit in the waiter sequence.  Signalers
    will simply fill the right group until it is completely signaled and can
@@ -412,7 +412,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
     }
 
   /* Now wait until a signal is available in our group or it is closed.
-     Acquire MO so that if we observe a value of zero written after group
+     Acquire MO so that if we observe (signals == lowseq) after group
     switching in __condvar_quiesce_and_switch_g1, we synchronize with that
     store and will see the prior update of __g1_start done while switching
     groups too.  */
@@ -422,8 +422,8 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
     {
       while (1)
 	{
-          uint64_t g1_start = __condvar_load_g1_start_relaxed (cond);
-          unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
+	  uint64_t g1_start = __condvar_load_g1_start_relaxed (cond);
+	  unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
 
 	  /* Spin-wait first.
 	     Note that spinning first without checking whether a timeout
@@ -447,21 +447,21 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
 
 	      /* Reload signals.  See above for MO.  */
 	      signals = atomic_load_acquire (cond->__data.__g_signals + g);
-	      g1_start = __condvar_load_g1_start_relaxed (cond);
-	      lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
+	      g1_start = __condvar_load_g1_start_relaxed (cond);
+	      lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
 	      spin--;
 	    }
 
-	  if (seq < (g1_start >> 1))
+	  if (seq < (g1_start >> 1))
 	    {
-	      /* If the group is closed already,
+	      /* If the group is closed already,
 	         then this waiter originally had enough extra signals to
 	         consume, up until the time its group was closed.  */
 	       goto done;
-	    }
+	    }
 
 	  /* If there is an available signal, don't block.
-	     If __g1_start has advanced at all, then we must be in G1
+	     If __g1_start has advanced at all, then we must be in G1
 	     by now, perhaps in the process of switching back to an older
 	     G2, but in either case we're allowed to consume the available
 	     signal and should not block anymore.  */
@@ -483,22 +483,23 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
 	     sequence.  */
 	  atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2);
 	  signals = atomic_load_acquire (cond->__data.__g_signals + g);
-	  g1_start = __condvar_load_g1_start_relaxed (cond);
-	  lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
+	  g1_start = __condvar_load_g1_start_relaxed (cond);
+	  lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
 
-	  if (seq < (g1_start >> 1))
+	  if (seq < (g1_start >> 1))
 	    {
-	      /* group is closed already, so don't block */
+	      /* group is closed already, so don't block */
 	      __condvar_dec_grefs (cond, g, private);
 	      goto done;
 	    }
 
 	  if ((int)(signals - lowseq) >= 2)
 	    {
-	      /* a signal showed up or G1/G2 switched after we grabbed the refcount */
+	      /* a signal showed up or G1/G2 switched after we grabbed the
+		 refcount */
 	      __condvar_dec_grefs (cond, g, private);
 	      break;
-	    }
+	    }
 
 	  // Now block.
 	  struct _pthread_cleanup_buffer buffer;
@@ -536,10 +537,8 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
 	  if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1))
 	    goto done;
 	}
-      /* Try to grab a signal.  Use acquire MO so that we see an up-to-date value
-         of __g1_start below (see spinning above for a similar case).  In
-         particular, if we steal from a more recent group, we will also see a
-         more recent __g1_start below.  */
+      /* Try to grab a signal.  See above for MO.  (if we do another loop
+	 iteration we need to see the correct value of g1_start) */
       while (!atomic_compare_exchange_weak_acquire (cond->__data.__g_signals + g,
 						&signals, signals - 2));
-- 
2.34.1
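
[Editorial note, not part of the patch: for readers following the comments
without the full glibc tree at hand, below is a minimal standalone C sketch
of the "lowseq" / group-closure test that the updated comments describe.
The helper name group_has_signal_or_is_closed and the literal values passed
in main are made up for illustration; the real implementation keeps these
fields inside pthread_cond_t and uses the atomic loads shown in the diff.]

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: mirrors the checks discussed in the patch comments.
   g1_start's low bit indexes the current G2 group slot, g is the waiter's
   group slot, seq is the waiter's position in the waiter sequence, and
   signals is the current value of the waiter's __g_signals slot.  */
static int
group_has_signal_or_is_closed (uint64_t g1_start, unsigned int signals,
                               unsigned int g, uint64_t seq)
{
  /* If our slot g is currently G2, no close-out threshold applies yet, so
     use the signal count itself; otherwise the threshold is the low bits
     of g1_start with the group-index bit cleared ("lowseq").  */
  unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;

  /* Group already closed: the waiter's position precedes the start of the
     current G1, so it must not block.  */
  if (seq < (g1_start >> 1))
    return 1;

  /* Signals are counted in increments of 2, so a difference of at least 2
     above lowseq means a signal is available to consume.  */
  return (int) (signals - lowseq) >= 2;
}

int
main (void)
{
  /* Made-up values, only to exercise the helper.  */
  printf ("%d\n", group_has_signal_or_is_closed (8, 8, 1, 3));   /* closed */
  printf ("%d\n", group_has_signal_or_is_closed (8, 10, 1, 5));  /* signal */
  printf ("%d\n", group_has_signal_or_is_closed (8, 8, 1, 5));   /* block */
  return 0;
}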