From mboxrd@z Thu Jan 1 00:00:00 1970
From: malteskarupke@fastmail.fm
To: libc-alpha@sourceware.org
Cc: Malte Skarupke
Subject: [PATCH v4 4/9] nptl: Remove unnecessary quadruple check in pthread_cond_wait
Date: Tue, 9 May 2023 13:55:53 -0400
Message-Id: <20230509175558.10014-5-malteskarupke@fastmail.fm>
In-Reply-To: <20230509175558.10014-1-malteskarupke@fastmail.fm>
References: <20230509175558.10014-1-malteskarupke@fastmail.fm>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Malte Skarupke

pthread_cond_wait was checking whether it was in a closed group no
less than four times. Checking once is enough. Here are the four
checks:

1. While spin-waiting.
This was dead code: maxspin is set to 0 and has been for years.

2. Before deciding to go to sleep, and before incrementing grefs:
I kept this one.

3. After incrementing grefs. There is no reason to think that the
group would close while we do an atomic increment. Obviously it
could close at any point, but that doesn't mean we have to recheck
after every step. This check was as good as check 2, except that it
has to do more work.

4. When we find ourselves in a group that has a signal. We only get
here after we check that we're not in a closed group. There is no
need to check again. The check would only have helped in cases
where the compare_exchange in the next line would also have failed.
Relying on the compare_exchange is fine.

Removing the duplicate checks clarifies the code.

Signed-off-by: Malte Skarupke
---
 nptl/pthread_cond_wait.c | 49 ----------------------------------------
 1 file changed, 49 deletions(-)

diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c
index cee1968756..47e834cade 100644
--- a/nptl/pthread_cond_wait.c
+++ b/nptl/pthread_cond_wait.c
@@ -366,7 +366,6 @@ static __always_inline int
 __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
     clockid_t clockid, const struct __timespec64 *abstime)
 {
-  const int maxspin = 0;
   int err;
   int result = 0;
 
@@ -425,33 +424,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
       uint64_t g1_start = __condvar_load_g1_start_relaxed (cond);
       unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
 
-      /* Spin-wait first.
-         Note that spinning first without checking whether a timeout
-         passed might lead to what looks like a spurious wake-up even
-         though we should return ETIMEDOUT (e.g., if the caller provides
-         an absolute timeout that is clearly in the past).  However,
-         (1) spurious wake-ups are allowed, (2) it seems unlikely that a
-         user will (ab)use pthread_cond_wait as a check for whether a
-         point in time is in the past, and (3) spinning first without
-         having to compare against the current time seems to be the right
-         choice from a performance perspective for most use cases.  */
-      unsigned int spin = maxspin;
-      while (spin > 0 && ((int)(signals - lowseq) < 2))
-        {
-          /* Check that we are not spinning on a group that's already
-             closed.  */
-          if (seq < (g1_start >> 1))
-            break;
-
-          /* TODO Back off.  */
-
-          /* Reload signals.  See above for MO.  */
-          signals = atomic_load_acquire (cond->__data.__g_signals + g);
-          g1_start = __condvar_load_g1_start_relaxed (cond);
-          lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
-          spin--;
-        }
-
       if (seq < (g1_start >> 1))
         {
           /* If the group is closed already,
@@ -482,24 +454,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
          an atomic read-modify-write operation and thus extend the release
          sequence.  */
       atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2);
-      signals = atomic_load_acquire (cond->__data.__g_signals + g);
-      g1_start = __condvar_load_g1_start_relaxed (cond);
-      lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
-
-      if (seq < (g1_start >> 1))
-        {
-          /* group is closed already, so don't block */
-          __condvar_dec_grefs (cond, g, private);
-          goto done;
-        }
-
-      if ((int)(signals - lowseq) >= 2)
-        {
-          /* a signal showed up or G1/G2 switched after we grabbed the
-             refcount */
-          __condvar_dec_grefs (cond, g, private);
-          break;
-        }
 
       // Now block.
       struct _pthread_cleanup_buffer buffer;
@@ -533,9 +487,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
           /* Reload signals.  See above for MO.  */
           signals = atomic_load_acquire (cond->__data.__g_signals + g);
         }
-
-      if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1))
-        goto done;
     }
 
   /* Try to grab a signal.  See above for MO.  (if we do another loop iteration we need to see the correct value of g1_start)  */
-- 
2.34.1