From mboxrd@z Thu Jan 1 00:00:00 1970
From: malteskarupke@fastmail.fm
To: libc-alpha@sourceware.org
Cc: Malte Skarupke
Subject: [PATCH v2 4/9] nptl: Remove unnecessary quadruple check in pthread_cond_wait
Date: Thu, 4 May 2023 00:54:58 -0400
Message-Id: <20230504045503.83276-5-malteskarupke@fastmail.fm>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230504045503.83276-1-malteskarupke@fastmail.fm>
References: <20230504045503.83276-1-malteskarupke@fastmail.fm>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Malte Skarupke

pthread_cond_wait was checking whether it was in a closed group no less
than four times. Checking once is enough. Here are the four checks:

1. While spin-waiting.
This was dead code: maxspin is set to 0 and has been for years.

2. Before deciding to go to sleep, and before incrementing grefs: I kept
this one.

3. After incrementing grefs. There is no reason to think that the group
would close while we do an atomic increment. Obviously it could close at
any point, but that doesn't mean we have to recheck after every step.
This check was equally good as check 2, except it has to do more work.

4. When we find ourselves in a group that has a signal. We only get here
after we check that we're not in a closed group. There is no need to
check again. The check would only have helped in cases where the
compare_exchange in the next line would also have failed. Relying on the
compare_exchange is fine.

Removing the duplicate checks clarifies the code.
---
 nptl/pthread_cond_wait.c | 49 ----------------------------------------
 1 file changed, 49 deletions(-)

diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c
index cee1968756..47e834cade 100644
--- a/nptl/pthread_cond_wait.c
+++ b/nptl/pthread_cond_wait.c
@@ -366,7 +366,6 @@ static __always_inline int
 __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
     clockid_t clockid, const struct __timespec64 *abstime)
 {
-  const int maxspin = 0;
   int err;
   int result = 0;
@@ -425,33 +424,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
       uint64_t g1_start = __condvar_load_g1_start_relaxed (cond);
       unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;

-      /* Spin-wait first.
-         Note that spinning first without checking whether a timeout
-         passed might lead to what looks like a spurious wake-up even
-         though we should return ETIMEDOUT (e.g., if the caller provides
-         an absolute timeout that is clearly in the past).  However,
-         (1) spurious wake-ups are allowed, (2) it seems unlikely that a
-         user will (ab)use pthread_cond_wait as a check for whether a
-         point in time is in the past, and (3) spinning first without
-         having to compare against the current time seems to be the right
-         choice from a performance perspective for most use cases.  */
-      unsigned int spin = maxspin;
-      while (spin > 0 && ((int)(signals - lowseq) < 2))
-        {
-          /* Check that we are not spinning on a group that's already
-             closed.  */
-          if (seq < (g1_start >> 1))
-            break;
-
-          /* TODO Back off.  */
-
-          /* Reload signals.  See above for MO.  */
-          signals = atomic_load_acquire (cond->__data.__g_signals + g);
-          g1_start = __condvar_load_g1_start_relaxed (cond);
-          lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
-          spin--;
-        }
-
       if (seq < (g1_start >> 1))
         {
           /* If the group is closed already,
@@ -482,24 +454,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
             an atomic read-modify-write operation and thus extend the release
             sequence.  */
          atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2);
-         signals = atomic_load_acquire (cond->__data.__g_signals + g);
-         g1_start = __condvar_load_g1_start_relaxed (cond);
-         lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
-
-         if (seq < (g1_start >> 1))
-           {
-             /* group is closed already, so don't block */
-             __condvar_dec_grefs (cond, g, private);
-             goto done;
-           }
-
-         if ((int)(signals - lowseq) >= 2)
-           {
-             /* a signal showed up or G1/G2 switched after we grabbed the
-                refcount */
-             __condvar_dec_grefs (cond, g, private);
-             break;
-           }

          // Now block.
          struct _pthread_cleanup_buffer buffer;
@@ -533,9 +487,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
          /* Reload signals.  See above for MO.  */
          signals = atomic_load_acquire (cond->__data.__g_signals + g);
        }
-
-      if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1))
-        goto done;
     }

   /* Try to grab a signal.  See above for MO.
      (if we do another loop iteration we need to see the correct value of
      g1_start) */
--
2.34.1