From: David Laight
To: 'Steven Rostedt', Peter Zijlstra
CC: Mathieu Desnoyers, linux-kernel@vger.kernel.org, Thomas Gleixner,
 Paul E. McKenney, Boqun Feng, H. Peter Anvin, Paul Turner,
 linux-api@vger.kernel.org, Christian Brauner, Florian Weimer,
 carlos@redhat.com, Peter Oskolkov, Alexander Mikhalitsyn,
 Chris Kennelly, Ingo Molnar, Darren Hart, Davidlohr Bueso,
 André Almeida, libc-alpha@sourceware.org, Jonathan Corbet,
 Noah Goldstein, Daniel Colascione, longman@redhat.com
Subject: RE: [RFC PATCH v2 1/4] rseq: Add sched_state field to struct rseq
Date: Thu, 28 Sep 2023 15:51:47 +0000
Message-ID: <40b76cbd00d640e49f727abbd0c39693@AcuMS.aculab.com>
In-Reply-To: <20230928104321.490782a7@rorschach.local.home>
References: <20230529191416.53955-1-mathieu.desnoyers@efficios.com> <20230529191416.53955-2-mathieu.desnoyers@efficios.com> <20230928103926.GI9829@noisy.programming.kicks-ass.net> <20230928104321.490782a7@rorschach.local.home>

From: Steven Rostedt
> Sent: 28 September 2023 15:43
> 
> On Thu, 28 Sep 2023 12:39:26 +0200
> Peter Zijlstra wrote:
> 
> > As always, are syscalls really *that* expensive? Why can't we busy wait
> > in the kernel instead?
> 
> Yes, syscalls are that expensive. Several years ago I had a good talk
> with Robert Haas (one of the PostgreSQL maintainers) at Linux Plumbers,
> and I asked him if they used futexes. His answer was "no". He told me
> how they ran several benchmarks and it was a huge performance hit (and
> this was before Spectre/Meltdown made things much worse). He explained
> to me that most locks are taken just to flip a few bits. Going into the
> kernel and coming back took orders of magnitude longer than the
> critical sections. Going into the kernel caused a ripple effect and led
> to even more contention. Their answer was to implement their locking
> completely in user space without any help from the kernel.

That matches what I found with code that was using a mutex to take
work items off a global list. Although the mutex was only held for a
few instructions (probably several hundred, because the list wasn't
that well written), as soon as there was any contention (which might
start with a hardware interrupt) performance went through the floor.
The fix was to replace the linked list with an array and use atomic
add to 'grab' blocks of entries. (Even the atomic operations slowed
things down.)
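Something along these lines - a minimal sketch only, not the original
code; the struct, the names and the batch size are all made up:

	#include <stdatomic.h>
	#include <stddef.h>

	#define BATCH 8			/* entries claimed per atomic op */

	struct work_queue {
		struct work_item *items;	/* pre-filled array of work */
		size_t count;			/* number of valid entries */
		atomic_size_t next;		/* next unclaimed index */
	};

	/*
	 * Claim up to BATCH entries with a single atomic add.
	 * Returns the number claimed; *first is the index of the
	 * first claimed entry.  Contention costs one atomic RMW
	 * instead of a mutex round trip (or a syscall).
	 */
	static size_t grab_entries(struct work_queue *q, size_t *first)
	{
		size_t start = atomic_fetch_add(&q->next, BATCH);

		if (start >= q->count)
			return 0;		/* queue exhausted */
		*first = start;
		return q->count - start < BATCH ? q->count - start : BATCH;
	}

Each worker then processes items[*first .. *first + n) privately, so
the shared state is touched once per batch rather than once per item.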
> This is when I thought that having an adaptive spinner that could get
> hints from the kernel via memory mapping would be extremely useful.

Did you consider writing a timestamp into the mutex when it was
acquired - or even using it as the 'acquired' value? A 'moderately
synched' TSC should do. The waiter could then tell how long the mutex
has been held, and not spin at all if it has already been held for
ages.

> The obvious problem with their implementation is that if the owner is
> sleeping, there's no point in spinning. Worse, the owner may even be
> waiting for the spinner to get off the CPU before it can run again. But
> according to Robert, the gain in general performance greatly
> outweighed the few times this happened in practice.

Unless you can use atomics (fine for bits and linked lists) you always
have the problem that userspace can't disable interrupts.
So, unlike the kernel, you can't implement a proper spinlock.

I've NFI how CONFIG_PREEMPT_RT manages to get anything done with all
the spinlocks replaced by sleep locks.
Clearly there are spinlocks that are held for far too long.
But you really do want to spin most of the time.

...

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)