From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
To: Florian Weimer <fweimer@redhat.com>
Cc: "Peter Zijlstra" <peterz@infradead.org>,
linux-kernel@vger.kernel.org,
"Thomas Gleixner" <tglx@linutronix.de>,
"Paul E . McKenney" <paulmck@kernel.org>,
"Boqun Feng" <boqun.feng@gmail.com>,
"H . Peter Anvin" <hpa@zytor.com>, "Paul Turner" <pjt@google.com>,
linux-api@vger.kernel.org,
"Christian Brauner" <brauner@kernel.org>,
David.Laight@ACULAB.COM, carlos@redhat.com,
"Peter Oskolkov" <posk@posk.io>,
"Alexander Mikhalitsyn" <alexander@mihalicyn.com>,
"Chris Kennelly" <ckennelly@google.com>,
"Ingo Molnar" <mingo@redhat.com>,
"Darren Hart" <dvhart@infradead.org>,
"Davidlohr Bueso" <dave@stgolabs.net>,
"André Almeida" <andrealmeid@igalia.com>,
libc-alpha@sourceware.org, "Steven Rostedt" <rostedt@goodmis.org>,
"Jonathan Corbet" <corbet@lwn.net>,
"Noah Goldstein" <goldstein.w.n@gmail.com>,
longman@redhat.com
Subject: Re: [RFC PATCH v2 1/4] rseq: Add sched_state field to struct rseq
Date: Tue, 30 May 2023 10:25:57 -0400 [thread overview]
Message-ID: <b0416e8c-9b8f-9b25-dd0c-3b7882e5746f@efficios.com> (raw)
In-Reply-To: <87sfbew7ra.fsf@oldenburg.str.redhat.com>
On 5/30/23 04:20, Florian Weimer wrote:
> * Mathieu Desnoyers:
>
>>> I don't see why we can't stick this directly into struct rseq because
>>> it's all public anyway.
>>
>> The motivation for moving this to a different cache line is to handle
>> the prior comment from Boqun, who is concerned that busy-waiting
>> repeatedly loading a field from struct rseq will cause false-sharing
>> and make other stores to that cache line slower, especially stores to
>> rseq_cs to begin rseq critical sections, thus slightly increasing the
>> overhead of rseq critical sections taken while mutexes are held.
>
> Hmm. For context, in glibc, we have to place struct rseq on a fixed
> offset from the start of a page (or even some larger alignment) for all
> threads. In the future (once we move the thread control block off the
> top of the userspace stack, where it resides since the LinuxThreads
> days), it is likely that the pointer difference between different
> threads will also be a multiple of a fairly large power of two
> (something like 2**20 might be common). Maybe this will make caching
> even more difficult?
>
>> If we want to embed this field into struct rseq with its own cache
>> line, then we need to add a lot of padding, which is inconvenient.
>>
>> That being said, perhaps this is premature optimization, what do you
>> think ?
>
> Maybe? I don't know what the access patterns will look like. But I
> suspect that once we hit this case, performance will not be great
> anyway, so the optimization is perhaps unnecessary?
What I dislike, though, is that contention for any lock that busy-waits
on the rseq sched_state would slow down all rseq critical sections of
that thread, a side effect we want to avoid.
I've done some additional benchmarks on my 8-core AMD laptop, and I
notice that things get especially bad whenever the store to
rseq_abi->rseq_cs is surrounded by other instructions that need to be
ordered with that store, e.g. a for loop doing 10 stores to a local
variable. If it is surrounded by instructions that don't need to be
ordered with respect to that store (e.g. a for loop of 10 iterations
issuing barrier() "memory" asm clobbers), then the overhead is no longer
noticeable.
>
> The challenge is that once we put stuff at fixed offsets, we can't
> transparently fix it later. It would need more auxv entries with
> further offsets, or accessing this data through some indirection,
> perhaps via vDSO helpers.
Perhaps this is more flexibility/complexity than we really need. One
possible approach would be to split struct rseq into sub-structures, e.g.:

rseq_len = overall size of all sub-structures.

auxv AT_RSEQ_ALIGN = 256
auxv AT_RSEQ_FEATURE_SIZE = size of the first portion of struct rseq,
                            at most 256 bytes, meant to contain fields
                            stored/loaded by the thread doing the
                            registration.
auxv AT_RSEQ_SHARED_FEATURE_SIZE = size of the second portion of struct
                            rseq, starting at offset 256, at most 256
                            bytes, meant to contain fields stored/loaded
                            by any thread.
Then we have this layout:
struct rseq {
	struct rseq_local {
		/* Fields accessed only from the local thread. */
	} __attribute__((aligned(256)));
	struct rseq_shared {
		/* Fields stored/loaded by any thread. */
	} __attribute__((aligned(256)));
} __attribute__((aligned(256)));
And if someday AT_RSEQ_FEATURE_SIZE needs to grow over 256 bytes
(32 * u64), we can still extend with a new auxv entry after the "shared"
features.
>
>>> The TID field would be useful in its own right.
>>
>> Indeed, good point.
>>
>> While we are there, I wonder if we should use the thread_pointer() as
>> lock identifier, or if the address of struct rseq is fine ?
>
> Hard to tell until we see what the futex integration looks like, I
> think.
Good point. I can choose one way or another for the prototype, and then
we'll see how things go with futex integration.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
Thread overview: 34+ messages
2023-05-29 19:14 [RFC PATCH v2 0/4] Extend rseq with sched_state_ptr field Mathieu Desnoyers
2023-05-29 19:14 ` [RFC PATCH v2 1/4] rseq: Add sched_state field to struct rseq Mathieu Desnoyers
2023-05-29 19:35 ` Florian Weimer
2023-05-29 19:48 ` Mathieu Desnoyers
2023-05-30 8:20 ` Florian Weimer
2023-05-30 14:25 ` Mathieu Desnoyers [this message]
2023-05-30 15:13 ` Mathieu Desnoyers
2023-09-26 20:52 ` Dmitry Vyukov
2023-09-26 23:49 ` Dmitry Vyukov
2023-09-26 23:54 ` Dmitry Vyukov
2023-09-27 4:51 ` Florian Weimer
2023-09-27 15:58 ` Dmitry Vyukov
2023-09-28 8:52 ` Florian Weimer
2023-09-28 14:44 ` Dmitry Vyukov
2023-09-28 14:47 ` Dmitry Vyukov
2023-09-28 10:39 ` Peter Zijlstra
2023-09-28 11:22 ` David Laight
2023-09-28 13:20 ` Mathieu Desnoyers
2023-09-28 14:26 ` Peter Zijlstra
2023-09-28 14:33 ` David Laight
2023-09-28 15:05 ` André Almeida
2023-09-28 14:43 ` Steven Rostedt
2023-09-28 15:51 ` David Laight
2023-10-02 16:51 ` Steven Rostedt
2023-10-02 17:22 ` David Laight
2023-10-02 17:56 ` Steven Rostedt
2023-09-28 20:21 ` Thomas Gleixner
2023-09-28 20:43 ` Mathieu Desnoyers
2023-09-28 20:54 ` Thomas Gleixner
2023-09-28 22:11 ` Mathieu Desnoyers
2023-05-29 19:14 ` [RFC PATCH v2 2/4] selftests/rseq: Add sched_state rseq field and getter Mathieu Desnoyers
2023-05-29 19:14 ` [RFC PATCH v2 3/4] selftests/rseq: Implement sched state test program Mathieu Desnoyers
2023-05-29 19:14 ` [RFC PATCH v2 4/4] selftests/rseq: Implement rseq_mutex " Mathieu Desnoyers
2023-09-28 19:55 ` Thomas Gleixner