From: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
To: Evan Green <evan@rivosinc.com>
Cc: Palmer Dabbelt <palmer@rivosinc.com>,
	libc-alpha@sourceware.org, slewis@rivosinc.com,
	Vineet Gupta <vineetg@rivosinc.com>,
	Arnd Bergmann <arnd@arndb.de>
Subject: Re: [PATCH v2 0/3] RISC-V: ifunced memcpy using new kernel hwprobe interface
Date: Thu, 30 Mar 2023 16:43:34 -0300	[thread overview]
Message-ID: <b53ab29a-b78c-11b3-2ce5-f406396eccae@linaro.org> (raw)
In-Reply-To: <CALs-Hsvz7=jt9sRZ3LQzzWWLT1jn1zA8UwSg1u217T5uQ-ZGzQ@mail.gmail.com>



On 30/03/23 15:31, Evan Green wrote:
> Hi Adhemerval,
> 
> On Wed, Mar 29, 2023 at 1:13 PM Adhemerval Zanella Netto
> <adhemerval.zanella@linaro.org> wrote:
>>
>>
>>
>> On 29/03/23 16:45, Palmer Dabbelt wrote:
>>> On Wed, 29 Mar 2023 12:16:39 PDT (-0700), adhemerval.zanella@linaro.org wrote:
>>>>
>>>>
>>>> On 28/03/23 21:01, Palmer Dabbelt wrote:
>>>>> On Tue, 28 Mar 2023 16:41:10 PDT (-0700), adhemerval.zanella@linaro.org wrote:
>>>>>>
>>>>>>
>>>>>> On 28/03/23 19:54, Palmer Dabbelt wrote:
>>>>>>> On Tue, 21 Feb 2023 11:15:34 PST (-0800), Evan Green wrote:
>>>>>>>>
>>>>>>>> This series illustrates the use of a proposed Linux syscall that
>>>>>>>> enumerates architectural information about the RISC-V cores the system
>>>>>>>> is running on. In this series we expose a small wrapper function around
>>>>>>>> the syscall. An ifunc selector for memcpy queries it to see if unaligned
>>>>>>>> access is "fast" on this hardware. If it is, it selects a newly provided
>>>>>>>> implementation of memcpy that doesn't work hard at aligning the src and
>>>>>>>> destination buffers.
>>>>>>>>
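A sketch of what such a selector boils down to, for readers skimming the
thread.  The key and flag names below follow the proposed kernel uapi, and
the __riscv_hwprobe wrapper signature follows this series; all of it may
still change:

  /* Minimal ifunc selector sketch: pick the alignment-ignorant memcpy
     only when the kernel reports misaligned accesses as "fast".  */
  #include <stddef.h>
  #include <asm/hwprobe.h>   /* proposed uapi header */

  extern void *__memcpy_generic (void *, const void *, size_t);
  extern void *__memcpy_noalignment (void *, const void *, size_t);

  static void *
  select_memcpy (void)
  {
    struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

    /* One explicit pair, all CPUs (cpus == NULL), no flags.  */
    if (__riscv_hwprobe (&pair, 1, 0, NULL, 0) == 0
        && (pair.value & RISCV_HWPROBE_MISALIGNED_MASK)
           == RISCV_HWPROBE_MISALIGNED_FAST)
      return __memcpy_noalignment;
    return __memcpy_generic;
  }
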
>>>>>>>> This is somewhat of a proof of concept for the syscall itself, but I do
>>>>>>>> find that in my goofy memcpy test [1], the unaligned memcpy performed at
>>>>>>>> least as well as the generic C version.  This is, however, on QEMU on
>>>>>>>> an M1 Mac, so it is not a test of any real hardware (more a smoke test
>>>>>>>> that the implementation isn't silly).
>>>>>>>
>>>>>>> QEMU isn't a good enough benchmark to justify a new memcpy routine in glibc.  Evan has a D1, which does support misaligned access and runs some simple benchmarks faster.  There's also been some minor changes to the Linux side of things that warrant a v3 anyway, so he'll just post some benchmarks on HW along with that.
>>>>>>>
>>>>>>> Aside from those comments,
>>>>>>>
>>>>>>> Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
>>>>>>>
>>>>>>> There's a lot more stuff to probe for, but I think we've got enough of a proof of concept for the hwprobe stuff that we can move forward with the core interface bits in Linux/glibc and then unleash the chaos...
>>>>>>>
>>>>>>> Unless anyone else has comments?
>>>>>>
>>>>>> Until riscv_hwprobe lands in Linus' tree as an official Linux ABI, this
>>>>>> patchset cannot be committed.  We have failed to enforce this rule on some
>>>>>> occasions (like Intel CET) and it turned into a complete mess after a few
>>>>>> years...
>>>>>
>>>>> Sorry if that wasn't clear, I was asking if there were any more comments from the glibc side of things before merging the Linux code.
>>>>
>>>> Right, so is this already settled as the de facto ABI for querying system
>>>> information on RISC-V?  Or is it still being discussed?  Is it in a next
>>>> branch already, and/or has it been tested with a patched glibc?
>>>
>>> It's not in for-next yet, but various patch sets / proposals have been on the lists for a few months and discussion on the kernel side has pretty much died down.  That's why I was pinging the glibc side of things: if anyone here has comments on the interface, now is the time to chime in.  If there are no comments then we're likely to end up with this in the next release (so queued into for-next soon, Linus' master in a month or so).
>>>
>>> IIUC Evan's been testing the kernel+glibc stuff on QEMU, but he should be able to ack that explicitly (it's a little vague in the cover letter).  There's also a glibc-independent kselftest as part of the kernel patch set: https://lore.kernel.org/all/20230327163203.2918455-6-evan@rivosinc.com/ .
>>
>> I am not sure if this is the latest thread, but from the cover letter link it
>> seems that Arnd has raised some concerns about the interface [1] that have
>> not been fully addressed.
> 
> I've replied to that thread.
> 
>>
>> From the libc perspective, the need to specify the query keys for
>> riscv_hwprobe should not be a problem (libc must know what to handle; unknown
>> tags are of no use), and it simplifies buffer management (there is no need to
>> query for an unknown set of keys or to allocate a large buffer to handle
>> multiple non-required pairs).
>>
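To spell out the buffer-management point: the caller allocates exactly the
pairs it knows about, statically.  For example (key names assumed from the
proposed uapi):

  struct riscv_hwprobe pairs[] = {
    { .key = RISCV_HWPROBE_KEY_MVENDORID },
    { .key = RISCV_HWPROBE_KEY_IMA_EXT_0 },
    { .key = RISCV_HWPROBE_KEY_CPUPERF_0 },
  };
  /* Query all pairs in one call; per the proposal, keys the kernel does
     not recognize come back with key set to -1 rather than failing the
     whole call.  */
  int ret = __riscv_hwprobe (pairs, sizeof pairs / sizeof pairs[0],
                             0, NULL, 0);
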
>> However, I agree with Arnd that there should be no need to optimize for
>> hardware that has an asymmetric set of features and, at least for glibc usage
>> and most runtime feature selection, it does not make sense to query per-cpu
>> information (unless you do some very specific programming, like pinning the
>> process to specific cores and enabling core-specific code).
> 
> I pushed back on that in my reply upstream, feel free to jump in
> there. I think you're right that glibc probably wouldn't ever use the
> cpuset aspect of the interface, but the gist of my reply upstream is
> that more specialized apps may.

Well, I still think providing userland with an asymmetric set of features is a
complexity that does not pay off, but at least the interface does allow
returning a concise view of the supported features.
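
For the specialized case, I would expect the flow to look something like the
following (a sketch only; the type of the cpus argument in the wrapper is an
assumption on my part):

  #define _GNU_SOURCE
  #include <sched.h>

  /* Pin the calling thread to core 2, then probe only that core.  */
  cpu_set_t set;
  CPU_ZERO (&set);
  CPU_SET (2, &set);
  sched_setaffinity (0, sizeof set, &set);

  struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };
  __riscv_hwprobe (&pair, 1, sizeof set, (unsigned long *) &set, 0);
  /* pair.value now describes core 2 alone; passing cpusetsize == 0 and
     cpus == NULL instead yields the common view across all online
     cores, i.e. the concise view mentioned above.  */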

> 
>>
>> I also wonder how hotplug or cpusets would interact with the vDSO support,
>> and how the kernel would synchronize any updates to the private vDSO data.
> 
> The good news is that the cached data in the vDSO is not ABI, it's
> hidden behind the vDSO function. So as things like hotplug start
> evolving and interacting with the vDSO cache data, we can update what
> data we cache and when we fall back to the syscall.

Right, I was just curious how one would synchronize the vDSO code with
concurrent updates from the kernel.  Some time ago, I was working with another
kernel developer on a vDSO getrandom; it required a lot of boilerplate, and
even then we did not come up with a good interface for concurrent access to a
structure that the kernel might change concurrently.
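
For reference, what I had in mind is the usual seqcount scheme the kernel
already uses for the vDSO time data: the reader spins while an update is in
progress (odd sequence count) and retries if the count changed across the
read.  A rough C11 sketch, with a made-up data layout rather than anything
from this series:

  #include <stdatomic.h>

  struct vdso_hwprobe_data            /* hypothetical layout */
  {
    _Atomic unsigned int seq;         /* odd while the kernel updates */
    _Atomic unsigned long value;
  };

  static unsigned long
  vdso_read_value (const struct vdso_hwprobe_data *d)
  {
    unsigned int seq;
    unsigned long v;
    do
      {
        /* Wait out any in-progress update (odd count).  */
        do
          seq = atomic_load_explicit (&d->seq, memory_order_acquire);
        while (seq & 1);
        v = atomic_load_explicit (&d->value, memory_order_relaxed);
        /* Order the value read before the sequence re-check.  */
        atomic_thread_fence (memory_order_acquire);
      }
    while (seq != atomic_load_explicit (&d->seq, memory_order_relaxed));
    return v;
  }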
