From: Evan Green <evan@rivosinc.com>
To: libc-alpha@sourceware.org
Cc: vineetg@rivosinc.com, slewis@rivosinc.com, palmer@rivosinc.com,
	 Florian Weimer <fweimer@redhat.com>
Subject: Re: [PATCH v5 0/4] RISC-V: ifunced memcpy using new kernel hwprobe interface
Date: Wed, 12 Jul 2023 15:35:28 -0700
Message-ID: <CALs-HstAyvqBWs_5F2ss3_HJdPs-fDvUxiRNWsRxofTeVJ4vCQ@mail.gmail.com>
In-Reply-To: <20230712193629.2880253-1-evan@rivosinc.com>

On Wed, Jul 12, 2023 at 12:36 PM Evan Green <evan@rivosinc.com> wrote:
>
>
> This series illustrates the use of a recently accepted Linux syscall that
> enumerates architectural information about the RISC-V cores the system
> is running on. In this series we expose a small wrapper function around
> the syscall. An ifunc selector for memcpy queries it to see if unaligned
> access is "fast" on this hardware. If it is, it selects a newly provided
> implementation of memcpy that doesn't work hard at aligning the src and
> destination buffers.
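
To make that concrete for readers who haven't seen the interface yet,
the check boils down to something like the sketch below. This is not
code from the series; it assumes the CPUPERF_0 key and MISALIGNED_*
constants from the kernel's <asm/hwprobe.h> and a zero return from the
wrapper on success:

#include <stddef.h>
#include <sys/hwprobe.h>

/* Rough sketch: ask whether misaligned scalar accesses are "fast" on
   all online harts.  Passing 0/NULL for the cpu set requests a value
   that is valid across every cpu.  */
static int
misaligned_access_is_fast (void)
{
  struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

  if (__riscv_hwprobe (&pair, 1, 0, NULL, 0) != 0)
    return 0;

  return (pair.value & RISCV_HWPROBE_MISALIGNED_MASK)
         == RISCV_HWPROBE_MISALIGNED_FAST;
}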
>
> For applications and libraries outside of glibc that want to use
> __riscv_hwprobe() in ifunc selectors, this series also introduces
> __riscv_hwprobe_early(), which works correctly even before all symbols
> have been resolved.
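
As an illustration only (again, not code from this series), such a
selector could look roughly like this, assuming __riscv_hwprobe_early()
takes the same arguments as __riscv_hwprobe() and returns 0 on success;
the memcpy_noalignment/memcpy_generic names are made up for the
example:

#include <stddef.h>
#include <sys/hwprobe.h>

extern void *memcpy_noalignment (void *, const void *, size_t);
extern void *memcpy_generic (void *, const void *, size_t);

/* Hypothetical ifunc selector for a library outside glibc.  Calling
   the _early variant is the point: it has to work before all of the
   caller's relocations have been applied.  */
static void *
my_memcpy_resolver (void)
{
  struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

  if (__riscv_hwprobe_early (&pair, 1, 0, NULL, 0) == 0
      && (pair.value & RISCV_HWPROBE_MISALIGNED_MASK)
         == RISCV_HWPROBE_MISALIGNED_FAST)
    return (void *) memcpy_noalignment;

  return (void *) memcpy_generic;
}

void *my_memcpy (void *, const void *, size_t)
     __attribute__ ((ifunc ("my_memcpy_resolver")));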
>
> The memcpy implementation is independent enough from the rest of the
> series that it can be omitted safely if desired.
>
> Performance numbers were compared using a small test program [1], run on
> a D1 Nezha board, which supports fast unaligned access. "Fast" here
> means copying unaligned words is faster than copying byte-wise, but
> still slower than copying aligned words. Here's the speed of various
> memcpy()s with the generic implementation. The numbers below were
> measured with v4's memcpy implementation; with the "copy last byte
> via overlapping misaligned word" fix this should get even better,
> though I'm having trouble with my setup right now and wasn't able to
> re-run the numbers on the same hardware. I'll keep working on that.

Sigh, so the mysterious "issue" I was facing was in fact that my
assembly code was totally broken. I have an actually-tested version of
the assembly that I can send in the next spin at any time, but I figure
I'll wait for any comments Florian might have about my ifunc-friendly
hwprobe patch. Sorry for the noise.

Revised performance numbers with v5 are improved over v4 for
odd-numbered sizes. Copies with already-aligned buffers and sizes take
a small penalty relative to v4, which is expected due to the extra
conditional. But both v4 and v5 are an improvement over what's in-tree
now:

memcpy size 1 count 1000000 offset 0 took 82887 us
memcpy size 3 count 1000000 offset 0 took 102711 us
memcpy size 4 count 1000000 offset 0 took 111857 us
memcpy size 7 count 1000000 offset 0 took 142587 us
memcpy size 8 count 1000000 offset 0 took 96730 us
memcpy size f count 1000000 offset 0 took 107694 us
memcpy size f count 1000000 offset 1 took 110674 us
memcpy size f count 1000000 offset 3 took 111728 us
memcpy size f count 1000000 offset 7 took 111767 us
memcpy size f count 1000000 offset 8 took 107698 us
memcpy size f count 1000000 offset 9 took 110699 us
memcpy size 10 count 1000000 offset 0 took 108755 us
memcpy size 11 count 1000000 offset 0 took 128412 us
memcpy size 17 count 1000000 offset 0 took 127772 us
memcpy size 18 count 1000000 offset 0 took 116655 us
memcpy size 19 count 1000000 offset 0 took 127676 us
memcpy size 1f count 1000000 offset 0 took 127628 us
memcpy size 20 count 1000000 offset 0 took 128680 us
memcpy size 21 count 1000000 offset 0 took 142597 us
memcpy size 3f count 1000000 offset 0 took 178168 us
memcpy size 40 count 1000000 offset 0 took 172904 us
memcpy size 41 count 1000000 offset 0 took 185312 us
memcpy size 7c count 100000 offset 0 took 26748 us
memcpy size 7f count 100000 offset 0 took 26798 us
memcpy size ff count 100000 offset 0 took 34023 us
memcpy size ff count 100000 offset 0 took 34038 us
memcpy size 100 count 100000 offset 0 took 22458 us
memcpy size 200 count 100000 offset 0 took 36446 us
memcpy size 27f count 100000 offset 0 took 55046 us
memcpy size 400 count 100000 offset 0 took 65218 us
memcpy size 407 count 100000 offset 0 took 66900 us
memcpy size 800 count 100000 offset 0 took 122498 us
memcpy size 87f count 100000 offset 0 took 141846 us
memcpy size 87f count 100000 offset 3 took 143608 us
memcpy size 1000 count 100000 offset 0 took 233596 us
memcpy size 1000 count 100000 offset 1 took 256441 us
memcpy size 1000 count 100000 offset 3 took 257418 us
memcpy size 1000 count 100000 offset 4 took 256342 us
memcpy size 1000 count 100000 offset 5 took 256154 us
memcpy size 1000 count 100000 offset 7 took 256306 us
memcpy size 1000 count 100000 offset 8 took 233670 us
memcpy size 1000 count 100000 offset 9 took 257956 us
memcpy size 3000 count 50000 offset 0 took 506898 us
memcpy size 3000 count 50000 offset 1 took 516202 us
memcpy size 3000 count 50000 offset 3 took 516558 us
memcpy size 3000 count 50000 offset 4 took 519054 us
memcpy size 3000 count 50000 offset 5 took 520583 us
memcpy size 3000 count 50000 offset 7 took 515544 us
memcpy size 3000 count 50000 offset 8 took 504335 us
memcpy size 3000 count 50000 offset 9 took 518367 us
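
For anyone wondering what the "copy last byte via overlapping
misaligned word" fix and the extra conditional refer to, the idea in C
is roughly the sketch below. It is only an illustration (the real
change is in the assembly memcpy) and assumes misaligned word accesses
are acceptable and a copy length of at least one word:

#include <stddef.h>

typedef unsigned long int word_t;

static void
copy_words_then_overlapping_tail (unsigned char *d,
                                  const unsigned char *s, size_t n)
{
  size_t i;
  word_t w;

  /* Bulk of the copy: word-sized, possibly misaligned, accesses.
     __builtin_memcpy of one word compiles to a plain load/store.  */
  for (i = 0; i + sizeof (word_t) <= n; i += sizeof (word_t))
    {
      __builtin_memcpy (&w, s + i, sizeof (word_t));
      __builtin_memcpy (d + i, &w, sizeof (word_t));
    }

  /* Tail: instead of a byte loop, one more word-sized copy that ends
     exactly at the last byte.  It overlaps bytes already copied above,
     which is harmless, and replaces the per-byte loop at the cost of
     one extra conditional.  */
  if (i != n)
    {
      __builtin_memcpy (&w, s + n - sizeof (word_t), sizeof (word_t));
      __builtin_memcpy (d + n - sizeof (word_t), &w, sizeof (word_t));
    }
}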

-Evan

>
> memcpy size 1 count 1000000 offset 0 took 109564 us
> memcpy size 3 count 1000000 offset 0 took 138425 us
> memcpy size 4 count 1000000 offset 0 took 148374 us
> memcpy size 7 count 1000000 offset 0 took 178433 us
> memcpy size 8 count 1000000 offset 0 took 188430 us
> memcpy size f count 1000000 offset 0 took 266118 us
> memcpy size f count 1000000 offset 1 took 265940 us
> memcpy size f count 1000000 offset 3 took 265934 us
> memcpy size f count 1000000 offset 7 took 266215 us
> memcpy size f count 1000000 offset 8 took 265954 us
> memcpy size f count 1000000 offset 9 took 265886 us
> memcpy size 10 count 1000000 offset 0 took 195308 us
> memcpy size 11 count 1000000 offset 0 took 205161 us
> memcpy size 17 count 1000000 offset 0 took 274376 us
> memcpy size 18 count 1000000 offset 0 took 199188 us
> memcpy size 19 count 1000000 offset 0 took 209258 us
> memcpy size 1f count 1000000 offset 0 took 278263 us
> memcpy size 20 count 1000000 offset 0 took 207364 us
> memcpy size 21 count 1000000 offset 0 took 217143 us
> memcpy size 3f count 1000000 offset 0 took 300023 us
> memcpy size 40 count 1000000 offset 0 took 231063 us
> memcpy size 41 count 1000000 offset 0 took 241259 us
> memcpy size 7c count 100000 offset 0 took 32807 us
> memcpy size 7f count 100000 offset 0 took 36274 us
> memcpy size ff count 100000 offset 0 took 47818 us
> memcpy size ff count 100000 offset 0 took 47932 us
> memcpy size 100 count 100000 offset 0 took 40468 us
> memcpy size 200 count 100000 offset 0 took 64245 us
> memcpy size 27f count 100000 offset 0 took 82549 us
> memcpy size 400 count 100000 offset 0 took 111254 us
> memcpy size 407 count 100000 offset 0 took 119364 us
> memcpy size 800 count 100000 offset 0 took 203899 us
> memcpy size 87f count 100000 offset 0 took 222465 us
> memcpy size 87f count 100000 offset 3 took 222289 us
> memcpy size 1000 count 100000 offset 0 took 388846 us
> memcpy size 1000 count 100000 offset 1 took 468827 us
> memcpy size 1000 count 100000 offset 3 took 397098 us
> memcpy size 1000 count 100000 offset 4 took 397379 us
> memcpy size 1000 count 100000 offset 5 took 397368 us
> memcpy size 1000 count 100000 offset 7 took 396867 us
> memcpy size 1000 count 100000 offset 8 took 389227 us
> memcpy size 1000 count 100000 offset 9 took 395949 us
> memcpy size 3000 count 50000 offset 0 took 674837 us
> memcpy size 3000 count 50000 offset 1 took 676944 us
> memcpy size 3000 count 50000 offset 3 took 679709 us
> memcpy size 3000 count 50000 offset 4 took 680829 us
> memcpy size 3000 count 50000 offset 5 took 678024 us
> memcpy size 3000 count 50000 offset 7 took 681097 us
> memcpy size 3000 count 50000 offset 8 took 670004 us
> memcpy size 3000 count 50000 offset 9 took 674553 us
>
> Here is that same test run with the assembly memcpy() in this series:
> memcpy size 1 count 1000000 offset 0 took 92703 us
> memcpy size 3 count 1000000 offset 0 took 112527 us
> memcpy size 4 count 1000000 offset 0 took 120481 us
> memcpy size 7 count 1000000 offset 0 took 149558 us
> memcpy size 8 count 1000000 offset 0 took 90617 us
> memcpy size f count 1000000 offset 0 took 174373 us
> memcpy size f count 1000000 offset 1 took 178615 us
> memcpy size f count 1000000 offset 3 took 178845 us
> memcpy size f count 1000000 offset 7 took 178636 us
> memcpy size f count 1000000 offset 8 took 174442 us
> memcpy size f count 1000000 offset 9 took 178660 us
> memcpy size 10 count 1000000 offset 0 took 99845 us
> memcpy size 11 count 1000000 offset 0 took 112522 us
> memcpy size 17 count 1000000 offset 0 took 179735 us
> memcpy size 18 count 1000000 offset 0 took 110870 us
> memcpy size 19 count 1000000 offset 0 took 121472 us
> memcpy size 1f count 1000000 offset 0 took 188231 us
> memcpy size 20 count 1000000 offset 0 took 119571 us
> memcpy size 21 count 1000000 offset 0 took 132429 us
> memcpy size 3f count 1000000 offset 0 took 227021 us
> memcpy size 40 count 1000000 offset 0 took 166416 us
> memcpy size 41 count 1000000 offset 0 took 180206 us
> memcpy size 7c count 100000 offset 0 took 28602 us
> memcpy size 7f count 100000 offset 0 took 31676 us
> memcpy size ff count 100000 offset 0 took 39257 us
> memcpy size ff count 100000 offset 0 took 39176 us
> memcpy size 100 count 100000 offset 0 took 21928 us
> memcpy size 200 count 100000 offset 0 took 35814 us
> memcpy size 27f count 100000 offset 0 took 60315 us
> memcpy size 400 count 100000 offset 0 took 63652 us
> memcpy size 407 count 100000 offset 0 took 73160 us
> memcpy size 800 count 100000 offset 0 took 121532 us
> memcpy size 87f count 100000 offset 0 took 147269 us
> memcpy size 87f count 100000 offset 3 took 144744 us
> memcpy size 1000 count 100000 offset 0 took 232057 us
> memcpy size 1000 count 100000 offset 1 took 254319 us
> memcpy size 1000 count 100000 offset 3 took 256973 us
> memcpy size 1000 count 100000 offset 4 took 257655 us
> memcpy size 1000 count 100000 offset 5 took 259456 us
> memcpy size 1000 count 100000 offset 7 took 260849 us
> memcpy size 1000 count 100000 offset 8 took 232347 us
> memcpy size 1000 count 100000 offset 9 took 254330 us
> memcpy size 3000 count 50000 offset 0 took 382376 us
> memcpy size 3000 count 50000 offset 1 took 389872 us
> memcpy size 3000 count 50000 offset 3 took 385310 us
> memcpy size 3000 count 50000 offset 4 took 389748 us
> memcpy size 3000 count 50000 offset 5 took 391707 us
> memcpy size 3000 count 50000 offset 7 took 386778 us
> memcpy size 3000 count 50000 offset 8 took 385691 us
> memcpy size 3000 count 50000 offset 9 took 392030 us
>
> The assembly routine is measurably better.
>
> [1] https://pastebin.com/DRyECNQW
>
>
> Changes in v5:
>  - Introduced __riscv_hwprobe_early()
>  - Do unaligned word access for final trailing bytes (Richard)
>
> Changes in v4:
>  - Remove __USE_GNU (Florian)
>  - __nonnull, __wur, __THROW, and  __fortified_attr_access decorations
>   (Florian)
>  - change long to long int (Florian)
>  - Fix comment formatting (Florian)
>  - Update backup kernel header content copy.
>  - Fix function declaration formatting (Florian)
>  - Changed export versions to 2.38
>  - Fixed comment style (Florian)
>
> Changes in v3:
>  - Update argument types to match v4 kernel interface
>  - Add the "return" to the vsyscall
>  - Fix up vdso arg types to match kernel v4 version
>  - Remove ifdef around INLINE_VSYSCALL (Adhemerval)
>  - Word align dest for large memcpy()s.
>  - Add tags
>  - Remove spurious blank line from sysdeps/riscv/memcpy.c
>
> Changes in v2:
>  - hwprobe.h: Use __has_include and duplicate Linux content to make
>    compilation work when Linux headers are absent (Adhemerval)
>  - hwprobe.h: Put declaration under __USE_GNU (Adhemerval)
>  - Use INLINE_SYSCALL_CALL (Adhemerval)
>  - Update versions
>  - Update UNALIGNED_MASK to match kernel v3 series.
>  - Add vDSO interface
>  - Used _MASK instead of _FAST value itself.
>
> Evan Green (4):
>   riscv: Add Linux hwprobe syscall support
>   riscv: Add hwprobe vdso call support
>   riscv: Add ifunc-compatible hwprobe function
>   riscv: Add and use alignment-ignorant memcpy
>
>  sysdeps/riscv/memcopy.h                       |  26 ++++
>  sysdeps/riscv/memcpy.c                        |  64 +++++++++
>  sysdeps/riscv/memcpy_noalignment.S            | 134 ++++++++++++++++++
>  sysdeps/unix/sysv/linux/dl-vdso-setup.c       |  10 ++
>  sysdeps/unix/sysv/linux/dl-vdso-setup.h       |   3 +
>  sysdeps/unix/sysv/linux/riscv/Makefile        |   9 +-
>  sysdeps/unix/sysv/linux/riscv/Versions        |   4 +
>  sysdeps/unix/sysv/linux/riscv/hwprobe.c       |  33 +++++
>  .../unix/sysv/linux/riscv/hwprobe_static.c    |  36 +++++
>  .../unix/sysv/linux/riscv/memcpy-generic.c    |  24 ++++
>  .../unix/sysv/linux/riscv/rv32/libc.abilist   |   2 +
>  .../unix/sysv/linux/riscv/rv64/libc.abilist   |   2 +
>  sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h   |  93 ++++++++++++
>  sysdeps/unix/sysv/linux/riscv/sysdep.h        |   1 +
>  14 files changed, 439 insertions(+), 2 deletions(-)
>  create mode 100644 sysdeps/riscv/memcopy.h
>  create mode 100644 sysdeps/riscv/memcpy.c
>  create mode 100644 sysdeps/riscv/memcpy_noalignment.S
>  create mode 100644 sysdeps/unix/sysv/linux/riscv/hwprobe.c
>  create mode 100644 sysdeps/unix/sysv/linux/riscv/hwprobe_static.c
>  create mode 100644 sysdeps/unix/sysv/linux/riscv/memcpy-generic.c
>  create mode 100644 sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h
>
> --
> 2.34.1
>

