public inbox for libc-alpha@sourceware.org
* [PATCH v6 0/5] RISC-V: ifunced memcpy using new kernel hwprobe interface
@ 2023-08-02 15:58 Evan Green
  2023-08-02 15:58 ` [PATCH v6 1/5] riscv: Add Linux hwprobe syscall support Evan Green
                   ` (5 more replies)
  0 siblings, 6 replies; 27+ messages in thread
From: Evan Green @ 2023-08-02 15:58 UTC (permalink / raw)
  To: libc-alpha; +Cc: slewis, Florian Weimer, palmer, vineetg, Evan Green


This series illustrates the use of a recently accepted Linux syscall that
enumerates architectural information about the RISC-V cores the system
is running on. In this series we expose a small wrapper function around
the syscall. An ifunc selector for memcpy queries it to see if unaligned
access is "fast" on this hardware. If it is, it selects a newly provided
implementation of memcpy that doesn't work hard at aligning the src and
destination buffers.

For applications and libraries outside of glibc that want to use
__riscv_hwprobe() in ifunc selectors, this series also introduces
__riscv_hwprobe_early(), which works correctly even before all symbols
have been resolved.

The memcpy implementation is independent enough from the rest of the
series that it can be omitted safely if desired.

Performance numbers were compared using a small test program [1], run on
a D1 Nezha board, which supports fast unaligned access. "Fast" here
means copying unaligned words is faster than copying byte-wise, but
still slower than copying aligned words. Here's the speed of various
memcpy()s with the generic implementation. The numbers below were
gathered with v4's memcpy implementation; with the "copy last byte via
overlapping misaligned word" fix they should get even better, though I'm
having trouble with my setup right now and wasn't able to re-run the
numbers on the same hardware. I'll keep working on that.

memcpy size 1 count 1000000 offset 0 took 109564 us
memcpy size 3 count 1000000 offset 0 took 138425 us
memcpy size 4 count 1000000 offset 0 took 148374 us
memcpy size 7 count 1000000 offset 0 took 178433 us
memcpy size 8 count 1000000 offset 0 took 188430 us
memcpy size f count 1000000 offset 0 took 266118 us
memcpy size f count 1000000 offset 1 took 265940 us
memcpy size f count 1000000 offset 3 took 265934 us
memcpy size f count 1000000 offset 7 took 266215 us
memcpy size f count 1000000 offset 8 took 265954 us
memcpy size f count 1000000 offset 9 took 265886 us
memcpy size 10 count 1000000 offset 0 took 195308 us
memcpy size 11 count 1000000 offset 0 took 205161 us
memcpy size 17 count 1000000 offset 0 took 274376 us
memcpy size 18 count 1000000 offset 0 took 199188 us
memcpy size 19 count 1000000 offset 0 took 209258 us
memcpy size 1f count 1000000 offset 0 took 278263 us
memcpy size 20 count 1000000 offset 0 took 207364 us
memcpy size 21 count 1000000 offset 0 took 217143 us
memcpy size 3f count 1000000 offset 0 took 300023 us
memcpy size 40 count 1000000 offset 0 took 231063 us
memcpy size 41 count 1000000 offset 0 took 241259 us
memcpy size 7c count 100000 offset 0 took 32807 us
memcpy size 7f count 100000 offset 0 took 36274 us
memcpy size ff count 100000 offset 0 took 47818 us
memcpy size ff count 100000 offset 0 took 47932 us
memcpy size 100 count 100000 offset 0 took 40468 us
memcpy size 200 count 100000 offset 0 took 64245 us
memcpy size 27f count 100000 offset 0 took 82549 us
memcpy size 400 count 100000 offset 0 took 111254 us
memcpy size 407 count 100000 offset 0 took 119364 us
memcpy size 800 count 100000 offset 0 took 203899 us
memcpy size 87f count 100000 offset 0 took 222465 us
memcpy size 87f count 100000 offset 3 took 222289 us
memcpy size 1000 count 100000 offset 0 took 388846 us
memcpy size 1000 count 100000 offset 1 took 468827 us
memcpy size 1000 count 100000 offset 3 took 397098 us
memcpy size 1000 count 100000 offset 4 took 397379 us
memcpy size 1000 count 100000 offset 5 took 397368 us
memcpy size 1000 count 100000 offset 7 took 396867 us
memcpy size 1000 count 100000 offset 8 took 389227 us
memcpy size 1000 count 100000 offset 9 took 395949 us
memcpy size 3000 count 50000 offset 0 took 674837 us
memcpy size 3000 count 50000 offset 1 took 676944 us
memcpy size 3000 count 50000 offset 3 took 679709 us
memcpy size 3000 count 50000 offset 4 took 680829 us
memcpy size 3000 count 50000 offset 5 took 678024 us
memcpy size 3000 count 50000 offset 7 took 681097 us
memcpy size 3000 count 50000 offset 8 took 670004 us
memcpy size 3000 count 50000 offset 9 took 674553 us

Here is that same test run with the assembly memcpy() in this series:
memcpy size 1 count 1000000 offset 0 took 92703 us
memcpy size 3 count 1000000 offset 0 took 112527 us
memcpy size 4 count 1000000 offset 0 took 120481 us
memcpy size 7 count 1000000 offset 0 took 149558 us
memcpy size 8 count 1000000 offset 0 took 90617 us
memcpy size f count 1000000 offset 0 took 174373 us
memcpy size f count 1000000 offset 1 took 178615 us
memcpy size f count 1000000 offset 3 took 178845 us
memcpy size f count 1000000 offset 7 took 178636 us
memcpy size f count 1000000 offset 8 took 174442 us
memcpy size f count 1000000 offset 9 took 178660 us
memcpy size 10 count 1000000 offset 0 took 99845 us
memcpy size 11 count 1000000 offset 0 took 112522 us
memcpy size 17 count 1000000 offset 0 took 179735 us
memcpy size 18 count 1000000 offset 0 took 110870 us
memcpy size 19 count 1000000 offset 0 took 121472 us
memcpy size 1f count 1000000 offset 0 took 188231 us
memcpy size 20 count 1000000 offset 0 took 119571 us
memcpy size 21 count 1000000 offset 0 took 132429 us
memcpy size 3f count 1000000 offset 0 took 227021 us
memcpy size 40 count 1000000 offset 0 took 166416 us
memcpy size 41 count 1000000 offset 0 took 180206 us
memcpy size 7c count 100000 offset 0 took 28602 us
memcpy size 7f count 100000 offset 0 took 31676 us
memcpy size ff count 100000 offset 0 took 39257 us
memcpy size ff count 100000 offset 0 took 39176 us
memcpy size 100 count 100000 offset 0 took 21928 us
memcpy size 200 count 100000 offset 0 took 35814 us
memcpy size 27f count 100000 offset 0 took 60315 us
memcpy size 400 count 100000 offset 0 took 63652 us
memcpy size 407 count 100000 offset 0 took 73160 us
memcpy size 800 count 100000 offset 0 took 121532 us
memcpy size 87f count 100000 offset 0 took 147269 us
memcpy size 87f count 100000 offset 3 took 144744 us
memcpy size 1000 count 100000 offset 0 took 232057 us
memcpy size 1000 count 100000 offset 1 took 254319 us
memcpy size 1000 count 100000 offset 3 took 256973 us
memcpy size 1000 count 100000 offset 4 took 257655 us
memcpy size 1000 count 100000 offset 5 took 259456 us
memcpy size 1000 count 100000 offset 7 took 260849 us
memcpy size 1000 count 100000 offset 8 took 232347 us
memcpy size 1000 count 100000 offset 9 took 254330 us
memcpy size 3000 count 50000 offset 0 took 382376 us
memcpy size 3000 count 50000 offset 1 took 389872 us
memcpy size 3000 count 50000 offset 3 took 385310 us
memcpy size 3000 count 50000 offset 4 took 389748 us
memcpy size 3000 count 50000 offset 5 took 391707 us
memcpy size 3000 count 50000 offset 7 took 386778 us
memcpy size 3000 count 50000 offset 8 took 385691 us
memcpy size 3000 count 50000 offset 9 took 392030 us

The assembly routine is measurably better.

[1] https://pastebin.com/DRyECNQW


Changes in v6:
 - Prefixed __riscv_hwprobe() parameter names with __ to avoid user
   macro namespace pollution (Joseph)
 - Introduced riscv-ifunc.h for multi-arg ifunc selectors.
 - Fix a couple regressions in the assembly from v5 :/
 - Use passed hwprobe pointer in memcpy ifunc selector.

Changes in v5:
 - Do unaligned word access for final trailing bytes (Richard)

Changes in v4:
 - Remove __USE_GNU (Florian)
 - __nonnull, __wur, __THROW, and __fortified_attr_access decorations
   (Florian)
 - Change long to long int (Florian)
 - Fix comment formatting (Florian)
 - Update backup kernel header content copy.
 - Fix function declaration formatting (Florian)
 - Changed export versions to 2.38
 - Fixed comment style (Florian)

Changes in v3:
 - Update argument types to match v4 kernel interface
 - Add the "return" to the vsyscall
 - Fix up vdso arg types to match kernel v4 version
 - Remove ifdef around INLINE_VSYSCALL (Adhemerval)
 - Word align dest for large memcpy()s.
 - Add tags
 - Remove spurious blank line from sysdeps/riscv/memcpy.c

Changes in v2:
 - hwprobe.h: Use __has_include and duplicate Linux content to make
   compilation work when Linux headers are absent (Adhemerval)
 - hwprobe.h: Put declaration under __USE_GNU (Adhemerval)
 - Use INLINE_SYSCALL_CALL (Adhemerval)
 - Update versions
 - Update UNALIGNED_MASK to match kernel v3 series.
 - Add vDSO interface
 - Used _MASK instead of _FAST value itself.

Evan Green (5):
  riscv: Add Linux hwprobe syscall support
  riscv: Add hwprobe vdso call support
  riscv: Add __riscv_hwprobe pointer to ifunc calls
  riscv: Enable multi-arg ifunc resolvers
  riscv: Add and use alignment-ignorant memcpy

 include/libc-symbols.h                        |  28 ++--
 sysdeps/riscv/dl-irel.h                       |   8 +-
 sysdeps/riscv/memcopy.h                       |  26 ++++
 sysdeps/riscv/memcpy.c                        |  66 +++++++++
 sysdeps/riscv/memcpy_noalignment.S            | 138 ++++++++++++++++++
 sysdeps/riscv/riscv-ifunc.h                   |  27 ++++
 sysdeps/unix/sysv/linux/dl-vdso-setup.c       |  10 ++
 sysdeps/unix/sysv/linux/dl-vdso-setup.h       |   3 +
 sysdeps/unix/sysv/linux/riscv/Makefile        |   8 +-
 sysdeps/unix/sysv/linux/riscv/Versions        |   3 +
 sysdeps/unix/sysv/linux/riscv/hwprobe.c       |  32 ++++
 .../unix/sysv/linux/riscv/memcpy-generic.c    |  24 +++
 .../unix/sysv/linux/riscv/rv32/libc.abilist   |   1 +
 .../unix/sysv/linux/riscv/rv64/libc.abilist   |   1 +
 sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h   |  82 +++++++++++
 sysdeps/unix/sysv/linux/riscv/sysdep.h        |   1 +
 16 files changed, 441 insertions(+), 17 deletions(-)
 create mode 100644 sysdeps/riscv/memcopy.h
 create mode 100644 sysdeps/riscv/memcpy.c
 create mode 100644 sysdeps/riscv/memcpy_noalignment.S
 create mode 100644 sysdeps/riscv/riscv-ifunc.h
 create mode 100644 sysdeps/unix/sysv/linux/riscv/hwprobe.c
 create mode 100644 sysdeps/unix/sysv/linux/riscv/memcpy-generic.c
 create mode 100644 sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h

-- 
2.34.1



* [PATCH v6 1/5] riscv: Add Linux hwprobe syscall support
  2023-08-02 15:58 [PATCH v6 0/5] RISC-V: ifunced memcpy using new kernel hwprobe interface Evan Green
@ 2023-08-02 15:58 ` Evan Green
  2023-08-02 16:52   ` Joseph Myers
  2023-08-03  7:24   ` Florian Weimer
  2023-08-02 15:59 ` [PATCH v6 2/5] riscv: Add hwprobe vdso call support Evan Green
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 27+ messages in thread
From: Evan Green @ 2023-08-02 15:58 UTC (permalink / raw)
  To: libc-alpha; +Cc: slewis, Florian Weimer, palmer, vineetg, Evan Green

Add awareness and a thin wrapper function around a new Linux system call
that allows callers to get architecture and microarchitecture
information about the CPUs from the kernel. This can be used to
do things like dynamically choose a memcpy implementation.
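
As an illustration only (not part of this patch), a caller might use the
new wrapper like this, assuming the sys/hwprobe.h header added below is
installed:

#include <stdio.h>
#include <sys/hwprobe.h>

int
main (void)
{
  /* Ask how misaligned accesses perform on this system.  */
  struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

  if (__riscv_hwprobe (&pair, 1, 0, NULL, 0) != 0)
    return 1;

  /* The kernel clears the key to -1 for keys it does not recognize.  */
  if (pair.key >= 0
      && (pair.value & RISCV_HWPROBE_MISALIGNED_MASK)
          == RISCV_HWPROBE_MISALIGNED_FAST)
    puts ("misaligned accesses are fast");

  return 0;
}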

Signed-off-by: Evan Green <evan@rivosinc.com>
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
---

Changes in v6:
 - Prefixed __riscv_hwprobe() parameter names with __ to avoid user
   macro namespace pollution (Joseph)

Changes in v4:
 - Remove __USE_GNU (Florian)
 - __nonnull, __wur, __THROW, and __fortified_attr_access decorations
   (Florian)
 - Change long to long int (Florian)
 - Fix comment formatting (Florian)
 - Update backup kernel header content copy.
 - Fix function declaration formatting (Florian)
 - Changed export versions to 2.38

Changes in v3:
 - Update argument types to match v4 kernel interface

Changes in v2:
 - hwprobe.h: Use __has_include and duplicate Linux content to make
   compilation work when Linux headers are absent (Adhemerval)
 - hwprobe.h: Put declaration under __USE_GNU (Adhemerval)
 - Use INLINE_SYSCALL_CALL (Adhemerval)
 - Update versions
 - Update UNALIGNED_MASK to match kernel v3 series.

 sysdeps/unix/sysv/linux/riscv/Makefile        |  4 +-
 sysdeps/unix/sysv/linux/riscv/Versions        |  3 +
 sysdeps/unix/sysv/linux/riscv/hwprobe.c       | 30 ++++++++
 .../unix/sysv/linux/riscv/rv32/libc.abilist   |  1 +
 .../unix/sysv/linux/riscv/rv64/libc.abilist   |  1 +
 sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h   | 72 +++++++++++++++++++
 6 files changed, 109 insertions(+), 2 deletions(-)
 create mode 100644 sysdeps/unix/sysv/linux/riscv/hwprobe.c
 create mode 100644 sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h

diff --git a/sysdeps/unix/sysv/linux/riscv/Makefile b/sysdeps/unix/sysv/linux/riscv/Makefile
index 4b6eacb32f..45cc29e40d 100644
--- a/sysdeps/unix/sysv/linux/riscv/Makefile
+++ b/sysdeps/unix/sysv/linux/riscv/Makefile
@@ -1,6 +1,6 @@
 ifeq ($(subdir),misc)
-sysdep_headers += sys/cachectl.h
-sysdep_routines += flush-icache
+sysdep_headers += sys/cachectl.h sys/hwprobe.h
+sysdep_routines += flush-icache hwprobe
 endif
 
 ifeq ($(subdir),stdlib)
diff --git a/sysdeps/unix/sysv/linux/riscv/Versions b/sysdeps/unix/sysv/linux/riscv/Versions
index 5625d2a0b8..0c4016382d 100644
--- a/sysdeps/unix/sysv/linux/riscv/Versions
+++ b/sysdeps/unix/sysv/linux/riscv/Versions
@@ -8,4 +8,7 @@ libc {
   GLIBC_2.27 {
     __riscv_flush_icache;
   }
+  GLIBC_2.38 {
+    __riscv_hwprobe;
+  }
 }
diff --git a/sysdeps/unix/sysv/linux/riscv/hwprobe.c b/sysdeps/unix/sysv/linux/riscv/hwprobe.c
new file mode 100644
index 0000000000..81f24dbc19
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/riscv/hwprobe.c
@@ -0,0 +1,30 @@
+/* RISC-V hardware feature probing support on Linux
+   Copyright (C) 2023 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public License as
+   published by the Free Software Foundation; either version 2.1 of the
+   License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sys/syscall.h>
+#include <sys/hwprobe.h>
+#include <sysdep.h>
+
+int __riscv_hwprobe (struct riscv_hwprobe *__pairs, size_t __pair_count,
+		     size_t __cpu_count, unsigned long int *__cpus,
+		     unsigned int __flags)
+{
+  return INLINE_SYSCALL_CALL (riscv_hwprobe, __pairs, __pair_count,
+                              __cpu_count, __cpus, __flags);
+}
diff --git a/sysdeps/unix/sysv/linux/riscv/rv32/libc.abilist b/sysdeps/unix/sysv/linux/riscv/rv32/libc.abilist
index b9740a1afc..8fab4a606f 100644
--- a/sysdeps/unix/sysv/linux/riscv/rv32/libc.abilist
+++ b/sysdeps/unix/sysv/linux/riscv/rv32/libc.abilist
@@ -2436,3 +2436,4 @@ GLIBC_2.38 strlcat F
 GLIBC_2.38 strlcpy F
 GLIBC_2.38 wcslcat F
 GLIBC_2.38 wcslcpy F
+GLIBC_2.38 __riscv_hwprobe F
diff --git a/sysdeps/unix/sysv/linux/riscv/rv64/libc.abilist b/sysdeps/unix/sysv/linux/riscv/rv64/libc.abilist
index e3b4656aa2..1ebb91deed 100644
--- a/sysdeps/unix/sysv/linux/riscv/rv64/libc.abilist
+++ b/sysdeps/unix/sysv/linux/riscv/rv64/libc.abilist
@@ -2636,3 +2636,4 @@ GLIBC_2.38 strlcat F
 GLIBC_2.38 strlcpy F
 GLIBC_2.38 wcslcat F
 GLIBC_2.38 wcslcpy F
+GLIBC_2.38 __riscv_hwprobe F
diff --git a/sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h b/sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h
new file mode 100644
index 0000000000..63372c5a94
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h
@@ -0,0 +1,72 @@
+/* RISC-V architecture probe interface
+   Copyright (C) 2023 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#ifndef _SYS_HWPROBE_H
+#define _SYS_HWPROBE_H 1
+
+#include <features.h>
+#include <stddef.h>
+#ifdef __has_include
+# if __has_include (<asm/hwprobe.h>)
+#  include <asm/hwprobe.h>
+# endif
+#endif
+
+/* Define a (probably stale) version of the interface if the Linux headers
+   aren't present.  */
+#ifndef RISCV_HWPROBE_KEY_MVENDORID
+struct riscv_hwprobe {
+	signed long long int key;
+	unsigned long long int value;
+};
+
+#define RISCV_HWPROBE_KEY_MVENDORID	0
+#define RISCV_HWPROBE_KEY_MARCHID	1
+#define RISCV_HWPROBE_KEY_MIMPID	2
+#define RISCV_HWPROBE_KEY_BASE_BEHAVIOR	3
+#define		RISCV_HWPROBE_BASE_BEHAVIOR_IMA	(1 << 0)
+#define RISCV_HWPROBE_KEY_IMA_EXT_0	4
+#define		RISCV_HWPROBE_IMA_FD		(1 << 0)
+#define		RISCV_HWPROBE_IMA_C		(1 << 1)
+#define		RISCV_HWPROBE_IMA_V		(1 << 2)
+#define		RISCV_HWPROBE_EXT_ZBA		(1 << 3)
+#define		RISCV_HWPROBE_EXT_ZBB		(1 << 4)
+#define		RISCV_HWPROBE_EXT_ZBS		(1 << 5)
+#define RISCV_HWPROBE_KEY_CPUPERF_0	5
+#define		RISCV_HWPROBE_MISALIGNED_UNKNOWN	(0 << 0)
+#define		RISCV_HWPROBE_MISALIGNED_EMULATED	(1 << 0)
+#define		RISCV_HWPROBE_MISALIGNED_SLOW		(2 << 0)
+#define		RISCV_HWPROBE_MISALIGNED_FAST		(3 << 0)
+#define		RISCV_HWPROBE_MISALIGNED_UNSUPPORTED	(4 << 0)
+#define		RISCV_HWPROBE_MISALIGNED_MASK		(7 << 0)
+
+#endif /* RISCV_HWPROBE_KEY_MVENDORID */
+
+__BEGIN_DECLS
+
+extern int __riscv_hwprobe (struct riscv_hwprobe *__pairs, size_t __pair_count,
+			    size_t __cpu_count, unsigned long int *__cpus,
+			    unsigned int __flags)
+     __THROW __nonnull ((1)) __wur
+     __fortified_attr_access (__read_write__, 1, 2)
+     __fortified_attr_access (__read_only__, 4, 3);
+
+__END_DECLS
+
+#endif /* sys/hwprobe.h */
-- 
2.34.1



* [PATCH v6 2/5] riscv: Add hwprobe vdso call support
  2023-08-02 15:58 [PATCH v6 0/5] RISC-V: ifunced memcpy using new kernel hwprobe interface Evan Green
  2023-08-02 15:58 ` [PATCH v6 1/5] riscv: Add Linux hwprobe syscall support Evan Green
@ 2023-08-02 15:59 ` Evan Green
  2023-08-02 15:59 ` [PATCH v6 3/5] riscv: Add __riscv_hwprobe pointer to ifunc calls Evan Green
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 27+ messages in thread
From: Evan Green @ 2023-08-02 15:59 UTC (permalink / raw)
  To: libc-alpha; +Cc: slewis, Florian Weimer, palmer, vineetg, Evan Green

The new riscv_hwprobe syscall also comes with a vDSO for faster answers
to your most common questions. Call in today to speak with a kernel
representative near you!

Signed-off-by: Evan Green <evan@rivosinc.com>
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
---

(no changes since v3)

Changes in v3:
 - Add the "return" to the vsyscall
 - Fix up vdso arg types to match kernel v4 version
 - Remove ifdef around INLINE_VSYSCALL (Adhemerval)

Changes in v2:
 - Add vDSO interface

 sysdeps/unix/sysv/linux/dl-vdso-setup.c | 10 ++++++++++
 sysdeps/unix/sysv/linux/dl-vdso-setup.h |  3 +++
 sysdeps/unix/sysv/linux/riscv/hwprobe.c |  6 ++++--
 sysdeps/unix/sysv/linux/riscv/sysdep.h  |  1 +
 4 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/sysdeps/unix/sysv/linux/dl-vdso-setup.c b/sysdeps/unix/sysv/linux/dl-vdso-setup.c
index 97eaaeac37..ed8b1ef426 100644
--- a/sysdeps/unix/sysv/linux/dl-vdso-setup.c
+++ b/sysdeps/unix/sysv/linux/dl-vdso-setup.c
@@ -71,6 +71,16 @@ PROCINFO_CLASS int (*_dl_vdso_clock_getres_time64) (clockid_t,
 # ifdef HAVE_GET_TBFREQ
 PROCINFO_CLASS uint64_t (*_dl_vdso_get_tbfreq)(void) RELRO;
 # endif
+
+/* RISC-V specific ones.  */
+# ifdef HAVE_RISCV_HWPROBE
+PROCINFO_CLASS int (*_dl_vdso_riscv_hwprobe)(void *,
+                                             size_t,
+                                             size_t,
+                                             unsigned long *,
+                                             unsigned int) RELRO;
+# endif
+
 #endif
 
 #undef RELRO
diff --git a/sysdeps/unix/sysv/linux/dl-vdso-setup.h b/sysdeps/unix/sysv/linux/dl-vdso-setup.h
index 867072b897..39eafd5316 100644
--- a/sysdeps/unix/sysv/linux/dl-vdso-setup.h
+++ b/sysdeps/unix/sysv/linux/dl-vdso-setup.h
@@ -47,6 +47,9 @@ setup_vdso_pointers (void)
 #ifdef HAVE_GET_TBFREQ
   GLRO(dl_vdso_get_tbfreq) = dl_vdso_vsym (HAVE_GET_TBFREQ);
 #endif
+#ifdef HAVE_RISCV_HWPROBE
+  GLRO(dl_vdso_riscv_hwprobe) = dl_vdso_vsym (HAVE_RISCV_HWPROBE);
+#endif
 }
 
 #endif
diff --git a/sysdeps/unix/sysv/linux/riscv/hwprobe.c b/sysdeps/unix/sysv/linux/riscv/hwprobe.c
index 81f24dbc19..57b06c22a5 100644
--- a/sysdeps/unix/sysv/linux/riscv/hwprobe.c
+++ b/sysdeps/unix/sysv/linux/riscv/hwprobe.c
@@ -20,11 +20,13 @@
 #include <sys/syscall.h>
 #include <sys/hwprobe.h>
 #include <sysdep.h>
+#include <sysdep-vdso.h>
 
 int __riscv_hwprobe (struct riscv_hwprobe *__pairs, size_t __pair_count,
 		     size_t __cpu_count, unsigned long int *__cpus,
 		     unsigned int __flags)
 {
-  return INLINE_SYSCALL_CALL (riscv_hwprobe, __pairs, __pair_count,
-                              __cpu_count, __cpus, __flags);
+ /* The vDSO may be able to provide the answer without a syscall. */
+  return INLINE_VSYSCALL(riscv_hwprobe, 5, __pairs, __pair_count,
+                         __cpu_count, __cpus, __flags);
 }
diff --git a/sysdeps/unix/sysv/linux/riscv/sysdep.h b/sysdeps/unix/sysv/linux/riscv/sysdep.h
index 5583b96d23..ee015dfeb6 100644
--- a/sysdeps/unix/sysv/linux/riscv/sysdep.h
+++ b/sysdeps/unix/sysv/linux/riscv/sysdep.h
@@ -156,6 +156,7 @@
 /* List of system calls which are supported as vsyscalls (for RV32 and
    RV64).  */
 # define HAVE_GETCPU_VSYSCALL		"__vdso_getcpu"
+# define HAVE_RISCV_HWPROBE		"__vdso_riscv_hwprobe"
 
 # undef HAVE_INTERNAL_BRK_ADDR_SYMBOL
 # define HAVE_INTERNAL_BRK_ADDR_SYMBOL 1
-- 
2.34.1



* [PATCH v6 3/5] riscv: Add __riscv_hwprobe pointer to ifunc calls
  2023-08-02 15:58 [PATCH v6 0/5] RISC-V: ifunced memcpy using new kernel hwprobe interface Evan Green
  2023-08-02 15:58 ` [PATCH v6 1/5] riscv: Add Linux hwprobe syscall support Evan Green
  2023-08-02 15:59 ` [PATCH v6 2/5] riscv: Add hwprobe vdso call support Evan Green
@ 2023-08-02 15:59 ` Evan Green
  2023-08-02 15:59 ` [PATCH v6 4/5] riscv: Enable multi-arg ifunc resolvers Evan Green
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 27+ messages in thread
From: Evan Green @ 2023-08-02 15:59 UTC (permalink / raw)
  To: libc-alpha; +Cc: slewis, Florian Weimer, palmer, vineetg, Evan Green

The new __riscv_hwprobe() function is designed to be used by ifunc
selector functions. This presents a challenge for applications and
libraries, as ifunc selectors are invoked before all relocations have
been performed, so an external call to __riscv_hwprobe() from an ifunc
selector won't work. To address this, pass a pointer to the
__riscv_hwprobe() vDSO function into ifunc selectors as the second
argument (alongside dl_hwcap, which was already being passed).

Include a typedef as well for convenience, so that ifunc users don't
have to go through contortions to call this routine. Users will need to
remember to check the second argument for NULL, both to account for
older glibcs that don't pass the function, and older kernels that don't
have the vDSO pointer.
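
For example, an out-of-glibc resolver using the second argument might
look like the sketch below (illustration only; the foo* functions are
made up):

#include <stdint.h>
#include <stddef.h>
#include <sys/hwprobe.h>

typedef int foo_fn (int);
extern int foo_unaligned (int);
extern int foo_generic (int);

static foo_fn *
foo_resolver (uint64_t hwcap, __riscv_hwprobe_t hwprobe)
{
  struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

  /* Older glibc or an older kernel may hand us a null pointer.  */
  if (hwprobe != NULL
      && hwprobe (&pair, 1, 0, NULL, 0) == 0
      && (pair.value & RISCV_HWPROBE_MISALIGNED_MASK)
          == RISCV_HWPROBE_MISALIGNED_FAST)
    return foo_unaligned;

  return foo_generic;
}

int foo (int) __attribute__ ((ifunc ("foo_resolver")));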

Signed-off-by: Evan Green <evan@rivosinc.com>
---

(no changes since v1)

 sysdeps/riscv/dl-irel.h                     |  8 ++++----
 sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h | 10 ++++++++++
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/sysdeps/riscv/dl-irel.h b/sysdeps/riscv/dl-irel.h
index eaeec5467c..2147504458 100644
--- a/sysdeps/riscv/dl-irel.h
+++ b/sysdeps/riscv/dl-irel.h
@@ -31,10 +31,10 @@ static inline ElfW(Addr)
 __attribute ((always_inline))
 elf_ifunc_invoke (ElfW(Addr) addr)
 {
-  /* The second argument is a void pointer to preserve the extension
-     fexibility.  */
-  return ((ElfW(Addr) (*) (uint64_t, void *)) (addr))
-	 (GLRO(dl_hwcap), NULL);
+  /* The third argument is a void pointer to preserve the extension
+     flexibility.  */
+  return ((ElfW(Addr) (*) (uint64_t, void *, void *)) (addr))
+	 (GLRO(dl_hwcap), GLRO(dl_vdso_riscv_hwprobe), NULL);
 }
 
 static inline void
diff --git a/sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h b/sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h
index 63372c5a94..1f02416bd8 100644
--- a/sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h
+++ b/sysdeps/unix/sysv/linux/riscv/sys/hwprobe.h
@@ -67,6 +67,16 @@ extern int __riscv_hwprobe (struct riscv_hwprobe *__pairs, size_t __pair_count,
      __fortified_attr_access (__read_write__, 1, 2)
      __fortified_attr_access (__read_only__, 4, 3);
 
+/* A pointer to the __riscv_hwprobe vDSO function is passed as the second
+   argument to ifunc selector routines. Include a function pointer type for
+   convenience in calling the function in those settings. */
+typedef int (*__riscv_hwprobe_t) (struct riscv_hwprobe *__pairs, size_t __pair_count,
+				  size_t __cpu_count, unsigned long int *__cpus,
+				  unsigned int __flags)
+     __THROW __nonnull ((1)) __wur
+     __fortified_attr_access (__read_write__, 1, 2)
+     __fortified_attr_access (__read_only__, 4, 3);
+
 __END_DECLS
 
 #endif /* sys/hwprobe.h */
-- 
2.34.1



* [PATCH v6 4/5] riscv: Enable multi-arg ifunc resolvers
  2023-08-02 15:58 [PATCH v6 0/5] RISC-V: ifunced memcpy using new kernel hwprobe interface Evan Green
                   ` (2 preceding siblings ...)
  2023-08-02 15:59 ` [PATCH v6 3/5] riscv: Add __riscv_hwprobe pointer to ifunc calls Evan Green
@ 2023-08-02 15:59 ` Evan Green
  2023-08-02 15:59 ` [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy Evan Green
  2023-08-02 16:03 ` [PATCH v6 0/5] RISC-V: ifunced memcpy using new kernel hwprobe interface Evan Green
  5 siblings, 0 replies; 27+ messages in thread
From: Evan Green @ 2023-08-02 15:59 UTC (permalink / raw)
  To: libc-alpha; +Cc: slewis, Florian Weimer, palmer, vineetg, Evan Green

RISC-V is apparently the first architecture to pass more than one
argument to ifunc resolvers. The helper macros in libc-symbols.h,
__ifunc_resolver(), __ifunc(), and __ifunc_hidden(), are incompatible
with this. These macros have an "arg" (non-final) parameter that
represents the parameter signature of the ifunc resolver. The result is
an inability to pass the required comma through in a single preprocessor
argument.

Rearrange the __ifunc_resolver() macro to be variadic, and pass the
types as those variable parameters. Move the guts of __ifunc() and
__ifunc_hidden() into new macros, __ifunc_args(), and
__ifunc_args_hidden(), that pass the variable arguments down through to
__ifunc_resolver(). Then redefine __ifunc() and __ifunc_hidden(), which
are used in a bunch of places, to simply shuffle the arguments down into
__ifunc_args[_hidden]. Finally, define a riscv-ifunc.h header, which
provides convenience macros to those looking to write ifunc selectors
that use both arguments.
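
Taking the memcpy selector from the next patch as an example,
riscv_libc_ifunc (__libc_memcpy, select_memcpy_ifunc) hand-expands
(roughly, for the HAVE_GCC_IFUNC case) to:

extern __typeof (__libc_memcpy) __libc_memcpy
  __attribute__ ((ifunc ("__libc_memcpy_ifunc")));

static inhibit_stack_protector __typeof (__libc_memcpy) *
__libc_memcpy_ifunc (uint64_t hwcap, __riscv_hwprobe_t hwprobe)
{
  INIT_ARCH ();
  __typeof (__libc_memcpy) *res = select_memcpy_ifunc (hwcap, hwprobe);
  return res;
}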

Signed-off-by: Evan Green <evan@rivosinc.com>

---

Changes in v6:
 - Introduced riscv-ifunc.h for multi-arg ifunc selectors.

Note: I opted to create another layer of macros (__ifunc_args()) rather
than doing the treewide change to rearrange the signature of __ifunc()
and __ifunc_hidden(). If folks like the overall approach but would
prefer the treewide change, I can do that too.

---
 include/libc-symbols.h      | 28 +++++++++++++++++-----------
 sysdeps/riscv/riscv-ifunc.h | 27 +++++++++++++++++++++++++++
 2 files changed, 44 insertions(+), 11 deletions(-)
 create mode 100644 sysdeps/riscv/riscv-ifunc.h

diff --git a/include/libc-symbols.h b/include/libc-symbols.h
index 5794614488..36b92039c5 100644
--- a/include/libc-symbols.h
+++ b/include/libc-symbols.h
@@ -665,9 +665,9 @@ for linking")
 #endif
 
 /* Helper / base  macros for indirect function symbols.  */
-#define __ifunc_resolver(type_name, name, expr, arg, init, classifier)	\
+#define __ifunc_resolver(type_name, name, expr, init, classifier, ...)	\
   classifier inhibit_stack_protector					\
-  __typeof (type_name) *name##_ifunc (arg)				\
+  __typeof (type_name) *name##_ifunc (__VA_ARGS__)			\
   {									\
     init ();								\
     __typeof (type_name) *res = expr;					\
@@ -675,13 +675,13 @@ for linking")
   }
 
 #ifdef HAVE_GCC_IFUNC
-# define __ifunc(type_name, name, expr, arg, init)			\
+# define __ifunc_args(type_name, name, expr, init, ...)			\
   extern __typeof (type_name) name __attribute__			\
 			      ((ifunc (#name "_ifunc")));		\
-  __ifunc_resolver (type_name, name, expr, arg, init, static)
+  __ifunc_resolver (type_name, name, expr, init, static, __VA_ARGS__)
 
-# define __ifunc_hidden(type_name, name, expr, arg, init)	\
-  __ifunc (type_name, name, expr, arg, init)
+# define __ifunc_args_hidden(type_name, name, expr, init, ...)		\
+  __ifunc (type_name, name, expr, init, __VA_ARGS__)
 #else
 /* Gcc does not support __attribute__ ((ifunc (...))).  Use the old behaviour
    as fallback.  But keep in mind that the debug information for the ifunc
@@ -692,18 +692,24 @@ for linking")
    different signatures.  (Gcc support is disabled at least on a ppc64le
    Ubuntu 14.04 system.)  */
 
-# define __ifunc(type_name, name, expr, arg, init)			\
+# define __ifunc_args(type_name, name, expr, init, ...)			\
   extern __typeof (type_name) name;					\
-  __typeof (type_name) *name##_ifunc (arg) __asm__ (#name);		\
-  __ifunc_resolver (type_name, name, expr, arg, init,)			\
+  __typeof (type_name) *name##_ifunc (__VA_ARGS__) __asm__ (#name);	\
+  __ifunc_resolver (type_name, name, expr, init, , __VA_ARGS__)		\
  __asm__ (".type " #name ", %gnu_indirect_function");
 
-# define __ifunc_hidden(type_name, name, expr, arg, init)		\
+# define __ifunc_args_hidden(type_name, name, expr, init, ...)		\
   extern __typeof (type_name) __libc_##name;				\
-  __ifunc (type_name, __libc_##name, expr, arg, init)			\
+  __ifunc (type_name, __libc_##name, expr, __VA_INIT__, init)		\
   strong_alias (__libc_##name, name);
 #endif /* !HAVE_GCC_IFUNC  */
 
+#define __ifunc(type_name, name, expr, arg, init)			\
+  __ifunc_args (type_name, name, expr, init, arg)
+
+#define __ifunc_hidden(type_name, name, expr, arg, init)		\
+  __ifunc_args_hidden (type_name, expr, init, arg)
+
 /* The following macros are used for indirect function symbols in libc.so.
    First of all, you need to have the function prototyped somewhere,
    say in foo.h:
diff --git a/sysdeps/riscv/riscv-ifunc.h b/sysdeps/riscv/riscv-ifunc.h
new file mode 100644
index 0000000000..7bff591d1e
--- /dev/null
+++ b/sysdeps/riscv/riscv-ifunc.h
@@ -0,0 +1,27 @@
+/* Common definition for ifunc resolvers.  Linux/RISC-V version.
+   This file is part of the GNU C Library.
+   Copyright (C) 2023 Free Software Foundation, Inc.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include <ifunc-init.h>
+#include <sys/hwprobe.h>
+
+#define INIT_ARCH()
+
+#define riscv_libc_ifunc(name, expr)				\
+  __ifunc_args (name, name, expr(hwcap, hwprobe), INIT_ARCH,	\
+                uint64_t hwcap, __riscv_hwprobe_t hwprobe)
-- 
2.34.1



* [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-02 15:58 [PATCH v6 0/5] RISC-V: ifunced memcpy using new kernel hwprobe interface Evan Green
                   ` (3 preceding siblings ...)
  2023-08-02 15:59 ` [PATCH v6 4/5] riscv: Enable multi-arg ifunc resolvers Evan Green
@ 2023-08-02 15:59 ` Evan Green
  2023-08-03  7:25   ` Florian Weimer
  2023-08-02 16:03 ` [PATCH v6 0/5] RISC-V: ifunced memcpy using new kernel hwprobe interface Evan Green
  5 siblings, 1 reply; 27+ messages in thread
From: Evan Green @ 2023-08-02 15:59 UTC (permalink / raw)
  To: libc-alpha; +Cc: slewis, Florian Weimer, palmer, vineetg, Evan Green

For CPU implementations that can perform unaligned accesses with little
or no performance penalty, create a memcpy implementation that does not
bother aligning buffers. It copies in blocks of integer registers, then
in single integer registers, and falls back to a bytewise copy for the
remainder.
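
In rough C terms the strategy is sketched below. This is an illustration
only; the real routine is the assembly further down, which additionally
word-aligns the destination for large copies and finishes the trailing
bytes with one overlapping unaligned word.

#include <stddef.h>

/* Illustrative sketch, not the implementation.  */
void *
memcpy_noalignment_sketch (void *dst, const void *src, size_t n)
{
  unsigned char *d = dst;
  const unsigned char *s = src;

  /* Chunks of 16 native words, via a block of registers.  */
  while (n >= 16 * sizeof (unsigned long))
    {
      unsigned long buf[16];
      __builtin_memcpy (buf, s, sizeof buf);   /* unaligned loads  */
      __builtin_memcpy (d, buf, sizeof buf);   /* unaligned stores */
      s += sizeof buf; d += sizeof buf; n -= sizeof buf;
    }

  /* Single native words.  */
  while (n >= sizeof (unsigned long))
    {
      unsigned long w;
      __builtin_memcpy (&w, s, sizeof w);
      __builtin_memcpy (d, &w, sizeof w);
      s += sizeof w; d += sizeof w; n -= sizeof w;
    }

  /* Remaining bytes.  */
  while (n-- > 0)
    *d++ = *s++;

  return dst;
}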

Signed-off-by: Evan Green <evan@rivosinc.com>
Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>

---

Changes in v6:
 - Fix a couple regressions in the assembly from v5 :/
 - Use passed hwprobe pointer in memcpy ifunc selector.

Changes in v5:
 - Do unaligned word access for final trailing bytes (Richard)

Changes in v4:
 - Fixed comment style (Florian)

Changes in v3:
 - Word align dest for large memcpy()s.
 - Add tags
 - Remove spurious blank line from sysdeps/riscv/memcpy.c

Changes in v2:
 - Used _MASK instead of _FAST value itself.


---
 sysdeps/riscv/memcopy.h                       |  26 ++++
 sysdeps/riscv/memcpy.c                        |  66 +++++++++
 sysdeps/riscv/memcpy_noalignment.S            | 138 ++++++++++++++++++
 sysdeps/unix/sysv/linux/riscv/Makefile        |   4 +
 .../unix/sysv/linux/riscv/memcpy-generic.c    |  24 +++
 5 files changed, 258 insertions(+)
 create mode 100644 sysdeps/riscv/memcopy.h
 create mode 100644 sysdeps/riscv/memcpy.c
 create mode 100644 sysdeps/riscv/memcpy_noalignment.S
 create mode 100644 sysdeps/unix/sysv/linux/riscv/memcpy-generic.c

diff --git a/sysdeps/riscv/memcopy.h b/sysdeps/riscv/memcopy.h
new file mode 100644
index 0000000000..2b685c8aa0
--- /dev/null
+++ b/sysdeps/riscv/memcopy.h
@@ -0,0 +1,26 @@
+/* memcopy.h -- definitions for memory copy functions. RISC-V version.
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdeps/generic/memcopy.h>
+
+/* Redefine the generic memcpy implementation to __memcpy_generic, so
+   the memcpy ifunc can select between generic and special versions.
+   In rtld, don't bother with all the ifunciness. */
+#if IS_IN (libc)
+#define MEMCPY __memcpy_generic
+#endif
diff --git a/sysdeps/riscv/memcpy.c b/sysdeps/riscv/memcpy.c
new file mode 100644
index 0000000000..ecadd96433
--- /dev/null
+++ b/sysdeps/riscv/memcpy.c
@@ -0,0 +1,66 @@
+/* Multiple versions of memcpy.
+   All versions must be listed in ifunc-impl-list.c.
+   Copyright (C) 2017-2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#if IS_IN (libc)
+/* Redefine memcpy so that the compiler won't complain about the type
+   mismatch with the IFUNC selector in strong_alias, below.  */
+# undef memcpy
+# define memcpy __redirect_memcpy
+# include <stdint.h>
+# include <string.h>
+# include <ifunc-init.h>
+# include <riscv-ifunc.h>
+# include <sys/hwprobe.h>
+
+# define INIT_ARCH()
+
+extern __typeof (__redirect_memcpy) __libc_memcpy;
+
+extern __typeof (__redirect_memcpy) __memcpy_generic attribute_hidden;
+extern __typeof (__redirect_memcpy) __memcpy_noalignment attribute_hidden;
+
+static inline __typeof (__redirect_memcpy) *
+select_memcpy_ifunc (uint64_t dl_hwcap, __riscv_hwprobe_t hwprobe_func)
+{
+  INIT_ARCH ();
+
+  struct riscv_hwprobe pair;
+
+  pair.key = RISCV_HWPROBE_KEY_CPUPERF_0;
+  if (!hwprobe_func || hwprobe_func(&pair, 1, 0, NULL, 0) != 0)
+    return __memcpy_generic;
+
+  if ((pair.key > 0) &&
+      (pair.value & RISCV_HWPROBE_MISALIGNED_MASK) ==
+       RISCV_HWPROBE_MISALIGNED_FAST)
+    return __memcpy_noalignment;
+
+  return __memcpy_generic;
+}
+
+riscv_libc_ifunc (__libc_memcpy, select_memcpy_ifunc);
+
+# undef memcpy
+strong_alias (__libc_memcpy, memcpy);
+# ifdef SHARED
+__hidden_ver1 (memcpy, __GI_memcpy, __redirect_memcpy)
+  __attribute__ ((visibility ("hidden"))) __attribute_copy__ (memcpy);
+# endif
+
+#endif
diff --git a/sysdeps/riscv/memcpy_noalignment.S b/sysdeps/riscv/memcpy_noalignment.S
new file mode 100644
index 0000000000..f3bf8e5867
--- /dev/null
+++ b/sysdeps/riscv/memcpy_noalignment.S
@@ -0,0 +1,138 @@
+/* memcpy for RISC-V, ignoring buffer alignment
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library.  If not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include <sys/asm.h>
+
+/* void *memcpy(void *, const void *, size_t) */
+ENTRY (__memcpy_noalignment)
+	move t6, a0  /* Preserve return value */
+
+	/* Bail if 0 */
+	beqz a2, 7f
+
+	/* Jump to byte copy if size < SZREG */
+	li a4, SZREG
+	bltu a2, a4, 5f
+
+	/* Round down to the nearest "page" size */
+	andi a4, a2, ~((16*SZREG)-1)
+	beqz a4, 2f
+	add a3, a1, a4
+
+	/* Copy the first word to get dest word aligned */
+	andi a5, t6, SZREG-1
+	beqz a5, 1f
+	REG_L a6, (a1)
+	REG_S a6, (t6)
+
+	/* Align dst up to a word, move src and size as well. */
+	addi t6, t6, SZREG-1
+	andi t6, t6, ~(SZREG-1)
+	sub a5, t6, a0
+	add a1, a1, a5
+	sub a2, a2, a5
+
+	/* Recompute page count */
+	andi a4, a2, ~((16*SZREG)-1)
+	beqz a4, 2f
+
+1:
+	/* Copy "pages" (chunks of 16 registers) */
+	REG_L a4,       0(a1)
+	REG_L a5,   SZREG(a1)
+	REG_L a6, 2*SZREG(a1)
+	REG_L a7, 3*SZREG(a1)
+	REG_L t0, 4*SZREG(a1)
+	REG_L t1, 5*SZREG(a1)
+	REG_L t2, 6*SZREG(a1)
+	REG_L t3, 7*SZREG(a1)
+	REG_L t4, 8*SZREG(a1)
+	REG_L t5, 9*SZREG(a1)
+	REG_S a4,       0(t6)
+	REG_S a5,   SZREG(t6)
+	REG_S a6, 2*SZREG(t6)
+	REG_S a7, 3*SZREG(t6)
+	REG_S t0, 4*SZREG(t6)
+	REG_S t1, 5*SZREG(t6)
+	REG_S t2, 6*SZREG(t6)
+	REG_S t3, 7*SZREG(t6)
+	REG_S t4, 8*SZREG(t6)
+	REG_S t5, 9*SZREG(t6)
+	REG_L a4, 10*SZREG(a1)
+	REG_L a5, 11*SZREG(a1)
+	REG_L a6, 12*SZREG(a1)
+	REG_L a7, 13*SZREG(a1)
+	REG_L t0, 14*SZREG(a1)
+	REG_L t1, 15*SZREG(a1)
+	addi a1, a1, 16*SZREG
+	REG_S a4, 10*SZREG(t6)
+	REG_S a5, 11*SZREG(t6)
+	REG_S a6, 12*SZREG(t6)
+	REG_S a7, 13*SZREG(t6)
+	REG_S t0, 14*SZREG(t6)
+	REG_S t1, 15*SZREG(t6)
+	addi t6, t6, 16*SZREG
+	bltu a1, a3, 1b
+	andi a2, a2, (16*SZREG)-1  /* Update count */
+
+2:
+	/* Remainder is smaller than a page, compute native word count */
+	beqz a2, 7f
+	andi a5, a2, ~(SZREG-1)
+	andi a2, a2, (SZREG-1)
+	add a3, a1, a5
+	/* Jump directly to last word if no words. */
+	beqz a5, 4f
+
+3:
+	/* Use single native register copy */
+	REG_L a4, 0(a1)
+	addi a1, a1, SZREG
+	REG_S a4, 0(t6)
+	addi t6, t6, SZREG
+	bltu a1, a3, 3b
+
+	/* Jump directly out if no more bytes */
+	beqz a2, 7f
+
+4:
+	/* Copy the last word unaligned */
+	add a3, a1, a2
+	add a4, t6, a2
+	REG_L a5, -SZREG(a3)
+	REG_S a5, -SZREG(a4)
+	ret
+
+5:
+	/* Copy bytes when the total copy is <SZREG */
+	add a3, a1, a2
+
+6:
+	lb a4, 0(a1)
+	addi a1, a1, 1
+	sb a4, 0(t6)
+	addi t6, t6, 1
+	bltu a1, a3, 6b
+
+7:
+	ret
+
+END (__memcpy_noalignment)
+
+hidden_def (__memcpy_noalignment)
diff --git a/sysdeps/unix/sysv/linux/riscv/Makefile b/sysdeps/unix/sysv/linux/riscv/Makefile
index 45cc29e40d..aa9ea443d6 100644
--- a/sysdeps/unix/sysv/linux/riscv/Makefile
+++ b/sysdeps/unix/sysv/linux/riscv/Makefile
@@ -7,6 +7,10 @@ ifeq ($(subdir),stdlib)
 gen-as-const-headers += ucontext_i.sym
 endif
 
+ifeq ($(subdir),string)
+sysdep_routines += memcpy memcpy-generic memcpy_noalignment
+endif
+
 abi-variants := ilp32 ilp32d lp64 lp64d
 
 ifeq (,$(filter $(default-abi),$(abi-variants)))
diff --git a/sysdeps/unix/sysv/linux/riscv/memcpy-generic.c b/sysdeps/unix/sysv/linux/riscv/memcpy-generic.c
new file mode 100644
index 0000000000..0abe03f7f5
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/riscv/memcpy-generic.c
@@ -0,0 +1,24 @@
+/* Re-include the default memcpy implementation.
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <string.h>
+
+extern __typeof (memcpy) __memcpy_generic;
+hidden_proto(__memcpy_generic)
+
+#include <string/memcpy.c>
-- 
2.34.1



* Re: [PATCH v6 0/5] RISC-V: ifunced memcpy using new kernel hwprobe interface
  2023-08-02 15:58 [PATCH v6 0/5] RISC-V: ifunced memcpy using new kernel hwprobe interface Evan Green
                   ` (4 preceding siblings ...)
  2023-08-02 15:59 ` [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy Evan Green
@ 2023-08-02 16:03 ` Evan Green
  5 siblings, 0 replies; 27+ messages in thread
From: Evan Green @ 2023-08-02 16:03 UTC (permalink / raw)
  To: libc-alpha; +Cc: slewis, Florian Weimer, palmer, vineetg

On Wed, Aug 2, 2023 at 8:59 AM Evan Green <evan@rivosinc.com> wrote:
>
>
> This series illustrates the use of a recently accepted Linux syscall that
> enumerates architectural information about the RISC-V cores the system
> is running on. In this series we expose a small wrapper function around
> the syscall. An ifunc selector for memcpy queries it to see if unaligned
> access is "fast" on this hardware. If it is, it selects a newly provided
> implementation of memcpy that doesn't work hard at aligning the src and
> destination buffers.
>
> For applications and libraries outside of glibc that want to use
> __riscv_hwprobe() in ifunc selectors, this series also introduces
> __riscv_hwprobe_early(), which works correctly even before all symbols
> have been resolved.

Minor correction:
This paragraph above is stale. In this series I implemented Florian's
suggestion of passing a pointer to __riscv_hwprobe as a second
argument to ifunc selectors. It works well, except it requires a
little bit of libc-symbols.h macro rearranging to allow something like
libc_ifunc() to handle multiple arguments. You'll see that in a new
patch within this series.



* Re: [PATCH v6 1/5] riscv: Add Linux hwprobe syscall support
  2023-08-02 15:58 ` [PATCH v6 1/5] riscv: Add Linux hwprobe syscall support Evan Green
@ 2023-08-02 16:52   ` Joseph Myers
  2023-08-03  7:24   ` Florian Weimer
  1 sibling, 0 replies; 27+ messages in thread
From: Joseph Myers @ 2023-08-02 16:52 UTC (permalink / raw)
  To: Evan Green; +Cc: libc-alpha, slewis, Florian Weimer, palmer, vineetg

On Wed, 2 Aug 2023, Evan Green wrote:

> diff --git a/sysdeps/unix/sysv/linux/riscv/Versions b/sysdeps/unix/sysv/linux/riscv/Versions
> index 5625d2a0b8..0c4016382d 100644
> --- a/sysdeps/unix/sysv/linux/riscv/Versions
> +++ b/sysdeps/unix/sysv/linux/riscv/Versions
> @@ -8,4 +8,7 @@ libc {
>    GLIBC_2.27 {
>      __riscv_flush_icache;
>    }
> +  GLIBC_2.38 {
> +    __riscv_hwprobe;
> +  }
>  }

2.38 has been released, new symbols now need to be at version 2.39.

-- 
Joseph S. Myers
joseph@codesourcery.com


* Re: [PATCH v6 1/5] riscv: Add Linux hwprobe syscall support
  2023-08-02 15:58 ` [PATCH v6 1/5] riscv: Add Linux hwprobe syscall support Evan Green
  2023-08-02 16:52   ` Joseph Myers
@ 2023-08-03  7:24   ` Florian Weimer
  1 sibling, 0 replies; 27+ messages in thread
From: Florian Weimer @ 2023-08-03  7:24 UTC (permalink / raw)
  To: Evan Green; +Cc: libc-alpha, slewis, palmer, vineetg

* Evan Green:

> +int __riscv_hwprobe (struct riscv_hwprobe *__pairs, size_t __pair_count,
> +		     size_t __cpu_count, unsigned long int *__cpus,
> +		     unsigned int __flags)
> +{
> +  return INLINE_SYSCALL_CALL (riscv_hwprobe, __pairs, __pair_count,
> +                              __cpu_count, __cpus, __flags);
> +}

INLINE_SYSCALL_CALL uses errno, and the caller might not be able to
access that (in case of an unrelocated IFUNC resolver).  Consider using
INTERNAL_SYSCALL_CALL (perhaps negated) instead.
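
A minimal sketch of that shape (not the committed code; it assumes the
usual Linux convention that INTERNAL_SYSCALL_CALL hands back errors as
negative errno values, which could equally be negated before returning):

int
__riscv_hwprobe (struct riscv_hwprobe *pairs, size_t pair_count,
                 size_t cpu_count, unsigned long int *cpus,
                 unsigned int flags)
{
  /* No errno access here, so the function stays usable from an
     unrelocated IFUNC resolver; failures come back as -errno.  */
  return INTERNAL_SYSCALL_CALL (riscv_hwprobe, pairs, pair_count,
                                cpu_count, cpus, flags);
}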

Thanks,
Florian


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-02 15:59 ` [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy Evan Green
@ 2023-08-03  7:25   ` Florian Weimer
  2023-08-03 17:50     ` Richard Henderson
  0 siblings, 1 reply; 27+ messages in thread
From: Florian Weimer @ 2023-08-03  7:25 UTC (permalink / raw)
  To: Evan Green; +Cc: libc-alpha, slewis, palmer, vineetg

* Evan Green:

> +static inline __typeof (__redirect_memcpy) *
> +select_memcpy_ifunc (uint64_t dl_hwcap, __riscv_hwprobe_t hwprobe_func)
> +{
> +  INIT_ARCH ();
> +
> +  struct riscv_hwprobe pair;
> +
> +  pair.key = RISCV_HWPROBE_KEY_CPUPERF_0;
> +  if (!hwprobe_func || hwprobe_func(&pair, 1, 0, NULL, 0) != 0)
> +    return __memcpy_generic;
> +
> +  if ((pair.key > 0) &&
> +      (pair.value & RISCV_HWPROBE_MISALIGNED_MASK) ==
> +       RISCV_HWPROBE_MISALIGNED_FAST)
> +    return __memcpy_noalignment;
> +
> +  return __memcpy_generic;
> +}

In libc, you could call __riscv_hwprobe directly, so the additional
argument isn't needed after all.
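
In other words, inside libc the selector body could shrink to roughly
this (a sketch using the same keys and macros as the patch):

  struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

  if (__riscv_hwprobe (&pair, 1, 0, NULL, 0) == 0
      && (pair.value & RISCV_HWPROBE_MISALIGNED_MASK)
         == RISCV_HWPROBE_MISALIGNED_FAST)
    return __memcpy_noalignment;

  return __memcpy_generic;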

Thanks,
Florian


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-03  7:25   ` Florian Weimer
@ 2023-08-03 17:50     ` Richard Henderson
  2023-08-03 18:42       ` Evan Green
  0 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2023-08-03 17:50 UTC (permalink / raw)
  To: Florian Weimer, Evan Green; +Cc: libc-alpha, slewis, palmer, vineetg

On 8/3/23 00:25, Florian Weimer via Libc-alpha wrote:
> * Evan Green:
> 
>> +static inline __typeof (__redirect_memcpy) *
>> +select_memcpy_ifunc (uint64_t dl_hwcap, __riscv_hwprobe_t hwprobe_func)
>> +{
>> +  INIT_ARCH ();
>> +
>> +  struct riscv_hwprobe pair;
>> +
>> +  pair.key = RISCV_HWPROBE_KEY_CPUPERF_0;
>> +  if (!hwprobe_func || hwprobe_func(&pair, 1, 0, NULL, 0) != 0)
>> +    return __memcpy_generic;
>> +
>> +  if ((pair.key > 0) &&
>> +      (pair.value & RISCV_HWPROBE_MISALIGNED_MASK) ==
>> +       RISCV_HWPROBE_MISALIGNED_FAST)
>> +    return __memcpy_noalignment;
>> +
>> +  return __memcpy_generic;
>> +}
> 
> In libc, you could call __riscv_hwprobe directly, so the additional
> argument isn't needed after all.

Outside libc something is required.

An extra parameter to ifunc is surprising though, and clearly not ideal per the extra 
hoops above.  I would hope for something with hidden visibility in libc_nonshared.a that 
could always be called directly.


r~

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-03 17:50     ` Richard Henderson
@ 2023-08-03 18:42       ` Evan Green
  2023-08-03 22:30         ` Richard Henderson
  0 siblings, 1 reply; 27+ messages in thread
From: Evan Green @ 2023-08-03 18:42 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Florian Weimer, libc-alpha, slewis, palmer, vineetg

On Thu, Aug 3, 2023 at 10:50 AM Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 8/3/23 00:25, Florian Weimer via Libc-alpha wrote:
> > * Evan Green:
> >
> >> +static inline __typeof (__redirect_memcpy) *
> >> +select_memcpy_ifunc (uint64_t dl_hwcap, __riscv_hwprobe_t hwprobe_func)
> >> +{
> >> +  INIT_ARCH ();
> >> +
> >> +  struct riscv_hwprobe pair;
> >> +
> >> +  pair.key = RISCV_HWPROBE_KEY_CPUPERF_0;
> >> +  if (!hwprobe_func || hwprobe_func(&pair, 1, 0, NULL, 0) != 0)
> >> +    return __memcpy_generic;
> >> +
> >> +  if ((pair.key > 0) &&
> >> +      (pair.value & RISCV_HWPROBE_MISALIGNED_MASK) ==
> >> +       RISCV_HWPROBE_MISALIGNED_FAST)
> >> +    return __memcpy_noalignment;
> >> +
> >> +  return __memcpy_generic;
> >> +}
> >
> > In libc, you could call __riscv_hwprobe directly, so the additional
> > argument isn't needed after all.

So you think I should drop the libc-symbols.h change and call
__riscv_hwprobe directly here? Sure, I can do that.

>
> Outside libc something is required.
>
> An extra parameter to ifunc is surprising though, and clearly not ideal per the extra
> hoops above.  I would hope for something with hidden visibility in libc_nonshared.a that
> could always be called directly.

My previous spin took that approach, defining a
__riscv_hwprobe_early() in libc_nonshared that could route to the real
function if available, or make the syscall directly if not. But that
approach had the drawback that ifunc users couldn't take advantage of
the vDSO, and then all users had to comprehend the difference between
__riscv_hwprobe() and __riscv_hwprobe_early().

In contrast, IMO this approach is much nicer. Ifunc writers are
already used to getting hwcap info via a parameter. Adding this second
parameter, which also provides hwcap-like things, seems like a natural
extension. I didn't quite follow what you meant by the "extra hoops
above". If you meant the previous patch to libc-symbols.h, that's all
glibc-internal-isms, and shouldn't affect external callers. As per
Florian's comment above I can drop that patch for now since I don't
strictly need it.

-Evan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-03 18:42       ` Evan Green
@ 2023-08-03 22:30         ` Richard Henderson
  2023-08-07 22:10           ` Evan Green
  0 siblings, 1 reply; 27+ messages in thread
From: Richard Henderson @ 2023-08-03 22:30 UTC (permalink / raw)
  To: Evan Green; +Cc: Florian Weimer, libc-alpha, slewis, palmer, vineetg

On 8/3/23 11:42, Evan Green wrote:
> On Thu, Aug 3, 2023 at 10:50 AM Richard Henderson
> <richard.henderson@linaro.org> wrote:
>> Outside libc something is required.
>>
>> An extra parameter to ifunc is surprising though, and clearly not ideal per the extra
>> hoops above.  I would hope for something with hidden visibility in libc_nonshared.a that
>> could always be called directly.
> 
> My previous spin took that approach, defining a
> __riscv_hwprobe_early() in libc_nonshared that could route to the real
> function if available, or make the syscall directly if not. But that
> approach had the drawback that ifunc users couldn't take advantage of
> the vDSO, and then all users had to comprehend the difference between
> __riscv_hwprobe() and __riscv_hwprobe_early().

I would define __riscv_hwprobe such that it could take advantage of the vDSO once 
initialization reaches a certain point, but cope with being run earlier than that point by 
falling back to the syscall.

That constrains the implementation, I guess, in that it can't set errno, but just 
returning the negative errno from the syscall seems fine.

It might be tricky to get a reference to GLRO(dl_vdso_riscv_hwprobe) very early, but I 
would hope that some application of __attribute__((weak)) might correctly get you a NULL 
prior to full relocations being complete.
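
As a rough sketch of that shape (assuming the GLRO(dl_vdso_riscv_hwprobe)
pointer added earlier in the series, and errors reported as negative
errno values):

int
__riscv_hwprobe (struct riscv_hwprobe *pairs, size_t pair_count,
                 size_t cpu_count, unsigned long int *cpus,
                 unsigned int flags)
{
  __riscv_hwprobe_t vdso = GLRO (dl_vdso_riscv_hwprobe);

  /* Before the vDSO pointer is available, fall back to the raw
     system call; afterwards use the cheaper vDSO entry point.  */
  if (vdso != NULL)
    return vdso (pairs, pair_count, cpu_count, cpus, flags);

  return INTERNAL_SYSCALL_CALL (riscv_hwprobe, pairs, pair_count,
                                cpu_count, cpus, flags);
}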


> In contrast, IMO this approach is much nicer. Ifunc writers are
> already used to getting hwcap info via a parameter. Adding this second
> parameter, which also provides hwcap-like things, seems like a natural
> extension. I didn't quite follow what you meant by the "extra hoops
> above".

The check for null function pointer, for sure.  But also consider how __riscv_hwprobe is 
going to be used.

It might be worth defining some helper functions for probing a single key or a single 
field.  E.g.

uint64_t __riscv_hwprobe_one_key(int64_t key, unsigned int flags)
{
   struct riscv_hwprobe pair = { .key = key };
   int err = __riscv_hwprobe(&pair, 1, 0, NULL, flags);
   if (err)
     return err;
   if (pair.key == -1)
     return -ENOENT;
   return pair.value;
}

This implementation requires that no future hwprobe key define a value which is a valid 
value in the errno range (or better, bit 63 unused).  Alternately, or additionally:

bool __riscv_hwprobe_one_mask(int64_t key, uint64_t mask, uint64_t val, int flags)
{
   struct riscv_hwprobe pair = { .key = key };
   return (__riscv_hwprobe(&pair, 1, 0, NULL, flags) == 0
           && pair.key != -1
           && (pair.value & mask) == val);
}

These yield either

     int64_t v = __riscv_hwprobe_one_key(CPUPERF_0, 0);
     if (v >= 0 && (v & MISALIGNED_MASK) == MISALIGNED_FAST)
       return __memcpy_noalignment;
     return __memcpy_generic;

or

     if (__riscv_hwprobe_one_mask(CPUPERF_0, MISALIGNED_MASK, MISALIGNED_FAST, 0))
       return __memcpy_noalignment;
     return __memcpy_generic;

which to my mind looks much better for a pattern you'll be replicating so very many times 
across all of the ifunc implementations in the system.


r~

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-03 22:30         ` Richard Henderson
@ 2023-08-07 22:10           ` Evan Green
  2023-08-07 22:21             ` Florian Weimer
  2023-08-07 22:48             ` enh
  0 siblings, 2 replies; 27+ messages in thread
From: Evan Green @ 2023-08-07 22:10 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Florian Weimer, libc-alpha, slewis, palmer, vineetg

On Thu, Aug 3, 2023 at 3:30 PM Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 8/3/23 11:42, Evan Green wrote:
> > On Thu, Aug 3, 2023 at 10:50 AM Richard Henderson
> > <richard.henderson@linaro.org> wrote:
> >> Outside libc something is required.
> >>
> >> An extra parameter to ifunc is surprising though, and clearly not ideal per the extra
> >> hoops above.  I would hope for something with hidden visibility in libc_nonshared.a that
> >> could always be called directly.
> >
> > My previous spin took that approach, defining a
> > __riscv_hwprobe_early() in libc_nonshared that could route to the real
> > function if available, or make the syscall directly if not. But that
> > approach had the drawback that ifunc users couldn't take advantage of
> > the vDSO, and then all users had to comprehend the difference between
> > __riscv_hwprobe() and __riscv_hwprobe_early().
>
> I would define __riscv_hwprobe such that it could take advantage of the vDSO once
> initialization reaches a certain point, but cope with being run earlier than that point by
> falling back to the syscall.
>
> That constrains the implementation, I guess, in that it can't set errno, but just
> returning the negative errno from the syscall seems fine.
>
> It might be tricky to get a reference to GLRO(dl_vdso_riscv_hwprobe) very early, but I
> would hope that some application of __attribute__((weak)) might correctly get you a NULL
> prior to full relocations being complete.

Right, this is what we had in the previous iteration of this series,
and it did work ok. But it wasn't as good since it meant ifunc
selectors always got stuck in the null/fallback case and were forced
to make the syscall. With this mechanism they get to take advantage of
the vDSO.

>
>
> > In contrast, IMO this approach is much nicer. Ifunc writers are
> > already used to getting hwcap info via a parameter. Adding this second
> > parameter, which also provides hwcap-like things, seems like a natural
> > extension. I didn't quite follow what you meant by the "extra hoops
> > above".
>
> The check for null function pointer, for sure.  But also consider how __riscv_hwprobe is
> going to be used.
>
> It might be worth defining some helper functions for probing a single key or a single
> field.  E.g.
>
> uint64_t __riscv_hwprobe_one_key(int64_t key, unsigned int flags)
> {
>    struct riscv_hwprobe pair = { .key = key };
>    int err = __riscv_hwprobe(&pair, 1, 0, NULL, flags);
>    if (err)
>      return err;
>    if (pair.key == -1)
>      return -ENOENT;
>    return pair.value;
> }
>
> This implementation requires that no future hwprobe key define a value which is a valid
> value in the errno range (or better, bit 63 unused).  Alternately, or additionally:
>
> bool __riscv_hwprobe_one_mask(int64_t key, uint64_t mask, uint64_t val, int flags)
> {
>    struct riscv_hwprobe pair = { .key = key };
>    return (__riscv_hwprobe(&pair, 1, 0, NULL, flags) == 0
>            && pair.key != -1
>            && (pair.value & mask) == val);
> }
>
> These yield either
>
>      int64_t v = __riscv_hwprobe_one_key(CPUPERF_0, 0);
>      if (v >= 0 && (v & MISALIGNED_MASK) == MISALIGNED_FAST)
>        return __memcpy_noalignment;
>      return __memcpy_generic;
>
> or
>
>      if (__riscv_hwprobe_one_mask(CPUPERF_0, MISALIGNED_MASK, MISALIGNED_FAST, 0))
>        return __memcpy_noalignment;
>      return __memcpy_generic;
>
> which to my mind looks much better for a pattern you'll be replicating so very many times
> across all of the ifunc implementations in the system.

Ah, I see. I could make a static inline function in the header that
looks something like this (mangled by gmail, sorry):

/* Helper function usable from ifunc selectors that probes a single key. */
static inline int __riscv_hwprobe_one(__riscv_hwprobe_t hwprobe_func,
                                      signed long long int key,
                                      unsigned long long int *value)
{
  struct riscv_hwprobe pair;
  int rc;

  if (!hwprobe_func)
    return -ENOSYS;

  pair.key = key;
  rc = hwprobe_func(&pair, 1, 0, NULL, 0);
  if (rc) {
    return rc;
  }

  if (pair.key < 0) {
    return -ENOENT;
  }

  *value = pair.value;
  return 0;
}

The ifunc selector would then be significantly cleaned up, looking
something like:

if (__riscv_hwprobe_one(hwprobe_func, RISCV_HWPROBE_KEY_CPUPERF_0, &value))
  return __memcpy_generic;

if ((value & RISCV_HWPROBE_MISALIGNED_MASK) == RISCV_HWPROBE_MISALIGNED_FAST)
  return __memcpy_noalignment;

-Evan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-07 22:10           ` Evan Green
@ 2023-08-07 22:21             ` Florian Weimer
  2023-08-07 22:30               ` Evan Green
  2023-08-07 22:48             ` enh
  1 sibling, 1 reply; 27+ messages in thread
From: Florian Weimer @ 2023-08-07 22:21 UTC (permalink / raw)
  To: Evan Green; +Cc: Richard Henderson, libc-alpha, slewis, palmer, vineetg

* Evan Green:

> Right, this is what we had in the previous iteration of this series,
> and it did work ok. But it wasn't as good since it meant ifunc
> selectors always got stuck in the null/fallback case and were forced
> to make the syscall. With this mechanism they get to take advantage of
> the vDSO.

The system call is only required when the IFUNC resolver is called in
advance of relocation.  In most cases, the ELF dependencies work as
expected and ensure that the object containing the IFUNC resolver is
already relocated, and use of the fallback is avoided.

Thanks,
Florian


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-07 22:21             ` Florian Weimer
@ 2023-08-07 22:30               ` Evan Green
  0 siblings, 0 replies; 27+ messages in thread
From: Evan Green @ 2023-08-07 22:30 UTC (permalink / raw)
  To: Florian Weimer; +Cc: Richard Henderson, libc-alpha, slewis, palmer, vineetg

On Mon, Aug 7, 2023 at 3:21 PM Florian Weimer <fweimer@redhat.com> wrote:
>
> * Evan Green:
>
> > Right, this is what we had in the previous iteration of this series,
> > and it did work ok. But it wasn't as good since it meant ifunc
> > selectors always got stuck in the null/fallback case and were forced
> > to make the syscall. With this mechanism they get to take advantage of
> > the vDSO.
>
> The system call is only required when the IFUNC resolver is called in
> advance of relocation.  In most cases, the ELF dependencies work as
> expected and ensure that the object containing the IFUNC resolver is
> already relocation, and use of the fallback is avoided.

Ah that's true, we did have to go through some hoops with LD_BIND_NOW
and LD_PRELOAD to observe problems. So the cases where the
__riscv_hwprobe_early() approach doesn't get to use the vDSO aren't
"always", as I stated above, but "certain exotic cases". I'm still
leaning towards the ifunc parameter (now with inline helper in the
header), but could be convinced to go back to the _early() function if
there's some advantage to it.

-Evan


>
> Thanks,
> Florian
>

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-07 22:10           ` Evan Green
  2023-08-07 22:21             ` Florian Weimer
@ 2023-08-07 22:48             ` enh
  2023-08-08  0:01               ` Evan Green
  1 sibling, 1 reply; 27+ messages in thread
From: enh @ 2023-08-07 22:48 UTC (permalink / raw)
  To: Evan Green
  Cc: Richard Henderson, Florian Weimer, libc-alpha, slewis, palmer, vineetg

On Mon, Aug 7, 2023 at 3:11 PM Evan Green <evan@rivosinc.com> wrote:
>
> On Thu, Aug 3, 2023 at 3:30 PM Richard Henderson
> <richard.henderson@linaro.org> wrote:
> > [...]
>
> Ah, I see. I could make a static inline function in the header that
> looks something like this (mangled by gmail, sorry):
>
> /* Helper function usable from ifunc selectors that probes a single key. */
> static inline int __riscv_hwprobe_one(__riscv_hwprobe_t hwprobe_func,
>                                       signed long long int key,
>                                       unsigned long long int *value)
> {
>   struct riscv_hwprobe pair;
>   int rc;
>
>   if (!hwprobe_func)
>     return -ENOSYS;
>
>   pair.key = key;
>   rc = hwprobe_func(&pair, 1, 0, NULL, 0);
>   if (rc) {
>     return rc;
>   }
>
>   if (pair.key < 0) {
>     return -ENOENT;
>   }
>
>   *value = pair.value;
>   return 0;
> }
>
> The ifunc selector would then be significantly cleaned up, looking
> something like:
>
> if (__riscv_hwprobe_one(hwprobe_func, RISCV_HWPROBE_KEY_CPUPERF_0, &value))
>   return __memcpy_generic;
>
> if ((value & RISCV_HWPROBE_MISALIGNED_MASK) == RISCV_HWPROBE_MISALIGNED_FAST)
>   return __memcpy_noalignment;

(Android's libc maintainer here, having joined the list just to talk
about risc-v ifuncs :-) )

has anyone thought about calling ifunc resolvers more like this...

--same part of the dynamic loader that caches the two getauxval()s for arm64--
static struct riscv_hwprobe probes[] = {
 {.key = RISCV_HWPROBE_KEY_MVENDORID},
 {.key = RISCV_HWPROBE_KEY_MARCHID},
 {.key = RISCV_HWPROBE_KEY_MIMPID},
 {.key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR},
 {.key = RISCV_HWPROBE_KEY_IMA_EXT},
 {.key = RISCV_HWPROBE_KEY_CPUPERF_0},
... // every time a new key is added to the kernel, we add it here
};
__riscv_hwprobe(...); // called once

--part of the dynamic loader that calls ifunc resolvers--
(*ifunc_resolver)(sizeof(probes)/sizeof(probes[0]), probes);

this is similar to what we already have for arm64 (where there's a
getauxval(AT_HWCAP) and a pointer to a struct for AT_HWCAP2 and
potentially others), but more uniform, and avoiding the source
(in)compatibility issues of adding new fields to a struct [even if it
does have a size_t to "version" it like the arm64 ifunc struct].

yes, it means everyone pays to get all the hwprobes, but that gets
amortized. and lookup in the ifunc resolver is simple and quick. if we
know that the keys will be kept dense, we can even have code in ifunc
resolvers like

if (probes[RISCV_HWPROBE_BASE_BEHAVIOR_IMA].value & RISCV_HWPROBE_IMA_V) ...

though personally for the "big ticket items" that get a letter to
themselves like V, i'd be tempted to pass `(getauxval(AT_HWCAP),
probe_count, probes_ptr)` to the resolver, but i hear that's
controversial :-)
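
with a convention like that, a resolver might look roughly like this
(purely hypothetical signature and names, following the sketch above
rather than anything glibc actually implements):

static void *
memcpy_resolver (size_t probe_count, const struct riscv_hwprobe *probes)
{
  /* assumes the loader filled probes[] densely indexed by key, as in
     the sketch above.  */
  if (probe_count > RISCV_HWPROBE_KEY_CPUPERF_0
      && (probes[RISCV_HWPROBE_KEY_CPUPERF_0].value
          & RISCV_HWPROBE_MISALIGNED_MASK) == RISCV_HWPROBE_MISALIGNED_FAST)
    return __memcpy_noalignment;
  return __memcpy_generic;
}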

> -Evan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-07 22:48             ` enh
@ 2023-08-08  0:01               ` Evan Green
  2023-08-12  0:01                 ` enh
  0 siblings, 1 reply; 27+ messages in thread
From: Evan Green @ 2023-08-08  0:01 UTC (permalink / raw)
  To: enh
  Cc: Richard Henderson, Florian Weimer, libc-alpha, slewis, palmer, vineetg

On Mon, Aug 7, 2023 at 3:48 PM enh <enh@google.com> wrote:
>
> On Mon, Aug 7, 2023 at 3:11 PM Evan Green <evan@rivosinc.com> wrote:
> > [...]
>
> (Android's libc maintainer here, having joined the list just to talk
> about risc-v ifuncs :-) )
>
> has anyone thought about calling ifunc resolvers more like this...
>
> --same part of the dynamic loader that caches the two getauxval()s for arm64--
> static struct riscv_hwprobe probes[] = {
>  {.key = RISCV_HWPROBE_KEY_MVENDORID},
>  {.key = RISCV_HWPROBE_KEY_MARCHID},
>  {.key = RISCV_HWPROBE_KEY_MIMPID},
>  {.key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR},
>  {.key = RISCV_HWPROBE_KEY_IMA_EXT},
>  {.key = RISCV_HWPROBE_KEY_CPUPERF_0},
> ... // every time a new key is added to the kernel, we add it here
> };
> __riscv_hwprobe(...); // called once
>
> --part of the dynamic loader that calls ifunc resolvers--
> (*ifunc_resolver)(sizeof(probes)/sizeof(probes[0]), probes);
>
> this is similar to what we already have for arm64 (where there's a
> getauxval(AT_HWCAP) and a pointer to a struct for AT_HWCAP2 and
> potentially others), but more uniform, and avoiding the source
> (in)compatibility issues of adding new fields to a struct [even if it
> does have a size_t to "version" it like the arm64 ifunc struct].
>
> yes, it means everyone pays to get all the hwprobes, but that gets
> amortized. and lookup in the ifunc resolver is simple and quick. if we
> know that the keys will be kept dense, we can even have code in ifunc
> resolvers like
>
> if (probes[RISCV_HWPROBE_BASE_BEHAVIOR_IMA].value & RISCV_HWPROBE_IMA_V) ...
>
> though personally for the "big ticket items" that get a letter to
> themselves like V, i'd be tempted to pass `(getauxval(AT_HWCAP),
> probe_count, probes_ptr)` to the resolver, but i hear that's
> controversial :-)

Hello, welcome to the fun! :)

What you're describing here is almost exactly what we did inside the
vDSO function. The vDSO function acts as a front for a handful of
probe values that we've already completed and cached in userspace. We
opted to make it a function, rather than exposing the data itself via
vDSO, so that we had future flexibility in what elements we cached in
userspace and their storage format. We can update the kernel as needed
to cache the hottest things in userspace, even if that means
rearranging the data format, passing through some extra information,
or adding an extra snip of code. My hope is callers can directly
interact with the vDSO function (though, as Richard suggested, maybe
with the help of a tidy inline helper), rather than trying to
add a second layer of userspace caching.

-Evan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-08  0:01               ` Evan Green
@ 2023-08-12  0:01                 ` enh
  2023-08-15 16:40                   ` Evan Green
  0 siblings, 1 reply; 27+ messages in thread
From: enh @ 2023-08-12  0:01 UTC (permalink / raw)
  To: Evan Green
  Cc: Richard Henderson, Florian Weimer, libc-alpha, slewis, palmer, vineetg

On Mon, Aug 7, 2023 at 5:01 PM Evan Green <evan@rivosinc.com> wrote:
>
> On Mon, Aug 7, 2023 at 3:48 PM enh <enh@google.com> wrote:
> > [...]
>
> Hello, welcome to the fun! :)

(sorry for the delay. i've been thinking :-) )

> What you're describing here is almost exactly what we did inside the
> vDSO function. The vDSO function acts as a front for a handful of
> probe values that we've already completed and cached in userspace. We
> opted to make it a function, rather than exposing the data itself via
> vDSO, so that we had future flexibility in what elements we cached in
> userspace and their storage format. We can update the kernel as needed
> to cache the hottest things in userspace, even if that means
> rearranging the data format, passing through some extra information,
> or adding an extra snip of code. My hope is callers can directly
> interact with the vDSO function (though, as Richard suggested, maybe
> with the help of a tidy inline helper), rather than trying to
> add a second layer of userspace caching.

on reflection i think i might be too focused on the FMV use case, in
part because we're looking at those compiler-generated ifuncs for
arm64 on Android atm. i think i'm imagining a world where there's a
lot of that, and worrying about having to pay for the setup, call, and
loop for each ifunc, and wondering why we don't just pay once instead.
(as a bit of background context, Android "app" start is actually a
dlopen() in a clone of an existing zygote process, and in general app
launch time is one of the key metrics anyone who's serious is
optimizing for. you'd be surprised how much of my life i spend
explaining to people that if they want dlopen() to be faster, maybe
they shouldn't ask us to run thousands of ELF constructors.)

but... the more time i spend looking at what we actually need in
third-party open source libraries right now i realize that libc and
FMV (which is still a future thing for us anyway) are really the only
_actual_ ifunc users. perhaps in part because macOS/iOS don't have
ifuncs, all the libraries that are part of the OS itself, for example,
are just doing their own thing with function pointers and
pthread_once() or whatever.

(i have yet to try to get any data on actual apps. i have no reason to
think they'll be very different, but that could easily be skewed by
popular middleware or a popular game engine using ifuncs, so i do plan
on following up on that.)

"how do they decide what to set that function pointer to?". well, it
looks like in most cases cpuid on x86 and calls to getauxval()
everywhere else. in some cases that's actually via some other library:
https://github.com/pytorch/cpuinfo or
https://github.com/google/cpu_features for example. so they have a
layer of caching there, even in cases where they don't have a single
function that sets all the function pointers.

so assuming i don't find that apps look very different from the OS
(that is: that apps use lots of ifuncs), i probably don't care at all
until we get to FMV. and i probably don't care for FMV, because
compiler-rt (or gcc's equivalent) will be the "caching layer" there.
(and on Android it'll be a while before i have to worry about libc's
ifuncs because we'll require V and not use ifuncs there for the
foreseeable future.)

so, yeah, given that i've adopted the "pass a null pointer rather than
no arguments" convention you have, we have room for expansion if/when
FMV is a big thing, and until then -- unless i'm shocked by what i
find looking at actual apps -- i don't think i have any reason to
believe that ifuncs matter that much, and if compiler-rt makes one
__riscv_hwprobe() call per .so, that's probably fine. (i already spend
a big chunk of my life advising people to just have one .so file,
exporting nothing but a JNI_OnLoad symbol, so this will just make that
advice even better advice :-) )

> -Evan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-12  0:01                 ` enh
@ 2023-08-15 16:40                   ` Evan Green
  2023-08-15 21:53                     ` enh
  0 siblings, 1 reply; 27+ messages in thread
From: Evan Green @ 2023-08-15 16:40 UTC (permalink / raw)
  To: enh
  Cc: Richard Henderson, Florian Weimer, libc-alpha, slewis, palmer, vineetg

On Fri, Aug 11, 2023 at 5:01 PM enh <enh@google.com> wrote:
>
> On Mon, Aug 7, 2023 at 5:01 PM Evan Green <evan@rivosinc.com> wrote:
> > [...]
> >
> > Hello, welcome to the fun! :)
>
> (sorry for the delay. i've been thinking :-) )
>
> > What you're describing here is almost exactly what we did inside the
> > vDSO function. The vDSO function acts as a front for a handful of
> > probe values that we've already completed and cached in userspace. We
> > opted to make it a function, rather than exposing the data itself via
> > vDSO, so that we had future flexibility in what elements we cached in
> > userspace and their storage format. We can update the kernel as needed
> > to cache the hottest things in userspace, even if that means
> > rearranging the data format, passing through some extra information,
> > or adding an extra snip of code. My hope is callers can directly
> > interact with the vDSO function (though, as Richard suggested, maybe
> > with the help of a tidy inline helper), rather than trying to
> > add a second layer of userspace caching.
>
> on reflection i think i might be too focused on the FMV use case, in
> part because we're looking at those compiler-generated ifuncs for
> arm64 on Android atm. i think i'm imagining a world where there's a
> lot of that, and worrying about having to pay for the setup, call, and
> loop for each ifunc, and wondering why we don't just pay once instead.
> (as a bit of background context, Android "app" start is actually a
> dlopen() in a clone of an existing zygote process, and in general app
> launch time is one of the key metrics anyone who's serious is
> optimizing for. you'd be surprised how much of my life i spend
> explaining to people that if they want dlopen() to be faster, maybe
> they shouldn't ask us to run thousands of ELF constructors.)
>
> but... the more time i spend looking at what we actually need in
> third-party open source libraries right now i realize that libc and
> FMV (which is still a future thing for us anyway) are really the only
> _actual_ ifunc users. perhaps in part because macOS/iOS don't have
> ifuncs, all the libraries that are part of the OS itself, for example,
> are just doing their own thing with function pointers and
> pthread_once() or whatever.
>
> (i have yet to try to get any data on actual apps. i have no reason to
> think they'll be very different, but that could easily be skewed by
> popular middleware or a popular game engine using ifuncs, so i do plan
> on following up on that.)
>
> "how do they decide what to set that function pointer to?". well, it
> looks like in most cases cpuid on x86 and calls to getauxval()
> everywhere else. in some cases that's actually via some other library:
> https://github.com/pytorch/cpuinfo or
> https://github.com/google/cpu_features for example. so they have a
> layer of caching there, even in cases where they don't have a single
> function that sets all the function pointers.

Right, function multi-versioning is just the sort of spot where we'd
imagine hwprobe gets used, since it's providing similar/equivalent
information to what cpuid does on x86. It may not be quite as fast as
cpuid (I don't know how fast cpuid actually is). But with the vDSO
function+data in userspace it should be able to match getauxval() in
performance, as they're both a function pointer plus a loop. We're
sort of planning for a world in which RISC-V has a wider set of these
values to fetch, such that an ifunc selector may need a more complex
set of information. Hwprobe and the vDSO give us the ability both to
answer multiple queries fast and to freely allocate more keys that may
represent versioned features or even compound features.
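
For instance, a single batched call can pull several keys at once; a
sketch, with key names as in the series' sys/hwprobe.h and the kernel
UAPI (the exact set of keys here is just illustrative):

  struct riscv_hwprobe pairs[] = {
    { .key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR },
    { .key = RISCV_HWPROBE_KEY_IMA_EXT_0 },
    { .key = RISCV_HWPROBE_KEY_CPUPERF_0 },
  };

  if (__riscv_hwprobe (pairs, sizeof pairs / sizeof pairs[0], 0, NULL, 0) == 0)
    {
      /* Each pairs[i].value now holds the result for its key; the key
         is rewritten to -1 if the kernel does not recognize it.  */
    }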

>
> so assuming i don't find that apps look very different from the OS
> (that is: that apps use lots of ifuncs), i probably don't care at all
> until we get to FMV. and i probably don't care for FMV, because
> compiler-rt (or gcc's equivalent) will be the "caching layer" there.
> (and on Android it'll be a while before i have to worry about libc's
> ifuncs because we'll require V and not use ifuncs there for the
> foreseeable future.)
>
> so, yeah, given that i've adopted the "pass a null pointer rather than
> no arguments" convention you have, we have room for expansion if/when
> FMV is a big thing, and until then -- unless i'm shocked by what i
> find looking at actual apps -- i don't think i have any reason to
> believe that ifuncs matter that much, and if compiler-rt makes one
> __riscv_hwprobe() call per .so, that's probably fine. (i already spend
> a big chunk of my life advising people to just have one .so file,
> exporting nothing but a JNI_OnLoad symbol, so this will just make that
> advice even better advice :-) )

Just to confirm, by "pass a null pointer", you're saying that the
Android libc also passes NULL as the second ifunc selector argument
(or first)? That's good. It sounds like you're planning to just
continue passing NULL for now, and wait for people to start clamoring
for this in android libc?
-Evan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-15 16:40                   ` Evan Green
@ 2023-08-15 21:53                     ` enh
  2023-08-15 23:01                       ` Evan Green
  0 siblings, 1 reply; 27+ messages in thread
From: enh @ 2023-08-15 21:53 UTC (permalink / raw)
  To: Evan Green
  Cc: Richard Henderson, Florian Weimer, libc-alpha, slewis, palmer, vineetg

On Tue, Aug 15, 2023 at 9:41 AM Evan Green <evan@rivosinc.com> wrote:
>
> On Fri, Aug 11, 2023 at 5:01 PM enh <enh@google.com> wrote:
> >
> > On Mon, Aug 7, 2023 at 5:01 PM Evan Green <evan@rivosinc.com> wrote:
> > >
> > > On Mon, Aug 7, 2023 at 3:48 PM enh <enh@google.com> wrote:
> > > >
> > > > On Mon, Aug 7, 2023 at 3:11 PM Evan Green <evan@rivosinc.com> wrote:
> > > > >
> > > > > On Thu, Aug 3, 2023 at 3:30 PM Richard Henderson
> > > > > <richard.henderson@linaro.org> wrote:
> > > > > >
> > > > > > On 8/3/23 11:42, Evan Green wrote:
> > > > > > > On Thu, Aug 3, 2023 at 10:50 AM Richard Henderson
> > > > > > > <richard.henderson@linaro.org> wrote:
> > > > > > >> Outside libc something is required.
> > > > > > >>
> > > > > > >> An extra parameter to ifunc is surprising though, and clearly not ideal per the extra
> > > > > > >> hoops above.  I would hope for something with hidden visibility in libc_nonshared.a that
> > > > > > >> could always be called directly.
> > > > > > >
> > > > > > > My previous spin took that approach, defining a
> > > > > > > __riscv_hwprobe_early() in libc_nonshared that could route to the real
> > > > > > > function if available, or make the syscall directly if not. But that
> > > > > > > approach had the drawback that ifunc users couldn't take advantage of
> > > > > > > the vDSO, and then all users had to comprehend the difference between
> > > > > > > __riscv_hwprobe() and __riscv_hwprobe_early().
> > > > > >
> > > > > > I would define __riscv_hwprobe such that it could take advantage of the vDSO once
> > > > > > initialization reaches a certain point, but cope with being run earlier than that point by
> > > > > > falling back to the syscall.
> > > > > >
> > > > > > That constrains the implementation, I guess, in that it can't set errno, but just
> > > > > > returning the negative errno from the syscall seems fine.
> > > > > >
> > > > > > It might be tricky to get a reference to GLRO(dl_vdso_riscv_hwprobe) very early, but I
> > > > > > would hope that some application of __attribute__((weak)) might correctly get you a NULL
> > > > > > prior to full relocations being complete.
> > > > >
> > > > > Right, this is what we had in the previous iteration of this series,
> > > > > and it did work ok. But it wasn't as good since it meant ifunc
> > > > > selectors always got stuck in the null/fallback case and were forced
> > > > > to make the syscall. With this mechanism they get to take advantage of
> > > > > the vDSO.
> > > > >
> > > > > >
> > > > > >
> > > > > > > In contrast, IMO this approach is much nicer. Ifunc writers are
> > > > > > > already used to getting hwcap info via a parameter. Adding this second
> > > > > > > parameter, which also provides hwcap-like things, seems like a natural
> > > > > > > extension. I didn't quite follow what you meant by the "extra hoops
> > > > > > > above".
> > > > > >
> > > > > > The check for null function pointer, for sure.  But also consider how __riscv_hwprobe is
> > > > > > going to be used.
> > > > > >
> > > > > > It might be worth defining some helper functions for probing a single key or a single
> > > > > > field.  E.g.
> > > > > >
> > > > > > uint64_t __riscv_hwprobe_one_key(int64_t key, unsigned int flags)
> > > > > > {
> > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > >    int err = __riscv_hwprobe(&pair, 1, 0, NULL, flags);
> > > > > >    if (err)
> > > > > >      return err;
> > > > > >    if (pair.key == -1)
> > > > > >      return -ENOENT;
> > > > > >    return pair.value;
> > > > > > }
> > > > > >
> > > > > > This implementation requires that no future hwprobe key define a value which is a valid
> > > > > > value in the errno range (or better, bit 63 unused).  Alternately, or additionally:
> > > > > >
> > > > > > bool __riscv_hwprobe_one_mask(int64_t key, uint64_t mask, uint64_t val, int flags)
> > > > > > {
> > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > >    return (__riscv_hwprobe(&pair, 1, 0, NULL, flags) == 0
> > > > > >            && pair.key != -1
> > > > > >            && (pair.value & mask) == val);
> > > > > > }
> > > > > >
> > > > > > These yield either
> > > > > >
> > > > > >      int64_t v = __riscv_hwprobe_one_key(CPUPERF_0, 0);
> > > > > >      if (v >= 0 && (v & MISALIGNED_MASK) == MISALIGNED_FAST)
> > > > > >        return __memcpy_noalignment;
> > > > > >      return __memcpy_generic;
> > > > > >
> > > > > > or
> > > > > >
> > > > > >      if (__riscv_hwprobe_one_mask(CPUPERF_0, MISALIGNED_MASK, MISALIGNED_FAST, 0))
> > > > > >        return __memcpy_noalignment;
> > > > > >      return __memcpy_generic;
> > > > > >
> > > > > > which to my mind looks much better for a pattern you'll be replicating so very many times
> > > > > > across all of the ifunc implementations in the system.
> > > > >
> > > > > Ah, I see. I could make a static inline function in the header that
> > > > > looks something like this (mangled by gmail, sorry):
> > > > >
> > > > > /* Helper function usable from ifunc selectors that probes a single key. */
> > > > > static inline int __riscv_hwprobe_one(__riscv_hwprobe_t hwprobe_func,
> > > > >                                        signed long long int key,
> > > > >                                        unsigned long long int *value)
> > > > > {
> > > > >   struct riscv_hwprobe pair;
> > > > >   int rc;
> > > > >
> > > > >   if (!hwprobe_func)
> > > > >     return -ENOSYS;
> > > > >
> > > > >   pair.key = key;
> > > > >   rc = hwprobe_func(&pair, 1, 0, NULL, 0);
> > > > >   if (rc) {
> > > > >     return rc;
> > > > >   }
> > > > >
> > > > >   if (pair.key < 0) {
> > > > >     return -ENOENT;
> > > > >   }
> > > > >
> > > > >   *value = pair.value;
> > > > >   return 0;
> > > > > }
> > > > >
> > > > > The ifunc selector would then be significantly cleaned up, looking
> > > > > something like:
> > > > >
> > > > > if (__riscv_hwprobe_one(hwprobe_func, RISCV_HWPROBE_KEY_CPUPERF_0, &value))
> > > > >   return __memcpy_generic;
> > > > >
> > > > > if ((value & RISCV_HWPROBE_MISALIGNED_MASK) == RISCV_HWPROBE_MISALIGNED_FAST)
> > > > >   return __memcpy_noalignment;
> > > >
> > > > (Android's libc maintainer here, having joined the list just to talk
> > > > about risc-v ifuncs :-) )
> > > >
> > > > has anyone thought about calling ifunc resolvers more like this...
> > > >
> > > > --same part of the dynamic loader that caches the two getauxval()s for arm64--
> > > > static struct riscv_hwprobe probes[] = {
> > > >  {.key = RISCV_HWPROBE_KEY_MVENDORID},
> > > >  {.key = RISCV_HWPROBE_KEY_MARCHID},
> > > >  {.key = RISCV_HWPROBE_KEY_MIMPID},
> > > >  {.key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR},
> > > >  {.key = RISCV_HWPROBE_KEY_IMA_EXT},
> > > >  {.key = RISCV_HWPROBE_KEY_CPUPERF_0},
> > > > ... // every time a new key is added to the kernel, we add it here
> > > > };
> > > > __riscv_hwprobe(...); // called once
> > > >
> > > > --part of the dynamic loader that calls ifunc resolvers--
> > > > (*ifunc_resolver)(sizeof(probes)/sizeof(probes[0]), probes);
> > > >
> > > > this is similar to what we already have for arm64 (where there's a
> > > > getauxval(AT_HWCAP) and a pointer to a struct for AT_HWCAP2 and
> > > > potentially others), but more uniform, and avoiding the source
> > > > (in)compatibility issues of adding new fields to a struct [even if it
> > > > does have a size_t to "version" it like the arm64 ifunc struct].
> > > >
> > > > yes, it means everyone pays to get all the hwprobes, but that gets
> > > > amortized. and lookup in the ifunc resolver is simple and quick. if we
> > > > know that the keys will be kept dense, we can even have code in ifunc
> > > > resolvers like
> > > >
> > > > if (probes[RISCV_HWPROBE_BASE_BEHAVIOR_IMA].value & RISCV_HWPROBE_IMA_V) ...
> > > >
> > > > though personally for the "big ticket items" that get a letter to
> > > > themselves like V, i'd be tempted to pass `(getauxval(AT_HWCAP),
> > > > probe_count, probes_ptr)` to the resolver, but i hear that's
> > > > controversial :-)
> > >
> > > Hello, welcome to the fun! :)
> >
> > (sorry for the delay. i've been thinking :-) )
> >
> > > What you're describing here is almost exactly what we did inside the
> > > vDSO function. The vDSO function acts as a front for a handful of
> > > probe values that we've already completed and cached in userspace. We
> > > opted to make it a function, rather than exposing the data itself via
> > > vDSO, so that we had future flexibility in what elements we cached in
> > > userspace and their storage format. We can update the kernel as needed
> > > to cache the hottest things in userspace, even if that means
> > > rearranging the data format, passing through some extra information,
> > > or adding an extra snip of code. My hope is callers can directly
> > > interact with the vDSO function (though maybe as Richard suggested
> > > maybe with the help of a tidy inline helper), rather than trying to
> > > add a second layer of userspace caching.
> >
> > on reflection i think i might be too focused on the FMV use case, in
> > part because we're looking at those compiler-generated ifuncs for
> > arm64 on Android atm. i think i'm imagining a world where there's a
> > lot of that, and worrying about having to pay for the setup, call, and
> > loop for each ifunc, and wondering why we don't just pay once instead.
> > (as a bit of background context, Android "app" start is actually a
> > dlopen() in a clone of an existing zygote process, and in general app
> > launch time is one of the key metrics anyone who's serious is
> > optimizing for. you'd be surprised how much of my life i spend
> > explaining to people that if they want dlopen() to be faster, maybe
> > they shouldn't ask us to run thousands of ELF constructors.)
> >
> > but... the more time i spend looking at what we actually need in
> > third-party open source libraries right now i realize that libc and
> > FMV (which is still a future thing for us anyway) are really the only
> > _actual_ ifunc users. perhaps in part because macOS/iOS don't have
> > ifuncs, all the libraries that are part of the OS itself, for example,
> > are just doing their own thing with function pointers and
> > pthread_once() or whatever.
> >
> > (i have yet to try to get any data on actual apps. i have no reason to
> > think they'll be very different, but that could easily be skewed by
> > popular middleware or a popular game engine using ifuncs, so i do plan
> > on following up on that.)
> >
> > "how do they decide what to set that function pointer to?". well, it
> > looks like in most cases cpuid on x86 and calls to getauxval()
> > everywhere else. in some cases that's actually via some other library:
> > https://github.com/pytorch/cpuinfo or
> > https://github.com/google/cpu_features for example. so they have a
> > layer of caching there, even in cases where they don't have a single
> > function that sets all the function pointers.
>
> Right, function multi-versioning is just the sort of spot where we'd
> imagine hwprobe gets used, since it's providing similar/equivalent
> information to what cpuid does on x86. It may not be quite as fast as
> cpuid (I don't know how fast cpuid actually is). But with the vDSO
> function+data in userspace it should be able to match getauxval() in
> performance, as they're both a function pointer plus a loop. We're
> sort of planning for a world in which RISC-V has a wider set of these
> values to fetch, such that an ifunc selector may need a more complex
> set of information. Hwprobe and the vDSO give us the ability both to
> answer multiple queries fast and to freely allocate more keys that may
> represent versioned features or even compound features.

yeah, my incorrect mental model was that -- primarily because of
x86-64 and cpuid -- every function would get its own ifunc resolver
that would have to make a query. but the [in progress] arm64
implementation shows that that's not really the case anyway, and we
can just cache __riscv_hwprobe() in the same [one] place that
getauxval() is already being cached for arm64.

> > so assuming i don't find that apps look very different from the OS
> > (that is: that apps use lots of ifuncs), i probably don't care at all
> > until we get to FMV. and i probably don't care for FMV, because
> > compiler-rt (or gcc's equivalent) will be the "caching layer" there.
> > (and on Android it'll be a while before i have to worry about libc's
> > ifuncs because we'll require V and not use ifuncs there for the
> > foreseeable future.)
> >
> > so, yeah, given that i've adopted the "pass a null pointer rather than
> > no arguments" convention you have, we have room for expansion if/when
> > FMV is a big thing, and until then -- unless i'm shocked by what i
> > find looking at actual apps -- i don't think i have any reason to
> > believe that ifuncs matter that much, and if compiler-rt makes one
> > __riscv_hwprobe() call per .so, that's probably fine. (i already spend
> > a big chunk of my life advising people to just have one .so file,
> > exporting nothing but a JNI_OnLoad symbol, so this will just make that
> > advice even better advice :-) )
>
> Just to confirm, by "pass a null pointer", you're saying that the
> Android libc also passes NULL as the second ifunc selector argument
> (or first)?

#elif defined(__riscv)
  // This argument and its value is just a placeholder for now,
  // but it means that if we do pass something in future (such as
  // getauxval() and/or hwprobe key/value pairs), callees will be able to
  // recognize what they're being given.
  typedef ElfW(Addr) (*ifunc_resolver_t)(void*);
  return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(nullptr);

it's arm64 that has the initial getauxval() argument:

#if defined(__aarch64__)
  typedef ElfW(Addr) (*ifunc_resolver_t)(uint64_t, __ifunc_arg_t*);
  static __ifunc_arg_t arg;
  static bool initialized = false;
  if (!initialized) {
    initialized = true;
    arg._size = sizeof(__ifunc_arg_t);
    arg._hwcap = getauxval(AT_HWCAP);
    arg._hwcap2 = getauxval(AT_HWCAP2);
  }
  return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(arg._hwcap
| _IFUNC_ARG_HWCAP, &arg);

https://android.googlesource.com/platform/bionic/+/main/libc/bionic/bionic_call_ifunc_resolver.cpp

> That's good. It sounds like you're planning to just
> continue passing NULL for now, and wait for people to start clamoring
> for this in android libc?

yeah, and i'm assuming there will never be any clamor ... yesterday
and today i actually checked a bunch of popular apks, and didn't find
any that were currently using ifuncs.

the only change i'm thinking of making right now is that "there's a
single argument, and it's null" should probably be the default.
obviously since Android doesn't add new architectures very often, this
is only likely to affect x86/x86-64 for the foreseeable future, but
being able to recognize at a glance "am i running under a libc new
enough to pass me arguments?" would certainly have helped for arm64.
even if x86/x86-64 never benefit, it seems like the right default for
the #else clause...
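
i.e. something like this (just a sketch, mirroring the riscv case
above; nothing's committed anywhere yet):

#else
  // Default for any future architecture: a single argument, currently
  // always nullptr, so a resolver can tell whether its libc passes it
  // anything at all.
  typedef ElfW(Addr) (*ifunc_resolver_t)(void*);
  return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(nullptr);
#endif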

> -Evan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-15 21:53                     ` enh
@ 2023-08-15 23:01                       ` Evan Green
  2023-08-16 23:18                         ` enh
  0 siblings, 1 reply; 27+ messages in thread
From: Evan Green @ 2023-08-15 23:01 UTC (permalink / raw)
  To: enh
  Cc: Richard Henderson, Florian Weimer, libc-alpha, slewis, palmer, vineetg

On Tue, Aug 15, 2023 at 2:54 PM enh <enh@google.com> wrote:
>
> On Tue, Aug 15, 2023 at 9:41 AM Evan Green <evan@rivosinc.com> wrote:
> >
> > On Fri, Aug 11, 2023 at 5:01 PM enh <enh@google.com> wrote:
> > >
> > > On Mon, Aug 7, 2023 at 5:01 PM Evan Green <evan@rivosinc.com> wrote:
> > > >
> > > > On Mon, Aug 7, 2023 at 3:48 PM enh <enh@google.com> wrote:
> > > > >
> > > > > On Mon, Aug 7, 2023 at 3:11 PM Evan Green <evan@rivosinc.com> wrote:
> > > > > >
> > > > > > On Thu, Aug 3, 2023 at 3:30 PM Richard Henderson
> > > > > > <richard.henderson@linaro.org> wrote:
> > > > > > >
> > > > > > > On 8/3/23 11:42, Evan Green wrote:
> > > > > > > > On Thu, Aug 3, 2023 at 10:50 AM Richard Henderson
> > > > > > > > <richard.henderson@linaro.org> wrote:
> > > > > > > >> Outside libc something is required.
> > > > > > > >>
> > > > > > > >> An extra parameter to ifunc is surprising though, and clearly not ideal per the extra
> > > > > > > >> hoops above.  I would hope for something with hidden visibility in libc_nonshared.a that
> > > > > > > >> could always be called directly.
> > > > > > > >
> > > > > > > > My previous spin took that approach, defining a
> > > > > > > > __riscv_hwprobe_early() in libc_nonshared that could route to the real
> > > > > > > > function if available, or make the syscall directly if not. But that
> > > > > > > > approach had the drawback that ifunc users couldn't take advantage of
> > > > > > > > the vDSO, and then all users had to comprehend the difference between
> > > > > > > > __riscv_hwprobe() and __riscv_hwprobe_early().
> > > > > > >
> > > > > > > I would define __riscv_hwprobe such that it could take advantage of the vDSO once
> > > > > > > initialization reaches a certain point, but cope with being run earlier than that point by
> > > > > > > falling back to the syscall.
> > > > > > >
> > > > > > > That constrains the implementation, I guess, in that it can't set errno, but just
> > > > > > > returning the negative errno from the syscall seems fine.
> > > > > > >
> > > > > > > It might be tricky to get a reference to GLRO(dl_vdso_riscv_hwprobe) very early, but I
> > > > > > > would hope that some application of __attribute__((weak)) might correctly get you a NULL
> > > > > > > prior to full relocations being complete.
> > > > > >
> > > > > > Right, this is what we had in the previous iteration of this series,
> > > > > > and it did work ok. But it wasn't as good since it meant ifunc
> > > > > > selectors always got stuck in the null/fallback case and were forced
> > > > > > to make the syscall. With this mechanism they get to take advantage of
> > > > > > the vDSO.
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > In contrast, IMO this approach is much nicer. Ifunc writers are
> > > > > > > > already used to getting hwcap info via a parameter. Adding this second
> > > > > > > > parameter, which also provides hwcap-like things, seems like a natural
> > > > > > > > extension. I didn't quite follow what you meant by the "extra hoops
> > > > > > > > above".
> > > > > > >
> > > > > > > The check for null function pointer, for sure.  But also consider how __riscv_hwprobe is
> > > > > > > going to be used.
> > > > > > >
> > > > > > > It might be worth defining some helper functions for probing a single key or a single
> > > > > > > field.  E.g.
> > > > > > >
> > > > > > > uint64_t __riscv_hwprobe_one_key(int64_t key, unsigned int flags)
> > > > > > > {
> > > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > > >    int err = __riscv_hwprobe(&pair, 1, 0, NULL, flags);
> > > > > > >    if (err)
> > > > > > >      return err;
> > > > > > >    if (pair.key == -1)
> > > > > > >      return -ENOENT;
> > > > > > >    return pair.value;
> > > > > > > }
> > > > > > >
> > > > > > > This implementation requires that no future hwprobe key define a value which is a valid
> > > > > > > value in the errno range (or better, bit 63 unused).  Alternately, or additionally:
> > > > > > >
> > > > > > > bool __riscv_hwprobe_one_mask(int64_t key, uint64_t mask, uint64_t val, int flags)
> > > > > > > {
> > > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > > >    return (__riscv_hwprobe(&pair, 1, 0, NULL, flags) == 0
> > > > > > >            && pair.key != -1
> > > > > > >            && (pair.value & mask) == val);
> > > > > > > }
> > > > > > >
> > > > > > > These yield either
> > > > > > >
> > > > > > >      int64_t v = __riscv_hwprobe_one_key(CPUPERF_0, 0);
> > > > > > >      if (v >= 0 && (v & MISALIGNED_MASK) == MISALIGNED_FAST)
> > > > > > >        return __memcpy_noalignment;
> > > > > > >      return __memcpy_generic;
> > > > > > >
> > > > > > > or
> > > > > > >
> > > > > > >      if (__riscv_hwprobe_one_mask(CPUPERF_0, MISALIGNED_MASK, MISALIGNED_FAST, 0))
> > > > > > >        return __memcpy_noalignment;
> > > > > > >      return __memcpy_generic;
> > > > > > >
> > > > > > > which to my mind looks much better for a pattern you'll be replicating so very many times
> > > > > > > across all of the ifunc implementations in the system.
> > > > > >
> > > > > > Ah, I see. I could make a static inline function in the header that
> > > > > > looks something like this (mangled by gmail, sorry):
> > > > > >
> > > > > > /* Helper function usable from ifunc selectors that probes a single key. */
> > > > > > static inline int __riscv_hwprobe_one(__riscv_hwprobe_t hwprobe_func,
> > > > > > signed long long int key,
> > > > > > unsigned long long int *value)
> > > > > > {
> > > > > > struct riscv_hwprobe pair;
> > > > > > int rc;
> > > > > >
> > > > > > if (!hwprobe_func)
> > > > > > return -ENOSYS;
> > > > > >
> > > > > > pair.key = key;
> > > > > > rc = hwprobe_func(&pair, 1, 0, NULL, 0);
> > > > > > if (rc) {
> > > > > > return rc;
> > > > > > }
> > > > > >
> > > > > > if (pair.key < 0) {
> > > > > > return -ENOENT;
> > > > > > }
> > > > > >
> > > > > > *value = pair.value;
> > > > > > return 0;
> > > > > > }
> > > > > >
> > > > > > The ifunc selector would then be significantly cleaned up, looking
> > > > > > something like:
> > > > > >
> > > > > > if (__riscv_hwprobe_one(hwprobe_func, RISCV_HWPROBE_KEY_CPUPERF_0, &value))
> > > > > > return __memcpy_generic;
> > > > > >
> > > > > > if ((value & RISCV_HWPROBE_MISALIGNED_MASK) == RISCV_HWPROBE_MISALIGNED_FAST)
> > > > > > return __memcpy_noalignment;
> > > > >
> > > > > (Android's libc maintainer here, having joined the list just to talk
> > > > > about risc-v ifuncs :-) )
> > > > >
> > > > > has anyone thought about calling ifunc resolvers more like this...
> > > > >
> > > > > --same part of the dynamic loader that caches the two getauxval()s for arm64--
> > > > > static struct riscv_hwprobe probes[] = {
> > > > >  {.value = RISCV_HWPROBE_KEY_MVENDORID},
> > > > >  {.value = RISCV_HWPROBE_KEY_MARCHID},
> > > > >  {.value = RISCV_HWPROBE_KEY_MIMPID},
> > > > >  {.value = RISCV_HWPROBE_KEY_BASE_BEHAVIOR},
> > > > >  {.value = RISCV_HWPROBE_KEY_IMA_EXT},
> > > > >  {.value = RISCV_HWPROBE_KEY_CPUPERF_0},
> > > > > ... // every time a new key is added to the kernel, we add it here
> > > > > };
> > > > > __riscv_hwprobe(...); // called once
> > > > >
> > > > > --part of the dynamic loader that calls ifunc resolvers--
> > > > > (*ifunc_resolver)(sizeof(probes)/sizeof(probes[0]), probes);
> > > > >
> > > > > this is similar to what we already have for arm64 (where there's a
> > > > > getauxval(AT_HWCAP) and a pointer to a struct for AT_HWCAP2 and
> > > > > potentially others), but more uniform, and avoiding the source
> > > > > (in)compatibility issues of adding new fields to a struct [even if it
> > > > > does have a size_t to "version" it like the arm64 ifunc struct].
> > > > >
> > > > > yes, it means everyone pays to get all the hwprobes, but that gets
> > > > > amortized. and lookup in the ifunc resolver is simple and quick. if we
> > > > > know that the keys will be kept dense, we can even have code in ifunc
> > > > > resolvers like
> > > > >
> > > > > if (probes[RISCV_HWPROBE_BASE_BEHAVIOR_IMA].value & RISCV_HWPROBE_IMA_V) ...
> > > > >
> > > > > though personally for the "big ticket items" that get a letter to
> > > > > themselves like V, i'd be tempted to pass `(getauxval(AT_HWCAP),
> > > > > probe_count, probes_ptr)` to the resolver, but i hear that's
> > > > > controversial :-)
> > > >
> > > > Hello, welcome to the fun! :)
> > >
> > > (sorry for the delay. i've been thinking :-) )
> > >
> > > > What you're describing here is almost exactly what we did inside the
> > > > vDSO function. The vDSO function acts as a front for a handful of
> > > > probe values that we've already completed and cached in userspace. We
> > > > opted to make it a function, rather than exposing the data itself via
> > > > vDSO, so that we had future flexibility in what elements we cached in
> > > > userspace and their storage format. We can update the kernel as needed
> > > > to cache the hottest things in userspace, even if that means
> > > > rearranging the data format, passing through some extra information,
> > > > or adding an extra snip of code. My hope is callers can directly
> > > > interact with the vDSO function (though maybe as Richard suggested
> > > > maybe with the help of a tidy inline helper), rather than trying to
> > > > add a second layer of userspace caching.
> > >
> > > on reflection i think i might be too focused on the FMV use case, in
> > > part because we're looking at those compiler-generated ifuncs for
> > > arm64 on Android atm. i think i'm imagining a world where there's a
> > > lot of that, and worrying about having to pay for the setup, call, and
> > > loop for each ifunc, and wondering why we don't just pay once instead.
> > > (as a bit of background context, Android "app" start is actually a
> > > dlopen() in a clone of an existing zygote process, and in general app
> > > launch time is one of the key metrics anyone who's serious is
> > > optimizing for. you'd be surprised how much of my life i spend
> > > explaining to people that if they want dlopen() to be faster, maybe
> > > they shouldn't ask us to run thousands of ELF constructors.)
> > >
> > > but... the more time i spend looking at what we actually need in
> > > third-party open source libraries right now i realize that libc and
> > > FMV (which is still a future thing for us anyway) are really the only
> > > _actual_ ifunc users. perhaps in part because macOS/iOS don't have
> > > ifuncs, all the libraries that are part of the OS itself, for example,
> > > are just doing their own thing with function pointers and
> > > pthread_once() or whatever.
> > >
> > > (i have yet to try to get any data on actual apps. i have no reason to
> > > think they'll be very different, but that could easily be skewed by
> > > popular middleware or a popular game engine using ifuncs, so i do plan
> > > on following up on that.)
> > >
> > > "how do they decide what to set that function pointer to?". well, it
> > > looks like in most cases cpuid on x86 and calls to getauxval()
> > > everywhere else. in some cases that's actually via some other library:
> > > https://github.com/pytorch/cpuinfo or
> > > https://github.com/google/cpu_features for example. so they have a
> > > layer of caching there, even in cases where they don't have a single
> > > function that sets all the function pointers.
> >
> > Right, function multi-versioning is just the sort of spot where we'd
> > imagine hwprobe gets used, since it's providing similar/equivalent
> > information to what cpuid does on x86. It may not be quite as fast as
> > cpuid (I don't know how fast cpuid actually is). But with the vDSO
> > function+data in userspace it should be able to match getauxval() in
> > performance, as they're both a function pointer plus a loop. We're
> > sort of planning for a world in which RISC-V has a wider set of these
> > values to fetch, such that an ifunc selector may need a more complex
> > set of information. Hwprobe and the vDSO give us the ability both to
> > answer multiple queries fast and to freely allocate more keys that may
> > represent versioned features or even compound features.
>
> yeah, my incorrect mental model was that -- primarily because of
> x86-64 and cpuid -- every function would get its own ifunc resolver
> that would have to make a query. but the [in progress] arm64
> implementation shows that that's not really the case anyway, and we
> can just cache __riscv_hwprobe() in the same [one] place that
> getauxval() is already being cached for arm64.

Sounds good.

>
> > > so assuming i don't find that apps look very different from the OS
> > > (that is: that apps use lots of ifuncs), i probably don't care at all
> > > until we get to FMV. and i probably don't care for FMV, because
> > > compiler-rt (or gcc's equivalent) will be the "caching layer" there.
> > > (and on Android it'll be a while before i have to worry about libc's
> > > ifuncs because we'll require V and not use ifuncs there for the
> > > foreseeable future.)
> > >
> > > so, yeah, given that i've adopted the "pass a null pointer rather than
> > > no arguments" convention you have, we have room for expansion if/when
> > > FMV is a big thing, and until then -- unless i'm shocked by what i
> > > find looking at actual apps -- i don't think i have any reason to
> > > believe that ifuncs matter that much, and if compiler-rt makes one
> > > __riscv_hwprobe() call per .so, that's probably fine. (i already spend
> > > a big chunk of my life advising people to just have one .so file,
> > > exporting nothing but a JNI_OnLoad symbol, so this will just make that
> > > advice even better advice :-) )
> >
> > Just to confirm, by "pass a null pointer", you're saying that the
> > Android libc also passes NULL as the second ifunc selector argument
> > (or first)?
>
> #elif defined(__riscv)
>   // This argument and its value is just a placeholder for now,
>   // but it means that if we do pass something in future (such as
>   // getauxval() and/or hwprobe key/value pairs), callees will be able to
>   // recognize what they're being given.
>   typedef ElfW(Addr) (*ifunc_resolver_t)(void*);
>   return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(nullptr);
>
> it's arm64 that has the initial getauxval() argument:
>
> #if defined(__aarch64__)
>   typedef ElfW(Addr) (*ifunc_resolver_t)(uint64_t, __ifunc_arg_t*);
>   static __ifunc_arg_t arg;
>   static bool initialized = false;
>   if (!initialized) {
>     initialized = true;
>     arg._size = sizeof(__ifunc_arg_t);
>     arg._hwcap = getauxval(AT_HWCAP);
>     arg._hwcap2 = getauxval(AT_HWCAP2);
>   }
>   return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(arg._hwcap
> | _IFUNC_ARG_HWCAP, &arg);
>
> https://android.googlesource.com/platform/bionic/+/main/libc/bionic/bionic_call_ifunc_resolver.cpp
>
> > That's good. It sounds like you're planning to just
> > continue passing NULL for now, and wait for people to start clamoring
> > for this in android libc?
>
> yeah, and i'm assuming there will never be any clamor ... yesterday
> and today i actually checked a bunch of popular apks, and didn't find
> any that were currently using ifuncs.
>
> the only change i'm thinking of making right now is that "there's a
> single argument, and it's null" should probably be the default.
> obviously since Android doesn't add new architectures very often, this
> is only likely to affect x86/x86-64 for the foreseeable future, but
> being able to recognize at a glance "am i running under a libc new
> enough to pass me arguments?" would certainly have helped for arm64.
> even if x86/x86-64 never benefit, it seems like the right default for
> the #else clause...

Sounds good, thanks for the pointers. The paranoid person in me would
also add a comment in the risc-v section that if a pointer to hwprobe
is added, it should be added as the second argument, behind hwcap as
the first (assuming this change lands).

Come to think of it, the static inline helper I'm proposing in my
discussion with Richard needs to take both arguments, since callers
need to check both ((arg1 != 0) && (arg2 != NULL)) to safely know that
arg2 is a pointer to __riscv_hwprobe().
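
Roughly like this, maybe (just a sketch; the exact argument types
aren't settled yet):

/* Sketch: let the helper validate both ifunc arguments itself.  */
static inline int __riscv_hwprobe_one(unsigned long long int arg1,
                                      void *arg2,
                                      signed long long int key,
                                      unsigned long long int *value)
{
  __riscv_hwprobe_t hwprobe_func = (__riscv_hwprobe_t) arg2;
  struct riscv_hwprobe pair;
  int rc;

  /* Only trust arg2 when both arguments indicate the new convention.  */
  if (arg1 == 0 || hwprobe_func == NULL)
    return -ENOSYS;

  pair.key = key;
  rc = hwprobe_func(&pair, 1, 0, NULL, 0);
  if (rc)
    return rc;

  if (pair.key < 0)
    return -ENOENT;

  *value = pair.value;
  return 0;
}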

-Evan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-15 23:01                       ` Evan Green
@ 2023-08-16 23:18                         ` enh
  2023-08-17 16:27                           ` Evan Green
  0 siblings, 1 reply; 27+ messages in thread
From: enh @ 2023-08-16 23:18 UTC (permalink / raw)
  To: Evan Green
  Cc: Richard Henderson, Florian Weimer, libc-alpha, slewis, palmer, vineetg

On Tue, Aug 15, 2023 at 4:02 PM Evan Green <evan@rivosinc.com> wrote:
>
> On Tue, Aug 15, 2023 at 2:54 PM enh <enh@google.com> wrote:
> >
> > On Tue, Aug 15, 2023 at 9:41 AM Evan Green <evan@rivosinc.com> wrote:
> > >
> > > On Fri, Aug 11, 2023 at 5:01 PM enh <enh@google.com> wrote:
> > > >
> > > > On Mon, Aug 7, 2023 at 5:01 PM Evan Green <evan@rivosinc.com> wrote:
> > > > >
> > > > > On Mon, Aug 7, 2023 at 3:48 PM enh <enh@google.com> wrote:
> > > > > >
> > > > > > On Mon, Aug 7, 2023 at 3:11 PM Evan Green <evan@rivosinc.com> wrote:
> > > > > > >
> > > > > > > On Thu, Aug 3, 2023 at 3:30 PM Richard Henderson
> > > > > > > <richard.henderson@linaro.org> wrote:
> > > > > > > >
> > > > > > > > On 8/3/23 11:42, Evan Green wrote:
> > > > > > > > > On Thu, Aug 3, 2023 at 10:50 AM Richard Henderson
> > > > > > > > > <richard.henderson@linaro.org> wrote:
> > > > > > > > >> Outside libc something is required.
> > > > > > > > >>
> > > > > > > > >> An extra parameter to ifunc is surprising though, and clearly not ideal per the extra
> > > > > > > > >> hoops above.  I would hope for something with hidden visibility in libc_nonshared.a that
> > > > > > > > >> could always be called directly.
> > > > > > > > >
> > > > > > > > > My previous spin took that approach, defining a
> > > > > > > > > __riscv_hwprobe_early() in libc_nonshared that could route to the real
> > > > > > > > > function if available, or make the syscall directly if not. But that
> > > > > > > > > approach had the drawback that ifunc users couldn't take advantage of
> > > > > > > > > the vDSO, and then all users had to comprehend the difference between
> > > > > > > > > __riscv_hwprobe() and __riscv_hwprobe_early().
> > > > > > > >
> > > > > > > > I would define __riscv_hwprobe such that it could take advantage of the vDSO once
> > > > > > > > initialization reaches a certain point, but cope with being run earlier than that point by
> > > > > > > > falling back to the syscall.
> > > > > > > >
> > > > > > > > That constrains the implementation, I guess, in that it can't set errno, but just
> > > > > > > > returning the negative errno from the syscall seems fine.
> > > > > > > >
> > > > > > > > It might be tricky to get a reference to GLRO(dl_vdso_riscv_hwprobe) very early, but I
> > > > > > > > would hope that some application of __attribute__((weak)) might correctly get you a NULL
> > > > > > > > prior to full relocations being complete.
> > > > > > >
> > > > > > > Right, this is what we had in the previous iteration of this series,
> > > > > > > and it did work ok. But it wasn't as good since it meant ifunc
> > > > > > > selectors always got stuck in the null/fallback case and were forced
> > > > > > > to make the syscall. With this mechanism they get to take advantage of
> > > > > > > the vDSO.
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > > In contrast, IMO this approach is much nicer. Ifunc writers are
> > > > > > > > > already used to getting hwcap info via a parameter. Adding this second
> > > > > > > > > parameter, which also provides hwcap-like things, seems like a natural
> > > > > > > > > extension. I didn't quite follow what you meant by the "extra hoops
> > > > > > > > > above".
> > > > > > > >
> > > > > > > > The check for null function pointer, for sure.  But also consider how __riscv_hwprobe is
> > > > > > > > going to be used.
> > > > > > > >
> > > > > > > > It might be worth defining some helper functions for probing a single key or a single
> > > > > > > > field.  E.g.
> > > > > > > >
> > > > > > > > uint64_t __riscv_hwprobe_one_key(int64_t key, unsigned int flags)
> > > > > > > > {
> > > > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > > > >    int err = __riscv_hwprobe(&pair, 1, 0, NULL, flags);
> > > > > > > >    if (err)
> > > > > > > >      return err;
> > > > > > > >    if (pair.key == -1)
> > > > > > > >      return -ENOENT;
> > > > > > > >    return pair.value;
> > > > > > > > }
> > > > > > > >
> > > > > > > > This implementation requires that no future hwprobe key define a value which is a valid
> > > > > > > > value in the errno range (or better, bit 63 unused).  Alternately, or additionally:
> > > > > > > >
> > > > > > > > bool __riscv_hwprobe_one_mask(int64_t key, uint64_t mask, uint64_t val, int flags)
> > > > > > > > {
> > > > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > > > >    return (__riscv_hwprobe(&pair, 1, 0, NULL, flags) == 0
> > > > > > > >            && pair.key != -1
> > > > > > > >            && (pair.value & mask) == val);
> > > > > > > > }
> > > > > > > >
> > > > > > > > These yield either
> > > > > > > >
> > > > > > > >      int64_t v = __riscv_hwprobe_one_key(CPUPERF_0, 0);
> > > > > > > >      if (v >= 0 && (v & MISALIGNED_MASK) == MISALIGNED_FAST)
> > > > > > > >        return __memcpy_noalignment;
> > > > > > > >      return __memcpy_generic;
> > > > > > > >
> > > > > > > > or
> > > > > > > >
> > > > > > > >      if (__riscv_hwprobe_one_mask(CPUPERF_0, MISALIGNED_MASK, MISALIGNED_FAST, 0))
> > > > > > > >        return __memcpy_noalignment;
> > > > > > > >      return __memcpy_generic;
> > > > > > > >
> > > > > > > > which to my mind looks much better for a pattern you'll be replicating so very many times
> > > > > > > > across all of the ifunc implementations in the system.
> > > > > > >
> > > > > > > Ah, I see. I could make a static inline function in the header that
> > > > > > > looks something like this (mangled by gmail, sorry):
> > > > > > >
> > > > > > > /* Helper function usable from ifunc selectors that probes a single key. */
> > > > > > > static inline int __riscv_hwprobe_one(__riscv_hwprobe_t hwprobe_func,
> > > > > > > signed long long int key,
> > > > > > > unsigned long long int *value)
> > > > > > > {
> > > > > > > struct riscv_hwprobe pair;
> > > > > > > int rc;
> > > > > > >
> > > > > > > if (!hwprobe_func)
> > > > > > > return -ENOSYS;
> > > > > > >
> > > > > > > pair.key = key;
> > > > > > > rc = hwprobe_func(&pair, 1, 0, NULL, 0);
> > > > > > > if (rc) {
> > > > > > > return rc;
> > > > > > > }
> > > > > > >
> > > > > > > if (pair.key < 0) {
> > > > > > > return -ENOENT;
> > > > > > > }
> > > > > > >
> > > > > > > *value = pair.value;
> > > > > > > return 0;
> > > > > > > }
> > > > > > >
> > > > > > > The ifunc selector would then be significantly cleaned up, looking
> > > > > > > something like:
> > > > > > >
> > > > > > > if (__riscv_hwprobe_one(hwprobe_func, RISCV_HWPROBE_KEY_CPUPERF_0, &value))
> > > > > > > return __memcpy_generic;
> > > > > > >
> > > > > > > if ((value & RISCV_HWPROBE_MISALIGNED_MASK) == RISCV_HWPROBE_MISALIGNED_FAST)
> > > > > > > return __memcpy_noalignment;
> > > > > >
> > > > > > (Android's libc maintainer here, having joined the list just to talk
> > > > > > about risc-v ifuncs :-) )
> > > > > >
> > > > > > has anyone thought about calling ifunc resolvers more like this...
> > > > > >
> > > > > > --same part of the dynamic loader that caches the two getauxval()s for arm64--
> > > > > > static struct riscv_hwprobe probes[] = {
> > > > > >  {.value = RISCV_HWPROBE_KEY_MVENDORID},
> > > > > >  {.value = RISCV_HWPROBE_KEY_MARCHID},
> > > > > >  {.value = RISCV_HWPROBE_KEY_MIMPID},
> > > > > >  {.value = RISCV_HWPROBE_KEY_BASE_BEHAVIOR},
> > > > > >  {.value = RISCV_HWPROBE_KEY_IMA_EXT},
> > > > > >  {.value = RISCV_HWPROBE_KEY_CPUPERF_0},
> > > > > > ... // every time a new key is added to the kernel, we add it here
> > > > > > };
> > > > > > __riscv_hwprobe(...); // called once
> > > > > >
> > > > > > --part of the dynamic loader that calls ifunc resolvers--
> > > > > > (*ifunc_resolver)(sizeof(probes)/sizeof(probes[0]), probes);
> > > > > >
> > > > > > this is similar to what we already have for arm64 (where there's a
> > > > > > getauxval(AT_HWCAP) and a pointer to a struct for AT_HWCAP2 and
> > > > > > potentially others), but more uniform, and avoiding the source
> > > > > > (in)compatibility issues of adding new fields to a struct [even if it
> > > > > > does have a size_t to "version" it like the arm64 ifunc struct].
> > > > > >
> > > > > > yes, it means everyone pays to get all the hwprobes, but that gets
> > > > > > amortized. and lookup in the ifunc resolver is simple and quick. if we
> > > > > > know that the keys will be kept dense, we can even have code in ifunc
> > > > > > resolvers like
> > > > > >
> > > > > > if (probes[RISCV_HWPROBE_BASE_BEHAVIOR_IMA].value & RISCV_HWPROBE_IMA_V) ...
> > > > > >
> > > > > > though personally for the "big ticket items" that get a letter to
> > > > > > themselves like V, i'd be tempted to pass `(getauxval(AT_HWCAP),
> > > > > > probe_count, probes_ptr)` to the resolver, but i hear that's
> > > > > > controversial :-)
> > > > >
> > > > > Hello, welcome to the fun! :)
> > > >
> > > > (sorry for the delay. i've been thinking :-) )
> > > >
> > > > > What you're describing here is almost exactly what we did inside the
> > > > > vDSO function. The vDSO function acts as a front for a handful of
> > > > > probe values that we've already completed and cached in userspace. We
> > > > > opted to make it a function, rather than exposing the data itself via
> > > > > vDSO, so that we had future flexibility in what elements we cached in
> > > > > userspace and their storage format. We can update the kernel as needed
> > > > > to cache the hottest things in userspace, even if that means
> > > > > rearranging the data format, passing through some extra information,
> > > > > or adding an extra snip of code. My hope is callers can directly
> > > > > interact with the vDSO function (though maybe as Richard suggested
> > > > > maybe with the help of a tidy inline helper), rather than trying to
> > > > > add a second layer of userspace caching.
> > > >
> > > > on reflection i think i might be too focused on the FMV use case, in
> > > > part because we're looking at those compiler-generated ifuncs for
> > > > arm64 on Android atm. i think i'm imagining a world where there's a
> > > > lot of that, and worrying about having to pay for the setup, call, and
> > > > loop for each ifunc, and wondering why we don't just pay once instead.
> > > > (as a bit of background context, Android "app" start is actually a
> > > > dlopen() in a clone of an existing zygote process, and in general app
> > > > launch time is one of the key metrics anyone who's serious is
> > > > optimizing for. you'd be surprised how much of my life i spend
> > > > explaining to people that if they want dlopen() to be faster, maybe
> > > > they shouldn't ask us to run thousands of ELF constructors.)
> > > >
> > > > but... the more time i spend looking at what we actually need in
> > > > third-party open source libraries right now i realize that libc and
> > > > FMV (which is still a future thing for us anyway) are really the only
> > > > _actual_ ifunc users. perhaps in part because macOS/iOS don't have
> > > > ifuncs, all the libraries that are part of the OS itself, for example,
> > > > are just doing their own thing with function pointers and
> > > > pthread_once() or whatever.
> > > >
> > > > (i have yet to try to get any data on actual apps. i have no reason to
> > > > think they'll be very different, but that could easily be skewed by
> > > > popular middleware or a popular game engine using ifuncs, so i do plan
> > > > on following up on that.)
> > > >
> > > > "how do they decide what to set that function pointer to?". well, it
> > > > looks like in most cases cpuid on x86 and calls to getauxval()
> > > > everywhere else. in some cases that's actually via some other library:
> > > > https://github.com/pytorch/cpuinfo or
> > > > https://github.com/google/cpu_features for example. so they have a
> > > > layer of caching there, even in cases where they don't have a single
> > > > function that sets all the function pointers.
> > >
> > > Right, function multi-versioning is just the sort of spot where we'd
> > > imagine hwprobe gets used, since it's providing similar/equivalent
> > > information to what cpuid does on x86. It may not be quite as fast as
> > > cpuid (I don't know how fast cpuid actually is). But with the vDSO
> > > function+data in userspace it should be able to match getauxval() in
> > > performance, as they're both a function pointer plus a loop. We're
> > > sort of planning for a world in which RISC-V has a wider set of these
> > > values to fetch, such that an ifunc selector may need a more complex
> > > set of information. Hwprobe and the vDSO give us the ability both to
> > > answer multiple queries fast and to freely allocate more keys that may
> > > represent versioned features or even compound features.
> >
> > yeah, my incorrect mental model was that -- primarily because of
> > x86-64 and cpuid -- every function would get its own ifunc resolver
> > that would have to make a query. but the [in progress] arm64
> > implementation shows that that's not really the case anyway, and we
> > can just cache __riscv_hwprobe() in the same [one] place that
> > getauxval() is already being cached for arm64.
>
> Sounds good.
>
> >
> > > > so assuming i don't find that apps look very different from the OS
> > > > (that is: that apps use lots of ifuncs), i probably don't care at all
> > > > until we get to FMV. and i probably don't care for FMV, because
> > > > compiler-rt (or gcc's equivalent) will be the "caching layer" there.
> > > > (and on Android it'll be a while before i have to worry about libc's
> > > > ifuncs because we'll require V and not use ifuncs there for the
> > > > foreseeable future.)
> > > >
> > > > so, yeah, given that i've adopted the "pass a null pointer rather than
> > > > no arguments" convention you have, we have room for expansion if/when
> > > > FMV is a big thing, and until then -- unless i'm shocked by what i
> > > > find looking at actual apps -- i don't think i have any reason to
> > > > believe that ifuncs matter that much, and if compiler-rt makes one
> > > > __riscv_hwprobe() call per .so, that's probably fine. (i already spend
> > > > a big chunk of my life advising people to just have one .so file,
> > > > exporting nothing but a JNI_OnLoad symbol, so this will just make that
> > > > advice even better advice :-) )
> > >
> > > Just to confirm, by "pass a null pointer", you're saying that the
> > > Android libc also passes NULL as the second ifunc selector argument
> > > (or first)?
> >
> > #elif defined(__riscv)
> >   // This argument and its value is just a placeholder for now,
> >   // but it means that if we do pass something in future (such as
> >   // getauxval() and/or hwprobe key/value pairs), callees will be able to
> >   // recognize what they're being given.
> >   typedef ElfW(Addr) (*ifunc_resolver_t)(void*);
> >   return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(nullptr);
> >
> > it's arm64 that has the initial getauxval() argument:
> >
> > #if defined(__aarch64__)
> >   typedef ElfW(Addr) (*ifunc_resolver_t)(uint64_t, __ifunc_arg_t*);
> >   static __ifunc_arg_t arg;
> >   static bool initialized = false;
> >   if (!initialized) {
> >     initialized = true;
> >     arg._size = sizeof(__ifunc_arg_t);
> >     arg._hwcap = getauxval(AT_HWCAP);
> >     arg._hwcap2 = getauxval(AT_HWCAP2);
> >   }
> >   return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(arg._hwcap
> > | _IFUNC_ARG_HWCAP, &arg);
> >
> > https://android.googlesource.com/platform/bionic/+/main/libc/bionic/bionic_call_ifunc_resolver.cpp
> >
> > > That's good. It sounds like you're planning to just
> > > continue passing NULL for now, and wait for people to start clamoring
> > > for this in android libc?
> >
> > yeah, and i'm assuming there will never be any clamor ... yesterday
> > and today i actually checked a bunch of popular apks, and didn't find
> > any that were currently using ifuncs.
> >
> > the only change i'm thinking of making right now is that "there's a
> > single argument, and it's null" should probably be the default.
> > obviously since Android doesn't add new architectures very often, this
> > is only likely to affect x86/x86-64 for the foreseeable future, but
> > being able to recognize at a glance "am i running under a libc new
> > enough to pass me arguments?" would certainly have helped for arm64.
> > even if x86/x86-64 never benefit, it seems like the right default for
> > the #else clause...
>
> Sounds good, thanks for the pointers. The paranoid person in me would
> also add a comment in the risc-v section that if a pointer to hwprobe
> is added, it should be added as the second argument, behind hwcap as
> the first (assuming this change lands).
>
> Come to think of it, the static inline helper I'm proposing in my
> discussion with Richard needs to take both arguments, since callers
> need to check both ((arg1 != 0) && (arg2 != NULL)) to safely know that
> arg2 is a pointer to __riscv_hwprobe().

presumably not `(arg1 != 0)` but `(arg1 & _IFUNC_ARG_HWCAP)` to match arm64?
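
i.e. in the helper, something like (sketch, assuming riscv64 reuses
the same marker-bit trick as arm64):

  /* arg2 is only meaningful when libc set the marker bit in arg1.  */
  if ((arg1 & _IFUNC_ARG_HWCAP) && arg2 != NULL)
    hwprobe_func = (__riscv_hwprobe_t) arg2;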

> -Evan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-16 23:18                         ` enh
@ 2023-08-17 16:27                           ` Evan Green
  2023-08-17 16:37                             ` enh
  0 siblings, 1 reply; 27+ messages in thread
From: Evan Green @ 2023-08-17 16:27 UTC (permalink / raw)
  To: enh
  Cc: Richard Henderson, Florian Weimer, libc-alpha, slewis, palmer, vineetg

On Wed, Aug 16, 2023 at 4:18 PM enh <enh@google.com> wrote:
>
> On Tue, Aug 15, 2023 at 4:02 PM Evan Green <evan@rivosinc.com> wrote:
> >
> > On Tue, Aug 15, 2023 at 2:54 PM enh <enh@google.com> wrote:
> > >
> > > On Tue, Aug 15, 2023 at 9:41 AM Evan Green <evan@rivosinc.com> wrote:
> > > >
> > > > On Fri, Aug 11, 2023 at 5:01 PM enh <enh@google.com> wrote:
> > > > >
> > > > > On Mon, Aug 7, 2023 at 5:01 PM Evan Green <evan@rivosinc.com> wrote:
> > > > > >
> > > > > > On Mon, Aug 7, 2023 at 3:48 PM enh <enh@google.com> wrote:
> > > > > > >
> > > > > > > On Mon, Aug 7, 2023 at 3:11 PM Evan Green <evan@rivosinc.com> wrote:
> > > > > > > >
> > > > > > > > On Thu, Aug 3, 2023 at 3:30 PM Richard Henderson
> > > > > > > > <richard.henderson@linaro.org> wrote:
> > > > > > > > >
> > > > > > > > > On 8/3/23 11:42, Evan Green wrote:
> > > > > > > > > > On Thu, Aug 3, 2023 at 10:50 AM Richard Henderson
> > > > > > > > > > <richard.henderson@linaro.org> wrote:
> > > > > > > > > >> Outside libc something is required.
> > > > > > > > > >>
> > > > > > > > > >> An extra parameter to ifunc is surprising though, and clearly not ideal per the extra
> > > > > > > > > >> hoops above.  I would hope for something with hidden visibility in libc_nonshared.a that
> > > > > > > > > >> could always be called directly.
> > > > > > > > > >
> > > > > > > > > > My previous spin took that approach, defining a
> > > > > > > > > > __riscv_hwprobe_early() in libc_nonshared that could route to the real
> > > > > > > > > > function if available, or make the syscall directly if not. But that
> > > > > > > > > > approach had the drawback that ifunc users couldn't take advantage of
> > > > > > > > > > the vDSO, and then all users had to comprehend the difference between
> > > > > > > > > > __riscv_hwprobe() and __riscv_hwprobe_early().
> > > > > > > > >
> > > > > > > > > I would define __riscv_hwprobe such that it could take advantage of the vDSO once
> > > > > > > > > initialization reaches a certain point, but cope with being run earlier than that point by
> > > > > > > > > falling back to the syscall.
> > > > > > > > >
> > > > > > > > > That constrains the implementation, I guess, in that it can't set errno, but just
> > > > > > > > > returning the negative errno from the syscall seems fine.
> > > > > > > > >
> > > > > > > > > It might be tricky to get a reference to GLRO(dl_vdso_riscv_hwprobe) very early, but I
> > > > > > > > > would hope that some application of __attribute__((weak)) might correctly get you a NULL
> > > > > > > > > prior to full relocations being complete.
> > > > > > > >
> > > > > > > > Right, this is what we had in the previous iteration of this series,
> > > > > > > > and it did work ok. But it wasn't as good since it meant ifunc
> > > > > > > > selectors always got stuck in the null/fallback case and were forced
> > > > > > > > to make the syscall. With this mechanism they get to take advantage of
> > > > > > > > the vDSO.
> > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > In contrast, IMO this approach is much nicer. Ifunc writers are
> > > > > > > > > > already used to getting hwcap info via a parameter. Adding this second
> > > > > > > > > > parameter, which also provides hwcap-like things, seems like a natural
> > > > > > > > > > extension. I didn't quite follow what you meant by the "extra hoops
> > > > > > > > > > above".
> > > > > > > > >
> > > > > > > > > The check for null function pointer, for sure.  But also consider how __riscv_hwprobe is
> > > > > > > > > going to be used.
> > > > > > > > >
> > > > > > > > > It might be worth defining some helper functions for probing a single key or a single
> > > > > > > > > field.  E.g.
> > > > > > > > >
> > > > > > > > > uint64_t __riscv_hwprobe_one_key(int64_t key, unsigned int flags)
> > > > > > > > > {
> > > > > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > > > > >    int err = __riscv_hwprobe(&pair, 1, 0, NULL, flags);
> > > > > > > > >    if (err)
> > > > > > > > >      return err;
> > > > > > > > >    if (pair.key == -1)
> > > > > > > > >      return -ENOENT;
> > > > > > > > >    return pair.value;
> > > > > > > > > }
> > > > > > > > >
> > > > > > > > > This implementation requires that no future hwprobe key define a value which is a valid
> > > > > > > > > value in the errno range (or better, bit 63 unused).  Alternately, or additionally:
> > > > > > > > >
> > > > > > > > > bool __riscv_hwprobe_one_mask(int64_t key, uint64_t mask, uint64_t val, int flags)
> > > > > > > > > {
> > > > > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > > > > >    return (__riscv_hwprobe(&pair, 1, 0, NULL, flags) == 0
> > > > > > > > >            && pair.key != -1
> > > > > > > > >            && (pair.value & mask) == val);
> > > > > > > > > }
> > > > > > > > >
> > > > > > > > > These yield either
> > > > > > > > >
> > > > > > > > >      int64_t v = __riscv_hwprobe_one_key(CPUPERF_0, 0);
> > > > > > > > >      if (v >= 0 && (v & MISALIGNED_MASK) == MISALIGNED_FAST)
> > > > > > > > >        return __memcpy_noalignment;
> > > > > > > > >      return __memcpy_generic;
> > > > > > > > >
> > > > > > > > > or
> > > > > > > > >
> > > > > > > > >      if (__riscv_hwprobe_one_mask(CPUPERF_0, MISALIGNED_MASK, MISALIGNED_FAST, 0))
> > > > > > > > >        return __memcpy_noalignment;
> > > > > > > > >      return __memcpy_generic;
> > > > > > > > >
> > > > > > > > > which to my mind looks much better for a pattern you'll be replicating so very many times
> > > > > > > > > across all of the ifunc implementations in the system.
> > > > > > > >
> > > > > > > > Ah, I see. I could make a static inline function in the header that
> > > > > > > > looks something like this (mangled by gmail, sorry):
> > > > > > > >
> > > > > > > > /* Helper function usable from ifunc selectors that probes a single key. */
> > > > > > > > static inline int __riscv_hwprobe_one(__riscv_hwprobe_t hwprobe_func,
> > > > > > > > signed long long int key,
> > > > > > > > unsigned long long int *value)
> > > > > > > > {
> > > > > > > > struct riscv_hwprobe pair;
> > > > > > > > int rc;
> > > > > > > >
> > > > > > > > if (!hwprobe_func)
> > > > > > > > return -ENOSYS;
> > > > > > > >
> > > > > > > > pair.key = key;
> > > > > > > > rc = hwprobe_func(&pair, 1, 0, NULL, 0);
> > > > > > > > if (rc) {
> > > > > > > > return rc;
> > > > > > > > }
> > > > > > > >
> > > > > > > > if (pair.key < 0) {
> > > > > > > > return -ENOENT;
> > > > > > > > }
> > > > > > > >
> > > > > > > > *value = pair.value;
> > > > > > > > return 0;
> > > > > > > > }
> > > > > > > >
> > > > > > > > The ifunc selector would then be significantly cleaned up, looking
> > > > > > > > something like:
> > > > > > > >
> > > > > > > > if (__riscv_hwprobe_one(hwprobe_func, RISCV_HWPROBE_KEY_CPUPERF_0, &value))
> > > > > > > > return __memcpy_generic;
> > > > > > > >
> > > > > > > > if ((value & RISCV_HWPROBE_MISALIGNED_MASK) == RISCV_HWPROBE_MISALIGNED_FAST)
> > > > > > > > return __memcpy_noalignment;
> > > > > > >
> > > > > > > (Android's libc maintainer here, having joined the list just to talk
> > > > > > > about risc-v ifuncs :-) )
> > > > > > >
> > > > > > > has anyone thought about calling ifunc resolvers more like this...
> > > > > > >
> > > > > > > --same part of the dynamic loader that caches the two getauxval()s for arm64--
> > > > > > > static struct riscv_hwprobe probes[] = {
> > > > > > >  {.value = RISCV_HWPROBE_KEY_MVENDORID},
> > > > > > >  {.value = RISCV_HWPROBE_KEY_MARCHID},
> > > > > > >  {.value = RISCV_HWPROBE_KEY_MIMPID},
> > > > > > >  {.value = RISCV_HWPROBE_KEY_BASE_BEHAVIOR},
> > > > > > >  {.value = RISCV_HWPROBE_KEY_IMA_EXT},
> > > > > > >  {.value = RISCV_HWPROBE_KEY_CPUPERF_0},
> > > > > > > ... // every time a new key is added to the kernel, we add it here
> > > > > > > };
> > > > > > > __riscv_hwprobe(...); // called once
> > > > > > >
> > > > > > > --part of the dynamic loader that calls ifunc resolvers--
> > > > > > > (*ifunc_resolver)(sizeof(probes)/sizeof(probes[0]), probes);
> > > > > > >
> > > > > > > this is similar to what we already have for arm64 (where there's a
> > > > > > > getauxval(AT_HWCAP) and a pointer to a struct for AT_HWCAP2 and
> > > > > > > potentially others), but more uniform, and avoiding the source
> > > > > > > (in)compatibility issues of adding new fields to a struct [even if it
> > > > > > > does have a size_t to "version" it like the arm64 ifunc struct].
> > > > > > >
> > > > > > > yes, it means everyone pays to get all the hwprobes, but that gets
> > > > > > > amortized. and lookup in the ifunc resolver is simple and quick. if we
> > > > > > > know that the keys will be kept dense, we can even have code in ifunc
> > > > > > > resolvers like
> > > > > > >
> > > > > > > if (probes[RISCV_HWPROBE_BASE_BEHAVIOR_IMA].value & RISCV_HWPROBE_IMA_V) ...
> > > > > > >
> > > > > > > though personally for the "big ticket items" that get a letter to
> > > > > > > themselves like V, i'd be tempted to pass `(getauxval(AT_HWCAP),
> > > > > > > probe_count, probes_ptr)` to the resolver, but i hear that's
> > > > > > > controversial :-)
> > > > > >
> > > > > > Hello, welcome to the fun! :)
> > > > >
> > > > > (sorry for the delay. i've been thinking :-) )
> > > > >
> > > > > > What you're describing here is almost exactly what we did inside the
> > > > > > vDSO function. The vDSO function acts as a front for a handful of
> > > > > > probe values that we've already completed and cached in userspace. We
> > > > > > opted to make it a function, rather than exposing the data itself via
> > > > > > vDSO, so that we had future flexibility in what elements we cached in
> > > > > > userspace and their storage format. We can update the kernel as needed
> > > > > > to cache the hottest things in userspace, even if that means
> > > > > > rearranging the data format, passing through some extra information,
> > > > > > or adding an extra snip of code. My hope is callers can directly
> > > > > > interact with the vDSO function (though maybe as Richard suggested
> > > > > > with the help of a tidy inline helper), rather than trying to
> > > > > > add a second layer of userspace caching.
> > > > >
> > > > > on reflection i think i might be too focused on the FMV use case, in
> > > > > part because we're looking at those compiler-generated ifuncs for
> > > > > arm64 on Android atm. i think i'm imagining a world where there's a
> > > > > lot of that, and worrying about having to pay for the setup, call, and
> > > > > loop for each ifunc, and wondering why we don't just pay once instead.
> > > > > (as a bit of background context, Android "app" start is actually a
> > > > > dlopen() in a clone of an existing zygote process, and in general app
> > > > > launch time is one of the key metrics anyone who's serious is
> > > > > optimizing for. you'd be surprised how much of my life i spend
> > > > > explaining to people that if they want dlopen() to be faster, maybe
> > > > > they shouldn't ask us to run thousands of ELF constructors.)
> > > > >
> > > > > but... the more time i spend looking at what we actually need in
> > > > > third-party open source libraries right now i realize that libc and
> > > > > FMV (which is still a future thing for us anyway) are really the only
> > > > > _actual_ ifunc users. perhaps in part because macOS/iOS don't have
> > > > > ifuncs, all the libraries that are part of the OS itself, for example,
> > > > > are just doing their own thing with function pointers and
> > > > > pthread_once() or whatever.
> > > > >
> > > > > (i have yet to try to get any data on actual apps. i have no reason to
> > > > > think they'll be very different, but that could easily be skewed by
> > > > > popular middleware or a popular game engine using ifuncs, so i do plan
> > > > > on following up on that.)
> > > > >
> > > > > "how do they decide what to set that function pointer to?". well, it
> > > > > looks like in most cases cpuid on x86 and calls to getauxval()
> > > > > everywhere else. in some cases that's actually via some other library:
> > > > > https://github.com/pytorch/cpuinfo or
> > > > > https://github.com/google/cpu_features for example. so they have a
> > > > > layer of caching there, even in cases where they don't have a single
> > > > > function that sets all the function pointers.
> > > >
> > > > Right, function multi-versioning is just the sort of spot where we'd
> > > > imagine hwprobe gets used, since it's providing similar/equivalent
> > > > information to what cpuid does on x86. It may not be quite as fast as
> > > > cpuid (I don't know how fast cpuid actually is). But with the vDSO
> > > > function+data in userspace it should be able to match getauxval() in
> > > > performance, as they're both a function pointer plus a loop. We're
> > > > sort of planning for a world in which RISC-V has a wider set of these
> > > > values to fetch, such that an ifunc selector may need a more complex
> > > > set of information. Hwprobe and the vDSO give us the ability both to
> > > > answer multiple queries fast, and freely allocate more keys that may
> > > > represent versioned features or even compound features.
> > >
> > > yeah, my incorrect mental model was that -- primarily because of
> > > x86-64 and cpuid -- every function would get its own ifunc resolver
> > > that would have to make a query. but the [in progress] arm64
> > > implementation shows that that's not really the case anyway, and we
> > > can just cache __riscv_hwprobe() in the same [one] place that
> > > getauxval() is already being cached for arm64.
> >
> > Sounds good.
> >
> > >
> > > > > so assuming i don't find that apps look very different from the OS
> > > > > (that is: that apps use lots of ifuncs), i probably don't care at all
> > > > > until we get to FMV. and i probably don't care for FMV, because
> > > > > compiler-rt (or gcc's equivalent) will be the "caching layer" there.
> > > > > (and on Android it'll be a while before i have to worry about libc's
> > > > > ifuncs because we'll require V and not use ifuncs there for the
> > > > > foreseeable future.)
> > > > >
> > > > > so, yeah, given that i've adopted the "pass a null pointer rather than
> > > > > no arguments" convention you have, we have room for expansion if/when
> > > > > FMV is a big thing, and until then -- unless i'm shocked by what i
> > > > > find looking at actual apps -- i don't think i have any reason to
> > > > > believe that ifuncs matter that much, and if compiler-rt makes one
> > > > > __riscv_hwprobe() call per .so, that's probably fine. (i already spend
> > > > > a big chunk of my life advising people to just have one .so file,
> > > > > exporting nothing but a JNI_OnLoad symbol, so this will just make that
> > > > > advice even better advice :-) )
> > > >
> > > > Just to confirm, by "pass a null pointer", you're saying that the
> > > > Android libc also passes NULL as the second ifunc selector argument
> > > > (or first)?
> > >
> > > #elif defined(__riscv)
> > >   // This argument and its value is just a placeholder for now,
> > >   // but it means that if we do pass something in future (such as
> > >   // getauxval() and/or hwprobe key/value pairs), callees will be able to
> > >   // recognize what they're being given.
> > >   typedef ElfW(Addr) (*ifunc_resolver_t)(void*);
> > >   return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(nullptr);
> > >
> > > it's arm64 that has the initial getauxval() argument:
> > >
> > > #if defined(__aarch64__)
> > >   typedef ElfW(Addr) (*ifunc_resolver_t)(uint64_t, __ifunc_arg_t*);
> > >   static __ifunc_arg_t arg;
> > >   static bool initialized = false;
> > >   if (!initialized) {
> > >     initialized = true;
> > >     arg._size = sizeof(__ifunc_arg_t);
> > >     arg._hwcap = getauxval(AT_HWCAP);
> > >     arg._hwcap2 = getauxval(AT_HWCAP2);
> > >   }
> > >   return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(arg._hwcap
> > > | _IFUNC_ARG_HWCAP, &arg);
> > >
> > > https://android.googlesource.com/platform/bionic/+/main/libc/bionic/bionic_call_ifunc_resolver.cpp
> > >
> > > > That's good. It sounds like you're planning to just
> > > > continue passing NULL for now, and wait for people to start clamoring
> > > > for this in android libc?
> > >
> > > yeah, and i'm assuming there will never be any clamor ... yesterday
> > > and today i actually checked a bunch of popular apks, and didn't find
> > > any that were currently using ifuncs.
> > >
> > > the only change i'm thinking of making right now is that "there's a
> > > single argument, and it's null" should probably be the default.
> > > obviously since Android doesn't add new architectures very often, this
> > > is only likely to affect x86/x86-64 for the foreseeable future, but
> > > being able to recognize at a glance "am i running under a libc new
> > > enough to pass me arguments?" would certainly have helped for arm64.
> > > even if x86/x86-64 never benefit, it seems like the right default for
> > > the #else clause...
> >
> > Sounds good, thanks for the pointers. The paranoid person in me would
> > also add a comment in the risc-v section that if a pointer to hwprobe
> > is added, it should be added as the second argument, behind hwcap as
> > the first (assuming this change lands).
> >
> > Come to think of it, the static inline helper I'm proposing in my
> > discussion with Richard needs to take both arguments, since callers
> > need to check both ((arg1 != 0) && (arg2 != NULL)) to safely know that
> > arg2 is a pointer to __riscv_hwprobe().
>
> presumably not `(arg1 != 0)` but `(arg1 & _IFUNC_ARG_HWCAP)` to match arm64?

It looks like we didn't do that _IFUNC_ARG_HWCAP bit on riscv.
Actually, looking at the history of sysdeps/riscv/dl-irel.h, hwcap has
always been passed as the first argument. So I think I don't need to
check it in the (glibc-specific) inline helper function, I can safely
assume it's there and go straight to checking the second argument.

If you were coding this directly in a library or application, you
would need to check the first arg to be compatible with other libcs
like Android's. I think checking against zero should be ok since bits
like I, M, A should always be set. (I didn't dig through the kernel
history, so maybe not on really old kernels? But you're not going to
get hwprobe on those anyway, so bailing out there is still the correct behavior).
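
To make that concrete, here's a rough sketch of what such an out-of-libc
selector might look like. Everything below except the <asm/hwprobe.h>
definitions is a placeholder (the variant names, the resolver name, the
function-pointer typedef); it's only the shape of the check, not glibc's
final interface:

#include <stddef.h>
#include <stdint.h>
#include <asm/hwprobe.h>   /* struct riscv_hwprobe, RISCV_HWPROBE_* keys */

/* Placeholder typedef matching the __riscv_hwprobe() calling convention
   used elsewhere in this thread.  */
typedef int (*hwprobe_fn_t) (struct riscv_hwprobe *pairs, size_t pair_count,
                             size_t cpu_count, unsigned long *cpus,
                             unsigned int flags);

typedef void *(*memcpy_fn_t) (void *, const void *, size_t);

/* Hypothetical implementations being selected between, defined elsewhere.  */
extern void *my_memcpy_generic (void *, const void *, size_t);
extern void *my_memcpy_noalignment (void *, const void *, size_t);

static memcpy_fn_t
my_memcpy_resolver (uint64_t hwcap, void *arg2)
{
  /* Only trust arg2 when hwcap is nonzero: an older or different libc
     (current bionic, for example) passes a single NULL argument instead.  */
  if (hwcap != 0 && arg2 != NULL)
    {
      hwprobe_fn_t hwprobe = (hwprobe_fn_t) arg2;
      struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_CPUPERF_0 };

      if (hwprobe (&pair, 1, 0, NULL, 0) == 0
          && pair.key >= 0
          && (pair.value & RISCV_HWPROBE_MISALIGNED_MASK)
             == RISCV_HWPROBE_MISALIGNED_FAST)
        return my_memcpy_noalignment;
    }

  return my_memcpy_generic;
}

void *my_memcpy (void *, const void *, size_t)
  __attribute__ ((ifunc ("my_memcpy_resolver")));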

-Evan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-17 16:27                           ` Evan Green
@ 2023-08-17 16:37                             ` enh
  2023-08-17 17:40                               ` Evan Green
  0 siblings, 1 reply; 27+ messages in thread
From: enh @ 2023-08-17 16:37 UTC (permalink / raw)
  To: Evan Green
  Cc: Richard Henderson, Florian Weimer, libc-alpha, slewis, palmer, vineetg

On Thu, Aug 17, 2023 at 9:27 AM Evan Green <evan@rivosinc.com> wrote:
>
> On Wed, Aug 16, 2023 at 4:18 PM enh <enh@google.com> wrote:
> >
> > On Tue, Aug 15, 2023 at 4:02 PM Evan Green <evan@rivosinc.com> wrote:
> > >
> > > On Tue, Aug 15, 2023 at 2:54 PM enh <enh@google.com> wrote:
> > > >
> > > > On Tue, Aug 15, 2023 at 9:41 AM Evan Green <evan@rivosinc.com> wrote:
> > > > >
> > > > > On Fri, Aug 11, 2023 at 5:01 PM enh <enh@google.com> wrote:
> > > > > >
> > > > > > On Mon, Aug 7, 2023 at 5:01 PM Evan Green <evan@rivosinc.com> wrote:
> > > > > > >
> > > > > > > On Mon, Aug 7, 2023 at 3:48 PM enh <enh@google.com> wrote:
> > > > > > > >
> > > > > > > > On Mon, Aug 7, 2023 at 3:11 PM Evan Green <evan@rivosinc.com> wrote:
> > > > > > > > >
> > > > > > > > > On Thu, Aug 3, 2023 at 3:30 PM Richard Henderson
> > > > > > > > > <richard.henderson@linaro.org> wrote:
> > > > > > > > > >
> > > > > > > > > > On 8/3/23 11:42, Evan Green wrote:
> > > > > > > > > > > On Thu, Aug 3, 2023 at 10:50 AM Richard Henderson
> > > > > > > > > > > <richard.henderson@linaro.org> wrote:
> > > > > > > > > > >> Outside libc something is required.
> > > > > > > > > > >>
> > > > > > > > > > >> An extra parameter to ifunc is surprising though, and clearly not ideal per the extra
> > > > > > > > > > >> hoops above.  I would hope for something with hidden visibility in libc_nonshared.a that
> > > > > > > > > > >> could always be called directly.
> > > > > > > > > > >
> > > > > > > > > > > My previous spin took that approach, defining a
> > > > > > > > > > > __riscv_hwprobe_early() in libc_nonshared that could route to the real
> > > > > > > > > > > function if available, or make the syscall directly if not. But that
> > > > > > > > > > > approach had the drawback that ifunc users couldn't take advantage of
> > > > > > > > > > > the vDSO, and then all users had to comprehend the difference between
> > > > > > > > > > > __riscv_hwprobe() and __riscv_hwprobe_early().
> > > > > > > > > >
> > > > > > > > > > I would define __riscv_hwprobe such that it could take advantage of the vDSO once
> > > > > > > > > > initialization reaches a certain point, but cope with being run earlier than that point by
> > > > > > > > > > falling back to the syscall.
> > > > > > > > > >
> > > > > > > > > > That constrains the implementation, I guess, in that it can't set errno, but just
> > > > > > > > > > returning the negative errno from the syscall seems fine.
> > > > > > > > > >
> > > > > > > > > > It might be tricky to get a reference to GLRO(dl_vdso_riscv_hwprobe) very early, but I
> > > > > > > > > > would hope that some application of __attribute__((weak)) might correctly get you a NULL
> > > > > > > > > > prior to full relocations being complete.
> > > > > > > > >
> > > > > > > > > Right, this is what we had in the previous iteration of this series,
> > > > > > > > > and it did work ok. But it wasn't as good since it meant ifunc
> > > > > > > > > selectors always got stuck in the null/fallback case and were forced
> > > > > > > > > to make the syscall. With this mechanism they get to take advantage of
> > > > > > > > > the vDSO.
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > In contrast, IMO this approach is much nicer. Ifunc writers are
> > > > > > > > > > > already used to getting hwcap info via a parameter. Adding this second
> > > > > > > > > > > parameter, which also provides hwcap-like things, seems like a natural
> > > > > > > > > > > extension. I didn't quite follow what you meant by the "extra hoops
> > > > > > > > > > > above".
> > > > > > > > > >
> > > > > > > > > > The check for null function pointer, for sure.  But also consider how __riscv_hwprobe is
> > > > > > > > > > going to be used.
> > > > > > > > > >
> > > > > > > > > > It might be worth defining some helper functions for probing a single key or a single
> > > > > > > > > > field.  E.g.
> > > > > > > > > >
> > > > > > > > > > uint64_t __riscv_hwprobe_one_key(int64_t key, unsigned int flags)
> > > > > > > > > > {
> > > > > > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > > > > > >    int err = __riscv_hwprobe(&pair, 1, 0, NULL, flags);
> > > > > > > > > >    if (err)
> > > > > > > > > >      return err;
> > > > > > > > > >    if (pair.key == -1)
> > > > > > > > > >      return -ENOENT;
> > > > > > > > > >    return pair.value;
> > > > > > > > > > }
> > > > > > > > > >
> > > > > > > > > > This implementation requires that no future hwprobe key define a value which is a valid
> > > > > > > > > > value in the errno range (or better, bit 63 unused).  Alternately, or additionally:
> > > > > > > > > >
> > > > > > > > > > bool __riscv_hwprobe_one_mask(int64_t key, uint64_t mask, uint64_t val, int flags)
> > > > > > > > > > {
> > > > > > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > > > > > >    return (__riscv_hwprobe(&pair, 1, 0, NULL, flags) == 0
> > > > > > > > > >            && pair.key != -1
> > > > > > > > > >            && (pair.value & mask) == val);
> > > > > > > > > > }
> > > > > > > > > >
> > > > > > > > > > These yield either
> > > > > > > > > >
> > > > > > > > > >      int64_t v = __riscv_hwprobe_one_key(CPUPERF_0, 0);
> > > > > > > > > >      if (v >= 0 && (v & MISALIGNED_MASK) == MISALIGNED_FAST)
> > > > > > > > > >        return __memcpy_noalignment;
> > > > > > > > > >      return __memcpy_generic;
> > > > > > > > > >
> > > > > > > > > > or
> > > > > > > > > >
> > > > > > > > > >      if (__riscv_hwprobe_one_mask(CPUPERF_0, MISALIGNED_MASK, MISALIGNED_FAST, 0))
> > > > > > > > > >        return __memcpy_noalignment;
> > > > > > > > > >      return __memcpy_generic;
> > > > > > > > > >
> > > > > > > > > > which to my mind looks much better for a pattern you'll be replicating so very many times
> > > > > > > > > > across all of the ifunc implementations in the system.
> > > > > > > > >
> > > > > > > > > Ah, I see. I could make a static inline function in the header that
> > > > > > > > > looks something like this (mangled by gmail, sorry):
> > > > > > > > >
> > > > > > > > > /* Helper function usable from ifunc selectors that probes a single key. */
> > > > > > > > > static inline int __riscv_hwprobe_one(__riscv_hwprobe_t hwprobe_func,
> > > > > > > > > signed long long int key,
> > > > > > > > > unsigned long long int *value)
> > > > > > > > > {
> > > > > > > > > struct riscv_hwprobe pair;
> > > > > > > > > int rc;
> > > > > > > > >
> > > > > > > > > if (!hwprobe_func)
> > > > > > > > > return -ENOSYS;
> > > > > > > > >
> > > > > > > > > pair.key = key;
> > > > > > > > > rc = hwprobe_func(&pair, 1, 0, NULL, 0);
> > > > > > > > > if (rc) {
> > > > > > > > > return rc;
> > > > > > > > > }
> > > > > > > > >
> > > > > > > > > if (pair.key < 0) {
> > > > > > > > > return -ENOENT;
> > > > > > > > > }
> > > > > > > > >
> > > > > > > > > *value = pair.value;
> > > > > > > > > return 0;
> > > > > > > > > }
> > > > > > > > >
> > > > > > > > > The ifunc selector would then be significantly cleaned up, looking
> > > > > > > > > something like:
> > > > > > > > >
> > > > > > > > > if (__riscv_hwprobe_one(hwprobe_func, RISCV_HWPROBE_KEY_CPUPERF_0, &value))
> > > > > > > > > return __memcpy_generic;
> > > > > > > > >
> > > > > > > > > if ((value & RISCV_HWPROBE_MISALIGNED_MASK) == RISCV_HWPROBE_MISALIGNED_FAST)
> > > > > > > > > return __memcpy_noalignment;
> > > > > > > >
> > > > > > > > (Android's libc maintainer here, having joined the list just to talk
> > > > > > > > about risc-v ifuncs :-) )
> > > > > > > >
> > > > > > > > has anyone thought about calling ifunc resolvers more like this...
> > > > > > > >
> > > > > > > > --same part of the dynamic loader that caches the two getauxval()s for arm64--
> > > > > > > > static struct riscv_hwprobe probes[] = {
> > > > > > > >  {.key = RISCV_HWPROBE_KEY_MVENDORID},
> > > > > > > >  {.key = RISCV_HWPROBE_KEY_MARCHID},
> > > > > > > >  {.key = RISCV_HWPROBE_KEY_MIMPID},
> > > > > > > >  {.key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR},
> > > > > > > >  {.key = RISCV_HWPROBE_KEY_IMA_EXT},
> > > > > > > >  {.key = RISCV_HWPROBE_KEY_CPUPERF_0},
> > > > > > > > ... // every time a new key is added to the kernel, we add it here
> > > > > > > > };
> > > > > > > > __riscv_hwprobe(...); // called once
> > > > > > > >
> > > > > > > > --part of the dynamic loader that calls ifunc resolvers--
> > > > > > > > (*ifunc_resolver)(sizeof(probes)/sizeof(probes[0]), probes);
> > > > > > > >
> > > > > > > > this is similar to what we already have for arm64 (where there's a
> > > > > > > > getauxval(AT_HWCAP) and a pointer to a struct for AT_HWCAP2 and
> > > > > > > > potentially others), but more uniform, and avoiding the source
> > > > > > > > (in)compatibility issues of adding new fields to a struct [even if it
> > > > > > > > does have a size_t to "version" it like the arm64 ifunc struct].
> > > > > > > >
> > > > > > > > yes, it means everyone pays to get all the hwprobes, but that gets
> > > > > > > > amortized. and lookup in the ifunc resolver is simple and quick. if we
> > > > > > > > know that the keys will be kept dense, we can even have code in ifunc
> > > > > > > > resolvers like
> > > > > > > >
> > > > > > > > if (probes[RISCV_HWPROBE_BASE_BEHAVIOR_IMA].value & RISCV_HWPROBE_IMA_V) ...
> > > > > > > >
> > > > > > > > though personally for the "big ticket items" that get a letter to
> > > > > > > > themselves like V, i'd be tempted to pass `(getauxval(AT_HWCAP),
> > > > > > > > probe_count, probes_ptr)` to the resolver, but i hear that's
> > > > > > > > controversial :-)
> > > > > > >
> > > > > > > Hello, welcome to the fun! :)
> > > > > >
> > > > > > (sorry for the delay. i've been thinking :-) )
> > > > > >
> > > > > > > What you're describing here is almost exactly what we did inside the
> > > > > > > vDSO function. The vDSO function acts as a front for a handful of
> > > > > > > probe values that we've already completed and cached in userspace. We
> > > > > > > opted to make it a function, rather than exposing the data itself via
> > > > > > > vDSO, so that we had future flexibility in what elements we cached in
> > > > > > > userspace and their storage format. We can update the kernel as needed
> > > > > > > to cache the hottest things in userspace, even if that means
> > > > > > > rearranging the data format, passing through some extra information,
> > > > > > > or adding an extra snip of code. My hope is callers can directly
> > > > > > > interact with the vDSO function (though maybe as Richard suggested
> > > > > > > with the help of a tidy inline helper), rather than trying to
> > > > > > > add a second layer of userspace caching.
> > > > > >
> > > > > > on reflection i think i might be too focused on the FMV use case, in
> > > > > > part because we're looking at those compiler-generated ifuncs for
> > > > > > arm64 on Android atm. i think i'm imagining a world where there's a
> > > > > > lot of that, and worrying about having to pay for the setup, call, and
> > > > > > loop for each ifunc, and wondering why we don't just pay once instead.
> > > > > > (as a bit of background context, Android "app" start is actually a
> > > > > > dlopen() in a clone of an existing zygote process, and in general app
> > > > > > launch time is one of the key metrics anyone who's serious is
> > > > > > optimizing for. you'd be surprised how much of my life i spend
> > > > > > explaining to people that if they want dlopen() to be faster, maybe
> > > > > > they shouldn't ask us to run thousands of ELF constructors.)
> > > > > >
> > > > > > but... the more time i spend looking at what we actually need in
> > > > > > third-party open source libraries right now i realize that libc and
> > > > > > FMV (which is still a future thing for us anyway) are really the only
> > > > > > _actual_ ifunc users. perhaps in part because macOS/iOS don't have
> > > > > > ifuncs, all the libraries that are part of the OS itself, for example,
> > > > > > are just doing their own thing with function pointers and
> > > > > > pthread_once() or whatever.
> > > > > >
> > > > > > (i have yet to try to get any data on actual apps. i have no reason to
> > > > > > think they'll be very different, but that could easily be skewed by
> > > > > > popular middleware or a popular game engine using ifuncs, so i do plan
> > > > > > on following up on that.)
> > > > > >
> > > > > > "how do they decide what to set that function pointer to?". well, it
> > > > > > looks like in most cases cpuid on x86 and calls to getauxval()
> > > > > > everywhere else. in some cases that's actually via some other library:
> > > > > > https://github.com/pytorch/cpuinfo or
> > > > > > https://github.com/google/cpu_features for example. so they have a
> > > > > > layer of caching there, even in cases where they don't have a single
> > > > > > function that sets all the function pointers.
> > > > >
> > > > > Right, function multi-versioning is just the sort of spot where we'd
> > > > > imagine hwprobe gets used, since it's providing similar/equivalent
> > > > > information to what cpuid does on x86. It may not be quite as fast as
> > > > > cpuid (I don't know how fast cpuid actually is). But with the vDSO
> > > > > function+data in userspace it should be able to match getauxval() in
> > > > > performance, as they're both a function pointer plus a loop. We're
> > > > > sort of planning for a world in which RISC-V has a wider set of these
> > > > > values to fetch, such that an ifunc selector may need a more complex
> > > > > set of information. Hwprobe and the vDSO give us the ability both to
> > > > > answer multiple queries fast, and freely allocate more keys that may
> > > > > represent versioned features or even compound features.
> > > >
> > > > yeah, my incorrect mental model was that -- primarily because of
> > > > x86-64 and cpuid -- every function would get its own ifunc resolver
> > > > that would have to make a query. but the [in progress] arm64
> > > > implementation shows that that's not really the case anyway, and we
> > > > can just cache __riscv_hwprobe() in the same [one] place that
> > > > getauxval() is already being cached for arm64.
> > >
> > > Sounds good.
> > >
> > > >
> > > > > > so assuming i don't find that apps look very different from the OS
> > > > > > (that is: that apps use lots of ifuncs), i probably don't care at all
> > > > > > until we get to FMV. and i probably don't care for FMV, because
> > > > > > compiler-rt (or gcc's equivalent) will be the "caching layer" there.
> > > > > > (and on Android it'll be a while before i have to worry about libc's
> > > > > > ifuncs because we'll require V and not use ifuncs there for the
> > > > > > foreseeable future.)
> > > > > >
> > > > > > so, yeah, given that i've adopted the "pass a null pointer rather than
> > > > > > no arguments" convention you have, we have room for expansion if/when
> > > > > > FMV is a big thing, and until then -- unless i'm shocked by what i
> > > > > > find looking at actual apps -- i don't think i have any reason to
> > > > > > believe that ifuncs matter that much, and if compiler-rt makes one
> > > > > > __riscv_hwprobe() call per .so, that's probably fine. (i already spend
> > > > > > a big chunk of my life advising people to just have one .so file,
> > > > > > exporting nothing but a JNI_OnLoad symbol, so this will just make that
> > > > > > advice even better advice :-) )
> > > > >
> > > > > Just to confirm, by "pass a null pointer", you're saying that the
> > > > > Android libc also passes NULL as the second ifunc selector argument
> > > > > (or first)?
> > > >
> > > > #elif defined(__riscv)
> > > >   // This argument and its value is just a placeholder for now,
> > > >   // but it means that if we do pass something in future (such as
> > > >   // getauxval() and/or hwprobe key/value pairs), callees will be able to
> > > >   // recognize what they're being given.
> > > >   typedef ElfW(Addr) (*ifunc_resolver_t)(void*);
> > > >   return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(nullptr);
> > > >
> > > > it's arm64 that has the initial getauxval() argument:
> > > >
> > > > #if defined(__aarch64__)
> > > >   typedef ElfW(Addr) (*ifunc_resolver_t)(uint64_t, __ifunc_arg_t*);
> > > >   static __ifunc_arg_t arg;
> > > >   static bool initialized = false;
> > > >   if (!initialized) {
> > > >     initialized = true;
> > > >     arg._size = sizeof(__ifunc_arg_t);
> > > >     arg._hwcap = getauxval(AT_HWCAP);
> > > >     arg._hwcap2 = getauxval(AT_HWCAP2);
> > > >   }
> > > >   return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(arg._hwcap
> > > > | _IFUNC_ARG_HWCAP, &arg);
> > > >
> > > > https://android.googlesource.com/platform/bionic/+/main/libc/bionic/bionic_call_ifunc_resolver.cpp
> > > >
> > > > > That's good. It sounds like you're planning to just
> > > > > continue passing NULL for now, and wait for people to start clamoring
> > > > > for this in android libc?
> > > >
> > > > yeah, and i'm assuming there will never be any clamor ... yesterday
> > > > and today i actually checked a bunch of popular apks, and didn't find
> > > > any that were currently using ifuncs.
> > > >
> > > > the only change i'm thinking of making right now is that "there's a
> > > > single argument, and it's null" should probably be the default.
> > > > obviously since Android doesn't add new architectures very often, this
> > > > is only likely to affect x86/x86-64 for the foreseeable future, but
> > > > being able to recognize at a glance "am i running under a libc new
> > > > enough to pass me arguments?" would certainly have helped for arm64.
> > > > even if x86/x86-64 never benefit, it seems like the right default for
> > > > the #else clause...
> > >
> > > Sounds good, thanks for the pointers. The paranoid person in me would
> > > also add a comment in the risc-v section that if a pointer to hwprobe
> > > is added, it should be added as the second argument, behind hwcap as
> > > the first (assuming this change lands).
> > >
> > > Come to think of it, the static inline helper I'm proposing in my
> > > discussion with Richard needs to take both arguments, since callers
> > > need to check both ((arg1 != 0) && (arg2 != NULL)) to safely know that
> > > arg2 is a pointer to __riscv_hwprobe().
> >
> > presumably not `(arg1 != 0)` but `(arg1 & _IFUNC_ARG_HWCAP)` to match arm64?
>
> It looks like we didn't do that _IFUNC_ARG_HWCAP bit on riscv.
> Actually, looking at the history of sysdeps/riscv/dl-irel.h, hwcap has
> always been passed as the first argument. So I think I don't need to
> check it in the (glibc-specific) inline helper function, I can safely
> assume it's there and go straight to checking the second argument.

oh i misunderstood what you were saying earlier.

> If you were coding this directly in a library or application, you
> would need to check the first arg to be compatible with other libcs
> like Android's.

we haven't shipped yet, so if you're telling me glibc is passing
`(getauxval(AT_HWCAP), nullptr)`, i'll change bionic to do the same
today ... the whole reason i'm here on this thread is to ensure source
compatibility for anyone writing ifuncs :-)
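
concretely, i'm thinking the riscv branch ends up with roughly this shape
(sketched in C here even though the real file is C++, and the function
name is just a placeholder):

#include <link.h>       /* ElfW */
#include <stddef.h>
#include <stdint.h>
#include <sys/auxv.h>   /* getauxval, AT_HWCAP */

typedef ElfW(Addr) (*ifunc_resolver_t) (uint64_t, void *);

static ElfW(Addr)
call_riscv_ifunc_resolver (ElfW(Addr) resolver_addr)
{
  /* hwcap first to match glibc; the second argument stays null for now and
     is reserved so a pointer to __riscv_hwprobe() (or hwprobe key/value
     pairs) can be passed later without breaking source compatibility.  the
     real code would presumably cache getauxval() like the arm64 branch.  */
  return ((ifunc_resolver_t) resolver_addr) (getauxval (AT_HWCAP), NULL);
}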

> I think checking against zero should be ok since bits
> like I, M, A should always be set. (I didn't dig through the kernel
> history, so maybe not on really old kernels? But you're not going to
> get hwprobe on those anyway either so the false bailout is correct).
>
> -Evan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-17 16:37                             ` enh
@ 2023-08-17 17:40                               ` Evan Green
  2023-08-22 15:06                                 ` enh
  0 siblings, 1 reply; 27+ messages in thread
From: Evan Green @ 2023-08-17 17:40 UTC (permalink / raw)
  To: enh
  Cc: Richard Henderson, Florian Weimer, libc-alpha, slewis, palmer, vineetg

On Thu, Aug 17, 2023 at 9:37 AM enh <enh@google.com> wrote:
>
> On Thu, Aug 17, 2023 at 9:27 AM Evan Green <evan@rivosinc.com> wrote:
> >
> > On Wed, Aug 16, 2023 at 4:18 PM enh <enh@google.com> wrote:
> > >
> > > On Tue, Aug 15, 2023 at 4:02 PM Evan Green <evan@rivosinc.com> wrote:
> > > >
> > > > On Tue, Aug 15, 2023 at 2:54 PM enh <enh@google.com> wrote:
> > > > >
> > > > > On Tue, Aug 15, 2023 at 9:41 AM Evan Green <evan@rivosinc.com> wrote:
> > > > > >
> > > > > > On Fri, Aug 11, 2023 at 5:01 PM enh <enh@google.com> wrote:
> > > > > > >
> > > > > > > On Mon, Aug 7, 2023 at 5:01 PM Evan Green <evan@rivosinc.com> wrote:
> > > > > > > >
> > > > > > > > On Mon, Aug 7, 2023 at 3:48 PM enh <enh@google.com> wrote:
> > > > > > > > >
> > > > > > > > > On Mon, Aug 7, 2023 at 3:11 PM Evan Green <evan@rivosinc.com> wrote:
> > > > > > > > > >
> > > > > > > > > > On Thu, Aug 3, 2023 at 3:30 PM Richard Henderson
> > > > > > > > > > <richard.henderson@linaro.org> wrote:
> > > > > > > > > > >
> > > > > > > > > > > On 8/3/23 11:42, Evan Green wrote:
> > > > > > > > > > > > On Thu, Aug 3, 2023 at 10:50 AM Richard Henderson
> > > > > > > > > > > > <richard.henderson@linaro.org> wrote:
> > > > > > > > > > > >> Outside libc something is required.
> > > > > > > > > > > >>
> > > > > > > > > > > >> An extra parameter to ifunc is surprising though, and clearly not ideal per the extra
> > > > > > > > > > > >> hoops above.  I would hope for something with hidden visibility in libc_nonshared.a that
> > > > > > > > > > > >> could always be called directly.
> > > > > > > > > > > >
> > > > > > > > > > > > My previous spin took that approach, defining a
> > > > > > > > > > > > __riscv_hwprobe_early() in libc_nonshared that could route to the real
> > > > > > > > > > > > function if available, or make the syscall directly if not. But that
> > > > > > > > > > > > approach had the drawback that ifunc users couldn't take advantage of
> > > > > > > > > > > > the vDSO, and then all users had to comprehend the difference between
> > > > > > > > > > > > __riscv_hwprobe() and __riscv_hwprobe_early().
> > > > > > > > > > >
> > > > > > > > > > > I would define __riscv_hwprobe such that it could take advantage of the vDSO once
> > > > > > > > > > > initialization reaches a certain point, but cope with being run earlier than that point by
> > > > > > > > > > > falling back to the syscall.
> > > > > > > > > > >
> > > > > > > > > > > That constrains the implementation, I guess, in that it can't set errno, but just
> > > > > > > > > > > returning the negative errno from the syscall seems fine.
> > > > > > > > > > >
> > > > > > > > > > > It might be tricky to get a reference to GLRO(dl_vdso_riscv_hwprobe) very early, but I
> > > > > > > > > > > would hope that some application of __attribute__((weak)) might correctly get you a NULL
> > > > > > > > > > > prior to full relocations being complete.
> > > > > > > > > >
> > > > > > > > > > Right, this is what we had in the previous iteration of this series,
> > > > > > > > > > and it did work ok. But it wasn't as good since it meant ifunc
> > > > > > > > > > selectors always got stuck in the null/fallback case and were forced
> > > > > > > > > > to make the syscall. With this mechanism they get to take advantage of
> > > > > > > > > > the vDSO.
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > > In contrast, IMO this approach is much nicer. Ifunc writers are
> > > > > > > > > > > > already used to getting hwcap info via a parameter. Adding this second
> > > > > > > > > > > > parameter, which also provides hwcap-like things, seems like a natural
> > > > > > > > > > > > extension. I didn't quite follow what you meant by the "extra hoops
> > > > > > > > > > > > above".
> > > > > > > > > > >
> > > > > > > > > > > The check for null function pointer, for sure.  But also consider how __riscv_hwprobe is
> > > > > > > > > > > going to be used.
> > > > > > > > > > >
> > > > > > > > > > > It might be worth defining some helper functions for probing a single key or a single
> > > > > > > > > > > field.  E.g.
> > > > > > > > > > >
> > > > > > > > > > > uint64_t __riscv_hwprobe_one_key(int64_t key, unsigned int flags)
> > > > > > > > > > > {
> > > > > > > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > > > > > > >    int err = __riscv_hwprobe(&pair, 1, 0, NULL, flags);
> > > > > > > > > > >    if (err)
> > > > > > > > > > >      return err;
> > > > > > > > > > >    if (pair.key == -1)
> > > > > > > > > > >      return -ENOENT;
> > > > > > > > > > >    return pair.value;
> > > > > > > > > > > }
> > > > > > > > > > >
> > > > > > > > > > > This implementation requires that no future hwprobe key define a value which is a valid
> > > > > > > > > > > value in the errno range (or better, bit 63 unused).  Alternately, or additionally:
> > > > > > > > > > >
> > > > > > > > > > > bool __riscv_hwprobe_one_mask(int64_t key, uint64_t mask, uint64_t val, int flags)
> > > > > > > > > > > {
> > > > > > > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > > > > > > >    return (__riscv_hwprobe(&pair, 1, 0, NULL, flags) == 0
> > > > > > > > > > >            && pair.key != -1
> > > > > > > > > > >            && (pair.value & mask) == val);
> > > > > > > > > > > }
> > > > > > > > > > >
> > > > > > > > > > > These yield either
> > > > > > > > > > >
> > > > > > > > > > >      int64_t v = __riscv_hwprobe_one_key(CPUPERF_0, 0);
> > > > > > > > > > >      if (v >= 0 && (v & MISALIGNED_MASK) == MISALIGNED_FAST)
> > > > > > > > > > >        return __memcpy_noalignment;
> > > > > > > > > > >      return __memcpy_generic;
> > > > > > > > > > >
> > > > > > > > > > > or
> > > > > > > > > > >
> > > > > > > > > > >      if (__riscv_hwprobe_one_mask(CPUPERF_0, MISALIGNED_MASK, MISALIGNED_FAST, 0))
> > > > > > > > > > >        return __memcpy_noalignment;
> > > > > > > > > > >      return __memcpy_generic;
> > > > > > > > > > >
> > > > > > > > > > > which to my mind looks much better for a pattern you'll be replicating so very many times
> > > > > > > > > > > across all of the ifunc implementations in the system.
> > > > > > > > > >
> > > > > > > > > > Ah, I see. I could make a static inline function in the header that
> > > > > > > > > > looks something like this (mangled by gmail, sorry):
> > > > > > > > > >
> > > > > > > > > > /* Helper function usable from ifunc selectors that probes a single key. */
> > > > > > > > > > static inline int __riscv_hwprobe_one(__riscv_hwprobe_t hwprobe_func,
> > > > > > > > > > signed long long int key,
> > > > > > > > > > unsigned long long int *value)
> > > > > > > > > > {
> > > > > > > > > > struct riscv_hwprobe pair;
> > > > > > > > > > int rc;
> > > > > > > > > >
> > > > > > > > > > if (!hwprobe_func)
> > > > > > > > > > return -ENOSYS;
> > > > > > > > > >
> > > > > > > > > > pair.key = key;
> > > > > > > > > > rc = hwprobe_func(&pair, 1, 0, NULL, 0);
> > > > > > > > > > if (rc) {
> > > > > > > > > > return rc;
> > > > > > > > > > }
> > > > > > > > > >
> > > > > > > > > > if (pair.key < 0) {
> > > > > > > > > > return -ENOENT;
> > > > > > > > > > }
> > > > > > > > > >
> > > > > > > > > > *value = pair.value;
> > > > > > > > > > return 0;
> > > > > > > > > > }
> > > > > > > > > >
> > > > > > > > > > The ifunc selector would then be significantly cleaned up, looking
> > > > > > > > > > something like:
> > > > > > > > > >
> > > > > > > > > > if (__riscv_hwprobe_one(hwprobe_func, RISCV_HWPROBE_KEY_CPUPERF_0, &value))
> > > > > > > > > > return __memcpy_generic;
> > > > > > > > > >
> > > > > > > > > > if ((value & RISCV_HWPROBE_MISALIGNED_MASK) == RISCV_HWPROBE_MISALIGNED_FAST)
> > > > > > > > > > return __memcpy_noalignment;
> > > > > > > > >
> > > > > > > > > (Android's libc maintainer here, having joined the list just to talk
> > > > > > > > > about risc-v ifuncs :-) )
> > > > > > > > >
> > > > > > > > > has anyone thought about calling ifunc resolvers more like this...
> > > > > > > > >
> > > > > > > > > --same part of the dynamic loader that caches the two getauxval()s for arm64--
> > > > > > > > > static struct riscv_hwprobe probes[] = {
> > > > > > > > >  {.key = RISCV_HWPROBE_KEY_MVENDORID},
> > > > > > > > >  {.key = RISCV_HWPROBE_KEY_MARCHID},
> > > > > > > > >  {.key = RISCV_HWPROBE_KEY_MIMPID},
> > > > > > > > >  {.key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR},
> > > > > > > > >  {.key = RISCV_HWPROBE_KEY_IMA_EXT},
> > > > > > > > >  {.key = RISCV_HWPROBE_KEY_CPUPERF_0},
> > > > > > > > > ... // every time a new key is added to the kernel, we add it here
> > > > > > > > > };
> > > > > > > > > __riscv_hwprobe(...); // called once
> > > > > > > > >
> > > > > > > > > --part of the dynamic loader that calls ifunc resolvers--
> > > > > > > > > (*ifunc_resolver)(sizeof(probes)/sizeof(probes[0]), probes);
> > > > > > > > >
> > > > > > > > > this is similar to what we already have for arm64 (where there's a
> > > > > > > > > getauxval(AT_HWCAP) and a pointer to a struct for AT_HWCAP2 and
> > > > > > > > > potentially others), but more uniform, and avoiding the source
> > > > > > > > > (in)compatibility issues of adding new fields to a struct [even if it
> > > > > > > > > does have a size_t to "version" it like the arm64 ifunc struct].
> > > > > > > > >
> > > > > > > > > yes, it means everyone pays to get all the hwprobes, but that gets
> > > > > > > > > amortized. and lookup in the ifunc resolver is simple and quick. if we
> > > > > > > > > know that the keys will be kept dense, we can even have code in ifunc
> > > > > > > > > resolvers like
> > > > > > > > >
> > > > > > > > > if (probes[RISCV_HWPROBE_BASE_BEHAVIOR_IMA].value & RISCV_HWPROBE_IMA_V) ...
> > > > > > > > >
> > > > > > > > > though personally for the "big ticket items" that get a letter to
> > > > > > > > > themselves like V, i'd be tempted to pass `(getauxval(AT_HWCAP),
> > > > > > > > > probe_count, probes_ptr)` to the resolver, but i hear that's
> > > > > > > > > controversial :-)
> > > > > > > >
> > > > > > > > Hello, welcome to the fun! :)
> > > > > > >
> > > > > > > (sorry for the delay. i've been thinking :-) )
> > > > > > >
> > > > > > > > What you're describing here is almost exactly what we did inside the
> > > > > > > > vDSO function. The vDSO function acts as a front for a handful of
> > > > > > > > probe values that we've already completed and cached in userspace. We
> > > > > > > > opted to make it a function, rather than exposing the data itself via
> > > > > > > > vDSO, so that we had future flexibility in what elements we cached in
> > > > > > > > userspace and their storage format. We can update the kernel as needed
> > > > > > > > to cache the hottest things in userspace, even if that means
> > > > > > > > rearranging the data format, passing through some extra information,
> > > > > > > > or adding an extra snip of code. My hope is callers can directly
> > > > > > > > interact with the vDSO function (though maybe as Richard suggested
> > > > > > > > with the help of a tidy inline helper), rather than trying to
> > > > > > > > add a second layer of userspace caching.
> > > > > > >
> > > > > > > on reflection i think i might be too focused on the FMV use case, in
> > > > > > > part because we're looking at those compiler-generated ifuncs for
> > > > > > > arm64 on Android atm. i think i'm imagining a world where there's a
> > > > > > > lot of that, and worrying about having to pay for the setup, call, and
> > > > > > > loop for each ifunc, and wondering why we don't just pay once instead.
> > > > > > > (as a bit of background context, Android "app" start is actually a
> > > > > > > dlopen() in a clone of an existing zygote process, and in general app
> > > > > > > launch time is one of the key metrics anyone who's serious is
> > > > > > > optimizing for. you'd be surprised how much of my life i spend
> > > > > > > explaining to people that if they want dlopen() to be faster, maybe
> > > > > > > they shouldn't ask us to run thousands of ELF constructors.)
> > > > > > >
> > > > > > > but... the more time i spend looking at what we actually need in
> > > > > > > third-party open source libraries right now i realize that libc and
> > > > > > > FMV (which is still a future thing for us anyway) are really the only
> > > > > > > _actual_ ifunc users. perhaps in part because macOS/iOS don't have
> > > > > > > ifuncs, all the libraries that are part of the OS itself, for example,
> > > > > > > are just doing their own thing with function pointers and
> > > > > > > pthread_once() or whatever.
> > > > > > >
> > > > > > > (i have yet to try to get any data on actual apps. i have no reason to
> > > > > > > think they'll be very different, but that could easily be skewed by
> > > > > > > popular middleware or a popular game engine using ifuncs, so i do plan
> > > > > > > on following up on that.)
> > > > > > >
> > > > > > > "how do they decide what to set that function pointer to?". well, it
> > > > > > > looks like in most cases cpuid on x86 and calls to getauxval()
> > > > > > > everywhere else. in some cases that's actually via some other library:
> > > > > > > https://github.com/pytorch/cpuinfo or
> > > > > > > https://github.com/google/cpu_features for example. so they have a
> > > > > > > layer of caching there, even in cases where they don't have a single
> > > > > > > function that sets all the function pointers.
> > > > > >
> > > > > > Right, function multi-versioning is just the sort of spot where we'd
> > > > > > imagine hwprobe gets used, since it's providing similar/equivalent
> > > > > > information to what cpuid does on x86. It may not be quite as fast as
> > > > > > cpuid (I don't know how fast cpuid actually is). But with the vDSO
> > > > > > function+data in userspace it should be able to match getauxval() in
> > > > > > performance, as they're both a function pointer plus a loop. We're
> > > > > > sort of planning for a world in which RISC-V has a wider set of these
> > > > > > values to fetch, such that an ifunc selector may need a more complex
> > > > > > set of information. Hwprobe and the vDSO give us the ability both to
> > > > > > answer multiple queries fast, and freely allocate more keys that may
> > > > > > represent versioned features or even compound features.
> > > > >
> > > > > yeah, my incorrect mental model was that -- primarily because of
> > > > > x86-64 and cpuid -- every function would get its own ifunc resolver
> > > > > that would have to make a query. but the [in progress] arm64
> > > > > implementation shows that that's not really the case anyway, and we
> > > > > can just cache __riscv_hwprobe() in the same [one] place that
> > > > > getauxval() is already being cached for arm64.
> > > >
> > > > Sounds good.
> > > >
> > > > >
> > > > > > > so assuming i don't find that apps look very different from the OS
> > > > > > > (that is: that apps use lots of ifuncs), i probably don't care at all
> > > > > > > until we get to FMV. and i probably don't care for FMV, because
> > > > > > > compiler-rt (or gcc's equivalent) will be the "caching layer" there.
> > > > > > > (and on Android it'll be a while before i have to worry about libc's
> > > > > > > ifuncs because we'll require V and not use ifuncs there for the
> > > > > > > foreseeable future.)
> > > > > > >
> > > > > > > so, yeah, given that i've adopted the "pass a null pointer rather than
> > > > > > > no arguments" convention you have, we have room for expansion if/when
> > > > > > > FMV is a big thing, and until then -- unless i'm shocked by what i
> > > > > > > find looking at actual apps -- i don't think i have any reason to
> > > > > > > believe that ifuncs matter that much, and if compiler-rt makes one
> > > > > > > __riscv_hwprobe() call per .so, that's probably fine. (i already spend
> > > > > > > a big chunk of my life advising people to just have one .so file,
> > > > > > > exporting nothing but a JNI_OnLoad symbol, so this will just make that
> > > > > > > advice even better advice :-) )
> > > > > >
> > > > > > Just to confirm, by "pass a null pointer", you're saying that the
> > > > > > Android libc also passes NULL as the second ifunc selector argument
> > > > > > (or first)?
> > > > >
> > > > > #elif defined(__riscv)
> > > > >   // This argument and its value is just a placeholder for now,
> > > > >   // but it means that if we do pass something in future (such as
> > > > >   // getauxval() and/or hwprobe key/value pairs), callees will be able to
> > > > >   // recognize what they're being given.
> > > > >   typedef ElfW(Addr) (*ifunc_resolver_t)(void*);
> > > > >   return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(nullptr);
> > > > >
> > > > > it's arm64 that has the initial getauxval() argument:
> > > > >
> > > > > #if defined(__aarch64__)
> > > > >   typedef ElfW(Addr) (*ifunc_resolver_t)(uint64_t, __ifunc_arg_t*);
> > > > >   static __ifunc_arg_t arg;
> > > > >   static bool initialized = false;
> > > > >   if (!initialized) {
> > > > >     initialized = true;
> > > > >     arg._size = sizeof(__ifunc_arg_t);
> > > > >     arg._hwcap = getauxval(AT_HWCAP);
> > > > >     arg._hwcap2 = getauxval(AT_HWCAP2);
> > > > >   }
> > > > >   return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(arg._hwcap
> > > > > | _IFUNC_ARG_HWCAP, &arg);
> > > > >
> > > > > https://android.googlesource.com/platform/bionic/+/main/libc/bionic/bionic_call_ifunc_resolver.cpp
> > > > >
> > > > > > That's good. It sounds like you're planning to just
> > > > > > continue passing NULL for now, and wait for people to start clamoring
> > > > > > for this in android libc?
> > > > >
> > > > > yeah, and i'm assuming there will never be any clamor ... yesterday
> > > > > and today i actually checked a bunch of popular apks, and didn't find
> > > > > any that were currently using ifuncs.
> > > > >
> > > > > the only change i'm thinking of making right now is that "there's a
> > > > > single argument, and it's null" should probably be the default.
> > > > > obviously since Android doesn't add new architectures very often, this
> > > > > is only likely to affect x86/x86-64 for the foreseeable future, but
> > > > > being able to recognize at a glance "am i running under a libc new
> > > > > enough to pass me arguments?" would certainly have helped for arm64.
> > > > > even if x86/x86-64 never benefit, it seems like the right default for
> > > > > the #else clause...
> > > >
> > > > Sounds good, thanks for the pointers. The paranoid person in me would
> > > > also add a comment in the risc-v section that if a pointer to hwprobe
> > > > is added, it should be added as the second argument, behind hwcap as
> > > > the first (assuming this change lands).
> > > >
> > > > Come to think of it, the static inline helper I'm proposing in my
> > > > discussion with Richard needs to take both arguments, since callers
> > > > need to check both ((arg1 != 0) && (arg2 != NULL)) to safely know that
> > > > arg2 is a pointer to __riscv_hwprobe().
> > >
> > > presumably not `(arg1 != 0)` but `(arg1 & _IFUNC_ARG_HWCAP)` to match arm64?
> >
> > It looks like we didn't do that _IFUNC_ARG_HWCAP bit on riscv.
> > Actually, looking at the history of sysdeps/riscv/dl-irel.h, hwcap has
> > always been passed as the first argument. So I think I don't need to
> > check it in the (glibc-specific) inline helper function, I can safely
> > assume it's there and go straight to checking the second argument.
>
> oh i misunderstood what you were saying earlier.

My bad for causing confusion; I learned something from your reply and
changed my conclusion.

>
> > If you were coding this directly in a library or application, you
> > would need to check the first arg to be compatible with other libcs
> > like Android's.
>
> we haven't shipped yet, so if you're telling me glibc is passing
> `(getauxval(AT_HWCAP), nullptr)`, i'll change bionic to do the same
> today ... the whole reason i'm here on this thread is to ensure source
> compatibility for anyone writing ifuncs :-)

Ah, yes, glibc has always (on risc-v) passed
(getauxval(AT_HWCAP), nullptr), so do exactly that.
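
For the record, with that settled, the inline helper sketched earlier in
this thread would come out roughly like this in the installed header (types
and names still subject to review, not the final interface):

#include <errno.h>
#include <stddef.h>
#include <asm/hwprobe.h>   /* struct riscv_hwprobe */

/* Assumed to sit next to the helper in the same header.  */
typedef int (*__riscv_hwprobe_t) (struct riscv_hwprobe *pairs,
                                  size_t pair_count, size_t cpu_count,
                                  unsigned long *cpus, unsigned int flags);

/* Helper usable from ifunc selectors that probes a single key.
   HWPROBE_FUNC is the second resolver argument and may be NULL.  */
static inline int
__riscv_hwprobe_one (__riscv_hwprobe_t hwprobe_func,
                     signed long long int key,
                     unsigned long long int *value)
{
  struct riscv_hwprobe pair;
  int rc;

  if (hwprobe_func == NULL)
    return -ENOSYS;

  pair.key = key;
  rc = hwprobe_func (&pair, 1, 0, NULL, 0);
  if (rc != 0)
    return rc;

  if (pair.key < 0)
    return -ENOENT;

  *value = pair.value;
  return 0;
}
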
-Evan

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy
  2023-08-17 17:40                               ` Evan Green
@ 2023-08-22 15:06                                 ` enh
  0 siblings, 0 replies; 27+ messages in thread
From: enh @ 2023-08-22 15:06 UTC (permalink / raw)
  To: Evan Green
  Cc: Richard Henderson, Florian Weimer, libc-alpha, slewis, palmer, vineetg

On Thu, Aug 17, 2023 at 10:41 AM Evan Green <evan@rivosinc.com> wrote:
>
> On Thu, Aug 17, 2023 at 9:37 AM enh <enh@google.com> wrote:
> >
> > On Thu, Aug 17, 2023 at 9:27 AM Evan Green <evan@rivosinc.com> wrote:
> > >
> > > On Wed, Aug 16, 2023 at 4:18 PM enh <enh@google.com> wrote:
> > > >
> > > > On Tue, Aug 15, 2023 at 4:02 PM Evan Green <evan@rivosinc.com> wrote:
> > > > >
> > > > > On Tue, Aug 15, 2023 at 2:54 PM enh <enh@google.com> wrote:
> > > > > >
> > > > > > On Tue, Aug 15, 2023 at 9:41 AM Evan Green <evan@rivosinc.com> wrote:
> > > > > > >
> > > > > > > On Fri, Aug 11, 2023 at 5:01 PM enh <enh@google.com> wrote:
> > > > > > > >
> > > > > > > > On Mon, Aug 7, 2023 at 5:01 PM Evan Green <evan@rivosinc.com> wrote:
> > > > > > > > >
> > > > > > > > > On Mon, Aug 7, 2023 at 3:48 PM enh <enh@google.com> wrote:
> > > > > > > > > >
> > > > > > > > > > On Mon, Aug 7, 2023 at 3:11 PM Evan Green <evan@rivosinc.com> wrote:
> > > > > > > > > > >
> > > > > > > > > > > On Thu, Aug 3, 2023 at 3:30 PM Richard Henderson
> > > > > > > > > > > <richard.henderson@linaro.org> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > On 8/3/23 11:42, Evan Green wrote:
> > > > > > > > > > > > > On Thu, Aug 3, 2023 at 10:50 AM Richard Henderson
> > > > > > > > > > > > > <richard.henderson@linaro.org> wrote:
> > > > > > > > > > > > >> Outside libc something is required.
> > > > > > > > > > > > >>
> > > > > > > > > > > > >> An extra parameter to ifunc is surprising though, and clearly not ideal per the extra
> > > > > > > > > > > > >> hoops above.  I would hope for something with hidden visibility in libc_nonshared.a that
> > > > > > > > > > > > >> could always be called directly.
> > > > > > > > > > > > >
> > > > > > > > > > > > > My previous spin took that approach, defining a
> > > > > > > > > > > > > __riscv_hwprobe_early() in libc_nonshared that could route to the real
> > > > > > > > > > > > > function if available, or make the syscall directly if not. But that
> > > > > > > > > > > > > approach had the drawback that ifunc users couldn't take advantage of
> > > > > > > > > > > > > the vDSO, and then all users had to comprehend the difference between
> > > > > > > > > > > > > __riscv_hwprobe() and __riscv_hwprobe_early().
> > > > > > > > > > > >
> > > > > > > > > > > > I would define __riscv_hwprobe such that it could take advantage of the vDSO once
> > > > > > > > > > > > initialization reaches a certain point, but cope with being run earlier than that point by
> > > > > > > > > > > > falling back to the syscall.
> > > > > > > > > > > >
> > > > > > > > > > > > That constrains the implementation, I guess, in that it can't set errno, but just
> > > > > > > > > > > > returning the negative errno from the syscall seems fine.
> > > > > > > > > > > >
> > > > > > > > > > > > It might be tricky to get a reference to GLRO(dl_vdso_riscv_hwprobe) very early, but I
> > > > > > > > > > > > would hope that some application of __attribute__((weak)) might correctly get you a NULL
> > > > > > > > > > > > prior to full relocations being complete.
> > > > > > > > > > >
> > > > > > > > > > > Right, this is what we had in the previous iteration of this series,
> > > > > > > > > > > and it did work ok. But it wasn't as good since it meant ifunc
> > > > > > > > > > > selectors always got stuck in the null/fallback case and were forced
> > > > > > > > > > > to make the syscall. With this mechanism they get to take advantage of
> > > > > > > > > > > the vDSO.
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > > In contrast, IMO this approach is much nicer. Ifunc writers are
> > > > > > > > > > > > > already used to getting hwcap info via a parameter. Adding this second
> > > > > > > > > > > > > parameter, which also provides hwcap-like things, seems like a natural
> > > > > > > > > > > > > extension. I didn't quite follow what you meant by the "extra hoops
> > > > > > > > > > > > > above".
> > > > > > > > > > > >
> > > > > > > > > > > > The check for null function pointer, for sure.  But also consider how __riscv_hwprobe is
> > > > > > > > > > > > going to be used.
> > > > > > > > > > > >
> > > > > > > > > > > > It might be worth defining some helper functions for probing a single key or a single
> > > > > > > > > > > > field.  E.g.
> > > > > > > > > > > >
> > > > > > > > > > > > uint64_t __riscv_hwprobe_one_key(int64_t key, unsigned int flags)
> > > > > > > > > > > > {
> > > > > > > > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > > > > > > > >    int err = __riscv_hwprobe(&pair, 1, 0, NULL, flags);
> > > > > > > > > > > >    if (err)
> > > > > > > > > > > >      return err;
> > > > > > > > > > > >    if (pair.key == -1)
> > > > > > > > > > > >      return -ENOENT;
> > > > > > > > > > > >    return pair.value;
> > > > > > > > > > > > }
> > > > > > > > > > > >
> > > > > > > > > > > > This implementation requires that no future hwprobe key define a value which is a valid
> > > > > > > > > > > > value in the errno range (or better, bit 63 unused).  Alternately, or additionally:
> > > > > > > > > > > >
> > > > > > > > > > > > bool __riscv_hwprobe_one_mask(int64_t key, uint64_t mask, uint64_t val, int flags)
> > > > > > > > > > > > {
> > > > > > > > > > > >    struct riscv_hwprobe pair = { .key = key };
> > > > > > > > > > > >    return (__riscv_hwprobe(&pair, 1, 0, NULL, flags) == 0
> > > > > > > > > > > >            && pair.key != -1
> > > > > > > > > > > >            && (pair.value & mask) == val);
> > > > > > > > > > > > }
> > > > > > > > > > > >
> > > > > > > > > > > > These yield either
> > > > > > > > > > > >
> > > > > > > > > > > >      int64_t v = __riscv_hwprobe_one_key(CPUPERF_0, 0);
> > > > > > > > > > > >      if (v >= 0 && (v & MISALIGNED_MASK) == MISALIGNED_FAST)
> > > > > > > > > > > >        return __memcpy_noalignment;
> > > > > > > > > > > >      return __memcpy_generic;
> > > > > > > > > > > >
> > > > > > > > > > > > or
> > > > > > > > > > > >
> > > > > > > > > > > >      if (__riscv_hwprobe_one_mask(CPUPERF_0, MISALIGNED_MASK, MISALIGNED_FAST, 0))
> > > > > > > > > > > >        return __memcpy_noalignment;
> > > > > > > > > > > >      return __memcpy_generic;
> > > > > > > > > > > >
> > > > > > > > > > > > which to my mind looks much better for a pattern you'll be replicating so very many times
> > > > > > > > > > > > across all of the ifunc implementations in the system.
> > > > > > > > > > >
> > > > > > > > > > > Ah, I see. I could make a static inline function in the header that
> > > > > > > > > > > looks something like this:
> > > > > > > > > > >
> > > > > > > > > > > /* Helper function usable from ifunc selectors that probes a single key.  */
> > > > > > > > > > > static inline int __riscv_hwprobe_one(__riscv_hwprobe_t hwprobe_func,
> > > > > > > > > > >                                       signed long long int key,
> > > > > > > > > > >                                       unsigned long long int *value)
> > > > > > > > > > > {
> > > > > > > > > > >   struct riscv_hwprobe pair;
> > > > > > > > > > >   int rc;
> > > > > > > > > > >
> > > > > > > > > > >   if (!hwprobe_func)
> > > > > > > > > > >     return -ENOSYS;
> > > > > > > > > > >
> > > > > > > > > > >   pair.key = key;
> > > > > > > > > > >   rc = hwprobe_func(&pair, 1, 0, NULL, 0);
> > > > > > > > > > >   if (rc)
> > > > > > > > > > >     return rc;
> > > > > > > > > > >
> > > > > > > > > > >   if (pair.key < 0)
> > > > > > > > > > >     return -ENOENT;
> > > > > > > > > > >
> > > > > > > > > > >   *value = pair.value;
> > > > > > > > > > >   return 0;
> > > > > > > > > > > }
> > > > > > > > > > >
> > > > > > > > > > > The ifunc selector would then be significantly cleaned up, looking
> > > > > > > > > > > something like:
> > > > > > > > > > >
> > > > > > > > > > > if (__riscv_hwprobe_one(hwprobe_func, RISCV_HWPROBE_KEY_CPUPERF_0, &value))
> > > > > > > > > > >   return __memcpy_generic;
> > > > > > > > > > >
> > > > > > > > > > > if ((value & RISCV_HWPROBE_MISALIGNED_MASK) == RISCV_HWPROBE_MISALIGNED_FAST)
> > > > > > > > > > >   return __memcpy_noalignment;
> > > > > > > > > >
> > > > > > > > > > (Android's libc maintainer here, having joined the list just to talk
> > > > > > > > > > about risc-v ifuncs :-) )
> > > > > > > > > >
> > > > > > > > > > has anyone thought about calling ifunc resolvers more like this...
> > > > > > > > > >
> > > > > > > > > > --same part of the dynamic loader that caches the two getauxval()s for arm64--
> > > > > > > > > > static struct riscv_hwprobe probes[] = {
> > > > > > > > > >  {.key = RISCV_HWPROBE_KEY_MVENDORID},
> > > > > > > > > >  {.key = RISCV_HWPROBE_KEY_MARCHID},
> > > > > > > > > >  {.key = RISCV_HWPROBE_KEY_MIMPID},
> > > > > > > > > >  {.key = RISCV_HWPROBE_KEY_BASE_BEHAVIOR},
> > > > > > > > > >  {.key = RISCV_HWPROBE_KEY_IMA_EXT},
> > > > > > > > > >  {.key = RISCV_HWPROBE_KEY_CPUPERF_0},
> > > > > > > > > > ... // every time a new key is added to the kernel, we add it here
> > > > > > > > > > };
> > > > > > > > > > __riscv_hwprobe(...); // called once
> > > > > > > > > >
> > > > > > > > > > --part of the dynamic loader that calls ifunc resolvers--
> > > > > > > > > > (*ifunc_resolver)(sizeof(probes)/sizeof(probes[0]), probes);
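> > > > > > > > > >
> > > > > > > > > > (and just to sketch the consumer side -- purely illustrative, assuming the
> > > > > > > > > > keys stay dense so a key constant can double as the array index, and with
> > > > > > > > > > made-up function names; struct riscv_hwprobe and the key/value constants
> > > > > > > > > > come from <asm/hwprobe.h>:)
> > > > > > > > > >
> > > > > > > > > > void *memcpy_generic(void *, const void *, size_t);
> > > > > > > > > > void *memcpy_noalignment(void *, const void *, size_t);
> > > > > > > > > >
> > > > > > > > > > static void *memcpy_resolver(size_t probe_count,
> > > > > > > > > >                              const struct riscv_hwprobe *probes) {
> > > > > > > > > >   // fall back unless the probe data says misaligned access is fast
> > > > > > > > > >   if (probe_count > RISCV_HWPROBE_KEY_CPUPERF_0 &&
> > > > > > > > > >       (probes[RISCV_HWPROBE_KEY_CPUPERF_0].value &
> > > > > > > > > >        RISCV_HWPROBE_MISALIGNED_MASK) == RISCV_HWPROBE_MISALIGNED_FAST) {
> > > > > > > > > >     return memcpy_noalignment;
> > > > > > > > > >   }
> > > > > > > > > >   return memcpy_generic;
> > > > > > > > > > }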
> > > > > > > > > >
> > > > > > > > > > this is similar to what we already have for arm64 (where there's a
> > > > > > > > > > getauxval(AT_HWCAP) and a pointer to a struct for AT_HWCAP2 and
> > > > > > > > > > potentially others), but more uniform, and avoiding the source
> > > > > > > > > > (in)compatibility issues of adding new fields to a struct [even if it
> > > > > > > > > > does have a size_t to "version" it like the arm64 ifunc struct].
> > > > > > > > > >
> > > > > > > > > > yes, it means everyone pays to get all the hwprobes, but that gets
> > > > > > > > > > amortized. and lookup in the ifunc resolver is simple and quick. if we
> > > > > > > > > > know that the keys will be kept dense, we can even have code in ifunc
> > > > > > > > > > resolvers like
> > > > > > > > > >
> > > > > > > > > > if (probes[RISCV_HWPROBE_KEY_IMA_EXT].value & RISCV_HWPROBE_IMA_V) ...
> > > > > > > > > >
> > > > > > > > > > though personally for the "big ticket items" that get a letter to
> > > > > > > > > > themselves like V, i'd be tempted to pass `(getauxval(AT_HWCAP),
> > > > > > > > > > probe_count, probes_ptr)` to the resolver, but i hear that's
> > > > > > > > > > controversial :-)
> > > > > > > > >
> > > > > > > > > Hello, welcome to the fun! :)
> > > > > > > >
> > > > > > > > (sorry for the delay. i've been thinking :-) )
> > > > > > > >
> > > > > > > > > What you're describing here is almost exactly what we did inside the
> > > > > > > > > vDSO function. The vDSO function acts as a front for a handful of
> > > > > > > > > probe values that we've already completed and cached in userspace. We
> > > > > > > > > opted to make it a function, rather than exposing the data itself via
> > > > > > > > > vDSO, so that we had future flexibility in what elements we cached in
> > > > > > > > > userspace and their storage format. We can update the kernel as needed
> > > > > > > > > to cache the hottest things in userspace, even if that means
> > > > > > > > > rearranging the data format, passing through some extra information,
> > > > > > > > > or adding an extra snippet of code. My hope is that callers can
> > > > > > > > > directly interact with the vDSO function (though perhaps, as Richard
> > > > > > > > > suggested, with the help of a tidy inline helper), rather than trying
> > > > > > > > > to add a second layer of userspace caching.
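> > > > > > > > >
> > > > > > > > > Conceptually it behaves something like this (just a sketch of the shape,
> > > > > > > > > not the actual kernel code; the helper names are placeholders):
> > > > > > > > >
> > > > > > > > > int __vdso_riscv_hwprobe(struct riscv_hwprobe *pairs, size_t pair_count,
> > > > > > > > >                          size_t cpu_count, unsigned long *cpus,
> > > > > > > > >                          unsigned int flags)
> > > > > > > > > {
> > > > > > > > >   /* Answer from the userspace-visible cache when the request only
> > > > > > > > >      touches cached keys and all CPUs; otherwise fall through to the
> > > > > > > > >      real syscall.  */
> > > > > > > > >   if (flags == 0 && all_cpus_requested(cpu_count, cpus)
> > > > > > > > >       && all_keys_cached(pairs, pair_count))
> > > > > > > > >     {
> > > > > > > > >       fill_from_vdso_data(pairs, pair_count);
> > > > > > > > >       return 0;
> > > > > > > > >     }
> > > > > > > > >   return hwprobe_syscall(pairs, pair_count, cpu_count, cpus, flags);
> > > > > > > > > }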
> > > > > > > >
> > > > > > > > on reflection i think i might be too focused on the FMV use case, in
> > > > > > > > part because we're looking at those compiler-generated ifuncs for
> > > > > > > > arm64 on Android atm. i think i'm imagining a world where there's a
> > > > > > > > lot of that, and worrying about having to pay for the setup, call, and
> > > > > > > > loop for each ifunc, and wondering why we don't just pay once instead.
> > > > > > > > (as a bit of background context, Android "app" start is actually a
> > > > > > > > dlopen() in a clone of an existing zygote process, and in general app
> > > > > > > > launch time is one of the key metrics anyone who's serious is
> > > > > > > > optimizing for. you'd be surprised how much of my life i spend
> > > > > > > > explaining to people that if they want dlopen() to be faster, maybe
> > > > > > > > they shouldn't ask us to run thousands of ELF constructors.)
> > > > > > > >
> > > > > > > > but... the more time i spend looking at what we actually need in
> > > > > > > > third-party open source libraries right now, the more i realize that
> > > > > > > > libc and FMV (which is still a future thing for us anyway) are really
> > > > > > > > the only _actual_ ifunc users. perhaps in part because macOS/iOS don't have
> > > > > > > > ifuncs, all the libraries that are part of the OS itself, for example,
> > > > > > > > are just doing their own thing with function pointers and
> > > > > > > > pthread_once() or whatever.
> > > > > > > >
> > > > > > > > (i have yet to try to get any data on actual apps. i have no reason to
> > > > > > > > think they'll be very different, but that could easily be skewed by
> > > > > > > > popular middleware or a popular game engine using ifuncs, so i do plan
> > > > > > > > on following up on that.)
> > > > > > > >
> > > > > > > > "how do they decide what to set that function pointer to?". well, it
> > > > > > > > looks like in most cases cpuid on x86 and calls to getauxval()
> > > > > > > > everywhere else. in some cases that's actually via some other library:
> > > > > > > > https://github.com/pytorch/cpuinfo or
> > > > > > > > https://github.com/google/cpu_features for example. so they have a
> > > > > > > > layer of caching there, even in cases where they don't have a single
> > > > > > > > function that sets all the function pointers.
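> > > > > > > >
> > > > > > > > (for concreteness, the usual shape of that "do it yourself" pattern is
> > > > > > > > roughly the following -- not any particular library's code, and
> > > > > > > > platform_says_unaligned_is_fast() is a stand-in for whatever
> > > > > > > > cpuid/getauxval/__riscv_hwprobe query you actually need; assume
> > > > > > > > <pthread.h> and <stddef.h> are included:)
> > > > > > > >
> > > > > > > > void *my_memcpy_generic(void *, const void *, size_t);
> > > > > > > > void *my_memcpy_noalignment(void *, const void *, size_t);
> > > > > > > > int platform_says_unaligned_is_fast(void);
> > > > > > > >
> > > > > > > > static void *(*memcpy_impl)(void *, const void *, size_t);
> > > > > > > > static pthread_once_t memcpy_once = PTHREAD_ONCE_INIT;
> > > > > > > >
> > > > > > > > static void choose_memcpy(void) {
> > > > > > > >   // runs exactly once, on the first call through my_memcpy()
> > > > > > > >   memcpy_impl = platform_says_unaligned_is_fast() ? my_memcpy_noalignment
> > > > > > > >                                                   : my_memcpy_generic;
> > > > > > > > }
> > > > > > > >
> > > > > > > > void *my_memcpy(void *dst, const void *src, size_t n) {
> > > > > > > >   pthread_once(&memcpy_once, choose_memcpy);
> > > > > > > >   return memcpy_impl(dst, src, n);
> > > > > > > > }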
> > > > > > >
> > > > > > > Right, function multi-versioning is just the sort of spot where we'd
> > > > > > > imagine hwprobe gets used, since it's providing similar/equivalent
> > > > > > > information to what cpuid does on x86. It may not be quite as fast as
> > > > > > > cpuid (I don't know how fast cpuid actually is). But with the vDSO
> > > > > > > function+data in userspace it should be able to match getauxval() in
> > > > > > > performance, as they're both a function pointer plus a loop. We're
> > > > > > > sort of planning for a world in which RISC-V has a wider set of these
> > > > > > > values to fetch, such that an ifunc selector may need a more complex
> > > > > > > set of information. Hwprobe and the vDSO give us the ability both to
> > > > > > > answer multiple queries fast and to freely allocate more keys that may
> > > > > > > represent versioned features or even compound features.
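> > > > > > >
> > > > > > > For example, a selector that needs several keys can batch them into a
> > > > > > > single call (a sketch; the keys come from <asm/hwprobe.h>):
> > > > > > >
> > > > > > > struct riscv_hwprobe pairs[] = {
> > > > > > >   { .key = RISCV_HWPROBE_KEY_IMA_EXT_0 },
> > > > > > >   { .key = RISCV_HWPROBE_KEY_CPUPERF_0 },
> > > > > > > };
> > > > > > > int rc = __riscv_hwprobe(pairs, 2, 0, NULL, 0);
> > > > > > > /* On success, both pairs[i].value fields are filled in by one query.  */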
> > > > > >
> > > > > > yeah, my incorrect mental model was that -- primarily because of
> > > > > > x86-64 and cpuid -- every function would get its own ifunc resolver
> > > > > > that would have to make a query. but the [in progress] arm64
> > > > > > implementation shows that that's not really the case anyway, and we
> > > > > > can just cache __riscv_hwprobe() in the same [one] place that
> > > > > > getauxval() is already being cached for arm64.
> > > > >
> > > > > Sounds good.
> > > > >
> > > > > >
> > > > > > > > so assuming i don't find that apps look very different from the OS
> > > > > > > > (that is: that apps use lots of ifuncs), i probably don't care at all
> > > > > > > > until we get to FMV. and i probably don't care for FMV, because
> > > > > > > > compiler-rt (or gcc's equivalent) will be the "caching layer" there.
> > > > > > > > (and on Android it'll be a while before i have to worry about libc's
> > > > > > > > ifuncs because we'll require V and not use ifuncs there for the
> > > > > > > > foreseeable future.)
> > > > > > > >
> > > > > > > > so, yeah, given that i've adopted the "pass a null pointer rather than
> > > > > > > > no arguments" convention you have, we have room for expansion if/when
> > > > > > > > FMV is a big thing, and until then -- unless i'm shocked by what i
> > > > > > > > find looking at actual apps -- i don't think i have any reason to
> > > > > > > > believe that ifuncs matter that much, and if compiler-rt makes one
> > > > > > > > __riscv_hwprobe() call per .so, that's probably fine. (i already spend
> > > > > > > > a big chunk of my life advising people to just have one .so file,
> > > > > > > > exporting nothing but a JNI_OnLoad symbol, so this will just make that
> > > > > > > > advice even better advice :-) )
> > > > > > >
> > > > > > > Just to confirm, by "pass a null pointer", you're saying that the
> > > > > > > Android libc also passes NULL as the second ifunc selector argument
> > > > > > > (or first)?
> > > > > >
> > > > > > #elif defined(__riscv)
> > > > > >   // This argument and its value is just a placeholder for now,
> > > > > >   // but it means that if we do pass something in future (such as
> > > > > >   // getauxval() and/or hwprobe key/value pairs), callees will be able to
> > > > > >   // recognize what they're being given.
> > > > > >   typedef ElfW(Addr) (*ifunc_resolver_t)(void*);
> > > > > >   return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(nullptr);
> > > > > >
> > > > > > it's arm64 that has the initial getauxval() argument:
> > > > > >
> > > > > > #if defined(__aarch64__)
> > > > > >   typedef ElfW(Addr) (*ifunc_resolver_t)(uint64_t, __ifunc_arg_t*);
> > > > > >   static __ifunc_arg_t arg;
> > > > > >   static bool initialized = false;
> > > > > >   if (!initialized) {
> > > > > >     initialized = true;
> > > > > >     arg._size = sizeof(__ifunc_arg_t);
> > > > > >     arg._hwcap = getauxval(AT_HWCAP);
> > > > > >     arg._hwcap2 = getauxval(AT_HWCAP2);
> > > > > >   }
> > > > > >   return reinterpret_cast<ifunc_resolver_t>(resolver_addr)(arg._hwcap
> > > > > > | _IFUNC_ARG_HWCAP, &arg);
> > > > > >
> > > > > > https://android.googlesource.com/platform/bionic/+/main/libc/bionic/bionic_call_ifunc_resolver.cpp
> > > > > >
> > > > > > > That's good. It sounds like you're planning to just
> > > > > > > continue passing NULL for now, and wait for people to start clamoring
> > > > > > > for this in android libc?
> > > > > >
> > > > > > yeah, and i'm assuming there will never be any clamor ... yesterday
> > > > > > and today i actually checked a bunch of popular apks, and didn't find
> > > > > > any that were currently using ifuncs.
> > > > > >
> > > > > > the only change i'm thinking of making right now is that "there's a
> > > > > > single argument, and it's null" should probably be the default.
> > > > > > obviously since Android doesn't add new architectures very often, this
> > > > > > is only likely to affect x86/x86-64 for the foreseeable future, but
> > > > > > being able to recognize at a glance "am i running under a libc new
> > > > > > enough to pass me arguments?" would certainly have helped for arm64.
> > > > > > even if x86/x86-64 never benefit, it seems like the right default for
> > > > > > the #else clause...
> > > > >
> > > > > Sounds good, thanks for the pointers. The paranoid person in me would
> > > > > also add a comment in the risc-v section that if a pointer to hwprobe
> > > > > is added, it should be added as the second argument, behind hwcap as
> > > > > the first (assuming this change lands).
> > > > >
> > > > > Come to think of it, the static inline helper I'm proposing in my
> > > > > discussion with Richard needs to take both arguments, since callers
> > > > > need to check both ((arg1 != 0) && (arg2 != NULL)) to safely know that
> > > > > arg2 is a pointer to __riscv_hwprobe().
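> > > > >
> > > > > Something like this, as a minimal sketch of a selector that wants to be
> > > > > portable across libcs (assuming arg1 is hwcap and arg2 is either NULL or
> > > > > the hwprobe function pointer; the names are illustrative):
> > > > >
> > > > > static void *select_memcpy(unsigned long long hwcap, void *arg2)
> > > > > {
> > > > >   __riscv_hwprobe_t hwprobe_func = (__riscv_hwprobe_t) arg2;
> > > > >   unsigned long long int value;
> > > > >
> > > > >   /* Only trust arg2 when both arguments were actually passed.  */
> > > > >   if (hwcap != 0 && hwprobe_func != NULL
> > > > >       && __riscv_hwprobe_one(hwprobe_func, RISCV_HWPROBE_KEY_CPUPERF_0,
> > > > >                              &value) == 0
> > > > >       && (value & RISCV_HWPROBE_MISALIGNED_MASK)
> > > > >          == RISCV_HWPROBE_MISALIGNED_FAST)
> > > > >     return __memcpy_noalignment;
> > > > >
> > > > >   return __memcpy_generic;
> > > > > }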
> > > >
> > > > presumably not `(arg1 != 0)` but `(arg1 & _IFUNC_ARG_HWCAP)` to match arm64?
> > >
> > > It looks like we didn't do that _IFUNC_ARG_HWCAP bit on riscv.
> > > Actually, looking at the history of sysdeps/riscv/dl-irel.h, hwcap has
> > > always been passed as the first argument. So I think I don't need to
> > > check it in the (glibc-specific) inline helper function, I can safely
> > > assume it's there and go straight to checking the second argument.
> >
> > oh i misunderstood what you were saying earlier.
>
> My bad for causing confusion; I learned something from your reply and
> changed my conclusion.
>
> >
> > > If you were coding this directly in a library or application, you
> > > would need to check the first arg to be compatible with other libcs
> > > like Android's.
> >
> > we haven't shipped yet, so if you're telling me glibc is passing
> > `(getauxval(AT_HWCAP), nullptr)`, i'll change bionic to do the same
> > today ... the whole reason i'm here on this thread is to ensure source
> > compatibility for anyone writing ifuncs :-)
>
> Ah, yes, they are (and on risc-v always have been) passed
> (getauxval(AT_HWCAP), nullptr), so do exactly that.

done: https://android-review.googlesource.com/c/platform/bionic/+/2695693

> -Evan


end of thread, other threads:[~2023-08-22 15:06 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-02 15:58 [PATCH v6 0/5] RISC-V: ifunced memcpy using new kernel hwprobe interface Evan Green
2023-08-02 15:58 ` [PATCH v6 1/5] riscv: Add Linux hwprobe syscall support Evan Green
2023-08-02 16:52   ` Joseph Myers
2023-08-03  7:24   ` Florian Weimer
2023-08-02 15:59 ` [PATCH v6 2/5] riscv: Add hwprobe vdso call support Evan Green
2023-08-02 15:59 ` [PATCH v6 3/5] riscv: Add __riscv_hwprobe pointer to ifunc calls Evan Green
2023-08-02 15:59 ` [PATCH v6 4/5] riscv: Enable multi-arg ifunc resolvers Evan Green
2023-08-02 15:59 ` [PATCH v6 5/5] riscv: Add and use alignment-ignorant memcpy Evan Green
2023-08-03  7:25   ` Florian Weimer
2023-08-03 17:50     ` Richard Henderson
2023-08-03 18:42       ` Evan Green
2023-08-03 22:30         ` Richard Henderson
2023-08-07 22:10           ` Evan Green
2023-08-07 22:21             ` Florian Weimer
2023-08-07 22:30               ` Evan Green
2023-08-07 22:48             ` enh
2023-08-08  0:01               ` Evan Green
2023-08-12  0:01                 ` enh
2023-08-15 16:40                   ` Evan Green
2023-08-15 21:53                     ` enh
2023-08-15 23:01                       ` Evan Green
2023-08-16 23:18                         ` enh
2023-08-17 16:27                           ` Evan Green
2023-08-17 16:37                             ` enh
2023-08-17 17:40                               ` Evan Green
2023-08-22 15:06                                 ` enh
2023-08-02 16:03 ` [PATCH v6 0/5] RISC-V: ifunced memcpy using new kernel hwprobe interface Evan Green
