* [PATCH v2 1/5] aarch64: Add vector implementations of tan routines
@ 2023-10-05 16:10 Joe Ramsay
From: Joe Ramsay @ 2023-10-05 16:10 UTC (permalink / raw)
  To: libc-alpha; +Cc: Joe Ramsay

This patch also includes utility headers for evaluating polynomials
using various schemes (Horner, pairwise Horner and Estrin).
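
For illustration (not part of the patch), the trade-off between the two
main schemes can be shown with scalar doubles: Horner minimises the
operation count but forms one serial chain of dependent FMAs, while
Estrin pre-computes x^2 and x^4 so that independent FMAs can issue in
parallel.

#include <math.h>

/* Horner: fewest operations, one serial FMA chain.  */
static double
horner_7 (double x, const double *poly)
{
  double p = poly[7];
  for (int i = 6; i >= 0; i--)
    p = fma (x, p, poly[i]);
  return p;
}

/* Estrin: the same order-7 polynomial with paired terms, giving the
   FMAs instruction-level parallelism at the cost of computing x^2 and
   x^4 up front.  Mirrors estrin_7/pairwise_poly_3 in poly_generic.h.  */
static double
estrin_7 (double x, const double *poly)
{
  double x2 = x * x, x4 = x2 * x2;
  double p01 = fma (poly[1], x, poly[0]);
  double p23 = fma (poly[3], x, poly[2]);
  double p45 = fma (poly[5], x, poly[4]);
  double p67 = fma (poly[7], x, poly[6]);
  double p03 = fma (p23, x2, p01);
  double p47 = fma (p67, x2, p45);
  return fma (p47, x4, p03);
}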
---
Changes from v1:
* Silence sign-of-zero check for mathvec tan
* Optimise register usage around special-case, and remove -0 check
Thanks,
Joe
 math/auto-libm-test-in                        |   2 +-
 math/auto-libm-test-out-tan                   |  50 +--
 sysdeps/aarch64/fpu/Makefile                  |   3 +-
 sysdeps/aarch64/fpu/Versions                  |   6 +
 sysdeps/aarch64/fpu/bits/math-vector.h        |   4 +
 sysdeps/aarch64/fpu/poly_advsimd_f32.h        |  36 ++
 sysdeps/aarch64/fpu/poly_advsimd_f64.h        |  36 ++
 sysdeps/aarch64/fpu/poly_generic.h            | 285 ++++++++++++++++
 sysdeps/aarch64/fpu/poly_sve_f32.h            |  38 +++
 sysdeps/aarch64/fpu/poly_sve_f64.h            |  38 +++
 sysdeps/aarch64/fpu/poly_sve_generic.h        | 313 ++++++++++++++++++
 sysdeps/aarch64/fpu/tan_advsimd.c             | 123 +++++++
 sysdeps/aarch64/fpu/tan_sve.c                 | 104 ++++++
 sysdeps/aarch64/fpu/tanf_advsimd.c            | 129 ++++++++
 sysdeps/aarch64/fpu/tanf_sve.c                | 118 +++++++
 .../fpu/test-double-advsimd-wrappers.c        |   1 +
 .../aarch64/fpu/test-double-sve-wrappers.c    |   1 +
 .../aarch64/fpu/test-float-advsimd-wrappers.c |   1 +
 sysdeps/aarch64/fpu/test-float-sve-wrappers.c |   1 +
 sysdeps/aarch64/libm-test-ulps                |   8 +
 .../unix/sysv/linux/aarch64/libmvec.abilist   |   4 +
 21 files changed, 1274 insertions(+), 27 deletions(-)
 create mode 100644 sysdeps/aarch64/fpu/poly_advsimd_f32.h
 create mode 100644 sysdeps/aarch64/fpu/poly_advsimd_f64.h
 create mode 100644 sysdeps/aarch64/fpu/poly_generic.h
 create mode 100644 sysdeps/aarch64/fpu/poly_sve_f32.h
 create mode 100644 sysdeps/aarch64/fpu/poly_sve_f64.h
 create mode 100644 sysdeps/aarch64/fpu/poly_sve_generic.h
 create mode 100644 sysdeps/aarch64/fpu/tan_advsimd.c
 create mode 100644 sysdeps/aarch64/fpu/tan_sve.c
 create mode 100644 sysdeps/aarch64/fpu/tanf_advsimd.c
 create mode 100644 sysdeps/aarch64/fpu/tanf_sve.c

diff --git a/math/auto-libm-test-in b/math/auto-libm-test-in
index 2672eb1f6a..70892503d6 100644
--- a/math/auto-libm-test-in
+++ b/math/auto-libm-test-in
@@ -7655,7 +7655,7 @@ sqrt min
 sqrt min_subnorm
 
 tan 0
-tan -0
+tan -0 no-mathvec
 tan pi/4
 tan pi/2
 tan -pi/2
diff --git a/math/auto-libm-test-out-tan b/math/auto-libm-test-out-tan
index 7d00d03e1d..f46fdc7ec6 100644
--- a/math/auto-libm-test-out-tan
+++ b/math/auto-libm-test-out-tan
@@ -23,31 +23,31 @@ tan 0
 = tan tonearest ibm128 0x0p+0 : 0x0p+0 : inexact-ok
 = tan towardzero ibm128 0x0p+0 : 0x0p+0 : inexact-ok
 = tan upward ibm128 0x0p+0 : 0x0p+0 : inexact-ok
-tan -0
-= tan downward binary32 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan tonearest binary32 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan towardzero binary32 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan upward binary32 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan downward binary64 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan tonearest binary64 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan towardzero binary64 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan upward binary64 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan downward intel96 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan tonearest intel96 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan towardzero intel96 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan upward intel96 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan downward m68k96 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan tonearest m68k96 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan towardzero m68k96 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan upward m68k96 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan downward binary128 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan tonearest binary128 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan towardzero binary128 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan upward binary128 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan downward ibm128 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan tonearest ibm128 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan towardzero ibm128 -0x0p+0 : -0x0p+0 : inexact-ok
-= tan upward ibm128 -0x0p+0 : -0x0p+0 : inexact-ok
+tan -0 no-mathvec
+= tan downward binary32 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan tonearest binary32 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan towardzero binary32 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan upward binary32 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan downward binary64 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan tonearest binary64 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan towardzero binary64 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan upward binary64 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan downward intel96 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan tonearest intel96 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan towardzero intel96 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan upward intel96 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan downward m68k96 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan tonearest m68k96 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan towardzero m68k96 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan upward m68k96 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan downward binary128 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan tonearest binary128 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan towardzero binary128 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan upward binary128 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan downward ibm128 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan tonearest ibm128 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan towardzero ibm128 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
+= tan upward ibm128 -0x0p+0 : -0x0p+0 : no-mathvec inexact-ok
 tan pi/4
 = tan downward binary32 0xc.90fdbp-4 : 0x1p+0 : inexact-ok
 = tan tonearest binary32 0xc.90fdbp-4 : 0x1p+0 : inexact-ok
diff --git a/sysdeps/aarch64/fpu/Makefile b/sysdeps/aarch64/fpu/Makefile
index 04aa2e37ca..a1bbc9bcaa 100644
--- a/sysdeps/aarch64/fpu/Makefile
+++ b/sysdeps/aarch64/fpu/Makefile
@@ -1,7 +1,8 @@
 libmvec-supported-funcs = cos \
                           exp \
                           log \
-                          sin
+                          sin \
+                          tan
 
 float-advsimd-funcs = $(libmvec-supported-funcs)
 double-advsimd-funcs = $(libmvec-supported-funcs)
diff --git a/sysdeps/aarch64/fpu/Versions b/sysdeps/aarch64/fpu/Versions
index c85c0f3efb..f0ca0940a9 100644
--- a/sysdeps/aarch64/fpu/Versions
+++ b/sysdeps/aarch64/fpu/Versions
@@ -17,4 +17,10 @@ libmvec {
     _ZGVsMxv_sin;
     _ZGVsMxv_sinf;
   }
+  GLIBC_2.39 {
+    _ZGVnN4v_tanf;
+    _ZGVnN2v_tan;
+    _ZGVsMxv_tanf;
+    _ZGVsMxv_tan;
+  }
 }
diff --git a/sysdeps/aarch64/fpu/bits/math-vector.h b/sysdeps/aarch64/fpu/bits/math-vector.h
index 7c200599c1..6193213147 100644
--- a/sysdeps/aarch64/fpu/bits/math-vector.h
+++ b/sysdeps/aarch64/fpu/bits/math-vector.h
@@ -53,11 +53,13 @@ __vpcs __f32x4_t _ZGVnN4v_cosf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_expf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_logf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_sinf (__f32x4_t);
+__vpcs __f32x4_t _ZGVnN4v_tanf (__f32x4_t);
 
 __vpcs __f64x2_t _ZGVnN2v_cos (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_exp (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_log (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_sin (__f64x2_t);
+__vpcs __f64x2_t _ZGVnN2v_tan (__f64x2_t);
 
 #  undef __ADVSIMD_VEC_MATH_SUPPORTED
 #endif /* __ADVSIMD_VEC_MATH_SUPPORTED */
@@ -68,11 +70,13 @@ __sv_f32_t _ZGVsMxv_cosf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_expf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_logf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_sinf (__sv_f32_t, __sv_bool_t);
+__sv_f32_t _ZGVsMxv_tanf (__sv_f32_t, __sv_bool_t);
 
 __sv_f64_t _ZGVsMxv_cos (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_exp (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_log (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_sin (__sv_f64_t, __sv_bool_t);
+__sv_f64_t _ZGVsMxv_tan (__sv_f64_t, __sv_bool_t);
 
 #  undef __SVE_VEC_MATH_SUPPORTED
 #endif /* __SVE_VEC_MATH_SUPPORTED */
diff --git a/sysdeps/aarch64/fpu/poly_advsimd_f32.h b/sysdeps/aarch64/fpu/poly_advsimd_f32.h
new file mode 100644
index 0000000000..9e2ad9ad94
--- /dev/null
+++ b/sysdeps/aarch64/fpu/poly_advsimd_f32.h
@@ -0,0 +1,36 @@
+/* Helpers for evaluating polynomials on single-precision AdvSIMD input, using
+   various schemes.
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#ifndef AARCH64_FPU_POLY_ADVSIMD_F32_H
+#define AARCH64_FPU_POLY_ADVSIMD_F32_H
+
+#include <arm_neon.h>
+
+/* Wrap AdvSIMD f32 helpers: evaluation of a given scheme and order has the
+   form v_[scheme]_[order]_f32.  */
+#define VTYPE float32x4_t
+#define FMA(x, y, z) vfmaq_f32 (z, x, y)
+#define VWRAP(f) v_##f##_f32
+#include "poly_generic.h"
+#undef VWRAP
+#undef FMA
+#undef VTYPE
+
+#endif
diff --git a/sysdeps/aarch64/fpu/poly_advsimd_f64.h b/sysdeps/aarch64/fpu/poly_advsimd_f64.h
new file mode 100644
index 0000000000..955cfc08ce
--- /dev/null
+++ b/sysdeps/aarch64/fpu/poly_advsimd_f64.h
@@ -0,0 +1,36 @@
+/* Helpers for evaluating polynomials on double-precision AdvSIMD input, using
+   various schemes.
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#ifndef AARCH64_FPU_POLY_ADVSIMD_F64_H
+#define AARCH64_FPU_POLY_ADVSIMD_F64_H
+
+#include <arm_neon.h>
+
+/* Wrap AdvSIMD f64 helpers: evaluation of a given scheme and order has the
+   form v_[scheme]_[order]_f64.  */
+#define VTYPE float64x2_t
+#define FMA(x, y, z) vfmaq_f64 (z, x, y)
+#define VWRAP(f) v_##f##_f64
+#include "poly_generic.h"
+#undef VWRAP
+#undef FMA
+#undef VTYPE
+
+#endif
diff --git a/sysdeps/aarch64/fpu/poly_generic.h b/sysdeps/aarch64/fpu/poly_generic.h
new file mode 100644
index 0000000000..84f042182b
--- /dev/null
+++ b/sysdeps/aarch64/fpu/poly_generic.h
@@ -0,0 +1,285 @@
+/* Generic helpers for evaluating polynomials with various schemes.
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+
+#ifndef VTYPE
+# error Cannot use poly_generic without defining VTYPE
+#endif
+#ifndef VWRAP
+# error Cannot use poly_generic without defining VWRAP
+#endif
+#ifndef FMA
+# error Cannot use poly_generic without defining FMA
+#endif
+
+static inline VTYPE VWRAP (pairwise_poly_3) (VTYPE x, VTYPE x2,
+					     const VTYPE *poly)
+{
+  /* At order 3, Estrin and Pairwise Horner are identical.  */
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  VTYPE p23 = FMA (poly[3], x, poly[2]);
+  return FMA (p23, x2, p01);
+}
+
+static inline VTYPE VWRAP (estrin_4) (VTYPE x, VTYPE x2, VTYPE x4,
+				      const VTYPE *poly)
+{
+  VTYPE p03 = VWRAP (pairwise_poly_3) (x, x2, poly);
+  return FMA (poly[4], x4, p03);
+}
+static inline VTYPE VWRAP (estrin_5) (VTYPE x, VTYPE x2, VTYPE x4,
+				      const VTYPE *poly)
+{
+  VTYPE p03 = VWRAP (pairwise_poly_3) (x, x2, poly);
+  VTYPE p45 = FMA (poly[5], x, poly[4]);
+  return FMA (p45, x4, p03);
+}
+static inline VTYPE VWRAP (estrin_6) (VTYPE x, VTYPE x2, VTYPE x4,
+				      const VTYPE *poly)
+{
+  VTYPE p03 = VWRAP (pairwise_poly_3) (x, x2, poly);
+  VTYPE p45 = FMA (poly[5], x, poly[4]);
+  VTYPE p46 = FMA (poly[6], x2, p45);
+  return FMA (p46, x4, p03);
+}
+static inline VTYPE VWRAP (estrin_7) (VTYPE x, VTYPE x2, VTYPE x4,
+				      const VTYPE *poly)
+{
+  VTYPE p03 = VWRAP (pairwise_poly_3) (x, x2, poly);
+  VTYPE p47 = VWRAP (pairwise_poly_3) (x, x2, poly + 4);
+  return FMA (p47, x4, p03);
+}
+static inline VTYPE VWRAP (estrin_8) (VTYPE x, VTYPE x2, VTYPE x4, VTYPE x8,
+				      const VTYPE *poly)
+{
+  return FMA (poly[8], x8, VWRAP (estrin_7) (x, x2, x4, poly));
+}
+static inline VTYPE VWRAP (estrin_9) (VTYPE x, VTYPE x2, VTYPE x4, VTYPE x8,
+				      const VTYPE *poly)
+{
+  VTYPE p89 = FMA (poly[9], x, poly[8]);
+  return FMA (p89, x8, VWRAP (estrin_7) (x, x2, x4, poly));
+}
+static inline VTYPE VWRAP (estrin_10) (VTYPE x, VTYPE x2, VTYPE x4, VTYPE x8,
+				       const VTYPE *poly)
+{
+  VTYPE p89 = FMA (poly[9], x, poly[8]);
+  VTYPE p8_10 = FMA (poly[10], x2, p89);
+  return FMA (p8_10, x8, VWRAP (estrin_7) (x, x2, x4, poly));
+}
+static inline VTYPE VWRAP (estrin_11) (VTYPE x, VTYPE x2, VTYPE x4, VTYPE x8,
+				       const VTYPE *poly)
+{
+  VTYPE p8_11 = VWRAP (pairwise_poly_3) (x, x2, poly + 8);
+  return FMA (p8_11, x8, VWRAP (estrin_7) (x, x2, x4, poly));
+}
+static inline VTYPE VWRAP (estrin_12) (VTYPE x, VTYPE x2, VTYPE x4, VTYPE x8,
+				       const VTYPE *poly)
+{
+  return FMA (VWRAP (estrin_4) (x, x2, x4, poly + 8), x8,
+	      VWRAP (estrin_7) (x, x2, x4, poly));
+}
+static inline VTYPE VWRAP (estrin_13) (VTYPE x, VTYPE x2, VTYPE x4, VTYPE x8,
+				       const VTYPE *poly)
+{
+  return FMA (VWRAP (estrin_5) (x, x2, x4, poly + 8), x8,
+	      VWRAP (estrin_7) (x, x2, x4, poly));
+}
+static inline VTYPE VWRAP (estrin_14) (VTYPE x, VTYPE x2, VTYPE x4, VTYPE x8,
+				       const VTYPE *poly)
+{
+  return FMA (VWRAP (estrin_6) (x, x2, x4, poly + 8), x8,
+	      VWRAP (estrin_7) (x, x2, x4, poly));
+}
+static inline VTYPE VWRAP (estrin_15) (VTYPE x, VTYPE x2, VTYPE x4, VTYPE x8,
+				       const VTYPE *poly)
+{
+  return FMA (VWRAP (estrin_7) (x, x2, x4, poly + 8), x8,
+	      VWRAP (estrin_7) (x, x2, x4, poly));
+}
+static inline VTYPE VWRAP (estrin_16) (VTYPE x, VTYPE x2, VTYPE x4, VTYPE x8,
+				       VTYPE x16, const VTYPE *poly)
+{
+  return FMA (poly[16], x16, VWRAP (estrin_15) (x, x2, x4, x8, poly));
+}
+static inline VTYPE VWRAP (estrin_17) (VTYPE x, VTYPE x2, VTYPE x4, VTYPE x8,
+				       VTYPE x16, const VTYPE *poly)
+{
+  VTYPE p16_17 = FMA (poly[17], x, poly[16]);
+  return FMA (p16_17, x16, VWRAP (estrin_15) (x, x2, x4, x8, poly));
+}
+static inline VTYPE VWRAP (estrin_18) (VTYPE x, VTYPE x2, VTYPE x4, VTYPE x8,
+				       VTYPE x16, const VTYPE *poly)
+{
+  VTYPE p16_17 = FMA (poly[17], x, poly[16]);
+  VTYPE p16_18 = FMA (poly[18], x2, p16_17);
+  return FMA (p16_18, x16, VWRAP (estrin_15) (x, x2, x4, x8, poly));
+}
+static inline VTYPE VWRAP (estrin_19) (VTYPE x, VTYPE x2, VTYPE x4, VTYPE x8,
+				       VTYPE x16, const VTYPE *poly)
+{
+  VTYPE p16_19 = VWRAP (pairwise_poly_3) (x, x2, poly + 16);
+  return FMA (p16_19, x16, VWRAP (estrin_15) (x, x2, x4, x8, poly));
+}
+
+static inline VTYPE VWRAP (horner_3) (VTYPE x, const VTYPE *poly)
+{
+  VTYPE p = FMA (poly[3], x, poly[2]);
+  p = FMA (x, p, poly[1]);
+  p = FMA (x, p, poly[0]);
+  return p;
+}
+static inline VTYPE VWRAP (horner_4) (VTYPE x, const VTYPE *poly)
+{
+  VTYPE p = FMA (poly[4], x, poly[3]);
+  p = FMA (x, p, poly[2]);
+  p = FMA (x, p, poly[1]);
+  p = FMA (x, p, poly[0]);
+  return p;
+}
+static inline VTYPE VWRAP (horner_5) (VTYPE x, const VTYPE *poly)
+{
+  return FMA (x, VWRAP (horner_4) (x, poly + 1), poly[0]);
+}
+static inline VTYPE VWRAP (horner_6) (VTYPE x, const VTYPE *poly)
+{
+  return FMA (x, VWRAP (horner_5) (x, poly + 1), poly[0]);
+}
+static inline VTYPE VWRAP (horner_7) (VTYPE x, const VTYPE *poly)
+{
+  return FMA (x, VWRAP (horner_6) (x, poly + 1), poly[0]);
+}
+static inline VTYPE VWRAP (horner_8) (VTYPE x, const VTYPE *poly)
+{
+  return FMA (x, VWRAP (horner_7) (x, poly + 1), poly[0]);
+}
+static inline VTYPE VWRAP (horner_9) (VTYPE x, const VTYPE *poly)
+{
+  return FMA (x, VWRAP (horner_8) (x, poly + 1), poly[0]);
+}
+static inline VTYPE VWRAP (horner_10) (VTYPE x, const VTYPE *poly)
+{
+  return FMA (x, VWRAP (horner_9) (x, poly + 1), poly[0]);
+}
+static inline VTYPE VWRAP (horner_11) (VTYPE x, const VTYPE *poly)
+{
+  return FMA (x, VWRAP (horner_10) (x, poly + 1), poly[0]);
+}
+static inline VTYPE VWRAP (horner_12) (VTYPE x, const VTYPE *poly)
+{
+  return FMA (x, VWRAP (horner_11) (x, poly + 1), poly[0]);
+}
+
+static inline VTYPE VWRAP (pw_horner_4) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  VTYPE p23 = FMA (poly[3], x, poly[2]);
+  VTYPE p;
+  p = FMA (x2, poly[4], p23);
+  p = FMA (x2, p, p01);
+  return p;
+}
+static inline VTYPE VWRAP (pw_horner_5) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  VTYPE p23 = FMA (poly[3], x, poly[2]);
+  VTYPE p45 = FMA (poly[5], x, poly[4]);
+  VTYPE p;
+  p = FMA (x2, p45, p23);
+  p = FMA (x2, p, p01);
+  return p;
+}
+static inline VTYPE VWRAP (pw_horner_6) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p26 = VWRAP (pw_horner_4) (x, x2, poly + 2);
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  return FMA (x2, p26, p01);
+}
+static inline VTYPE VWRAP (pw_horner_7) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p27 = VWRAP (pw_horner_5) (x, x2, poly + 2);
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  return FMA (x2, p27, p01);
+}
+static inline VTYPE VWRAP (pw_horner_8) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p28 = VWRAP (pw_horner_6) (x, x2, poly + 2);
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  return FMA (x2, p28, p01);
+}
+static inline VTYPE VWRAP (pw_horner_9) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p29 = VWRAP (pw_horner_7) (x, x2, poly + 2);
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  return FMA (x2, p29, p01);
+}
+static inline VTYPE VWRAP (pw_horner_10) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p2_10 = VWRAP (pw_horner_8) (x, x2, poly + 2);
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  return FMA (x2, p2_10, p01);
+}
+static inline VTYPE VWRAP (pw_horner_11) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p2_11 = VWRAP (pw_horner_9) (x, x2, poly + 2);
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  return FMA (x2, p2_11, p01);
+}
+static inline VTYPE VWRAP (pw_horner_12) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p2_12 = VWRAP (pw_horner_10) (x, x2, poly + 2);
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  return FMA (x2, p2_12, p01);
+}
+static inline VTYPE VWRAP (pw_horner_13) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p2_13 = VWRAP (pw_horner_11) (x, x2, poly + 2);
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  return FMA (x2, p2_13, p01);
+}
+static inline VTYPE VWRAP (pw_horner_14) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p2_14 = VWRAP (pw_horner_12) (x, x2, poly + 2);
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  return FMA (x2, p2_14, p01);
+}
+static inline VTYPE VWRAP (pw_horner_15) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p2_15 = VWRAP (pw_horner_13) (x, x2, poly + 2);
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  return FMA (x2, p2_15, p01);
+}
+static inline VTYPE VWRAP (pw_horner_16) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p2_16 = VWRAP (pw_horner_14) (x, x2, poly + 2);
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  return FMA (x2, p2_16, p01);
+}
+static inline VTYPE VWRAP (pw_horner_17) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p2_17 = VWRAP (pw_horner_15) (x, x2, poly + 2);
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  return FMA (x2, p2_17, p01);
+}
+static inline VTYPE VWRAP (pw_horner_18) (VTYPE x, VTYPE x2, const VTYPE *poly)
+{
+  VTYPE p2_18 = VWRAP (pw_horner_16) (x, x2, poly + 2);
+  VTYPE p01 = FMA (poly[1], x, poly[0]);
+  return FMA (x2, p2_18, p01);
+}
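
For illustration (not part of the patch), the generic header can be
instantiated for any type with an FMA.  A hypothetical scalar-double
instantiation would look like the following, producing s_estrin_7_f64,
s_horner_9_f64 and so on, exactly as poly_advsimd_f64.h produces the
v_*_f64 variants:

#include <math.h>

#define VTYPE double
#define FMA(x, y, z) fma (x, y, z)
#define VWRAP(f) s_##f##_f64
#include "poly_generic.h"
#undef VWRAP
#undef FMA
#undef VTYPE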
diff --git a/sysdeps/aarch64/fpu/poly_sve_f32.h b/sysdeps/aarch64/fpu/poly_sve_f32.h
new file mode 100644
index 0000000000..dcf2fab8dd
--- /dev/null
+++ b/sysdeps/aarch64/fpu/poly_sve_f32.h
@@ -0,0 +1,38 @@
+/* Helpers for evaluating polynomials on single-precision SVE input, using
+   various schemes.
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#ifndef AARCH64_FPU_POLY_SVE_F32_H
+#define AARCH64_FPU_POLY_SVE_F32_H
+
+#include <arm_sve.h>
+
+/* Wrap SVE f32 helpers: evaluation of a given scheme and order has the form
+   sv_[scheme]_[order]_f32_x.  */
+#define VTYPE svfloat32_t
+#define STYPE float
+#define VWRAP(f) sv_##f##_f32_x
+#define DUP svdup_n_f32
+#include "poly_sve_generic.h"
+#undef DUP
+#undef VWRAP
+#undef STYPE
+#undef VTYPE
+
+#endif
diff --git a/sysdeps/aarch64/fpu/poly_sve_f64.h b/sysdeps/aarch64/fpu/poly_sve_f64.h
new file mode 100644
index 0000000000..97a0b76637
--- /dev/null
+++ b/sysdeps/aarch64/fpu/poly_sve_f64.h
@@ -0,0 +1,38 @@
+/* Helpers for evaluating polynomials on double-precision SVE input, using
+   various schemes.
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#ifndef AARCH64_FPU_POLY_SVE_F64_H
+#define AARCH64_FPU_POLY_SVE_F64_H
+
+#include <arm_sve.h>
+
+/* Wrap SVE f64 helpers: evaluation of a given scheme and order has the form
+   sv_[scheme]_[order]_f64_x.  */
+#define VTYPE svfloat64_t
+#define STYPE double
+#define VWRAP(f) sv_##f##_f64_x
+#define DUP svdup_n_f64
+#include "poly_sve_generic.h"
+#undef DUP
+#undef VWRAP
+#undef STYPE
+#undef VTYPE
+
+#endif
diff --git a/sysdeps/aarch64/fpu/poly_sve_generic.h b/sysdeps/aarch64/fpu/poly_sve_generic.h
new file mode 100644
index 0000000000..0ecf5ce45b
--- /dev/null
+++ b/sysdeps/aarch64/fpu/poly_sve_generic.h
@@ -0,0 +1,313 @@
+/* Helpers for evaluating polynomials with various schemes - specific to SVE
+   but precision-agnostic.
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#ifndef VTYPE
+# error Cannot use poly_sve_generic without defining VTYPE
+#endif
+#ifndef STYPE
+# error Cannot use poly_sve_generic without defining STYPE
+#endif
+#ifndef VWRAP
+# error Cannot use poly_sve_generic without defining VWRAP
+#endif
+#ifndef DUP
+# error Cannot use poly_sve_generic without defining DUP
+#endif
+
+static inline VTYPE VWRAP (pairwise_poly_3) (svbool_t pg, VTYPE x, VTYPE x2,
+					     const STYPE *poly)
+{
+  /* At order 3, Estrin and Pairwise Horner are identical.  */
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  VTYPE p23 = svmla_x (pg, DUP (poly[2]), x, poly[3]);
+  return svmla_x (pg, p01, p23, x2);
+}
+
+static inline VTYPE VWRAP (estrin_4) (svbool_t pg, VTYPE x, VTYPE x2, VTYPE x4,
+				      const STYPE *poly)
+{
+  VTYPE p03 = VWRAP (pairwise_poly_3) (pg, x, x2, poly);
+  return svmla_x (pg, p03, x4, poly[4]);
+}
+static inline VTYPE VWRAP (estrin_5) (svbool_t pg, VTYPE x, VTYPE x2, VTYPE x4,
+				      const STYPE *poly)
+{
+  VTYPE p03 = VWRAP (pairwise_poly_3) (pg, x, x2, poly);
+  VTYPE p45 = svmla_x (pg, DUP (poly[4]), x, poly[5]);
+  return svmla_x (pg, p03, p45, x4);
+}
+static inline VTYPE VWRAP (estrin_6) (svbool_t pg, VTYPE x, VTYPE x2, VTYPE x4,
+				      const STYPE *poly)
+{
+  VTYPE p03 = VWRAP (pairwise_poly_3) (pg, x, x2, poly);
+  VTYPE p45 = svmla_x (pg, DUP (poly[4]), x, poly[5]);
+  VTYPE p46 = svmla_x (pg, p45, x, poly[6]);
+  return svmla_x (pg, p03, p46, x4);
+}
+static inline VTYPE VWRAP (estrin_7) (svbool_t pg, VTYPE x, VTYPE x2, VTYPE x4,
+				      const STYPE *poly)
+{
+  VTYPE p03 = VWRAP (pairwise_poly_3) (pg, x, x2, poly);
+  VTYPE p47 = VWRAP (pairwise_poly_3) (pg, x, x2, poly + 4);
+  return svmla_x (pg, p03, p47, x4);
+}
+static inline VTYPE VWRAP (estrin_8) (svbool_t pg, VTYPE x, VTYPE x2, VTYPE x4,
+				      VTYPE x8, const STYPE *poly)
+{
+  return svmla_x (pg, VWRAP (estrin_7) (pg, x, x2, x4, poly), x8, poly[8]);
+}
+static inline VTYPE VWRAP (estrin_9) (svbool_t pg, VTYPE x, VTYPE x2, VTYPE x4,
+				      VTYPE x8, const STYPE *poly)
+{
+  VTYPE p89 = svmla_x (pg, DUP (poly[8]), x, poly[9]);
+  return svmla_x (pg, VWRAP (estrin_7) (pg, x, x2, x4, poly), p89, x8);
+}
+static inline VTYPE VWRAP (estrin_10) (svbool_t pg, VTYPE x, VTYPE x2,
+				       VTYPE x4, VTYPE x8, const STYPE *poly)
+{
+  VTYPE p89 = svmla_x (pg, DUP (poly[8]), x, poly[9]);
+  VTYPE p8_10 = svmla_x (pg, p89, x2, poly[10]);
+  return svmla_x (pg, VWRAP (estrin_7) (pg, x, x2, x4, poly), p8_10, x8);
+}
+static inline VTYPE VWRAP (estrin_11) (svbool_t pg, VTYPE x, VTYPE x2,
+				       VTYPE x4, VTYPE x8, const STYPE *poly)
+{
+  VTYPE p8_11 = VWRAP (pairwise_poly_3) (pg, x, x2, poly + 8);
+  return svmla_x (pg, VWRAP (estrin_7) (pg, x, x2, x4, poly), p8_11, x8);
+}
+static inline VTYPE VWRAP (estrin_12) (svbool_t pg, VTYPE x, VTYPE x2,
+				       VTYPE x4, VTYPE x8, const STYPE *poly)
+{
+  return svmla_x (pg, VWRAP (estrin_7) (pg, x, x2, x4, poly),
+		  VWRAP (estrin_4) (pg, x, x2, x4, poly + 8), x8);
+}
+static inline VTYPE VWRAP (estrin_13) (svbool_t pg, VTYPE x, VTYPE x2,
+				       VTYPE x4, VTYPE x8, const STYPE *poly)
+{
+  return svmla_x (pg, VWRAP (estrin_7) (pg, x, x2, x4, poly),
+		  VWRAP (estrin_5) (pg, x, x2, x4, poly + 8), x8);
+}
+static inline VTYPE VWRAP (estrin_14) (svbool_t pg, VTYPE x, VTYPE x2,
+				       VTYPE x4, VTYPE x8, const STYPE *poly)
+{
+  return svmla_x (pg, VWRAP (estrin_7) (pg, x, x2, x4, poly),
+		  VWRAP (estrin_6) (pg, x, x2, x4, poly + 8), x8);
+}
+static inline VTYPE VWRAP (estrin_15) (svbool_t pg, VTYPE x, VTYPE x2,
+				       VTYPE x4, VTYPE x8, const STYPE *poly)
+{
+  return svmla_x (pg, VWRAP (estrin_7) (pg, x, x2, x4, poly),
+		  VWRAP (estrin_7) (pg, x, x2, x4, poly + 8), x8);
+}
+static inline VTYPE VWRAP (estrin_16) (svbool_t pg, VTYPE x, VTYPE x2,
+				       VTYPE x4, VTYPE x8, VTYPE x16,
+				       const STYPE *poly)
+{
+  return svmla_x (pg, VWRAP (estrin_15) (pg, x, x2, x4, x8, poly), x16,
+		  poly[16]);
+}
+static inline VTYPE VWRAP (estrin_17) (svbool_t pg, VTYPE x, VTYPE x2,
+				       VTYPE x4, VTYPE x8, VTYPE x16,
+				       const STYPE *poly)
+{
+  VTYPE p16_17 = svmla_x (pg, DUP (poly[16]), x, poly[17]);
+  return svmla_x (pg, VWRAP (estrin_15) (pg, x, x2, x4, x8, poly), p16_17,
+		  x16);
+}
+static inline VTYPE VWRAP (estrin_18) (svbool_t pg, VTYPE x, VTYPE x2,
+				       VTYPE x4, VTYPE x8, VTYPE x16,
+				       const STYPE *poly)
+{
+  VTYPE p16_17 = svmla_x (pg, DUP (poly[16]), x, poly[17]);
+  VTYPE p16_18 = svmla_x (pg, p16_17, x2, poly[18]);
+  return svmla_x (pg, VWRAP (estrin_15) (pg, x, x2, x4, x8, poly), p16_18,
+		  x16);
+}
+static inline VTYPE VWRAP (estrin_19) (svbool_t pg, VTYPE x, VTYPE x2,
+				       VTYPE x4, VTYPE x8, VTYPE x16,
+				       const STYPE *poly)
+{
+  return svmla_x (pg, VWRAP (estrin_15) (pg, x, x2, x4, x8, poly),
+		  VWRAP (pairwise_poly_3) (pg, x, x2, poly + 16), x16);
+}
+
+static inline VTYPE VWRAP (horner_3) (svbool_t pg, VTYPE x, const STYPE *poly)
+{
+  VTYPE p = svmla_x (pg, DUP (poly[2]), x, poly[3]);
+  p = svmad_x (pg, x, p, poly[1]);
+  p = svmad_x (pg, x, p, poly[0]);
+  return p;
+}
+static inline VTYPE VWRAP (horner_4) (svbool_t pg, VTYPE x, const STYPE *poly)
+{
+  VTYPE p = svmla_x (pg, DUP (poly[3]), x, poly[4]);
+  p = svmad_x (pg, x, p, poly[2]);
+  p = svmad_x (pg, x, p, poly[1]);
+  p = svmad_x (pg, x, p, poly[0]);
+  return p;
+}
+static inline VTYPE VWRAP (horner_5) (svbool_t pg, VTYPE x, const STYPE *poly)
+{
+  return svmad_x (pg, x, VWRAP (horner_4) (pg, x, poly + 1), poly[0]);
+}
+static inline VTYPE VWRAP (horner_6) (svbool_t pg, VTYPE x, const STYPE *poly)
+{
+  return svmad_x (pg, x, VWRAP (horner_5) (pg, x, poly + 1), poly[0]);
+}
+static inline VTYPE VWRAP (horner_7) (svbool_t pg, VTYPE x, const STYPE *poly)
+{
+  return svmad_x (pg, x, VWRAP (horner_6) (pg, x, poly + 1), poly[0]);
+}
+static inline VTYPE VWRAP (horner_8) (svbool_t pg, VTYPE x, const STYPE *poly)
+{
+  return svmad_x (pg, x, VWRAP (horner_7) (pg, x, poly + 1), poly[0]);
+}
+static inline VTYPE VWRAP (horner_9) (svbool_t pg, VTYPE x, const STYPE *poly)
+{
+  return svmad_x (pg, x, VWRAP (horner_8) (pg, x, poly + 1), poly[0]);
+}
+static inline VTYPE
+VWRAP (horner_10) (svbool_t pg, VTYPE x, const STYPE *poly)
+{
+  return svmad_x (pg, x, VWRAP (horner_9) (pg, x, poly + 1), poly[0]);
+}
+static inline VTYPE
+VWRAP (horner_11) (svbool_t pg, VTYPE x, const STYPE *poly)
+{
+  return svmad_x (pg, x, VWRAP (horner_10) (pg, x, poly + 1), poly[0]);
+}
+static inline VTYPE
+VWRAP (horner_12) (svbool_t pg, VTYPE x, const STYPE *poly)
+{
+  return svmad_x (pg, x, VWRAP (horner_11) (pg, x, poly + 1), poly[0]);
+}
+
+static inline VTYPE VWRAP (pw_horner_4) (svbool_t pg, VTYPE x, VTYPE x2,
+					 const STYPE *poly)
+{
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  VTYPE p23 = svmla_x (pg, DUP (poly[2]), x, poly[3]);
+  VTYPE p;
+  p = svmla_x (pg, p23, x2, poly[4]);
+  p = svmla_x (pg, p01, x2, p);
+  return p;
+}
+static inline VTYPE VWRAP (pw_horner_5) (svbool_t pg, VTYPE x, VTYPE x2,
+					 const STYPE *poly)
+{
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  VTYPE p23 = svmla_x (pg, DUP (poly[2]), x, poly[3]);
+  VTYPE p45 = svmla_x (pg, DUP (poly[4]), x, poly[5]);
+  VTYPE p;
+  p = svmla_x (pg, p23, x2, p45);
+  p = svmla_x (pg, p01, x2, p);
+  return p;
+}
+static inline VTYPE VWRAP (pw_horner_6) (svbool_t pg, VTYPE x, VTYPE x2,
+					 const STYPE *poly)
+{
+  VTYPE p26 = VWRAP (pw_horner_4) (pg, x, x2, poly + 2);
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  return svmla_x (pg, p01, x2, p26);
+}
+static inline VTYPE VWRAP (pw_horner_7) (svbool_t pg, VTYPE x, VTYPE x2,
+					 const STYPE *poly)
+{
+  VTYPE p27 = VWRAP (pw_horner_5) (pg, x, x2, poly + 2);
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  return svmla_x (pg, p01, x2, p27);
+}
+static inline VTYPE VWRAP (pw_horner_8) (svbool_t pg, VTYPE x, VTYPE x2,
+					 const STYPE *poly)
+{
+  VTYPE p28 = VWRAP (pw_horner_6) (pg, x, x2, poly + 2);
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  return svmla_x (pg, p01, x2, p28);
+}
+static inline VTYPE VWRAP (pw_horner_9) (svbool_t pg, VTYPE x, VTYPE x2,
+					 const STYPE *poly)
+{
+  VTYPE p29 = VWRAP (pw_horner_7) (pg, x, x2, poly + 2);
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  return svmla_x (pg, p01, x2, p29);
+}
+static inline VTYPE VWRAP (pw_horner_10) (svbool_t pg, VTYPE x, VTYPE x2,
+					  const STYPE *poly)
+{
+  VTYPE p2_10 = VWRAP (pw_horner_8) (pg, x, x2, poly + 2);
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  return svmla_x (pg, p01, x2, p2_10);
+}
+static inline VTYPE VWRAP (pw_horner_11) (svbool_t pg, VTYPE x, VTYPE x2,
+					  const STYPE *poly)
+{
+  VTYPE p2_11 = VWRAP (pw_horner_9) (pg, x, x2, poly + 2);
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  return svmla_x (pg, p01, x2, p2_11);
+}
+static inline VTYPE VWRAP (pw_horner_12) (svbool_t pg, VTYPE x, VTYPE x2,
+					  const STYPE *poly)
+{
+  VTYPE p2_12 = VWRAP (pw_horner_10) (pg, x, x2, poly + 2);
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  return svmla_x (pg, p01, x2, p2_12);
+}
+static inline VTYPE VWRAP (pw_horner_13) (svbool_t pg, VTYPE x, VTYPE x2,
+					  const STYPE *poly)
+{
+  VTYPE p2_13 = VWRAP (pw_horner_11) (pg, x, x2, poly + 2);
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  return svmla_x (pg, p01, x2, p2_13);
+}
+static inline VTYPE VWRAP (pw_horner_14) (svbool_t pg, VTYPE x, VTYPE x2,
+					  const STYPE *poly)
+{
+  VTYPE p2_14 = VWRAP (pw_horner_12) (pg, x, x2, poly + 2);
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  return svmla_x (pg, p01, x2, p2_14);
+}
+static inline VTYPE VWRAP (pw_horner_15) (svbool_t pg, VTYPE x, VTYPE x2,
+					  const STYPE *poly)
+{
+  VTYPE p2_15 = VWRAP (pw_horner_13) (pg, x, x2, poly + 2);
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  return svmla_x (pg, p01, x2, p2_15);
+}
+static inline VTYPE VWRAP (pw_horner_16) (svbool_t pg, VTYPE x, VTYPE x2,
+					  const STYPE *poly)
+{
+  VTYPE p2_16 = VWRAP (pw_horner_14) (pg, x, x2, poly + 2);
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  return svmla_x (pg, p01, x2, p2_16);
+}
+static inline VTYPE VWRAP (pw_horner_17) (svbool_t pg, VTYPE x, VTYPE x2,
+					  const STYPE *poly)
+{
+  VTYPE p2_17 = VWRAP (pw_horner_15) (pg, x, x2, poly + 2);
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  return svmla_x (pg, p01, x2, p2_17);
+}
+static inline VTYPE VWRAP (pw_horner_18) (svbool_t pg, VTYPE x, VTYPE x2,
+					  const STYPE *poly)
+{
+  VTYPE p2_18 = VWRAP (pw_horner_16) (pg, x, x2, poly + 2);
+  VTYPE p01 = svmla_x (pg, DUP (poly[0]), x, poly[1]);
+  return svmla_x (pg, p01, x2, p2_18);
+}
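
For illustration (not part of the patch), a hypothetical caller of the
f64 instantiation shows how the governing predicate is threaded through
every helper, so only active lanes are computed by the predicated MLAs:

#include <arm_sve.h>
#include "poly_sve_f64.h" /* glibc-internal; defines sv_estrin_7_f64_x.  */

/* Placeholder coefficients, for illustration only.  */
static const double coeffs[8] = { 1, 1, 1, 1, 1, 1, 1, 1 };

static svfloat64_t
eval_sketch (svbool_t pg, svfloat64_t x)
{
  svfloat64_t x2 = svmul_x (pg, x, x);
  svfloat64_t x4 = svmul_x (pg, x2, x2);
  return sv_estrin_7_f64_x (pg, x, x2, x4, coeffs);
}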
diff --git a/sysdeps/aarch64/fpu/tan_advsimd.c b/sysdeps/aarch64/fpu/tan_advsimd.c
new file mode 100644
index 0000000000..936a0569c8
--- /dev/null
+++ b/sysdeps/aarch64/fpu/tan_advsimd.c
@@ -0,0 +1,123 @@
+/* Double-precision vector (Advanced SIMD) tan function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "v_math.h"
+#include "poly_advsimd_f64.h"
+
+static const struct data
+{
+  float64x2_t poly[9];
+  float64x2_t half_pi_hi, half_pi_lo, two_over_pi, shift;
+#if !WANT_SIMD_EXCEPT
+  float64x2_t range_val;
+#endif
+} data = {
+  /* Coefficients generated using FPMinimax.  */
+  .poly = { V2 (0x1.5555555555556p-2), V2 (0x1.1111111110a63p-3),
+	    V2 (0x1.ba1ba1bb46414p-5), V2 (0x1.664f47e5b5445p-6),
+	    V2 (0x1.226e5e5ecdfa3p-7), V2 (0x1.d6c7ddbf87047p-9),
+	    V2 (0x1.7ea75d05b583ep-10), V2 (0x1.289f22964a03cp-11),
+	    V2 (0x1.4e4fd14147622p-12) },
+  .half_pi_hi = V2 (0x1.921fb54442d18p0),
+  .half_pi_lo = V2 (0x1.1a62633145c07p-54),
+  .two_over_pi = V2 (0x1.45f306dc9c883p-1),
+  .shift = V2 (0x1.8p52),
+#if !WANT_SIMD_EXCEPT
+  .range_val = V2 (0x1p23),
+#endif
+};
+
+#define RangeVal 0x4160000000000000  /* asuint64(0x1p23).  */
+#define TinyBound 0x3e50000000000000 /* asuint64(2^-26).  */
+#define Thresh 0x310000000000000     /* RangeVal - TinyBound.  */
+
+/* Special cases (fall back to scalar calls).  */
+static float64x2_t VPCS_ATTR NOINLINE
+special_case (float64x2_t x)
+{
+  return v_call_f64 (tan, x, x, v_u64 (-1));
+}
+
+/* Vector approximation for double-precision tan.
+   Maximum measured error is 3.48 ULP:
+   __v_tan(0x1.4457047ef78d8p+20) got -0x1.f6ccd8ecf7dedp+37
+				 want -0x1.f6ccd8ecf7deap+37.   */
+float64x2_t VPCS_ATTR V_NAME_D1 (tan) (float64x2_t x)
+{
+  const struct data *dat = ptr_barrier (&data);
+  /* Our argument reduction cannot calculate q with sufficient accuracy for
+     very large inputs. Fall back to the scalar routine for all lanes if any
+     lane is too large, Inf or NaN. If fenv exceptions are expected, also fall
+     back for tiny input to avoid underflow.  */
+#if WANT_SIMD_EXCEPT
+  uint64x2_t iax = vreinterpretq_u64_f64 (vabsq_f64 (x));
+  /* iax - tiny_bound > range_val - tiny_bound.  */
+  uint64x2_t special
+      = vcgtq_u64 (vsubq_u64 (iax, v_u64 (TinyBound)), v_u64 (Thresh));
+  if (__glibc_unlikely (v_any_u64 (special)))
+    return special_case (x);
+#endif
+
+  /* q = nearest integer to 2 * x / pi.  */
+  float64x2_t q
+      = vsubq_f64 (vfmaq_f64 (dat->shift, x, dat->two_over_pi), dat->shift);
+  int64x2_t qi = vcvtq_s64_f64 (q);
+
+  /* Use q to reduce x to r in [-pi/4, pi/4], by:
+     r = x - q * pi/2, in extended precision.  */
+  float64x2_t r = x;
+  r = vfmsq_f64 (r, q, dat->half_pi_hi);
+  r = vfmsq_f64 (r, q, dat->half_pi_lo);
+  /* Further reduce r to [-pi/8, pi/8], to be reconstructed using double angle
+     formula.  */
+  r = vmulq_n_f64 (r, 0.5);
+
+  /* Approximate tan(r) using order 8 polynomial.
+     tan(x) is odd, so polynomial has the form:
+     tan(x) ~= x + C0 * x^3 + C1 * x^5 + C2 * x^7 + ...
+     Hence we first approximate P(r) = C1 + C2 * r^2 + C3 * r^4 + ...
+     Then compute the approximation by:
+     tan(r) ~= r + r^3 * (C0 + r^2 * P(r)).  */
+  float64x2_t r2 = vmulq_f64 (r, r), r4 = vmulq_f64 (r2, r2),
+	      r8 = vmulq_f64 (r4, r4);
+  /* Offset coefficients to evaluate from C1 onwards.  */
+  float64x2_t p = v_estrin_7_f64 (r2, r4, r8, dat->poly + 1);
+  p = vfmaq_f64 (dat->poly[0], p, r2);
+  p = vfmaq_f64 (r, r2, vmulq_f64 (p, r));
+
+  /* Recombination uses double-angle formula:
+     tan(2x) = 2 * tan(x) / (1 - (tan(x))^2)
+     and reciprocity around pi/2:
+     tan(x) = 1 / (tan(pi/2 - x))
+     to assemble result using change-of-sign and conditional selection of
+     numerator/denominator, dependent on odd/even-ness of q (hence quadrant). */
+  float64x2_t n = vfmaq_f64 (v_f64 (-1), p, p);
+  float64x2_t d = vaddq_f64 (p, p);
+
+  uint64x2_t no_recip = vtstq_u64 (vreinterpretq_u64_s64 (qi), v_u64 (1));
+
+#if !WANT_SIMD_EXCEPT
+  uint64x2_t special = vceqzq_u64 (vcaleq_f64 (x, dat->range_val));
+  if (__glibc_unlikely (v_any_u64 (special)))
+    return special_case (x);
+#endif
+
+  return vdivq_f64 (vbslq_f64 (no_recip, n, vnegq_f64 (d)),
+		    vbslq_f64 (no_recip, d, n));
+}
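
For illustration (not part of the patch), the shift-based rounding and
the double-angle reconstruction can be written out with scalar doubles.
Plain tan () stands in for the order-8 polynomial; everything else
mirrors the vector code above:

#include <math.h>

static double
tan_reconstruction_sketch (double x)
{
  /* q = nearest integer to 2*x/pi: adding and subtracting 0x1.8p52
     rounds to nearest in the default rounding mode, valid while
     |2*x/pi| < 2^51, hence the 0x1p23 range check above.  */
  double shift = 0x1.8p52;
  double q = fma (x, 0x1.45f306dc9c883p-1, shift) - shift;

  /* r = (x - q * pi/2) / 2, with pi/2 split into hi and lo parts.  */
  double r = x - q * 0x1.921fb54442d18p0;
  r = r - q * 0x1.1a62633145c07p-54;
  r = r * 0.5;

  double p = tan (r); /* Stand-in for r + r^3 * (C0 + r^2 * P(r^2)).  */

  /* tan(2r) = 2p / (1 - p^2).  With n = p^2 - 1 and d = 2p:
     q even: tan(x) = tan(2r)      = -d / n
     q odd:  tan(x) = -1 / tan(2r) =  n / d.  */
  double n = p * p - 1.0;
  double d = 2.0 * p;
  long long qi = (long long) q;
  return (qi & 1) ? n / d : -d / n;
}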
diff --git a/sysdeps/aarch64/fpu/tan_sve.c b/sysdeps/aarch64/fpu/tan_sve.c
new file mode 100644
index 0000000000..df9666711d
--- /dev/null
+++ b/sysdeps/aarch64/fpu/tan_sve.c
@@ -0,0 +1,104 @@
+/* Double-precision vector (SVE) tan function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "sv_math.h"
+#include "poly_sve_f64.h"
+
+static const struct data
+{
+  double poly[9];
+  double half_pi_hi, half_pi_lo, inv_half_pi, range_val, shift;
+} data = {
+  /* Polynomial generated with FPMinimax.  */
+  .poly = { 0x1.5555555555556p-2, 0x1.1111111110a63p-3, 0x1.ba1ba1bb46414p-5,
+	    0x1.664f47e5b5445p-6, 0x1.226e5e5ecdfa3p-7, 0x1.d6c7ddbf87047p-9,
+	    0x1.7ea75d05b583ep-10, 0x1.289f22964a03cp-11,
+	    0x1.4e4fd14147622p-12, },
+  .half_pi_hi = 0x1.921fb54442d18p0,
+  .half_pi_lo = 0x1.1a62633145c07p-54,
+  .inv_half_pi = 0x1.45f306dc9c883p-1,
+  .range_val = 0x1p23,
+  .shift = 0x1.8p52,
+};
+
+static svfloat64_t NOINLINE
+special_case (svfloat64_t x, svfloat64_t y, svbool_t special)
+{
+  return sv_call_f64 (tan, x, y, special);
+}
+
+/* Vector approximation for double-precision tan.
+   Maximum measured error is 3.48 ULP:
+   _ZGVsMxv_tan(0x1.4457047ef78d8p+20) got -0x1.f6ccd8ecf7dedp+37
+				      want -0x1.f6ccd8ecf7deap+37.  */
+svfloat64_t SV_NAME_D1 (tan) (svfloat64_t x, svbool_t pg)
+{
+  const struct data *dat = ptr_barrier (&data);
+
+  /* Invert condition to catch NaNs and Infs as well as large values.  */
+  svbool_t special = svnot_z (pg, svaclt (pg, x, dat->range_val));
+
+  /* q = nearest integer to 2 * x / pi.  */
+  svfloat64_t shift = sv_f64 (dat->shift);
+  svfloat64_t q = svmla_x (pg, shift, x, dat->inv_half_pi);
+  q = svsub_x (pg, q, shift);
+  svint64_t qi = svcvt_s64_x (pg, q);
+
+  /* Use q to reduce x to r in [-pi/4, pi/4], by:
+     r = x - q * pi/2, in extended precision.  */
+  svfloat64_t r = x;
+  svfloat64_t half_pi = svld1rq (svptrue_b64 (), &dat->half_pi_hi);
+  r = svmls_lane (r, q, half_pi, 0);
+  r = svmls_lane (r, q, half_pi, 1);
+  /* Further reduce r to [-pi/8, pi/8], to be reconstructed using double angle
+     formula.  */
+  r = svmul_x (pg, r, 0.5);
+
+  /* Approximate tan(r) using order 8 polynomial.
+     tan(x) is odd, so polynomial has the form:
+     tan(x) ~= x + C0 * x^3 + C1 * x^5 + C2 * x^7 + ...
+     Hence we first approximate P(r) = C1 + C2 * r^2 + C3 * r^4 + ...
+     Then compute the approximation by:
+     tan(r) ~= r + r^3 * (C0 + r^2 * P(r)).  */
+  svfloat64_t r2 = svmul_x (pg, r, r);
+  svfloat64_t r4 = svmul_x (pg, r2, r2);
+  svfloat64_t r8 = svmul_x (pg, r4, r4);
+  /* Offset the coefficient array by 1 to evaluate from C1 onwards.  */
+  svfloat64_t p = sv_estrin_7_f64_x (pg, r2, r4, r8, dat->poly + 1);
+  p = svmad_x (pg, p, r2, dat->poly[0]);
+  p = svmla_x (pg, r, r2, svmul_x (pg, p, r));
+
+  /* Recombination uses double-angle formula:
+     tan(2x) = 2 * tan(x) / (1 - (tan(x))^2)
+     and reciprocity around pi/2:
+     tan(x) = 1 / (tan(pi/2 - x))
+     to assemble result using change-of-sign and conditional selection of
+     numerator/denominator dependent on odd/even-ness of q (hence quadrant).  */
+  svbool_t use_recip
+      = svcmpeq (pg, svand_x (pg, svreinterpret_u64 (qi), 1), 0);
+
+  svfloat64_t n = svmad_x (pg, p, p, -1);
+  svfloat64_t d = svmul_x (pg, p, 2);
+  svfloat64_t swap = n;
+  n = svneg_m (n, use_recip, d);
+  d = svsel (use_recip, swap, d);
+  if (__glibc_unlikely (svptest_any (pg, special)))
+    return special_case (x, svdiv_x (svnot_z (pg, special), n, d), special);
+  return svdiv_x (pg, n, d);
+}
diff --git a/sysdeps/aarch64/fpu/tanf_advsimd.c b/sysdeps/aarch64/fpu/tanf_advsimd.c
new file mode 100644
index 0000000000..4c8a7f740e
--- /dev/null
+++ b/sysdeps/aarch64/fpu/tanf_advsimd.c
@@ -0,0 +1,129 @@
+/* Single-precision vector (Advanced SIMD) tan function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "v_math.h"
+#include "poly_advsimd_f32.h"
+
+static const struct data
+{
+  float32x4_t poly[6];
+  float32x4_t neg_half_pi_1, neg_half_pi_2, neg_half_pi_3, two_over_pi, shift;
+#if !WANT_SIMD_EXCEPT
+  float32x4_t range_val;
+#endif
+} data = {
+  /* Coefficients generated using FPMinimax.  */
+  .poly = { V4 (0x1.55555p-2f), V4 (0x1.11166p-3f), V4 (0x1.b88a78p-5f),
+	    V4 (0x1.7b5756p-6f), V4 (0x1.4ef4cep-8f), V4 (0x1.0e1e74p-7f) },
+  .neg_half_pi_1 = V4 (-0x1.921fb6p+0f),
+  .neg_half_pi_2 = V4 (0x1.777a5cp-25f),
+  .neg_half_pi_3 = V4 (0x1.ee59dap-50f),
+  .two_over_pi = V4 (0x1.45f306p-1f),
+  .shift = V4 (0x1.8p+23f),
+#if !WANT_SIMD_EXCEPT
+  .range_val = V4 (0x1p15f),
+#endif
+};
+
+#define RangeVal v_u32 (0x47000000)  /* asuint32(0x1p15f).  */
+#define TinyBound v_u32 (0x30000000) /* asuint32 (0x1p-31f).  */
+#define Thresh v_u32 (0x16000000)    /* RangeVal - TinyBound.  */
+
+/* Special cases (fall back to scalar calls).  */
+static float32x4_t VPCS_ATTR NOINLINE
+special_case (float32x4_t x, float32x4_t y, uint32x4_t cmp)
+{
+  return v_call_f32 (tanf, x, y, cmp);
+}
+
+/* Use a full Estrin scheme to evaluate the polynomial.  */
+static inline float32x4_t
+eval_poly (float32x4_t z, const struct data *d)
+{
+  float32x4_t z2 = vmulq_f32 (z, z);
+#if WANT_SIMD_EXCEPT
+  /* Tiny z (<= 0x1p-31) will underflow when calculating z^4. If fp exceptions
+     are to be triggered correctly, sidestep this by fixing such lanes to 0.  */
+  uint32x4_t will_uflow
+    = vcleq_u32 (vreinterpretq_u32_f32 (vabsq_f32 (z)), TinyBound);
+  if (__glibc_unlikely (v_any_u32 (will_uflow)))
+    z2 = vbslq_f32 (will_uflow, v_f32 (0), z2);
+#endif
+  float32x4_t z4 = vmulq_f32 (z2, z2);
+  return v_estrin_5_f32 (z, z2, z4, d->poly);
+}
+
+/* Fast implementation of AdvSIMD tanf.
+   Maximum error is 3.45 ULP:
+   __v_tanf(-0x1.e5f0cap+13) got 0x1.ff9856p-1
+			    want 0x1.ff9850p-1.  */
+float32x4_t VPCS_ATTR V_NAME_F1 (tan) (float32x4_t x)
+{
+  const struct data *d = ptr_barrier (&data);
+  float32x4_t special_arg = x;
+
+  /* iax >= RangeVal means x, if not inf or NaN, is too large to perform fast
+     regression.  */
+#if WANT_SIMD_EXCEPT
+  uint32x4_t iax = vreinterpretq_u32_f32 (vabsq_f32 (x));
+  /* If fp exceptions are to be triggered correctly, also special-case tiny
+     input, as this will lead to overflow later. Fix any special lanes to 1 to
+     prevent any exceptions being triggered.  */
+  uint32x4_t special = vcgeq_u32 (vsubq_u32 (iax, TinyBound), Thresh);
+  if (__glibc_unlikely (v_any_u32 (special)))
+    x = vbslq_f32 (special, v_f32 (1.0f), x);
+#else
+  /* Otherwise, special-case large and special values.  */
+  uint32x4_t special = vcageq_f32 (x, d->range_val);
+#endif
+
+  /* n = rint(x/(pi/2)).  */
+  float32x4_t q = vfmaq_f32 (d->shift, d->two_over_pi, x);
+  float32x4_t n = vsubq_f32 (q, d->shift);
+  /* Determine if x lives in an interval where |tan(x)| grows to infinity.  */
+  uint32x4_t pred_alt = vtstq_u32 (vreinterpretq_u32_f32 (q), v_u32 (1));
+
+  /* r = x - n * (pi/2)  (range reduction into -pi/4 .. pi/4).  */
+  float32x4_t r;
+  r = vfmaq_f32 (x, d->neg_half_pi_1, n);
+  r = vfmaq_f32 (r, d->neg_half_pi_2, n);
+  r = vfmaq_f32 (r, d->neg_half_pi_3, n);
+
+  /* If x lives in an interval where |tan(x)|
+     - is finite, then use a polynomial approximation of the form
+       tan(r) ~ r + r^3 * P(r^2) = r + r * r^2 * P(r^2).
+     - grows to infinity then use symmetries of tangent and the identity
+       tan(r) = cotan(pi/2 - r) to express tan(x) as 1/tan(-r). Finally, use
+       the same polynomial approximation of tan as above.  */
+
+  /* Invert sign of r if odd quadrant.  */
+  float32x4_t z = vmulq_f32 (r, vbslq_f32 (pred_alt, v_f32 (-1), v_f32 (1)));
+
+  /* Evaluate polynomial approximation of tangent on [-pi/4, pi/4].  */
+  float32x4_t z2 = vmulq_f32 (r, r);
+  float32x4_t p = eval_poly (z2, d);
+  float32x4_t y = vfmaq_f32 (z, vmulq_f32 (z, z2), p);
+
+  /* Compute reciprocal and apply if required.  */
+  float32x4_t inv_y = vdivq_f32 (v_f32 (1.0f), y);
+
+  if (__glibc_unlikely (v_any_u32 (special)))
+    return special_case (special_arg, vbslq_f32 (pred_alt, inv_y, y), special);
+  return vbslq_f32 (pred_alt, inv_y, y);
+}
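
For illustration (not part of the patch), the single unsigned comparison
in the WANT_SIMD_EXCEPT path above can be written for one scalar lane as
follows.  The wrap-around of iax - TinyBound maps tiny inputs above
Thresh, so tiny, large, Inf and NaN inputs are all caught at once:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

static bool
special_lane_sketch (float x)
{
  uint32_t ix;
  memcpy (&ix, &x, sizeof (ix));   /* asuint32 (x).  */
  uint32_t iax = ix & 0x7fffffffu; /* asuint32 (|x|).  */
  /* True when |x| < 0x1p-31f, |x| >= 0x1p15f, or x is Inf/NaN.  */
  return iax - 0x30000000u >= 0x16000000u;
}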
diff --git a/sysdeps/aarch64/fpu/tanf_sve.c b/sysdeps/aarch64/fpu/tanf_sve.c
new file mode 100644
index 0000000000..856cbece7e
--- /dev/null
+++ b/sysdeps/aarch64/fpu/tanf_sve.c
@@ -0,0 +1,118 @@
+/* Single-precision vector (SVE) tan function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "sv_math.h"
+
+static const struct data
+{
+  float pio2_1, pio2_2, pio2_3, invpio2;
+  float c1, c3, c5;
+  float c0, c2, c4, range_val, shift;
+} data = {
+  /* Coefficients generated using:
+     poly = fpminimax((tan(sqrt(x))-sqrt(x))/x^(3/2),
+		      deg,
+		      [|single ...|],
+		      [a*a;b*b]);
+     optimize relative error
+     final prec : 23 bits
+     deg : 5
+     a : 0x1p-126 ^ 2
+     b : ((pi) / 0x1p2) ^ 2
+     dirty rel error: 0x1.f7c2e4p-25
+     dirty abs error: 0x1.f7c2ecp-25.  */
+  .c0 = 0x1.55555p-2,	      .c1 = 0x1.11166p-3,
+  .c2 = 0x1.b88a78p-5,	      .c3 = 0x1.7b5756p-6,
+  .c4 = 0x1.4ef4cep-8,	      .c5 = 0x1.0e1e74p-7,
+
+  .pio2_1 = 0x1.921fb6p+0f,   .pio2_2 = -0x1.777a5cp-25f,
+  .pio2_3 = -0x1.ee59dap-50f, .invpio2 = 0x1.45f306p-1f,
+  .range_val = 0x1p15f,	      .shift = 0x1.8p+23f
+};
+
+static svfloat32_t NOINLINE
+special_case (svfloat32_t x, svfloat32_t y, svbool_t cmp)
+{
+  return sv_call_f32 (tanf, x, y, cmp);
+}
+
+/* Fast implementation of SVE tanf.
+   Maximum error is 3.45 ULP:
+   SV_NAME_F1 (tan)(-0x1.e5f0cap+13) got 0x1.ff9856p-1
+				    want 0x1.ff9850p-1.  */
+svfloat32_t SV_NAME_F1 (tan) (svfloat32_t x, const svbool_t pg)
+{
+  const struct data *d = ptr_barrier (&data);
+
+  /* Determine whether input is too large to perform fast regression.  */
+  svbool_t cmp = svacge (pg, x, d->range_val);
+
+  svfloat32_t odd_coeffs = svld1rq (svptrue_b32 (), &d->c1);
+  svfloat32_t pi_vals = svld1rq (svptrue_b32 (), &d->pio2_1);
+
+  /* n = rint(x/(pi/2)).  */
+  svfloat32_t q = svmla_lane (sv_f32 (d->shift), x, pi_vals, 3);
+  svfloat32_t n = svsub_x (pg, q, d->shift);
+  /* n is already a signed integer, simply convert it.  */
+  svint32_t in = svcvt_s32_x (pg, n);
+  /* Determine if x lives in an interval where |tan(x)| grows to infinity.  */
+  svint32_t alt = svand_x (pg, in, 1);
+  svbool_t pred_alt = svcmpne (pg, alt, 0);
+
+  /* r = x - n * (pi/2)  (range reduction into -pi/4 .. pi/4).  */
+  svfloat32_t r;
+  r = svmls_lane (x, n, pi_vals, 0);
+  r = svmls_lane (r, n, pi_vals, 1);
+  r = svmls_lane (r, n, pi_vals, 2);
+
+  /* If x lives in an interval where |tan(x)|
+     - is finite, then use a polynomial approximation of the form
+       tan(r) ~ r + r^3 * P(r^2) = r + r * r^2 * P(r^2).
+     - grows to infinity then use symmetries of tangent and the identity
+       tan(r) = cotan(pi/2 - r) to express tan(x) as 1/tan(-r). Finally, use
+       the same polynomial approximation of tan as above.  */
+
+  /* Perform additional reduction if required.  */
+  svfloat32_t z = svneg_m (r, pred_alt, r);
+
+  /* Evaluate polynomial approximation of tangent on [-pi/4, pi/4],
+     using Estrin on z^2.  */
+  svfloat32_t z2 = svmul_x (pg, z, z);
+  svfloat32_t p01 = svmla_lane (sv_f32 (d->c0), z2, odd_coeffs, 0);
+  svfloat32_t p23 = svmla_lane (sv_f32 (d->c2), z2, odd_coeffs, 1);
+  svfloat32_t p45 = svmla_lane (sv_f32 (d->c4), z2, odd_coeffs, 2);
+
+  svfloat32_t z4 = svmul_x (pg, z2, z2);
+  svfloat32_t p = svmla_x (pg, p01, z4, p23);
+
+  svfloat32_t z8 = svmul_x (pg, z4, z4);
+  p = svmla_x (pg, p, z8, p45);
+
+  svfloat32_t y = svmla_x (pg, z, p, svmul_x (pg, z, z2));
+
+  /* Transform result back, if necessary.  */
+  svfloat32_t inv_y = svdivr_x (pg, y, 1.0f);
+
+  /* No need to pass pg to special_case here, since cmp is a strict subset of
+     pg, guaranteed by the svacge above.  */
+  if (__glibc_unlikely (svptest_any (pg, cmp)))
+    return special_case (x, svsel (pred_alt, inv_y, y), cmp);
+
+  return svsel (pred_alt, inv_y, y);
+}
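
For illustration (not part of the patch), the quadword-broadcast trick
used for the pi constants above can be shown in isolation.  svld1rq
replicates a 128-bit block of four adjacent floats across the whole
vector, so the lane-indexed svmla_lane/svmls_lane forms can address each
constant by index instead of keeping four separate vector registers
live:

#include <arm_sve.h>

static svfloat32_t
range_reduce_sketch (svbool_t pg, svfloat32_t x)
{
  /* Must be adjacent and in lane order, like pio2_1..invpio2 above.  */
  static const float consts[4]
      = { 0x1.921fb6p+0f, -0x1.777a5cp-25f, -0x1.ee59dap-50f,
	  0x1.45f306p-1f };
  svfloat32_t c = svld1rq (svptrue_b32 (), consts);

  /* n = rint (x / (pi/2)) via the shift trick; lane 3 holds 2/pi.  */
  svfloat32_t shift = svdup_n_f32 (0x1.8p+23f);
  svfloat32_t n = svsub_x (pg, svmla_lane (shift, x, c, 3), shift);

  /* r = x - n * (pi/2), with pi/2 split across lanes 0..2.  */
  svfloat32_t r = svmls_lane (x, n, c, 0);
  r = svmls_lane (r, n, c, 1);
  return svmls_lane (r, n, c, 2);
}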
diff --git a/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c b/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
index 3b6b1e343d..f76726e324 100644
--- a/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
@@ -27,3 +27,4 @@ VPCS_VECTOR_WRAPPER (cos_advsimd, _ZGVnN2v_cos)
 VPCS_VECTOR_WRAPPER (exp_advsimd, _ZGVnN2v_exp)
 VPCS_VECTOR_WRAPPER (log_advsimd, _ZGVnN2v_log)
 VPCS_VECTOR_WRAPPER (sin_advsimd, _ZGVnN2v_sin)
+VPCS_VECTOR_WRAPPER (tan_advsimd, _ZGVnN2v_tan)
diff --git a/sysdeps/aarch64/fpu/test-double-sve-wrappers.c b/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
index d7ac47ca22..5a9e5b552a 100644
--- a/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
@@ -36,3 +36,4 @@ SVE_VECTOR_WRAPPER (cos_sve, _ZGVsMxv_cos)
 SVE_VECTOR_WRAPPER (exp_sve, _ZGVsMxv_exp)
 SVE_VECTOR_WRAPPER (log_sve, _ZGVsMxv_log)
 SVE_VECTOR_WRAPPER (sin_sve, _ZGVsMxv_sin)
+SVE_VECTOR_WRAPPER (tan_sve, _ZGVsMxv_tan)
diff --git a/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c b/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
index d4a9ac7154..66c6227087 100644
--- a/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
@@ -27,3 +27,4 @@ VPCS_VECTOR_WRAPPER (cosf_advsimd, _ZGVnN4v_cosf)
 VPCS_VECTOR_WRAPPER (expf_advsimd, _ZGVnN4v_expf)
 VPCS_VECTOR_WRAPPER (logf_advsimd, _ZGVnN4v_logf)
 VPCS_VECTOR_WRAPPER (sinf_advsimd, _ZGVnN4v_sinf)
+VPCS_VECTOR_WRAPPER (tanf_advsimd, _ZGVnN4v_tanf)
diff --git a/sysdeps/aarch64/fpu/test-float-sve-wrappers.c b/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
index d44033eab0..3cbf8b48b7 100644
--- a/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
@@ -36,3 +36,4 @@ SVE_VECTOR_WRAPPER (cosf_sve, _ZGVsMxv_cosf)
 SVE_VECTOR_WRAPPER (expf_sve, _ZGVsMxv_expf)
 SVE_VECTOR_WRAPPER (logf_sve, _ZGVsMxv_logf)
 SVE_VECTOR_WRAPPER (sinf_sve, _ZGVsMxv_sinf)
+SVE_VECTOR_WRAPPER (tanf_sve, _ZGVsMxv_tanf)
diff --git a/sysdeps/aarch64/libm-test-ulps b/sysdeps/aarch64/libm-test-ulps
index 6b25ed43e0..ec84381c75 100644
--- a/sysdeps/aarch64/libm-test-ulps
+++ b/sysdeps/aarch64/libm-test-ulps
@@ -1340,11 +1340,19 @@ Function: "tan":
 float: 1
 ldouble: 1
 
+Function: "tan_advsimd":
+double: 2
+float: 2
+
 Function: "tan_downward":
 double: 1
 float: 2
 ldouble: 1
 
+Function: "tan_sve":
+double: 2
+float: 2
+
 Function: "tan_towardzero":
 double: 1
 float: 1
diff --git a/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist b/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
index ae46ef8c34..0ec668582e 100644
--- a/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
+++ b/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
@@ -14,3 +14,7 @@ GLIBC_2.38 _ZGVsMxv_log F
 GLIBC_2.38 _ZGVsMxv_logf F
 GLIBC_2.38 _ZGVsMxv_sin F
 GLIBC_2.38 _ZGVsMxv_sinf F
+GLIBC_2.39 _ZGVnN2v_tan F
+GLIBC_2.39 _ZGVnN4v_tanf F
+GLIBC_2.39 _ZGVsMxv_tan F
+GLIBC_2.39 _ZGVsMxv_tanf F
-- 
2.27.0


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH v2 2/5] aarch64: Add vector implementations of exp2 routines
  2023-10-05 16:10 [PATCH v2 1/5] aarch64: Add vector implementations of tan routines Joe Ramsay
@ 2023-10-05 16:10 ` Joe Ramsay
  2023-10-05 16:10 ` [PATCH 3/5] aarch64: Add vector implementations of log2 routines Joe Ramsay
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Joe Ramsay @ 2023-10-05 16:10 UTC (permalink / raw)
  To: libc-alpha; +Cc: Joe Ramsay

Some routines reuse the table from v_exp_data.c.
---
No changes from v1, but re-sent as part of the stack.
Thanks,
Joe
 sysdeps/aarch64/fpu/Makefile                  |   1 +
 sysdeps/aarch64/fpu/Versions                  |   4 +
 sysdeps/aarch64/fpu/bits/math-vector.h        |   4 +
 sysdeps/aarch64/fpu/exp2_advsimd.c            | 128 ++++++++++++++++++
 sysdeps/aarch64/fpu/exp2_sve.c                | 111 +++++++++++++++
 sysdeps/aarch64/fpu/exp2f_advsimd.c           | 124 +++++++++++++++++
 sysdeps/aarch64/fpu/exp2f_sve.c               |  75 ++++++++++
 .../fpu/test-double-advsimd-wrappers.c        |   1 +
 .../aarch64/fpu/test-double-sve-wrappers.c    |   1 +
 .../aarch64/fpu/test-float-advsimd-wrappers.c |   1 +
 sysdeps/aarch64/fpu/test-float-sve-wrappers.c |   1 +
 sysdeps/aarch64/libm-test-ulps                |   8 ++
 .../unix/sysv/linux/aarch64/libmvec.abilist   |   4 +
 13 files changed, 463 insertions(+)
 create mode 100644 sysdeps/aarch64/fpu/exp2_advsimd.c
 create mode 100644 sysdeps/aarch64/fpu/exp2_sve.c
 create mode 100644 sysdeps/aarch64/fpu/exp2f_advsimd.c
 create mode 100644 sysdeps/aarch64/fpu/exp2f_sve.c

diff --git a/sysdeps/aarch64/fpu/Makefile b/sysdeps/aarch64/fpu/Makefile
index a1bbc9bcaa..9c7c768301 100644
--- a/sysdeps/aarch64/fpu/Makefile
+++ b/sysdeps/aarch64/fpu/Makefile
@@ -1,5 +1,6 @@
 libmvec-supported-funcs = cos \
                           exp \
+                          exp2 \
                           log \
                           sin \
                           tan
diff --git a/sysdeps/aarch64/fpu/Versions b/sysdeps/aarch64/fpu/Versions
index f0ca0940a9..05de4325d5 100644
--- a/sysdeps/aarch64/fpu/Versions
+++ b/sysdeps/aarch64/fpu/Versions
@@ -18,6 +18,10 @@ libmvec {
     _ZGVsMxv_sinf;
   }
   GLIBC_2.39 {
+    _ZGVnN4v_exp2f;
+    _ZGVnN2v_exp2;
+    _ZGVsMxv_exp2f;
+    _ZGVsMxv_exp2;
     _ZGVnN4v_tanf;
     _ZGVnN2v_tan;
     _ZGVsMxv_tanf;
diff --git a/sysdeps/aarch64/fpu/bits/math-vector.h b/sysdeps/aarch64/fpu/bits/math-vector.h
index 6193213147..50921b22e5 100644
--- a/sysdeps/aarch64/fpu/bits/math-vector.h
+++ b/sysdeps/aarch64/fpu/bits/math-vector.h
@@ -51,12 +51,14 @@ typedef __SVBool_t __sv_bool_t;
 
 __vpcs __f32x4_t _ZGVnN4v_cosf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_expf (__f32x4_t);
+__vpcs __f32x4_t _ZGVnN4v_exp2f (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_logf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_sinf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_tanf (__f32x4_t);
 
 __vpcs __f64x2_t _ZGVnN2v_cos (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_exp (__f64x2_t);
+__vpcs __f64x2_t _ZGVnN2v_exp2 (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_log (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_sin (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_tan (__f64x2_t);
@@ -68,12 +70,14 @@ __vpcs __f64x2_t _ZGVnN2v_tan (__f64x2_t);
 
 __sv_f32_t _ZGVsMxv_cosf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_expf (__sv_f32_t, __sv_bool_t);
+__sv_f32_t _ZGVsMxv_exp2f (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_logf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_sinf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_tanf (__sv_f32_t, __sv_bool_t);
 
 __sv_f64_t _ZGVsMxv_cos (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_exp (__sv_f64_t, __sv_bool_t);
+__sv_f64_t _ZGVsMxv_exp2 (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_log (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_sin (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_tan (__sv_f64_t, __sv_bool_t);
diff --git a/sysdeps/aarch64/fpu/exp2_advsimd.c b/sysdeps/aarch64/fpu/exp2_advsimd.c
new file mode 100644
index 0000000000..8e26033cb9
--- /dev/null
+++ b/sysdeps/aarch64/fpu/exp2_advsimd.c
@@ -0,0 +1,128 @@
+/* Double-precision vector (AdvSIMD) exp2 function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "v_math.h"
+#include "poly_advsimd_f64.h"
+
+#define N (1 << V_EXP_TABLE_BITS)
+#define IndexMask (N - 1)
+#define BigBound 1022.0
+#define UOFlowBound 1280.0
+
+static const struct data
+{
+  float64x2_t poly[4];
+  float64x2_t shift, scale_big_bound, scale_uoflow_bound;
+} data = {
+  /* Coefficients are computed using Remez algorithm with
+     minimisation of the absolute error.  */
+  .poly = { V2 (0x1.62e42fefa3686p-1), V2 (0x1.ebfbdff82c241p-3),
+	    V2 (0x1.c6b09b16de99ap-5), V2 (0x1.3b2abf5571ad8p-7) },
+  .shift = V2 (0x1.8p52 / N),
+  .scale_big_bound = V2 (BigBound),
+  .scale_uoflow_bound = V2 (UOFlowBound),
+};
+
+static inline uint64x2_t
+lookup_sbits (uint64x2_t i)
+{
+  return (uint64x2_t){ __v_exp_data[i[0] & IndexMask],
+		       __v_exp_data[i[1] & IndexMask] };
+}
+
+#if WANT_SIMD_EXCEPT
+
+# define TinyBound 0x2000000000000000 /* asuint64(0x1p-511).  */
+# define Thres 0x2080000000000000     /* asuint64(512.0) - TinyBound.  */
+
+/* Call scalar exp2 as a fallback.  */
+static float64x2_t VPCS_ATTR NOINLINE
+special_case (float64x2_t x)
+{
+  return v_call_f64 (exp2, x, x, v_u64 (0xffffffffffffffff));
+}
+
+#else
+
+# define SpecialOffset 0x6000000000000000 /* 0x1p513.  */
+/* SpecialBias1 - SpecialBias2 = asuint(1.0).  */
+# define SpecialBias1 0x7000000000000000 /* 0x1p769.  */
+# define SpecialBias2 0x3010000000000000 /* 0x1p-254.  */
+
+static float64x2_t VPCS_ATTR
+special_case (float64x2_t s, float64x2_t y, float64x2_t n,
+	      const struct data *d)
+{
+  /* 2^n may overflow, break it up into s1*s2.  */
+  uint64x2_t b = vandq_u64 (vclezq_f64 (n), v_u64 (SpecialOffset));
+  float64x2_t s1 = vreinterpretq_f64_u64 (vsubq_u64 (v_u64 (SpecialBias1), b));
+  float64x2_t s2 = vreinterpretq_f64_u64 (
+    vaddq_u64 (vsubq_u64 (vreinterpretq_u64_f64 (s), v_u64 (SpecialBias2)), b));
+  uint64x2_t cmp = vcagtq_f64 (n, d->scale_uoflow_bound);
+  float64x2_t r1 = vmulq_f64 (s1, s1);
+  float64x2_t r0 = vmulq_f64 (vfmaq_f64 (s2, s2, y), s1);
+  return vbslq_f64 (cmp, r1, r0);
+}
+
+#endif
+
+/* Fast vector implementation of exp2.
+   Maximum measured error is 1.65 ulp.
+   _ZGVnN2v_exp2(-0x1.4c264ab5b559bp-6) got 0x1.f8db0d4df721fp-1
+				       want 0x1.f8db0d4df721dp-1.  */
+VPCS_ATTR
+float64x2_t V_NAME_D1 (exp2) (float64x2_t x)
+{
+  const struct data *d = ptr_barrier (&data);
+  uint64x2_t cmp;
+#if WANT_SIMD_EXCEPT
+  uint64x2_t ia = vreinterpretq_u64_f64 (vabsq_f64 (x));
+  cmp = vcgeq_u64 (vsubq_u64 (ia, v_u64 (TinyBound)), v_u64 (Thres));
+  /* If any special case (inf, nan, small and large x) is detected,
+     fall back to scalar for all lanes.  */
+  if (__glibc_unlikely (v_any_u64 (cmp)))
+    return special_case (x);
+#else
+  cmp = vcagtq_f64 (x, d->scale_big_bound);
+#endif
+
+  /* n = round(x*N)/N, i.e. round x to a multiple of 1/N.  */
+  float64x2_t z = vaddq_f64 (d->shift, x);
+  uint64x2_t u = vreinterpretq_u64_f64 (z);
+  float64x2_t n = vsubq_f64 (z, d->shift);
+
+  /* r = x - n, with r in [-1/(2N), 1/(2N)].  */
+  float64x2_t r = vsubq_f64 (x, n);
+
+  /* s = 2^(n/N).  */
+  uint64x2_t e = vshlq_n_u64 (u, 52 - V_EXP_TABLE_BITS);
+  u = lookup_sbits (u);
+  float64x2_t s = vreinterpretq_f64_u64 (vaddq_u64 (u, e));
+
+  /* y ~ exp2(r) - 1.  */
+  float64x2_t r2 = vmulq_f64 (r, r);
+  float64x2_t y = v_pairwise_poly_3_f64 (r, r2, d->poly);
+  y = vmulq_f64 (r, y);
+
+#if !WANT_SIMD_EXCEPT
+  if (__glibc_unlikely (v_any_u64 (cmp)))
+    return special_case (s, y, n, d);
+#endif
+  return vfmaq_f64 (s, s, y);
+}
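
The special-case split relies on the bias identity noted above. A
standalone check of that identity, and of the split for one value,
might look like the following; it is illustrative only, with local
asuint64/asdouble helpers that are not glibc's.

  #include <assert.h>
  #include <stdint.h>
  #include <string.h>

  static uint64_t asuint64 (double x) { uint64_t u; memcpy (&u, &x, 8); return u; }
  static double asdouble (uint64_t u) { double x; memcpy (&x, &u, 8); return x; }

  int
  main (void)
  {
    /* For powers of two, multiplying adds biased exponents:
       asuint64 (a * b) == asuint64 (a) + asuint64 (b) - asuint64 (1.0).  */
    assert (0x7000000000000000 - 0x3010000000000000 == asuint64 (1.0));

    /* Split s into s1 * s2 as special_case does for the n > 0 (b == 0) case.  */
    double s = 0x1p1000;
    double s1 = asdouble (0x7000000000000000);		       /* 0x1p769.  */
    double s2 = asdouble (asuint64 (s) - 0x3010000000000000);  /* s * 0x1p-769.  */
    assert (s1 * s2 == s);
    return 0;
  }
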
diff --git a/sysdeps/aarch64/fpu/exp2_sve.c b/sysdeps/aarch64/fpu/exp2_sve.c
new file mode 100644
index 0000000000..c9314a9fee
--- /dev/null
+++ b/sysdeps/aarch64/fpu/exp2_sve.c
@@ -0,0 +1,111 @@
+/* Double-precision vector (SVE) exp2 function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "sv_math.h"
+#include "poly_sve_f64.h"
+
+#define N (1 << V_EXP_TABLE_BITS)
+
+#define BigBound 1022
+#define UOFlowBound 1280
+
+static const struct data
+{
+  double poly[4];
+  double shift, big_bound, uoflow_bound;
+} data = {
+  /* Coefficients are computed using Remez algorithm with
+     minimisation of the absolute error.  */
+  .poly = { 0x1.62e42fefa3686p-1, 0x1.ebfbdff82c241p-3, 0x1.c6b09b16de99ap-5,
+	    0x1.3b2abf5571ad8p-7 },
+  .shift = 0x1.8p52 / N,
+  .uoflow_bound = UOFlowBound,
+  .big_bound = BigBound,
+};
+
+#define SpecialOffset 0x6000000000000000 /* 0x1p513.  */
+/* SpecialBias1 - SpecialBias2 = asuint(1.0).  */
+#define SpecialBias1 0x7000000000000000 /* 0x1p769.  */
+#define SpecialBias2 0x3010000000000000 /* 0x1p-254.  */
+
+/* Update of both special and non-special cases, if any special case is
+   detected.  */
+static inline svfloat64_t
+special_case (svbool_t pg, svfloat64_t s, svfloat64_t y, svfloat64_t n,
+	      const struct data *d)
+{
+  /* s=2^n may overflow, break it up into s=s1*s2,
+     such that exp = s + s*y can be computed as s1*(s2+s2*y)
+     and s1*s1 overflows only if n>0.  */
+
+  /* If n <= 0 then set b to SpecialOffset, otherwise to 0.  */
+  svbool_t p_sign = svcmple (pg, n, 0.0); /* n <= 0.  */
+  svuint64_t b = svdup_u64_z (p_sign, SpecialOffset);
+
+  /* Set s1 to generate overflow depending on sign of exponent n.  */
+  svfloat64_t s1 = svreinterpret_f64 (svsubr_x (pg, b, SpecialBias1));
+  /* Offset s to avoid overflow in final result if n is below threshold.  */
+  svfloat64_t s2 = svreinterpret_f64 (
+      svadd_x (pg, svsub_x (pg, svreinterpret_u64 (s), SpecialBias2), b));
+
+  /* |n| > 1280 => 2^n overflows or underflows.  */
+  svbool_t p_cmp = svacgt (pg, n, d->uoflow_bound);
+
+  svfloat64_t r1 = svmul_x (pg, s1, s1);
+  svfloat64_t r2 = svmla_x (pg, s2, s2, y);
+  svfloat64_t r0 = svmul_x (pg, r2, s1);
+
+  return svsel (p_cmp, r1, r0);
+}
+
+/* Fast vector implementation of exp2.
+   Maximum measured error is 1.65 ulp.
+   _ZGVsMxv_exp2(-0x1.4c264ab5b559bp-6) got 0x1.f8db0d4df721fp-1
+				       want 0x1.f8db0d4df721dp-1.  */
+svfloat64_t SV_NAME_D1 (exp2) (svfloat64_t x, svbool_t pg)
+{
+  const struct data *d = ptr_barrier (&data);
+  svbool_t no_big_scale = svacle (pg, x, d->big_bound);
+  svbool_t special = svnot_z (pg, no_big_scale);
+
+  /* Reduce x to k/N + r, where k is integer and r in [-1/2N, 1/2N].  */
+  svfloat64_t shift = sv_f64 (d->shift);
+  svfloat64_t kd = svadd_x (pg, x, shift);
+  svuint64_t ki = svreinterpret_u64 (kd);
+  /* kd = k/N.  */
+  kd = svsub_x (pg, kd, shift);
+  svfloat64_t r = svsub_x (pg, x, kd);
+
+  /* scale ~= 2^(k/N).  */
+  svuint64_t idx = svand_x (pg, ki, N - 1);
+  svuint64_t sbits = svld1_gather_index (pg, __v_exp_data, idx);
+  /* This is only a valid scale when -1023*N < k < 1024*N.  */
+  svuint64_t top = svlsl_x (pg, ki, 52 - V_EXP_TABLE_BITS);
+  svfloat64_t scale = svreinterpret_f64 (svadd_x (pg, sbits, top));
+
+  /* Approximate exp2(r) using polynomial.  */
+  svfloat64_t r2 = svmul_x (pg, r, r);
+  svfloat64_t p = sv_pairwise_poly_3_f64_x (pg, r, r2, d->poly);
+  svfloat64_t y = svmul_x (pg, r, p);
+
+  /* Assemble exp2(x) = exp2(r) * scale.  */
+  if (__glibc_unlikely (svptest_any (pg, special)))
+    return special_case (pg, scale, y, kd, d);
+  return svmla_x (pg, scale, scale, y);
+}
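
Both exp2 variants lean on the same rounding-shift trick to split x into
k/N + r. A self-contained sketch of the trick (illustrative values, not
glibc code; assumes the default round-to-nearest mode):

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  #define TABLE_BITS 7
  #define N (1 << TABLE_BITS)

  int
  main (void)
  {
    double shift = 0x1.8p52 / N;  /* ulp (shift + x) == 1/N for |x| < ~1022.  */
    double x = 3.7;
    double kd = x + shift;	  /* Low mantissa bits of kd now hold k.  */
    uint64_t ki;
    memcpy (&ki, &kd, sizeof ki);
    kd -= shift;		  /* kd == k/N, nearest multiple of 1/N to x.  */
    double r = x - kd;		  /* |r| <= 1/(2N).  */
    printf ("index k%%N = %u, k/N = %a, r = %a\n",
	    (unsigned) (ki & (N - 1)), kd, r);
    return 0;
  }
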
diff --git a/sysdeps/aarch64/fpu/exp2f_advsimd.c b/sysdeps/aarch64/fpu/exp2f_advsimd.c
new file mode 100644
index 0000000000..70b3ab66c1
--- /dev/null
+++ b/sysdeps/aarch64/fpu/exp2f_advsimd.c
@@ -0,0 +1,124 @@
+/* Single-precision vector (AdvSIMD) exp2 function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "v_math.h"
+
+static const struct data
+{
+  float32x4_t poly[5];
+  uint32x4_t exponent_bias;
+#if !WANT_SIMD_EXCEPT
+  float32x4_t special_bound, scale_thresh;
+#endif
+} data = {
+  /* maxerr: 1.962 ulp.  */
+  .poly = { V4 (0x1.59977ap-10f), V4 (0x1.3ce9e4p-7f), V4 (0x1.c6bd32p-5f),
+	    V4 (0x1.ebf9bcp-3f), V4 (0x1.62e422p-1f) },
+  .exponent_bias = V4 (0x3f800000),
+#if !WANT_SIMD_EXCEPT
+  .special_bound = V4 (126.0f),
+  .scale_thresh = V4 (192.0f),
+#endif
+};
+
+#define C(i) d->poly[i]
+
+#if WANT_SIMD_EXCEPT
+
+# define TinyBound v_u32 (0x20000000)	  /* asuint (0x1p-63).  */
+# define BigBound v_u32 (0x42800000)	  /* asuint (0x1p6).  */
+# define SpecialBound v_u32 (0x22800000) /* BigBound - TinyBound.  */
+
+static float32x4_t VPCS_ATTR NOINLINE
+special_case (float32x4_t x, float32x4_t y, uint32x4_t cmp)
+{
+  /* If fenv exceptions are to be triggered correctly, fall back to the scalar
+     routine for special lanes.  */
+  return v_call_f32 (exp2f, x, y, cmp);
+}
+
+#else
+
+# define SpecialOffset v_u32 (0x82000000)
+# define SpecialBias v_u32 (0x7f000000)
+
+static float32x4_t VPCS_ATTR NOINLINE
+special_case (float32x4_t poly, float32x4_t n, uint32x4_t e, uint32x4_t cmp1,
+	      float32x4_t scale, const struct data *d)
+{
+  /* 2^n may overflow, break it up into s1*s2.  */
+  uint32x4_t b = vandq_u32 (vclezq_f32 (n), SpecialOffset);
+  float32x4_t s1 = vreinterpretq_f32_u32 (vaddq_u32 (b, SpecialBias));
+  float32x4_t s2 = vreinterpretq_f32_u32 (vsubq_u32 (e, b));
+  uint32x4_t cmp2 = vcagtq_f32 (n, d->scale_thresh);
+  float32x4_t r2 = vmulq_f32 (s1, s1);
+  float32x4_t r1 = vmulq_f32 (vfmaq_f32 (s2, poly, s2), s1);
+  /* Similar to r1 but avoids double rounding in the subnormal range.  */
+  float32x4_t r0 = vfmaq_f32 (scale, poly, scale);
+  float32x4_t r = vbslq_f32 (cmp1, r1, r0);
+  return vbslq_f32 (cmp2, r2, r);
+}
+
+#endif
+
+float32x4_t VPCS_ATTR V_NAME_F1 (exp2) (float32x4_t x)
+{
+  const struct data *d = ptr_barrier (&data);
+  float32x4_t n, r, r2, scale, p, q, poly;
+  uint32x4_t cmp, e;
+
+#if WANT_SIMD_EXCEPT
+  /* asuint(|x|) - TinyBound >= BigBound - TinyBound.  */
+  uint32x4_t ia = vreinterpretq_u32_f32 (vabsq_f32 (x));
+  cmp = vcgeq_u32 (vsubq_u32 (ia, TinyBound), SpecialBound);
+  float32x4_t xm = x;
+  /* If any lanes are special, mask them with 1 and retain a copy of x to allow
+     special_case to fix special lanes later. This is only necessary if fenv
+     exceptions are to be triggered correctly.  */
+  if (__glibc_unlikely (v_any_u32 (cmp)))
+    x = vbslq_f32 (cmp, v_f32 (1), x);
+#endif
+
+  /* exp2(x) = 2^n (1 + poly(r)), with 1 + poly(r) in [1/sqrt(2),sqrt(2)]
+     and x = n + r, with r in [-1/2, 1/2].  */
+  n = vrndaq_f32 (x);
+  r = vsubq_f32 (x, n);
+  e = vshlq_n_u32 (vreinterpretq_u32_s32 (vcvtaq_s32_f32 (x)), 23);
+  scale = vreinterpretq_f32_u32 (vaddq_u32 (e, d->exponent_bias));
+
+#if !WANT_SIMD_EXCEPT
+  cmp = vcagtq_f32 (n, d->special_bound);
+#endif
+
+  r2 = vmulq_f32 (r, r);
+  p = vfmaq_f32 (C (1), C (0), r);
+  q = vfmaq_f32 (C (3), C (2), r);
+  q = vfmaq_f32 (q, p, r2);
+  p = vmulq_f32 (C (4), r);
+  poly = vfmaq_f32 (p, q, r2);
+
+  if (__glibc_unlikely (v_any_u32 (cmp)))
+#if WANT_SIMD_EXCEPT
+    return special_case (xm, vfmaq_f32 (scale, poly, scale), cmp);
+#else
+    return special_case (poly, n, e, cmp, scale, d);
+#endif
+
+  return vfmaq_f32 (scale, poly, scale);
+}
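
The evaluation order above is a short-latency hybrid rather than plain
Horner. A scalar rendering of the same dataflow, illustrative only and
with the coefficient array ordered as in .poly above:

  /* poly(r) = C4*r + (C3 + C2*r + (C1 + C0*r) * r2) * r2; the pairs p
     and q are independent, which shortens the dependency chain relative
     to plain Horner.  */
  static float
  exp2f_poly_sketch (float r, const float c[5])
  {
    float r2 = r * r;
    float p = c[1] + c[0] * r;
    float q = c[3] + c[2] * r;
    q = q + p * r2;
    return c[4] * r + q * r2;
  }
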
diff --git a/sysdeps/aarch64/fpu/exp2f_sve.c b/sysdeps/aarch64/fpu/exp2f_sve.c
new file mode 100644
index 0000000000..88d10daefc
--- /dev/null
+++ b/sysdeps/aarch64/fpu/exp2f_sve.c
@@ -0,0 +1,75 @@
+/* Single-precision vector (SVE) exp2 function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "sv_math.h"
+#include "poly_sve_f32.h"
+
+static const struct data
+{
+  float poly[5];
+  float shift, thres;
+} data = {
+  /* Coefficients copied from the polynomial in the AdvSIMD variant, reversed
+     for compatibility with the polynomial helpers.  */
+  .poly = { 0x1.62e422p-1f, 0x1.ebf9bcp-3f, 0x1.c6bd32p-5f, 0x1.3ce9e4p-7f,
+	    0x1.59977ap-10f },
+  /* 1.5*2^17 + 127.  */
+  .shift = 0x1.903f8p17f,
+  /* Roughly 87.3. For x < -Thres, the result is subnormal and not handled
+     correctly by FEXPA.  */
+  .thres = 0x1.5d5e2ap+6f,
+};
+
+static svfloat32_t NOINLINE
+special_case (svfloat32_t x, svfloat32_t y, svbool_t special)
+{
+  return sv_call_f32 (exp2f, x, y, special);
+}
+
+/* Single-precision SVE exp2f routine. Implements the same algorithm
+   as AdvSIMD exp2f.
+   Worst case error is 1.04 ULPs.
+   SV_NAME_F1 (exp2)(0x1.943b9p-1) got 0x1.ba7eb2p+0
+				  want 0x1.ba7ebp+0.  */
+svfloat32_t SV_NAME_F1 (exp2) (svfloat32_t x, const svbool_t pg)
+{
+  const struct data *d = ptr_barrier (&data);
+  /* exp2(x) = 2^n (1 + poly(r)), with 1 + poly(r) in [1/sqrt(2),sqrt(2)]
+    x = n + r, with r in [-1/2, 1/2].  */
+  svfloat32_t shift = sv_f32 (d->shift);
+  svfloat32_t z = svadd_x (pg, x, shift);
+  svfloat32_t n = svsub_x (pg, z, shift);
+  svfloat32_t r = svsub_x (pg, x, n);
+
+  svbool_t special = svacgt (pg, x, d->thres);
+  svfloat32_t scale = svexpa (svreinterpret_u32 (z));
+
+  /* Polynomial evaluation: poly(r) ~ exp2(r)-1.
+     Evaluate the polynomial using a hybrid scheme - offset Estrin by 1 for
+     coefficients 1 to 4, and apply the most significant coefficient directly.  */
+  svfloat32_t r2 = svmul_x (pg, r, r);
+  svfloat32_t p14 = sv_pairwise_poly_3_f32_x (pg, r, r2, d->poly + 1);
+  svfloat32_t p0 = svmul_x (pg, r, d->poly[0]);
+  svfloat32_t poly = svmla_x (pg, p0, r2, p14);
+
+  if (__glibc_unlikely (svptest_any (pg, special)))
+    return special_case (x, svmla_x (pg, scale, scale, poly), special);
+
+  return svmla_x (pg, scale, scale, poly);
+}
diff --git a/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c b/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
index f76726e324..b2b36fd847 100644
--- a/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
@@ -25,6 +25,7 @@
 
 VPCS_VECTOR_WRAPPER (cos_advsimd, _ZGVnN2v_cos)
 VPCS_VECTOR_WRAPPER (exp_advsimd, _ZGVnN2v_exp)
+VPCS_VECTOR_WRAPPER (exp2_advsimd, _ZGVnN2v_exp2)
 VPCS_VECTOR_WRAPPER (log_advsimd, _ZGVnN2v_log)
 VPCS_VECTOR_WRAPPER (sin_advsimd, _ZGVnN2v_sin)
 VPCS_VECTOR_WRAPPER (tan_advsimd, _ZGVnN2v_tan)
diff --git a/sysdeps/aarch64/fpu/test-double-sve-wrappers.c b/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
index 5a9e5b552a..88b76ed678 100644
--- a/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
@@ -34,6 +34,7 @@
 
 SVE_VECTOR_WRAPPER (cos_sve, _ZGVsMxv_cos)
 SVE_VECTOR_WRAPPER (exp_sve, _ZGVsMxv_exp)
+SVE_VECTOR_WRAPPER (exp2_sve, _ZGVsMxv_exp2)
 SVE_VECTOR_WRAPPER (log_sve, _ZGVsMxv_log)
 SVE_VECTOR_WRAPPER (sin_sve, _ZGVsMxv_sin)
 SVE_VECTOR_WRAPPER (tan_sve, _ZGVsMxv_tan)
diff --git a/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c b/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
index 66c6227087..02ab609b5a 100644
--- a/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
@@ -25,6 +25,7 @@
 
 VPCS_VECTOR_WRAPPER (cosf_advsimd, _ZGVnN4v_cosf)
 VPCS_VECTOR_WRAPPER (expf_advsimd, _ZGVnN4v_expf)
+VPCS_VECTOR_WRAPPER (exp2f_advsimd, _ZGVnN4v_exp2f)
 VPCS_VECTOR_WRAPPER (logf_advsimd, _ZGVnN4v_logf)
 VPCS_VECTOR_WRAPPER (sinf_advsimd, _ZGVnN4v_sinf)
 VPCS_VECTOR_WRAPPER (tanf_advsimd, _ZGVnN4v_tanf)
diff --git a/sysdeps/aarch64/fpu/test-float-sve-wrappers.c b/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
index 3cbf8b48b7..fa41ce09d8 100644
--- a/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
@@ -34,6 +34,7 @@
 
 SVE_VECTOR_WRAPPER (cosf_sve, _ZGVsMxv_cosf)
 SVE_VECTOR_WRAPPER (expf_sve, _ZGVsMxv_expf)
+SVE_VECTOR_WRAPPER (exp2f_sve, _ZGVsMxv_exp2f)
 SVE_VECTOR_WRAPPER (logf_sve, _ZGVsMxv_logf)
 SVE_VECTOR_WRAPPER (sinf_sve, _ZGVsMxv_sinf)
 SVE_VECTOR_WRAPPER (tanf_sve, _ZGVsMxv_tanf)
diff --git a/sysdeps/aarch64/libm-test-ulps b/sysdeps/aarch64/libm-test-ulps
index ec84381c75..a1e5651c87 100644
--- a/sysdeps/aarch64/libm-test-ulps
+++ b/sysdeps/aarch64/libm-test-ulps
@@ -990,11 +990,19 @@ double: 1
 float: 1
 ldouble: 1
 
+Function: "exp2_advsimd":
+double: 1
+float: 1
+
 Function: "exp2_downward":
 double: 1
 float: 1
 ldouble: 1
 
+Function: "exp2_sve":
+double: 1
+float: 1
+
 Function: "exp2_towardzero":
 double: 1
 float: 1
diff --git a/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist b/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
index 0ec668582e..6046c3d046 100644
--- a/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
+++ b/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
@@ -14,7 +14,11 @@ GLIBC_2.38 _ZGVsMxv_log F
 GLIBC_2.38 _ZGVsMxv_logf F
 GLIBC_2.38 _ZGVsMxv_sin F
 GLIBC_2.38 _ZGVsMxv_sinf F
+GLIBC_2.39 _ZGVnN2v_exp2 F
 GLIBC_2.39 _ZGVnN2v_tan F
+GLIBC_2.39 _ZGVnN4v_exp2f F
 GLIBC_2.39 _ZGVnN4v_tanf F
+GLIBC_2.39 _ZGVsMxv_exp2 F
+GLIBC_2.39 _ZGVsMxv_exp2f F
 GLIBC_2.39 _ZGVsMxv_tan F
 GLIBC_2.39 _ZGVsMxv_tanf F
-- 
2.27.0


^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH 3/5] aarch64: Add vector implementations of log2 routines
  2023-10-05 16:10 [PATCH v2 1/5] aarch64: Add vector implementations of tan routines Joe Ramsay
  2023-10-05 16:10 ` [PATCH v2 2/5] aarch64: Add vector implementations of exp2 routines Joe Ramsay
@ 2023-10-05 16:10 ` Joe Ramsay
  2023-10-05 16:10 ` [PATCH 4/5] aarch64: Add vector implementations of log10 routines Joe Ramsay
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Joe Ramsay @ 2023-10-05 16:10 UTC (permalink / raw)
  To: libc-alpha; +Cc: Joe Ramsay

A table is also added, which is shared between AdvSIMD and SVE log2.
---
Changes from v1:
* Transpose table layout for zipwise access
* Use half-vectors for AdvSIMD special-case comparison
* Optimise return values
Thanks,
Joe
 sysdeps/aarch64/fpu/Makefile                  |   4 +-
 sysdeps/aarch64/fpu/Versions                  |   4 +
 sysdeps/aarch64/fpu/bits/math-vector.h        |   4 +
 sysdeps/aarch64/fpu/log2_advsimd.c            | 109 ++++++++++++
 sysdeps/aarch64/fpu/log2_sve.c                |  73 ++++++++
 sysdeps/aarch64/fpu/log2f_advsimd.c           |  77 ++++++++
 sysdeps/aarch64/fpu/log2f_sve.c               |  86 +++++++++
 .../fpu/test-double-advsimd-wrappers.c        |   1 +
 .../aarch64/fpu/test-double-sve-wrappers.c    |   1 +
 .../aarch64/fpu/test-float-advsimd-wrappers.c |   1 +
 sysdeps/aarch64/fpu/test-float-sve-wrappers.c |   1 +
 sysdeps/aarch64/fpu/v_log2_data.c             | 165 ++++++++++++++++++
 sysdeps/aarch64/fpu/vecmath_config.h          |  12 ++
 sysdeps/aarch64/libm-test-ulps                |   8 +
 .../unix/sysv/linux/aarch64/libmvec.abilist   |   4 +
 15 files changed, 549 insertions(+), 1 deletion(-)
 create mode 100644 sysdeps/aarch64/fpu/log2_advsimd.c
 create mode 100644 sysdeps/aarch64/fpu/log2_sve.c
 create mode 100644 sysdeps/aarch64/fpu/log2f_advsimd.c
 create mode 100644 sysdeps/aarch64/fpu/log2f_sve.c
 create mode 100644 sysdeps/aarch64/fpu/v_log2_data.c

diff --git a/sysdeps/aarch64/fpu/Makefile b/sysdeps/aarch64/fpu/Makefile
index 9c7c768301..c3f204ff0d 100644
--- a/sysdeps/aarch64/fpu/Makefile
+++ b/sysdeps/aarch64/fpu/Makefile
@@ -2,6 +2,7 @@ libmvec-supported-funcs = cos \
                           exp \
                           exp2 \
                           log \
+                          log2 \
                           sin \
                           tan
 
@@ -16,7 +17,8 @@ libmvec-support = $(addsuffix f_advsimd,$(float-advsimd-funcs)) \
                   $(addsuffix f_sve,$(float-sve-funcs)) \
                   $(addsuffix _sve,$(double-sve-funcs)) \
                   v_log_data \
-                  v_exp_data
+                  v_exp_data \
+                  v_log2_data
 endif
 
 sve-cflags = -march=armv8-a+sve
diff --git a/sysdeps/aarch64/fpu/Versions b/sysdeps/aarch64/fpu/Versions
index 05de4325d5..ffe62a6f65 100644
--- a/sysdeps/aarch64/fpu/Versions
+++ b/sysdeps/aarch64/fpu/Versions
@@ -22,6 +22,10 @@ libmvec {
     _ZGVnN2v_exp2;
     _ZGVsMxv_exp2f;
     _ZGVsMxv_exp2;
+    _ZGVnN4v_log2f;
+    _ZGVnN2v_log2;
+    _ZGVsMxv_log2f;
+    _ZGVsMxv_log2;
     _ZGVnN4v_tanf;
     _ZGVnN2v_tan;
     _ZGVsMxv_tanf;
diff --git a/sysdeps/aarch64/fpu/bits/math-vector.h b/sysdeps/aarch64/fpu/bits/math-vector.h
index 50921b22e5..92f214b194 100644
--- a/sysdeps/aarch64/fpu/bits/math-vector.h
+++ b/sysdeps/aarch64/fpu/bits/math-vector.h
@@ -53,6 +53,7 @@ __vpcs __f32x4_t _ZGVnN4v_cosf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_expf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_exp2f (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_logf (__f32x4_t);
+__vpcs __f32x4_t _ZGVnN4v_log2f (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_sinf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_tanf (__f32x4_t);
 
@@ -60,6 +61,7 @@ __vpcs __f64x2_t _ZGVnN2v_cos (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_exp (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_exp2 (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_log (__f64x2_t);
+__vpcs __f64x2_t _ZGVnN2v_log2 (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_sin (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_tan (__f64x2_t);
 
@@ -72,6 +74,7 @@ __sv_f32_t _ZGVsMxv_cosf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_expf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_exp2f (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_logf (__sv_f32_t, __sv_bool_t);
+__sv_f32_t _ZGVsMxv_log2f (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_sinf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_tanf (__sv_f32_t, __sv_bool_t);
 
@@ -79,6 +82,7 @@ __sv_f64_t _ZGVsMxv_cos (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_exp (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_exp2 (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_log (__sv_f64_t, __sv_bool_t);
+__sv_f64_t _ZGVsMxv_log2 (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_sin (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_tan (__sv_f64_t, __sv_bool_t);
 
diff --git a/sysdeps/aarch64/fpu/log2_advsimd.c b/sysdeps/aarch64/fpu/log2_advsimd.c
new file mode 100644
index 0000000000..4f29924bfa
--- /dev/null
+++ b/sysdeps/aarch64/fpu/log2_advsimd.c
@@ -0,0 +1,109 @@
+/* Double-precision vector (AdvSIMD) log2 function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "v_math.h"
+#include "poly_advsimd_f64.h"
+
+#define N (1 << V_LOG2_TABLE_BITS)
+
+static const struct data
+{
+  uint64x2_t min_norm;
+  uint32x4_t special_bound;
+  float64x2_t poly[5];
+  float64x2_t invln2;
+  uint64x2_t sign_exp_mask;
+} data = {
+  /* Each coefficient was generated to approximate log(1+r) for |r| < 0x1.fp-9
+     and N = 128, then scaled by log2(e) in extended precision and rounded back
+     to double precision.  */
+  .poly = { V2 (-0x1.71547652b83p-1), V2 (0x1.ec709dc340953p-2),
+	    V2 (-0x1.71547651c8f35p-2), V2 (0x1.2777ebe12dda5p-2),
+	    V2 (-0x1.ec738d616fe26p-3) },
+  .invln2 = V2 (0x1.71547652b82fep0),
+  .min_norm = V2 (0x0010000000000000), /* asuint64(0x1p-1022).  */
+  .special_bound = V4 (0x7fe00000),    /* asuint64(inf) - min_norm.  */
+  .sign_exp_mask = V2 (0xfff0000000000000),
+};
+
+#define Off v_u64 (0x3fe6900900000000)
+#define IndexMask (N - 1)
+
+struct entry
+{
+  float64x2_t invc;
+  float64x2_t log2c;
+};
+
+static inline struct entry
+lookup (uint64x2_t i)
+{
+  struct entry e;
+  uint64_t i0 = (i[0] >> (52 - V_LOG2_TABLE_BITS)) & IndexMask;
+  uint64_t i1 = (i[1] >> (52 - V_LOG2_TABLE_BITS)) & IndexMask;
+  float64x2_t e0 = vld1q_f64 (&__v_log2_data.table[i0].invc);
+  float64x2_t e1 = vld1q_f64 (&__v_log2_data.table[i1].invc);
+  e.invc = vuzp1q_f64 (e0, e1);
+  e.log2c = vuzp2q_f64 (e0, e1);
+  return e;
+}
+
+static float64x2_t VPCS_ATTR NOINLINE
+special_case (float64x2_t x, float64x2_t y, float64x2_t w, float64x2_t r2,
+	      uint32x2_t special)
+{
+  return v_call_f64 (log2, x, vfmaq_f64 (w, r2, y), vmovl_u32 (special));
+}
+
+/* Double-precision vector log2 routine. Implements the same algorithm as
+   vector log10, with coefficients and table entries scaled in extended
+   precision. The maximum observed error is 2.58 ULP:
+   _ZGVnN2v_log2(0x1.0b556b093869bp+0) got 0x1.fffb34198d9dap-5
+				      want 0x1.fffb34198d9ddp-5.  */
+float64x2_t VPCS_ATTR V_NAME_D1 (log2) (float64x2_t x)
+{
+  const struct data *d = ptr_barrier (&data);
+  uint64x2_t ix = vreinterpretq_u64_f64 (x);
+  uint32x2_t special = vcge_u32 (vsubhn_u64 (ix, d->min_norm),
+				 vget_low_u32 (d->special_bound));
+
+  /* x = 2^k z; where z is in range [Off,2*Off) and exact.
+     The range is split into N subintervals.
+     The ith subinterval contains z and c is near its center.  */
+  uint64x2_t tmp = vsubq_u64 (ix, Off);
+  int64x2_t k = vshrq_n_s64 (vreinterpretq_s64_u64 (tmp), 52);
+  uint64x2_t iz = vsubq_u64 (ix, vandq_u64 (tmp, d->sign_exp_mask));
+  float64x2_t z = vreinterpretq_f64_u64 (iz);
+
+  struct entry e = lookup (tmp);
+
+  /* log2(x) = log1p(z/c-1)/log(2) + log2(c) + k.  */
+
+  float64x2_t r = vfmaq_f64 (v_f64 (-1.0), z, e.invc);
+  float64x2_t kd = vcvtq_f64_s64 (k);
+  float64x2_t w = vfmaq_f64 (e.log2c, r, d->invln2);
+
+  float64x2_t r2 = vmulq_f64 (r, r);
+  float64x2_t y = v_pw_horner_4_f64 (r, r2, d->poly);
+  w = vaddq_f64 (kd, w);
+
+  if (__glibc_unlikely (v_any_u32h (special)))
+    return special_case (x, y, w, r2, special);
+  return vfmaq_f64 (w, r2, y);
+}
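
The one-line comment above compresses the whole algorithm, so a scalar
sketch may help. This illustration replaces the 128-entry table with
frexp and an exact c == z (a hypothetical simplification, under which
the polynomial correction collapses to the linear term):

  #include <math.h>
  #include <stdio.h>

  int
  main (void)
  {
    double ln2 = 0x1.62e42fefa39efp-1;
    double x = 10.0;
    int k;
    double z = frexp (x, &k);	/* x = z * 2^k with z in [0.5, 1).  */
    double invc = 1.0 / z;	/* The real table stores invc and log2c  */
    double log2c = log2 (z);	/* for c near the subinterval centre.  */
    double r = z * invc - 1.0;	/* Small; ~0 here since c == z.  */
    double y = k + log2c + r / ln2; /* Plus a polynomial in r in the patch.  */
    printf ("%a vs %a\n", y, log2 (x));
    return 0;
  }
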
diff --git a/sysdeps/aarch64/fpu/log2_sve.c b/sysdeps/aarch64/fpu/log2_sve.c
new file mode 100644
index 0000000000..0ef6669fd5
--- /dev/null
+++ b/sysdeps/aarch64/fpu/log2_sve.c
@@ -0,0 +1,73 @@
+/* Double-precision vector (SVE) log2 function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "sv_math.h"
+#include "poly_sve_f64.h"
+
+#define N (1 << V_LOG2_TABLE_BITS)
+#define Off 0x3fe6900900000000
+#define Max (0x7ff0000000000000)
+#define Min (0x0010000000000000)
+#define Thresh (0x7fe0000000000000) /* Max - Min.  */
+
+static svfloat64_t NOINLINE
+special_case (svfloat64_t x, svfloat64_t y, svbool_t cmp)
+{
+  return sv_call_f64 (log2, x, y, cmp);
+}
+
+/* Double-precision SVE log2 routine.
+   Implements the same algorithm as AdvSIMD log10, with coefficients and table
+   entries scaled in extended precision.
+   The maximum observed error is 2.58 ULP:
+   SV_NAME_D1 (log2)(0x1.0b556b093869bp+0) got 0x1.fffb34198d9dap-5
+					  want 0x1.fffb34198d9ddp-5.  */
+svfloat64_t SV_NAME_D1 (log2) (svfloat64_t x, const svbool_t pg)
+{
+  svuint64_t ix = svreinterpret_u64 (x);
+  svbool_t special = svcmpge (pg, svsub_x (pg, ix, Min), Thresh);
+
+  /* x = 2^k z; where z is in range [Off,2*Off) and exact.
+     The range is split into N subintervals.
+     The ith subinterval contains z and c is near its center.  */
+  svuint64_t tmp = svsub_x (pg, ix, Off);
+  svuint64_t i = svlsr_x (pg, tmp, 51 - V_LOG2_TABLE_BITS);
+  i = svand_x (pg, i, (N - 1) << 1);
+  svfloat64_t k = svcvt_f64_x (pg, svasr_x (pg, svreinterpret_s64 (tmp), 52));
+  svfloat64_t z = svreinterpret_f64 (
+      svsub_x (pg, ix, svand_x (pg, tmp, 0xfffULL << 52)));
+
+  svfloat64_t invc = svld1_gather_index (pg, &__v_log2_data.table[0].invc, i);
+  svfloat64_t log2c
+      = svld1_gather_index (pg, &__v_log2_data.table[0].log2c, i);
+
+  /* log2(x) = log1p(z/c-1)/log(2) + log2(c) + k.  */
+
+  svfloat64_t r = svmad_x (pg, invc, z, -1.0);
+  svfloat64_t w = svmla_x (pg, log2c, r, __v_log2_data.invln2);
+
+  svfloat64_t r2 = svmul_x (pg, r, r);
+  svfloat64_t y = sv_pw_horner_4_f64_x (pg, r, r2, __v_log2_data.poly);
+  w = svadd_x (pg, k, w);
+
+  if (__glibc_unlikely (svptest_any (pg, special)))
+    return special_case (x, svmla_x (svnot_z (pg, special), w, r2, y),
+			 special);
+  return svmla_x (pg, w, r2, y);
+}
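
One subtlety above: the gather shifts by 51 - V_LOG2_TABLE_BITS rather
than 52, and masks with (N - 1) << 1. The table is an array of
{ invc, log2c } pairs, so entry i starts at double-index 2*i, and this
indexing yields 2*i directly for svld1_gather_index on a double base.
A quick standalone check (illustrative, not glibc code):

  #include <assert.h>
  #include <stdint.h>

  #define V_LOG2_TABLE_BITS 7
  #define N (1 << V_LOG2_TABLE_BITS)

  int
  main (void)
  {
    uint64_t tmp = 0x0123456789abcdefULL;   /* Stand-in for ix - Off.  */
    uint64_t i = (tmp >> (52 - V_LOG2_TABLE_BITS)) & (N - 1);
    uint64_t i2 = (tmp >> (51 - V_LOG2_TABLE_BITS)) & ((N - 1) << 1);
    assert (i2 == 2 * i);
    return 0;
  }
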
diff --git a/sysdeps/aarch64/fpu/log2f_advsimd.c b/sysdeps/aarch64/fpu/log2f_advsimd.c
new file mode 100644
index 0000000000..e913bcda18
--- /dev/null
+++ b/sysdeps/aarch64/fpu/log2f_advsimd.c
@@ -0,0 +1,77 @@
+/* Single-precision vector (AdvSIMD) log2 function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "v_math.h"
+#include "poly_advsimd_f32.h"
+
+static const struct data
+{
+  uint32x4_t min_norm;
+  uint16x8_t special_bound;
+  uint32x4_t off, mantissa_mask;
+  float32x4_t poly[9];
+} data = {
+  /* Coefficients generated using Remez algorithm approximate
+     log2(1+r)/r for r in [ -1/3, 1/3 ].
+     rel error: 0x1.c4c4b0cp-26.  */
+  .poly = { V4 (0x1.715476p0f), /* (float)(1 / ln(2)).  */
+	    V4 (-0x1.715458p-1f), V4 (0x1.ec701cp-2f), V4 (-0x1.7171a4p-2f),
+	    V4 (0x1.27a0b8p-2f), V4 (-0x1.e5143ep-3f), V4 (0x1.9d8ecap-3f),
+	    V4 (-0x1.c675bp-3f), V4 (0x1.9e495p-3f) },
+  .min_norm = V4 (0x00800000),
+  .special_bound = V8 (0x7f00), /* asuint32(inf) - min_norm.  */
+  .off = V4 (0x3f2aaaab),	/* 0.666667.  */
+  .mantissa_mask = V4 (0x007fffff),
+};
+
+static float32x4_t VPCS_ATTR NOINLINE
+special_case (float32x4_t x, float32x4_t n, float32x4_t p, float32x4_t r,
+	      uint16x4_t cmp)
+{
+  /* Fall back to scalar code.  */
+  return v_call_f32 (log2f, x, vfmaq_f32 (n, p, r), vmovl_u16 (cmp));
+}
+
+/* Fast implementation for single precision AdvSIMD log2,
+   relies on same argument reduction as AdvSIMD logf.
+   Maximum error: 2.48 ULPs
+   _ZGVnN4v_log2f(0x1.558174p+0) got 0x1.a9be84p-2
+				want 0x1.a9be8p-2.  */
+float32x4_t VPCS_ATTR V_NAME_F1 (log2) (float32x4_t x)
+{
+  const struct data *d = ptr_barrier (&data);
+  uint32x4_t u = vreinterpretq_u32_f32 (x);
+  uint16x4_t special = vcge_u16 (vsubhn_u32 (u, d->min_norm),
+				 vget_low_u16 (d->special_bound));
+
+  /* x = 2^n * (1+r), where 2/3 < 1+r < 4/3.  */
+  u = vsubq_u32 (u, d->off);
+  float32x4_t n = vcvtq_f32_s32 (
+      vshrq_n_s32 (vreinterpretq_s32_u32 (u), 23)); /* Sign-extend.  */
+  u = vaddq_u32 (vandq_u32 (u, d->mantissa_mask), d->off);
+  float32x4_t r = vsubq_f32 (vreinterpretq_f32_u32 (u), v_f32 (1.0f));
+
+  /* y = log2(1+r) + n.  */
+  float32x4_t r2 = vmulq_f32 (r, r);
+  float32x4_t p = v_pw_horner_8_f32 (r, r2, d->poly);
+
+  if (__glibc_unlikely (v_any_u16h (special)))
+    return special_case (x, n, p, r, special);
+  return vfmaq_f32 (n, p, r);
+}
diff --git a/sysdeps/aarch64/fpu/log2f_sve.c b/sysdeps/aarch64/fpu/log2f_sve.c
new file mode 100644
index 0000000000..d00813ee24
--- /dev/null
+++ b/sysdeps/aarch64/fpu/log2f_sve.c
@@ -0,0 +1,86 @@
+/* Single-precision vector (SVE) log2 function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "sv_math.h"
+
+static const struct data
+{
+  float poly_02468[5];
+  float poly_1357[4];
+} data = {
+  .poly_1357 = {
+    /* Coefficients copied from the AdvSIMD routine, then rearranged so that
+       coeffs 1, 3, 5 and 7 can be loaded as a single quad-word and used with
+       the _lane variant of the MLA intrinsic.  */
+    -0x1.715458p-1f, -0x1.7171a4p-2f, -0x1.e5143ep-3f, -0x1.c675bp-3f
+  },
+  .poly_02468 = { 0x1.715476p0f, 0x1.ec701cp-2f, 0x1.27a0b8p-2f,
+		  0x1.9d8ecap-3f, 0x1.9e495p-3f },
+};
+
+#define Min (0x00800000)
+#define Max (0x7f800000)
+#define Thres (0x7f000000) /* Max - Min.  */
+#define MantissaMask (0x007fffff)
+#define Off (0x3f2aaaab) /* 0.666667.  */
+
+static svfloat32_t NOINLINE
+special_case (svfloat32_t x, svfloat32_t y, svbool_t cmp)
+{
+  return sv_call_f32 (log2f, x, y, cmp);
+}
+
+/* Optimised implementation of SVE log2f, using the same algorithm
+   and polynomial as AdvSIMD log2f.
+   Maximum error is 2.48 ULPs:
+   SV_NAME_F1 (log2)(0x1.558174p+0) got 0x1.a9be84p-2
+				   want 0x1.a9be8p-2.  */
+svfloat32_t SV_NAME_F1 (log2) (svfloat32_t x, const svbool_t pg)
+{
+  const struct data *d = ptr_barrier (&data);
+
+  svuint32_t u = svreinterpret_u32 (x);
+  svbool_t special = svcmpge (pg, svsub_x (pg, u, Min), Thres);
+
+  /* x = 2^n * (1+r), where 2/3 < 1+r < 4/3.  */
+  u = svsub_x (pg, u, Off);
+  svfloat32_t n = svcvt_f32_x (
+      pg, svasr_x (pg, svreinterpret_s32 (u), 23)); /* Sign-extend.  */
+  u = svand_x (pg, u, MantissaMask);
+  u = svadd_x (pg, u, Off);
+  svfloat32_t r = svsub_x (pg, svreinterpret_f32 (u), 1.0f);
+
+  /* y = log2(1+r) + n.  */
+  svfloat32_t r2 = svmul_x (pg, r, r);
+
+  /* Evaluate polynomial using pairwise Horner scheme.  */
+  svfloat32_t p_1357 = svld1rq (svptrue_b32 (), &d->poly_1357[0]);
+  svfloat32_t q_01 = svmla_lane (sv_f32 (d->poly_02468[0]), r, p_1357, 0);
+  svfloat32_t q_23 = svmla_lane (sv_f32 (d->poly_02468[1]), r, p_1357, 1);
+  svfloat32_t q_45 = svmla_lane (sv_f32 (d->poly_02468[2]), r, p_1357, 2);
+  svfloat32_t q_67 = svmla_lane (sv_f32 (d->poly_02468[3]), r, p_1357, 3);
+  svfloat32_t y = svmla_x (pg, q_67, r2, sv_f32 (d->poly_02468[4]));
+  y = svmla_x (pg, q_45, r2, y);
+  y = svmla_x (pg, q_23, r2, y);
+  y = svmla_x (pg, q_01, r2, y);
+
+  if (__glibc_unlikely (svptest_any (pg, special)))
+    return special_case (x, svmla_x (svnot_z (pg, special), n, r, y), special);
+  return svmla_x (pg, n, r, y);
+}
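
For reference, the pairwise-Horner dataflow above in scalar form. This
is illustrative only; c holds the nine coefficients in ascending order
of degree, i.e. the even and odd groups from the struct interleaved.

  /* Form q_i = c[2i] + c[2i+1] * r for each pair, then Horner on r2.
     In the SVE code the four odd coefficients share one quad-word, so
     each q_i is a single svmla_lane.  */
  static float
  log2f_poly_sketch (float r, const float c[9])
  {
    float r2 = r * r;
    float q01 = c[0] + c[1] * r;
    float q23 = c[2] + c[3] * r;
    float q45 = c[4] + c[5] * r;
    float q67 = c[6] + c[7] * r;
    float y = q67 + c[8] * r2;
    y = q45 + y * r2;
    y = q23 + y * r2;
    return q01 + y * r2;
  }
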
diff --git a/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c b/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
index b2b36fd847..d30dcd6f95 100644
--- a/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
@@ -27,5 +27,6 @@ VPCS_VECTOR_WRAPPER (cos_advsimd, _ZGVnN2v_cos)
 VPCS_VECTOR_WRAPPER (exp_advsimd, _ZGVnN2v_exp)
 VPCS_VECTOR_WRAPPER (exp2_advsimd, _ZGVnN2v_exp2)
 VPCS_VECTOR_WRAPPER (log_advsimd, _ZGVnN2v_log)
+VPCS_VECTOR_WRAPPER (log2_advsimd, _ZGVnN2v_log2)
 VPCS_VECTOR_WRAPPER (sin_advsimd, _ZGVnN2v_sin)
 VPCS_VECTOR_WRAPPER (tan_advsimd, _ZGVnN2v_tan)
diff --git a/sysdeps/aarch64/fpu/test-double-sve-wrappers.c b/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
index 88b76ed678..22a8479100 100644
--- a/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
@@ -36,5 +36,6 @@ SVE_VECTOR_WRAPPER (cos_sve, _ZGVsMxv_cos)
 SVE_VECTOR_WRAPPER (exp_sve, _ZGVsMxv_exp)
 SVE_VECTOR_WRAPPER (exp2_sve, _ZGVsMxv_exp2)
 SVE_VECTOR_WRAPPER (log_sve, _ZGVsMxv_log)
+SVE_VECTOR_WRAPPER (log2_sve, _ZGVsMxv_log2)
 SVE_VECTOR_WRAPPER (sin_sve, _ZGVsMxv_sin)
 SVE_VECTOR_WRAPPER (tan_sve, _ZGVsMxv_tan)
diff --git a/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c b/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
index 02ab609b5a..e8f7f47c67 100644
--- a/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
@@ -27,5 +27,6 @@ VPCS_VECTOR_WRAPPER (cosf_advsimd, _ZGVnN4v_cosf)
 VPCS_VECTOR_WRAPPER (expf_advsimd, _ZGVnN4v_expf)
 VPCS_VECTOR_WRAPPER (exp2f_advsimd, _ZGVnN4v_exp2f)
 VPCS_VECTOR_WRAPPER (logf_advsimd, _ZGVnN4v_logf)
+VPCS_VECTOR_WRAPPER (log2f_advsimd, _ZGVnN4v_log2f)
 VPCS_VECTOR_WRAPPER (sinf_advsimd, _ZGVnN4v_sinf)
 VPCS_VECTOR_WRAPPER (tanf_advsimd, _ZGVnN4v_tanf)
diff --git a/sysdeps/aarch64/fpu/test-float-sve-wrappers.c b/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
index fa41ce09d8..f5e9584265 100644
--- a/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
@@ -36,5 +36,6 @@ SVE_VECTOR_WRAPPER (cosf_sve, _ZGVsMxv_cosf)
 SVE_VECTOR_WRAPPER (expf_sve, _ZGVsMxv_expf)
 SVE_VECTOR_WRAPPER (exp2f_sve, _ZGVsMxv_exp2f)
 SVE_VECTOR_WRAPPER (logf_sve, _ZGVsMxv_logf)
+SVE_VECTOR_WRAPPER (log2f_sve, _ZGVsMxv_log2f)
 SVE_VECTOR_WRAPPER (sinf_sve, _ZGVsMxv_sinf)
 SVE_VECTOR_WRAPPER (tanf_sve, _ZGVsMxv_tanf)
diff --git a/sysdeps/aarch64/fpu/v_log2_data.c b/sysdeps/aarch64/fpu/v_log2_data.c
new file mode 100644
index 0000000000..4fb126bf31
--- /dev/null
+++ b/sysdeps/aarch64/fpu/v_log2_data.c
@@ -0,0 +1,165 @@
+/* Coefficients and table entries for vector log2
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "vecmath_config.h"
+
+#define N (1 << V_LOG2_TABLE_BITS)
+
+const struct v_log2_data __v_log2_data = {
+
+  /* Each coefficient was generated to approximate log(1+r) for |r| < 0x1.fp-9
+     and N = 128, then scaled by log2(e) in extended precision and rounded back
+     to double precision.  */
+  .poly = { -0x1.71547652b83p-1, 0x1.ec709dc340953p-2, -0x1.71547651c8f35p-2,
+	    0x1.2777ebe12dda5p-2, -0x1.ec738d616fe26p-3 },
+
+  .invln2 = 0x1.71547652b82fep0,
+
+  /* Derived from tables in v_log_data.c in a similar way as v_log10_data.c.
+     This means invc is unchanged and log2c was calculated by scaling log(c) by
+     log2(e) in extended precision and rounding back to double precision.  */
+  .table = { { 0x1.6a133d0dec120p+0, -0x1.00130d57f5fadp-1 },
+	     { 0x1.6815f2f3e42edp+0, -0x1.f802661bd725ep-2 },
+	     { 0x1.661e39be1ac9ep+0, -0x1.efea1c6f73a5bp-2 },
+	     { 0x1.642bfa30ac371p+0, -0x1.e7dd1dcd06f05p-2 },
+	     { 0x1.623f1d916f323p+0, -0x1.dfdb4ae024809p-2 },
+	     { 0x1.60578da220f65p+0, -0x1.d7e484d101958p-2 },
+	     { 0x1.5e75349dea571p+0, -0x1.cff8ad452f6ep-2 },
+	     { 0x1.5c97fd387a75ap+0, -0x1.c817a666c997fp-2 },
+	     { 0x1.5abfd2981f200p+0, -0x1.c04152d640419p-2 },
+	     { 0x1.58eca051dc99cp+0, -0x1.b87595a3f64b2p-2 },
+	     { 0x1.571e526d9df12p+0, -0x1.b0b4526c44d07p-2 },
+	     { 0x1.5554d555b3fcbp+0, -0x1.a8fd6d1a90f5ep-2 },
+	     { 0x1.539015e2a20cdp+0, -0x1.a150ca2559fc6p-2 },
+	     { 0x1.51d0014ee0164p+0, -0x1.99ae4e62cca29p-2 },
+	     { 0x1.50148538cd9eep+0, -0x1.9215df1a1e842p-2 },
+	     { 0x1.4e5d8f9f698a1p+0, -0x1.8a8761fe1f0d9p-2 },
+	     { 0x1.4cab0edca66bep+0, -0x1.8302bd1cc9a54p-2 },
+	     { 0x1.4afcf1a9db874p+0, -0x1.7b87d6fb437f6p-2 },
+	     { 0x1.495327136e16fp+0, -0x1.741696673a86dp-2 },
+	     { 0x1.47ad9e84af28fp+0, -0x1.6caee2b3c6fe4p-2 },
+	     { 0x1.460c47b39ae15p+0, -0x1.6550a3666c27ap-2 },
+	     { 0x1.446f12b278001p+0, -0x1.5dfbc08de02a4p-2 },
+	     { 0x1.42d5efdd720ecp+0, -0x1.56b022766c84ap-2 },
+	     { 0x1.4140cfe001a0fp+0, -0x1.4f6db1c955536p-2 },
+	     { 0x1.3fafa3b421f69p+0, -0x1.4834579063054p-2 },
+	     { 0x1.3e225c9c8ece5p+0, -0x1.4103fd2249a76p-2 },
+	     { 0x1.3c98ec29a211ap+0, -0x1.39dc8c3fe6dabp-2 },
+	     { 0x1.3b13442a413fep+0, -0x1.32bdeed4b5c8fp-2 },
+	     { 0x1.399156baa3c54p+0, -0x1.2ba80f41e20ddp-2 },
+	     { 0x1.38131639b4cdbp+0, -0x1.249ad8332f4a7p-2 },
+	     { 0x1.36987540fbf53p+0, -0x1.1d96347e7f3ebp-2 },
+	     { 0x1.352166b648f61p+0, -0x1.169a0f7d6604ap-2 },
+	     { 0x1.33adddb3eb575p+0, -0x1.0fa654a221909p-2 },
+	     { 0x1.323dcd99fc1d3p+0, -0x1.08baefcf8251ap-2 },
+	     { 0x1.30d129fefc7d2p+0, -0x1.01d7cd14deecdp-2 },
+	     { 0x1.2f67e6b72fe7dp+0, -0x1.f5f9b1ad55495p-3 },
+	     { 0x1.2e01f7cf8b187p+0, -0x1.e853ff76a77afp-3 },
+	     { 0x1.2c9f518ddc86ep+0, -0x1.dabe5d624cba1p-3 },
+	     { 0x1.2b3fe86e5f413p+0, -0x1.cd38a5cef4822p-3 },
+	     { 0x1.29e3b1211b25cp+0, -0x1.bfc2b38d315f9p-3 },
+	     { 0x1.288aa08b373cfp+0, -0x1.b25c61f5edd0fp-3 },
+	     { 0x1.2734abcaa8467p+0, -0x1.a5058d18e9cacp-3 },
+	     { 0x1.25e1c82459b81p+0, -0x1.97be1113e47a3p-3 },
+	     { 0x1.2491eb1ad59c5p+0, -0x1.8a85cafdf5e27p-3 },
+	     { 0x1.23450a54048b5p+0, -0x1.7d5c97e8fc45bp-3 },
+	     { 0x1.21fb1bb09e578p+0, -0x1.704255d6486e4p-3 },
+	     { 0x1.20b415346d8f7p+0, -0x1.6336e2cedd7bfp-3 },
+	     { 0x1.1f6fed179a1acp+0, -0x1.563a1d9b0cc6ap-3 },
+	     { 0x1.1e2e99b93c7b3p+0, -0x1.494be541aaa6fp-3 },
+	     { 0x1.1cf011a7a882ap+0, -0x1.3c6c1964dd0f2p-3 },
+	     { 0x1.1bb44b97dba5ap+0, -0x1.2f9a99f19a243p-3 },
+	     { 0x1.1a7b3e66cdd4fp+0, -0x1.22d747344446p-3 },
+	     { 0x1.1944e11dc56cdp+0, -0x1.1622020d4f7f5p-3 },
+	     { 0x1.18112aebb1a6ep+0, -0x1.097aabb3553f3p-3 },
+	     { 0x1.16e013231b7e9p+0, -0x1.f9c24b48014c5p-4 },
+	     { 0x1.15b1913f156cfp+0, -0x1.e0aaa3bdc858ap-4 },
+	     { 0x1.14859cdedde13p+0, -0x1.c7ae257c952d6p-4 },
+	     { 0x1.135c2dc68cfa4p+0, -0x1.aecc960a03e58p-4 },
+	     { 0x1.12353bdb01684p+0, -0x1.9605bb724d541p-4 },
+	     { 0x1.1110bf25b85b4p+0, -0x1.7d595ca7147cep-4 },
+	     { 0x1.0feeafd2f8577p+0, -0x1.64c74165002d9p-4 },
+	     { 0x1.0ecf062c51c3bp+0, -0x1.4c4f31c86d344p-4 },
+	     { 0x1.0db1baa076c8bp+0, -0x1.33f0f70388258p-4 },
+	     { 0x1.0c96c5bb3048ep+0, -0x1.1bac5abb3037dp-4 },
+	     { 0x1.0b7e20263e070p+0, -0x1.0381272495f21p-4 },
+	     { 0x1.0a67c2acd0ce3p+0, -0x1.d6de4eba2de2ap-5 },
+	     { 0x1.0953a6391e982p+0, -0x1.a6ec4e8156898p-5 },
+	     { 0x1.0841c3caea380p+0, -0x1.772be542e3e1bp-5 },
+	     { 0x1.07321489b13eap+0, -0x1.479cadcde852dp-5 },
+	     { 0x1.062491aee9904p+0, -0x1.183e4265faa5p-5 },
+	     { 0x1.05193497a7cc5p+0, -0x1.d2207fdaa1b85p-6 },
+	     { 0x1.040ff6b5f5e9fp+0, -0x1.742486cb4a6a2p-6 },
+	     { 0x1.0308d19aa6127p+0, -0x1.1687d77cfc299p-6 },
+	     { 0x1.0203beedb0c67p+0, -0x1.7293623a6b5dep-7 },
+	     { 0x1.010037d38bcc2p+0, -0x1.70ec80ec8f25dp-8 },
+	     { 1.0, 0.0 },
+	     { 0x1.fc06d493cca10p-1, 0x1.704c1ca6b6bc9p-7 },
+	     { 0x1.f81e6ac3b918fp-1, 0x1.6eac8ba664beap-6 },
+	     { 0x1.f44546ef18996p-1, 0x1.11e67d040772dp-5 },
+	     { 0x1.f07b10382c84bp-1, 0x1.6bc665e2105dep-5 },
+	     { 0x1.ecbf7070e59d4p-1, 0x1.c4f8a9772bf1dp-5 },
+	     { 0x1.e91213f715939p-1, 0x1.0ebff10fbb951p-4 },
+	     { 0x1.e572a9a75f7b7p-1, 0x1.3aaf4d7805d11p-4 },
+	     { 0x1.e1e0e2c530207p-1, 0x1.664ba81a4d717p-4 },
+	     { 0x1.de5c72d8a8be3p-1, 0x1.9196387da6de4p-4 },
+	     { 0x1.dae50fa5658ccp-1, 0x1.bc902f2b7796p-4 },
+	     { 0x1.d77a71145a2dap-1, 0x1.e73ab5f584f28p-4 },
+	     { 0x1.d41c51166623ep-1, 0x1.08cb78510d232p-3 },
+	     { 0x1.d0ca6ba0bb29fp-1, 0x1.1dd2fe2f0dcb5p-3 },
+	     { 0x1.cd847e8e59681p-1, 0x1.32b4784400df4p-3 },
+	     { 0x1.ca4a499693e00p-1, 0x1.47706f3d49942p-3 },
+	     { 0x1.c71b8e399e821p-1, 0x1.5c0768ee4a4dcp-3 },
+	     { 0x1.c3f80faf19077p-1, 0x1.7079e86fc7c6dp-3 },
+	     { 0x1.c0df92dc2b0ecp-1, 0x1.84c86e1183467p-3 },
+	     { 0x1.bdd1de3cbb542p-1, 0x1.98f377a34b499p-3 },
+	     { 0x1.baceb9e1007a3p-1, 0x1.acfb803bc924bp-3 },
+	     { 0x1.b7d5ef543e55ep-1, 0x1.c0e10098b025fp-3 },
+	     { 0x1.b4e749977d953p-1, 0x1.d4a46efe103efp-3 },
+	     { 0x1.b20295155478ep-1, 0x1.e8463f45b8d0bp-3 },
+	     { 0x1.af279f8e82be2p-1, 0x1.fbc6e3228997fp-3 },
+	     { 0x1.ac5638197fdf3p-1, 0x1.079364f2e5aa8p-2 },
+	     { 0x1.a98e2f102e087p-1, 0x1.1133306010a63p-2 },
+	     { 0x1.a6cf5606d05c1p-1, 0x1.1ac309631bd17p-2 },
+	     { 0x1.a4197fc04d746p-1, 0x1.24432485370c1p-2 },
+	     { 0x1.a16c80293dc01p-1, 0x1.2db3b5449132fp-2 },
+	     { 0x1.9ec82c4dc5bc9p-1, 0x1.3714ee1d7a32p-2 },
+	     { 0x1.9c2c5a491f534p-1, 0x1.406700ab52c94p-2 },
+	     { 0x1.9998e1480b618p-1, 0x1.49aa1d87522b2p-2 },
+	     { 0x1.970d9977c6c2dp-1, 0x1.52de746d7ecb2p-2 },
+	     { 0x1.948a5c023d212p-1, 0x1.5c0434336b343p-2 },
+	     { 0x1.920f0303d6809p-1, 0x1.651b8ad6c90d1p-2 },
+	     { 0x1.8f9b698a98b45p-1, 0x1.6e24a56ab5831p-2 },
+	     { 0x1.8d2f6b81726f6p-1, 0x1.771fb04ec29b1p-2 },
+	     { 0x1.8acae5bb55badp-1, 0x1.800cd6f19c25ep-2 },
+	     { 0x1.886db5d9275b8p-1, 0x1.88ec441df11dfp-2 },
+	     { 0x1.8617ba567c13cp-1, 0x1.91be21b7c93f5p-2 },
+	     { 0x1.83c8d27487800p-1, 0x1.9a8298f8c7454p-2 },
+	     { 0x1.8180de3c5dbe7p-1, 0x1.a339d255c04ddp-2 },
+	     { 0x1.7f3fbe71cdb71p-1, 0x1.abe3f59f43db7p-2 },
+	     { 0x1.7d055498071c1p-1, 0x1.b48129deca9efp-2 },
+	     { 0x1.7ad182e54f65ap-1, 0x1.bd119575364c1p-2 },
+	     { 0x1.78a42c3c90125p-1, 0x1.c5955e23ebcbcp-2 },
+	     { 0x1.767d342f76944p-1, 0x1.ce0ca8f4e1557p-2 },
+	     { 0x1.745c7ef26b00ap-1, 0x1.d6779a5a75774p-2 },
+	     { 0x1.7241f15769d0fp-1, 0x1.ded6563550d27p-2 },
+	     { 0x1.702d70d396e41p-1, 0x1.e728ffafd840ep-2 },
+	     { 0x1.6e1ee3700cd11p-1, 0x1.ef6fb96c8d739p-2 },
+	     { 0x1.6c162fc9cbe02p-1, 0x1.f7aaa57907219p-2 } }
+};
diff --git a/sysdeps/aarch64/fpu/vecmath_config.h b/sysdeps/aarch64/fpu/vecmath_config.h
index 0abfd8b701..3aa6c280aa 100644
--- a/sysdeps/aarch64/fpu/vecmath_config.h
+++ b/sysdeps/aarch64/fpu/vecmath_config.h
@@ -50,4 +50,16 @@ extern const struct v_log_data
 
 #define V_EXP_TABLE_BITS 7
 extern const uint64_t __v_exp_data[1 << V_EXP_TABLE_BITS] attribute_hidden;
+
+#define V_LOG2_TABLE_BITS 7
+extern const struct v_log2_data
+{
+  double poly[5];
+  double invln2;
+  struct
+  {
+    double invc, log2c;
+  } table[1 << V_LOG2_TABLE_BITS];
+} __v_log2_data attribute_hidden;
+
 #endif
diff --git a/sysdeps/aarch64/libm-test-ulps b/sysdeps/aarch64/libm-test-ulps
index a1e5651c87..7a5af571e2 100644
--- a/sysdeps/aarch64/libm-test-ulps
+++ b/sysdeps/aarch64/libm-test-ulps
@@ -1220,11 +1220,19 @@ double: 1
 float: 1
 ldouble: 3
 
+Function: "log2_advsimd":
+double: 1
+float: 2
+
 Function: "log2_downward":
 double: 3
 float: 3
 ldouble: 3
 
+Function: "log2_sve":
+double: 1
+float: 2
+
 Function: "log2_towardzero":
 double: 2
 float: 2
diff --git a/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist b/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
index 6046c3d046..657edab7ae 100644
--- a/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
+++ b/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
@@ -15,10 +15,14 @@ GLIBC_2.38 _ZGVsMxv_logf F
 GLIBC_2.38 _ZGVsMxv_sin F
 GLIBC_2.38 _ZGVsMxv_sinf F
 GLIBC_2.39 _ZGVnN2v_exp2 F
+GLIBC_2.39 _ZGVnN2v_log2 F
 GLIBC_2.39 _ZGVnN2v_tan F
 GLIBC_2.39 _ZGVnN4v_exp2f F
+GLIBC_2.39 _ZGVnN4v_log2f F
 GLIBC_2.39 _ZGVnN4v_tanf F
 GLIBC_2.39 _ZGVsMxv_exp2 F
 GLIBC_2.39 _ZGVsMxv_exp2f F
+GLIBC_2.39 _ZGVsMxv_log2 F
+GLIBC_2.39 _ZGVsMxv_log2f F
 GLIBC_2.39 _ZGVsMxv_tan F
 GLIBC_2.39 _ZGVsMxv_tanf F
-- 
2.27.0



* [PATCH 4/5] aarch64: Add vector implementations of log10 routines
  2023-10-05 16:10 [PATCH v2 1/5] aarch64: Add vector implementations of tan routines Joe Ramsay
  2023-10-05 16:10 ` [PATCH v2 2/5] aarch64: Add vector implementations of exp2 routines Joe Ramsay
  2023-10-05 16:10 ` [PATCH 3/5] aarch64: Add vector implementations of log2 routines Joe Ramsay
@ 2023-10-05 16:10 ` Joe Ramsay
  2023-10-05 16:10 ` [PATCH 5/5] aarch64: Add vector implementations of exp10 routines Joe Ramsay
  2023-10-18 16:29 ` [PATCH v2 1/5] aarch64: Add vector implementations of tan routines Szabolcs Nagy
  4 siblings, 0 replies; 7+ messages in thread
From: Joe Ramsay @ 2023-10-05 16:10 UTC (permalink / raw)
  To: libc-alpha; +Cc: Joe Ramsay

A table is also added, which is shared between AdvSIMD and SVE log10.
---
Changes from v1:
* Transpose table layout for zipwise access
* Use half-vectors for AdvSIMD special-case comparison
* Optimise return values
Thanks,
Joe
 sysdeps/aarch64/fpu/Makefile                  |   4 +-
 sysdeps/aarch64/fpu/Versions                  |   4 +
 sysdeps/aarch64/fpu/bits/math-vector.h        |   4 +
 sysdeps/aarch64/fpu/log10_advsimd.c           | 119 ++++++++++++
 sysdeps/aarch64/fpu/log10_sve.c               |  76 ++++++++
 sysdeps/aarch64/fpu/log10f_advsimd.c          |  82 ++++++++
 sysdeps/aarch64/fpu/log10f_sve.c              |  94 ++++++++++
 .../fpu/test-double-advsimd-wrappers.c        |   1 +
 .../aarch64/fpu/test-double-sve-wrappers.c    |   1 +
 .../aarch64/fpu/test-float-advsimd-wrappers.c |   1 +
 sysdeps/aarch64/fpu/test-float-sve-wrappers.c |   1 +
 sysdeps/aarch64/fpu/v_log10_data.c            | 175 ++++++++++++++++++
 sysdeps/aarch64/fpu/vecmath_config.h          |  11 ++
 sysdeps/aarch64/libm-test-ulps                |   8 +
 .../unix/sysv/linux/aarch64/libmvec.abilist   |   4 +
 15 files changed, 584 insertions(+), 1 deletion(-)
 create mode 100644 sysdeps/aarch64/fpu/log10_advsimd.c
 create mode 100644 sysdeps/aarch64/fpu/log10_sve.c
 create mode 100644 sysdeps/aarch64/fpu/log10f_advsimd.c
 create mode 100644 sysdeps/aarch64/fpu/log10f_sve.c
 create mode 100644 sysdeps/aarch64/fpu/v_log10_data.c

diff --git a/sysdeps/aarch64/fpu/Makefile b/sysdeps/aarch64/fpu/Makefile
index c3f204ff0d..0a047a150d 100644
--- a/sysdeps/aarch64/fpu/Makefile
+++ b/sysdeps/aarch64/fpu/Makefile
@@ -2,6 +2,7 @@ libmvec-supported-funcs = cos \
                           exp \
                           exp2 \
                           log \
+                          log10 \
                           log2 \
                           sin \
                           tan
@@ -18,7 +19,8 @@ libmvec-support = $(addsuffix f_advsimd,$(float-advsimd-funcs)) \
                   $(addsuffix _sve,$(double-sve-funcs)) \
                   v_log_data \
                   v_exp_data \
-                  v_log2_data
+                  v_log2_data \
+                  v_log10_data
 endif
 
 sve-cflags = -march=armv8-a+sve
diff --git a/sysdeps/aarch64/fpu/Versions b/sysdeps/aarch64/fpu/Versions
index ffe62a6f65..358efed5ee 100644
--- a/sysdeps/aarch64/fpu/Versions
+++ b/sysdeps/aarch64/fpu/Versions
@@ -22,6 +22,10 @@ libmvec {
     _ZGVnN2v_exp2;
     _ZGVsMxv_exp2f;
     _ZGVsMxv_exp2;
+    _ZGVnN4v_log10f;
+    _ZGVnN2v_log10;
+    _ZGVsMxv_log10f;
+    _ZGVsMxv_log10;
     _ZGVnN4v_log2f;
     _ZGVnN2v_log2;
     _ZGVsMxv_log2f;
diff --git a/sysdeps/aarch64/fpu/bits/math-vector.h b/sysdeps/aarch64/fpu/bits/math-vector.h
index 92f214b194..59f2efa6d7 100644
--- a/sysdeps/aarch64/fpu/bits/math-vector.h
+++ b/sysdeps/aarch64/fpu/bits/math-vector.h
@@ -53,6 +53,7 @@ __vpcs __f32x4_t _ZGVnN4v_cosf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_expf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_exp2f (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_logf (__f32x4_t);
+__vpcs __f32x4_t _ZGVnN4v_log10f (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_log2f (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_sinf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_tanf (__f32x4_t);
@@ -61,6 +62,7 @@ __vpcs __f64x2_t _ZGVnN2v_cos (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_exp (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_exp2 (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_log (__f64x2_t);
+__vpcs __f64x2_t _ZGVnN2v_log10 (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_log2 (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_sin (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_tan (__f64x2_t);
@@ -74,6 +76,7 @@ __sv_f32_t _ZGVsMxv_cosf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_expf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_exp2f (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_logf (__sv_f32_t, __sv_bool_t);
+__sv_f32_t _ZGVsMxv_log10f (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_log2f (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_sinf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_tanf (__sv_f32_t, __sv_bool_t);
@@ -82,6 +85,7 @@ __sv_f64_t _ZGVsMxv_cos (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_exp (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_exp2 (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_log (__sv_f64_t, __sv_bool_t);
+__sv_f64_t _ZGVsMxv_log10 (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_log2 (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_sin (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_tan (__sv_f64_t, __sv_bool_t);
diff --git a/sysdeps/aarch64/fpu/log10_advsimd.c b/sysdeps/aarch64/fpu/log10_advsimd.c
new file mode 100644
index 0000000000..05b509f134
--- /dev/null
+++ b/sysdeps/aarch64/fpu/log10_advsimd.c
@@ -0,0 +1,119 @@
+/* Double-precision vector (AdvSIMD) log10 function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "v_math.h"
+#include "poly_advsimd_f64.h"
+
+#define N (1 << V_LOG10_TABLE_BITS)
+
+static const struct data
+{
+  uint64x2_t min_norm;
+  uint32x4_t special_bound;
+  float64x2_t poly[5];
+  float64x2_t invln10, log10_2, ln2;
+  uint64x2_t sign_exp_mask;
+} data = {
+  /* Computed from log coefficients divided by log(10) then rounded to double
+     precision.  */
+  .poly = { V2 (-0x1.bcb7b1526e506p-3), V2 (0x1.287a7636be1d1p-3),
+	    V2 (-0x1.bcb7b158af938p-4), V2 (0x1.63c78734e6d07p-4),
+	    V2 (-0x1.287461742fee4p-4) },
+  .ln2 = V2 (0x1.62e42fefa39efp-1),
+  .invln10 = V2 (0x1.bcb7b1526e50ep-2),
+  .log10_2 = V2 (0x1.34413509f79ffp-2),
+  .min_norm = V2 (0x0010000000000000), /* asuint64(0x1p-1022).  */
+  .special_bound = V4 (0x7fe00000),    /* asuint64(inf) - min_norm, top 32 bits.  */
+  .sign_exp_mask = V2 (0xfff0000000000000),
+};
+
+#define Off v_u64 (0x3fe6900900000000)
+#define IndexMask (N - 1)
+
+#define T(s, i) __v_log10_data.s[i]
+
+struct entry
+{
+  float64x2_t invc;
+  float64x2_t log10c;
+};
+
+static inline struct entry
+lookup (uint64x2_t i)
+{
+  struct entry e;
+  uint64_t i0 = (i[0] >> (52 - V_LOG10_TABLE_BITS)) & IndexMask;
+  uint64_t i1 = (i[1] >> (52 - V_LOG10_TABLE_BITS)) & IndexMask;
+  float64x2_t e0 = vld1q_f64 (&__v_log10_data.table[i0].invc);
+  float64x2_t e1 = vld1q_f64 (&__v_log10_data.table[i1].invc);
+  e.invc = vuzp1q_f64 (e0, e1);
+  e.log10c = vuzp2q_f64 (e0, e1);
+  return e;
+}
+
+static float64x2_t VPCS_ATTR NOINLINE
+special_case (float64x2_t x, float64x2_t y, float64x2_t hi, float64x2_t r2,
+	      uint32x2_t special)
+{
+  return v_call_f64 (log10, x, vfmaq_f64 (hi, r2, y), vmovl_u32 (special));
+}
+
+/* Fast implementation of double-precision vector log10
+   is a slight modification of double-precision vector log.
+   Max ULP error: < 2.5 ulp (nearest rounding).
+   Maximum measured at 2.46 ulp for x in [0.96, 0.97]
+   _ZGVnN2v_log10(0x1.13192407fcb46p+0) got 0x1.fff6be3cae4bbp-6
+				       want 0x1.fff6be3cae4b9p-6.  */
+float64x2_t VPCS_ATTR V_NAME_D1 (log10) (float64x2_t x)
+{
+  const struct data *d = ptr_barrier (&data);
+  uint64x2_t ix = vreinterpretq_u64_f64 (x);
+  uint32x2_t special = vcge_u32 (vsubhn_u64 (ix, d->min_norm),
+				 vget_low_u32 (d->special_bound));
+
+  /* x = 2^k z; where z is in range [OFF,2*OFF) and exact.
+     The range is split into N subintervals.
+     The ith subinterval contains z and c is near its center.  */
+  uint64x2_t tmp = vsubq_u64 (ix, Off);
+  int64x2_t k = vshrq_n_s64 (vreinterpretq_s64_u64 (tmp), 52);
+  uint64x2_t iz = vsubq_u64 (ix, vandq_u64 (tmp, d->sign_exp_mask));
+  float64x2_t z = vreinterpretq_f64_u64 (iz);
+
+  struct entry e = lookup (tmp);
+
+  /* log10(x) = log1p(z/c-1)/log(10) + log10(c) + k*log10(2).  */
+  float64x2_t r = vfmaq_f64 (v_f64 (-1.0), z, e.invc);
+  float64x2_t kd = vcvtq_f64_s64 (k);
+
+  /* hi = r / log(10) + log10(c) + k*log10(2).
+     Constants in v_log10_data.c are computed (in extended precision) as
+     e.log10c := e.logc * invln10.  */
+  float64x2_t w = vfmaq_f64 (e.log10c, r, d->invln10);
+
+  /* y = log10(1+r) + n * log10(2).  */
+  float64x2_t hi = vfmaq_f64 (w, kd, d->log10_2);
+
+  /* y = r2*(A0 + r*A1 + r2*(A2 + r*A3 + r2*A4)) + hi.  */
+  float64x2_t r2 = vmulq_f64 (r, r);
+  float64x2_t y = v_pw_horner_4_f64 (r, r2, d->poly);
+
+  if (__glibc_unlikely (v_any_u32h (special)))
+    return special_case (x, y, hi, r2, special);
+  return vfmaq_f64 (hi, r2, y);
+}
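
For reference, here is what the vector code above computes, written as a
scalar sketch. This is illustrative only and not part of the patch: the
helper name and the 'tab' and 'poly' parameters mirror the fields of
__v_log10_data but are stand-ins for the real symbols.

  #include <stdint.h>
  #include <string.h>

  struct tab_entry { double invc, log10c; };

  static double
  log10_sketch (double x, const struct tab_entry *tab, const double *poly,
                double invln10, double log10_2)
  {
    uint64_t ix;
    memcpy (&ix, &x, sizeof ix);
    uint64_t tmp = ix - 0x3fe6900900000000ULL;    /* ix - Off.  */
    int64_t k = (int64_t) tmp >> 52;              /* x = 2^k * z.  */
    uint64_t iz = ix - (tmp & (0xfffULL << 52));  /* z in [Off, 2*Off).  */
    double z;
    memcpy (&z, &iz, sizeof z);
    uint64_t i = (tmp >> (52 - 7)) & 127;         /* 7 = V_LOG10_TABLE_BITS.  */
    double r = z * tab[i].invc - 1.0;             /* r = z/c - 1, |r| small.  */
    double hi = r * invln10 + tab[i].log10c + (double) k * log10_2;
    double r2 = r * r;
    double p = poly[0] + r * poly[1]
               + r2 * (poly[2] + r * poly[3] + r2 * poly[4]);
    return r2 * p + hi;
  }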
diff --git a/sysdeps/aarch64/fpu/log10_sve.c b/sysdeps/aarch64/fpu/log10_sve.c
new file mode 100644
index 0000000000..91060ab4a2
--- /dev/null
+++ b/sysdeps/aarch64/fpu/log10_sve.c
@@ -0,0 +1,76 @@
+/* Double-precision vector (SVE) log10 function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "sv_math.h"
+#include "poly_sve_f64.h"
+
+#define Min 0x0010000000000000
+#define Max 0x7ff0000000000000
+#define Thres 0x7fe0000000000000 /* Max - Min.  */
+#define Off 0x3fe6900900000000
+#define N (1 << V_LOG10_TABLE_BITS)
+
+static svfloat64_t NOINLINE
+special_case (svfloat64_t x, svfloat64_t y, svbool_t special)
+{
+  return sv_call_f64 (log10, x, y, special);
+}
+
+/* SVE log10 algorithm.
+   Maximum measured error is 2.46 ulps.
+   SV_NAME_D1 (log10)(0x1.131956cd4b627p+0) got 0x1.fffbdf6eaa669p-6
+					   want 0x1.fffbdf6eaa667p-6.  */
+svfloat64_t SV_NAME_D1 (log10) (svfloat64_t x, const svbool_t pg)
+{
+  svuint64_t ix = svreinterpret_u64 (x);
+  svbool_t special = svcmpge (pg, svsub_x (pg, ix, Min), Thres);
+
+  /* x = 2^k z; where z is in range [Off,2*Off) and exact.
+     The range is split into N subintervals.
+     The ith subinterval contains z and c is near its center.  */
+  svuint64_t tmp = svsub_x (pg, ix, Off);
+  svuint64_t i = svlsr_x (pg, tmp, 51 - V_LOG10_TABLE_BITS);
+  i = svand_x (pg, i, (N - 1) << 1);
+  svfloat64_t k = svcvt_f64_x (pg, svasr_x (pg, svreinterpret_s64 (tmp), 52));
+  svfloat64_t z = svreinterpret_f64 (
+      svsub_x (pg, ix, svand_x (pg, tmp, 0xfffULL << 52)));
+
+  /* log(x) = k*log(2) + log(c) + log(z/c).  */
+  svfloat64_t invc = svld1_gather_index (pg, &__v_log10_data.table[0].invc, i);
+  svfloat64_t logc
+      = svld1_gather_index (pg, &__v_log10_data.table[0].log10c, i);
+
+  /* We approximate log(z/c) with a polynomial P(x) ~= log(x + 1):
+     r = z/c - 1 (we look up precomputed 1/c)
+     log(z/c) ~= P(r).  */
+  svfloat64_t r = svmad_x (pg, invc, z, -1.0);
+
+  /* hi = log(c) + k*log(2).  */
+  svfloat64_t w = svmla_x (pg, logc, r, __v_log10_data.invln10);
+  svfloat64_t hi = svmla_x (pg, w, k, __v_log10_data.log10_2);
+
+  /* y = r2*(A0 + r*A1 + r2*(A2 + r*A3 + r2*A4)) + hi.  */
+  svfloat64_t r2 = svmul_x (pg, r, r);
+  svfloat64_t y = sv_pw_horner_4_f64_x (pg, r, r2, __v_log10_data.poly);
+
+  if (__glibc_unlikely (svptest_any (pg, special)))
+    return special_case (x, svmla_x (svnot_z (pg, special), hi, r2, y),
+			 special);
+  return svmla_x (pg, hi, r2, y);
+}
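
The special-case return above leans on a predicate trick that is easy to
miss: the final MLA runs under the complement of 'special', so the lanes
handed to the scalar fallback never depend on the vector result. Restated
as a standalone helper (illustrative; sv_call_f64 and the types come from
the internal sv_math.h):

  static svfloat64_t
  finish_with_special (svbool_t pg, svbool_t special, svfloat64_t x,
                       svfloat64_t hi, svfloat64_t r2, svfloat64_t y)
  {
    /* pg & ~special: the lanes for which the vector result is valid.  */
    svbool_t ok = svnot_z (pg, special);
    /* _x form: inactive (special) lanes are left undefined here, which is
       fine because sv_call_f64 recomputes exactly those lanes with scalar
       log10.  */
    svfloat64_t vy = svmla_x (ok, hi, r2, y);
    return sv_call_f64 (log10, x, vy, special);
  }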
diff --git a/sysdeps/aarch64/fpu/log10f_advsimd.c b/sysdeps/aarch64/fpu/log10f_advsimd.c
new file mode 100644
index 0000000000..ba02060bbe
--- /dev/null
+++ b/sysdeps/aarch64/fpu/log10f_advsimd.c
@@ -0,0 +1,82 @@
+/* Single-precision vector (AdvSIMD) log10 function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "v_math.h"
+#include "poly_advsimd_f32.h"
+
+static const struct data
+{
+  uint32x4_t min_norm;
+  uint16x8_t special_bound;
+  float32x4_t poly[8];
+  float32x4_t inv_ln10, ln2;
+  uint32x4_t off, mantissa_mask;
+} data = {
+  /* Use order 9 for log10(1+x), i.e. order 8 for log10(1+x)/x, with x in
+      [-1/3, 1/3] (offset=2/3). Max. relative error: 0x1.068ee468p-25.  */
+  .poly = { V4 (-0x1.bcb79cp-3f), V4 (0x1.2879c8p-3f), V4 (-0x1.bcd472p-4f),
+	    V4 (0x1.6408f8p-4f), V4 (-0x1.246f8p-4f), V4 (0x1.f0e514p-5f),
+	    V4 (-0x1.0fc92cp-4f), V4 (0x1.f5f76ap-5f) },
+  .ln2 = V4 (0x1.62e43p-1f),
+  .inv_ln10 = V4 (0x1.bcb7b2p-2f),
+  .min_norm = V4 (0x00800000),
+  .special_bound = V8 (0x7f00), /* asuint32(inf) - min_norm, top 16 bits.  */
+  .off = V4 (0x3f2aaaab),	/* 0.666667.  */
+  .mantissa_mask = V4 (0x007fffff),
+};
+
+static float32x4_t VPCS_ATTR NOINLINE
+special_case (float32x4_t x, float32x4_t y, float32x4_t p, float32x4_t r2,
+	      uint16x4_t cmp)
+{
+  /* Fall back to scalar code.  */
+  return v_call_f32 (log10f, x, vfmaq_f32 (y, p, r2), vmovl_u16 (cmp));
+}
+
+/* Fast implementation of AdvSIMD log10f,
+   uses a similar approach as AdvSIMD logf with the same offset (i.e., 2/3) and
+   an order 9 polynomial.
+   Maximum error: 3.305 ulps (nearest rounding).
+   _ZGVnN4v_log10f(0x1.555c16p+0) got 0x1.ffe2fap-4
+				 want 0x1.ffe2f4p-4.  */
+float32x4_t VPCS_ATTR V_NAME_F1 (log10) (float32x4_t x)
+{
+  const struct data *d = ptr_barrier (&data);
+  uint32x4_t u = vreinterpretq_u32_f32 (x);
+  uint16x4_t special = vcge_u16 (vsubhn_u32 (u, d->min_norm),
+				 vget_low_u16 (d->special_bound));
+
+  /* x = 2^n * (1+r), where 2/3 < 1+r < 4/3.  */
+  u = vsubq_u32 (u, d->off);
+  float32x4_t n = vcvtq_f32_s32 (
+      vshrq_n_s32 (vreinterpretq_s32_u32 (u), 23)); /* signextend.  */
+  u = vaddq_u32 (vandq_u32 (u, d->mantissa_mask), d->off);
+  float32x4_t r = vsubq_f32 (vreinterpretq_f32_u32 (u), v_f32 (1.0f));
+
+  /* y = log10(1+r) + n * log10(2).  */
+  float32x4_t r2 = vmulq_f32 (r, r);
+  float32x4_t poly = v_pw_horner_7_f32 (r, r2, d->poly);
+  /* y = Log10(2) * n + r * InvLn(10).  */
+  float32x4_t y = vfmaq_f32 (r, d->ln2, n);
+  y = vmulq_f32 (y, d->inv_ln10);
+
+  if (__glibc_unlikely (v_any_u16h (special)))
+    return special_case (x, y, poly, r2, special);
+  return vfmaq_f32 (y, poly, r2);
+}
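
The half-vector comparison above (one of the v1 review changes) works
because the bound has zero low bits, so only the top halves of the 32-bit
differences need comparing. A scalar sketch of the same check, using the
constants from the data struct above (illustrative, not part of the patch):

  #include <stdint.h>
  #include <string.h>

  /* asuint32(inf) - min_norm = 0x7f000000 has zero low bits, so comparing
     the top 16 bits of (asuint (x) - min_norm) against 0x7f00 is exactly
     the full 32-bit compare; vsubhn_u32 produces those top 16 bits.  */
  static int
  log10f_is_special (float x)
  {
    uint32_t u;
    memcpy (&u, &x, sizeof u);
    uint16_t top = (uint16_t) ((u - 0x00800000u) >> 16); /* vsubhn_u32.  */
    return top >= 0x7f00;                                /* vcge_u16.  */
  }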
diff --git a/sysdeps/aarch64/fpu/log10f_sve.c b/sysdeps/aarch64/fpu/log10f_sve.c
new file mode 100644
index 0000000000..8729b17ef9
--- /dev/null
+++ b/sysdeps/aarch64/fpu/log10f_sve.c
@@ -0,0 +1,94 @@
+/* Single-precision vector (SVE) log10 function
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "sv_math.h"
+
+static const struct data
+{
+  float poly_0246[4];
+  float poly_1357[4];
+  float ln2, inv_ln10;
+} data = {
+  .poly_1357 = {
    /* Coefficients copied from the AdvSIMD routine, then rearranged so that
       coefficients 1, 3, 5 and 7 can be loaded as a single quad-word and used
       with the _lane variant of the MLA intrinsic.  */
+    0x1.2879c8p-3f, 0x1.6408f8p-4f, 0x1.f0e514p-5f, 0x1.f5f76ap-5f
+  },
+  .poly_0246 = { -0x1.bcb79cp-3f, -0x1.bcd472p-4f, -0x1.246f8p-4f,
+		 -0x1.0fc92cp-4f },
+  .ln2 = 0x1.62e43p-1f,
+  .inv_ln10 = 0x1.bcb7b2p-2f,
+};
+
+#define Min 0x00800000
+#define Max 0x7f800000
+#define Thres 0x7f000000  /* Max - Min.  */
+#define Offset 0x3f2aaaab /* 0.666667.  */
+#define MantissaMask 0x007fffff
+
+static svfloat32_t NOINLINE
+special_case (svfloat32_t x, svfloat32_t y, svbool_t special)
+{
+  return sv_call_f32 (log10f, x, y, special);
+}
+
+/* Optimised implementation of SVE log10f using the same algorithm and
+   polynomial as AdvSIMD log10f.
+   Maximum error is 3.31ulps:
+   SV_NAME_F1 (log10)(0x1.555c16p+0) got 0x1.ffe2fap-4
+				    want 0x1.ffe2f4p-4.  */
+svfloat32_t SV_NAME_F1 (log10) (svfloat32_t x, const svbool_t pg)
+{
+  const struct data *d = ptr_barrier (&data);
+  svuint32_t ix = svreinterpret_u32 (x);
+  svbool_t special = svcmpge (pg, svsub_x (pg, ix, Min), Thres);
+
+  /* x = 2^n * (1+r), where 2/3 < 1+r < 4/3.  */
+  ix = svsub_x (pg, ix, Offset);
+  svfloat32_t n = svcvt_f32_x (
+      pg, svasr_x (pg, svreinterpret_s32 (ix), 23)); /* signextend.  */
+  ix = svand_x (pg, ix, MantissaMask);
+  ix = svadd_x (pg, ix, Offset);
+  svfloat32_t r = svsub_x (pg, svreinterpret_f32 (ix), 1.0f);
+
+  /* y = log10(1+r) + n*log10(2)
+     log10(1+r) ~ r * InvLn(10) + P(r)
+     where P(r) is a polynomial. Use order 9 for log10(1+x), i.e. order 8 for
+     log10(1+x)/x, with x in [-1/3, 1/3] (offset=2/3).  */
+  svfloat32_t r2 = svmul_x (pg, r, r);
+  svfloat32_t r4 = svmul_x (pg, r2, r2);
+  svfloat32_t p_1357 = svld1rq (svptrue_b32 (), &d->poly_1357[0]);
+  svfloat32_t q_01 = svmla_lane (sv_f32 (d->poly_0246[0]), r, p_1357, 0);
+  svfloat32_t q_23 = svmla_lane (sv_f32 (d->poly_0246[1]), r, p_1357, 1);
+  svfloat32_t q_45 = svmla_lane (sv_f32 (d->poly_0246[2]), r, p_1357, 2);
+  svfloat32_t q_67 = svmla_lane (sv_f32 (d->poly_0246[3]), r, p_1357, 3);
+  svfloat32_t q_47 = svmla_x (pg, q_45, r2, q_67);
+  svfloat32_t q_03 = svmla_x (pg, q_01, r2, q_23);
+  svfloat32_t y = svmla_x (pg, q_03, r4, q_47);
+
+  /* Using hi = Log10(2)*n + r*InvLn(10) is faster but less accurate.  */
+  svfloat32_t hi = svmla_x (pg, r, n, d->ln2);
+  hi = svmul_x (pg, hi, d->inv_ln10);
+
+  if (__glibc_unlikely (svptest_any (pg, special)))
+    return special_case (x, svmla_x (svnot_z (pg, special), hi, r2, y),
+			 special);
+  return svmla_x (pg, hi, r2, y);
+}
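
The _lane MLAs above evaluate the degree-8 polynomial pairwise rather than
by straight Horner, which shortens the dependency chain. In scalar form,
with c[0..7] the coefficients in natural order (a sketch, not the patch's
code; the struct above stores them split into even/odd quads):

  static float
  poly8_pairwise (float r, const float *c)
  {
    float r2 = r * r, r4 = r2 * r2;
    /* Fuse adjacent coefficient pairs...  */
    float q01 = c[0] + r * c[1];
    float q23 = c[2] + r * c[3];
    float q45 = c[4] + r * c[5];
    float q67 = c[6] + r * c[7];
    /* ...then combine the pairs with powers of r2.  */
    return (q01 + r2 * q23) + r4 * (q45 + r2 * q67);
  }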
diff --git a/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c b/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
index d30dcd6f95..8d05498ec9 100644
--- a/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
@@ -27,6 +27,7 @@ VPCS_VECTOR_WRAPPER (cos_advsimd, _ZGVnN2v_cos)
 VPCS_VECTOR_WRAPPER (exp_advsimd, _ZGVnN2v_exp)
 VPCS_VECTOR_WRAPPER (exp2_advsimd, _ZGVnN2v_exp2)
 VPCS_VECTOR_WRAPPER (log_advsimd, _ZGVnN2v_log)
+VPCS_VECTOR_WRAPPER (log10_advsimd, _ZGVnN2v_log10)
 VPCS_VECTOR_WRAPPER (log2_advsimd, _ZGVnN2v_log2)
 VPCS_VECTOR_WRAPPER (sin_advsimd, _ZGVnN2v_sin)
 VPCS_VECTOR_WRAPPER (tan_advsimd, _ZGVnN2v_tan)
diff --git a/sysdeps/aarch64/fpu/test-double-sve-wrappers.c b/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
index 22a8479100..b65bc6f1e6 100644
--- a/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
@@ -36,6 +36,7 @@ SVE_VECTOR_WRAPPER (cos_sve, _ZGVsMxv_cos)
 SVE_VECTOR_WRAPPER (exp_sve, _ZGVsMxv_exp)
 SVE_VECTOR_WRAPPER (exp2_sve, _ZGVsMxv_exp2)
 SVE_VECTOR_WRAPPER (log_sve, _ZGVsMxv_log)
+SVE_VECTOR_WRAPPER (log10_sve, _ZGVsMxv_log10)
 SVE_VECTOR_WRAPPER (log2_sve, _ZGVsMxv_log2)
 SVE_VECTOR_WRAPPER (sin_sve, _ZGVsMxv_sin)
 SVE_VECTOR_WRAPPER (tan_sve, _ZGVsMxv_tan)
diff --git a/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c b/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
index e8f7f47c67..6ced0d4488 100644
--- a/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
@@ -27,6 +27,7 @@ VPCS_VECTOR_WRAPPER (cosf_advsimd, _ZGVnN4v_cosf)
 VPCS_VECTOR_WRAPPER (expf_advsimd, _ZGVnN4v_expf)
 VPCS_VECTOR_WRAPPER (exp2f_advsimd, _ZGVnN4v_exp2f)
 VPCS_VECTOR_WRAPPER (logf_advsimd, _ZGVnN4v_logf)
+VPCS_VECTOR_WRAPPER (log10f_advsimd, _ZGVnN4v_log10f)
 VPCS_VECTOR_WRAPPER (log2f_advsimd, _ZGVnN4v_log2f)
 VPCS_VECTOR_WRAPPER (sinf_advsimd, _ZGVnN4v_sinf)
 VPCS_VECTOR_WRAPPER (tanf_advsimd, _ZGVnN4v_tanf)
diff --git a/sysdeps/aarch64/fpu/test-float-sve-wrappers.c b/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
index f5e9584265..2ed8d0659a 100644
--- a/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
@@ -36,6 +36,7 @@ SVE_VECTOR_WRAPPER (cosf_sve, _ZGVsMxv_cosf)
 SVE_VECTOR_WRAPPER (expf_sve, _ZGVsMxv_expf)
 SVE_VECTOR_WRAPPER (exp2f_sve, _ZGVsMxv_exp2f)
 SVE_VECTOR_WRAPPER (logf_sve, _ZGVsMxv_logf)
+SVE_VECTOR_WRAPPER (log10f_sve, _ZGVsMxv_log10f)
 SVE_VECTOR_WRAPPER (log2f_sve, _ZGVsMxv_log2f)
 SVE_VECTOR_WRAPPER (sinf_sve, _ZGVsMxv_sinf)
 SVE_VECTOR_WRAPPER (tanf_sve, _ZGVsMxv_tanf)
diff --git a/sysdeps/aarch64/fpu/v_log10_data.c b/sysdeps/aarch64/fpu/v_log10_data.c
new file mode 100644
index 0000000000..564be8eebe
--- /dev/null
+++ b/sysdeps/aarch64/fpu/v_log10_data.c
@@ -0,0 +1,175 @@
+/* Lookup table for double-precision log10(x) vector function.
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "vecmath_config.h"
+
+const struct v_log10_data __v_log10_data = {
+  /* Computed from log's coefficients divided by log(10), then rounded to double
+     precision.  */
+  .poly = { -0x1.bcb7b1526e506p-3, 0x1.287a7636be1d1p-3, -0x1.bcb7b158af938p-4,
+	    0x1.63c78734e6d07p-4, -0x1.287461742fee4p-4 },
+  .invln10 = 0x1.bcb7b1526e50ep-2,
+  .log10_2 = 0x1.34413509f79ffp-2,
+  /* Algorithm:
+
+	x = 2^k z
+	log10(x) = k log10(2) + log10(c) + poly(z/c - 1) / log(10)
+
+     where z is in [a;2a) which is split into N subintervals (a=0x1.69009p-1,
+     N=128) and log(c) and 1/c for the ith subinterval comes from lookup
+     tables:
+
+	table[i].invc = 1/c
+	table[i].log10c = (double)log10(c)
+
+     where c is near the center of the subinterval and is chosen by trying
+     several floating point invc candidates around 1/center and selecting one
+     for which the error in (double)log(c) is minimized (< 0x1p-74), except the
+     subinterval that contains 1 and the previous one got tweaked to avoid
+     cancellation. NB: invc should be optimized to minimize error in
+     (double)log10(c) instead.  */
+  .table = { { 0x1.6a133d0dec120p+0, -0x1.345825f221684p-3 },
+	     { 0x1.6815f2f3e42edp+0, -0x1.2f71a1f0c554ep-3 },
+	     { 0x1.661e39be1ac9ep+0, -0x1.2a91fdb30b1f4p-3 },
+	     { 0x1.642bfa30ac371p+0, -0x1.25b9260981a04p-3 },
+	     { 0x1.623f1d916f323p+0, -0x1.20e7081762193p-3 },
+	     { 0x1.60578da220f65p+0, -0x1.1c1b914aeefacp-3 },
+	     { 0x1.5e75349dea571p+0, -0x1.1756af5de404dp-3 },
+	     { 0x1.5c97fd387a75ap+0, -0x1.12985059c90bfp-3 },
+	     { 0x1.5abfd2981f200p+0, -0x1.0de0628f63df4p-3 },
+	     { 0x1.58eca051dc99cp+0, -0x1.092ed492e08eep-3 },
+	     { 0x1.571e526d9df12p+0, -0x1.0483954caf1dfp-3 },
+	     { 0x1.5554d555b3fcbp+0, -0x1.ffbd27a9adbcp-4 },
+	     { 0x1.539015e2a20cdp+0, -0x1.f67f7f2e3d1ap-4 },
+	     { 0x1.51d0014ee0164p+0, -0x1.ed4e1071ceebep-4 },
+	     { 0x1.50148538cd9eep+0, -0x1.e428bb47413c4p-4 },
+	     { 0x1.4e5d8f9f698a1p+0, -0x1.db0f6003028d6p-4 },
+	     { 0x1.4cab0edca66bep+0, -0x1.d201df6749831p-4 },
+	     { 0x1.4afcf1a9db874p+0, -0x1.c9001ac5c9672p-4 },
+	     { 0x1.495327136e16fp+0, -0x1.c009f3c78c79p-4 },
+	     { 0x1.47ad9e84af28fp+0, -0x1.b71f4cb642e53p-4 },
+	     { 0x1.460c47b39ae15p+0, -0x1.ae400818526b2p-4 },
+	     { 0x1.446f12b278001p+0, -0x1.a56c091954f87p-4 },
+	     { 0x1.42d5efdd720ecp+0, -0x1.9ca3332f096eep-4 },
+	     { 0x1.4140cfe001a0fp+0, -0x1.93e56a3f23e55p-4 },
+	     { 0x1.3fafa3b421f69p+0, -0x1.8b3292a3903bp-4 },
+	     { 0x1.3e225c9c8ece5p+0, -0x1.828a9112d9618p-4 },
+	     { 0x1.3c98ec29a211ap+0, -0x1.79ed4ac35f5acp-4 },
+	     { 0x1.3b13442a413fep+0, -0x1.715aa51ed28c4p-4 },
+	     { 0x1.399156baa3c54p+0, -0x1.68d2861c999e9p-4 },
+	     { 0x1.38131639b4cdbp+0, -0x1.6054d40ded21p-4 },
+	     { 0x1.36987540fbf53p+0, -0x1.57e17576bc9a2p-4 },
+	     { 0x1.352166b648f61p+0, -0x1.4f7851798bb0bp-4 },
+	     { 0x1.33adddb3eb575p+0, -0x1.47194f5690ae3p-4 },
+	     { 0x1.323dcd99fc1d3p+0, -0x1.3ec456d58ec47p-4 },
+	     { 0x1.30d129fefc7d2p+0, -0x1.36794ff3e5f55p-4 },
+	     { 0x1.2f67e6b72fe7dp+0, -0x1.2e382315725e4p-4 },
+	     { 0x1.2e01f7cf8b187p+0, -0x1.2600b8ed82e91p-4 },
+	     { 0x1.2c9f518ddc86ep+0, -0x1.1dd2fa85efc12p-4 },
+	     { 0x1.2b3fe86e5f413p+0, -0x1.15aed136e3961p-4 },
+	     { 0x1.29e3b1211b25cp+0, -0x1.0d94269d1a30dp-4 },
+	     { 0x1.288aa08b373cfp+0, -0x1.0582e4a7659f5p-4 },
+	     { 0x1.2734abcaa8467p+0, -0x1.faf5eb655742dp-5 },
+	     { 0x1.25e1c82459b81p+0, -0x1.eaf888487e8eep-5 },
+	     { 0x1.2491eb1ad59c5p+0, -0x1.db0d75ef25a82p-5 },
+	     { 0x1.23450a54048b5p+0, -0x1.cb348a49e6431p-5 },
+	     { 0x1.21fb1bb09e578p+0, -0x1.bb6d9c69acdd8p-5 },
+	     { 0x1.20b415346d8f7p+0, -0x1.abb88368aa7ap-5 },
+	     { 0x1.1f6fed179a1acp+0, -0x1.9c1517476af14p-5 },
+	     { 0x1.1e2e99b93c7b3p+0, -0x1.8c833051bfa4dp-5 },
+	     { 0x1.1cf011a7a882ap+0, -0x1.7d02a78e7fb31p-5 },
+	     { 0x1.1bb44b97dba5ap+0, -0x1.6d93565e97c5fp-5 },
+	     { 0x1.1a7b3e66cdd4fp+0, -0x1.5e351695db0c5p-5 },
+	     { 0x1.1944e11dc56cdp+0, -0x1.4ee7c2ba67adcp-5 },
+	     { 0x1.18112aebb1a6ep+0, -0x1.3fab35ba16c01p-5 },
+	     { 0x1.16e013231b7e9p+0, -0x1.307f4ad854bc9p-5 },
+	     { 0x1.15b1913f156cfp+0, -0x1.2163ddf4f988cp-5 },
+	     { 0x1.14859cdedde13p+0, -0x1.1258cb5d19e22p-5 },
+	     { 0x1.135c2dc68cfa4p+0, -0x1.035defdba3188p-5 },
+	     { 0x1.12353bdb01684p+0, -0x1.e8e651191bce4p-6 },
+	     { 0x1.1110bf25b85b4p+0, -0x1.cb30a62be444cp-6 },
+	     { 0x1.0feeafd2f8577p+0, -0x1.ad9a9b3043823p-6 },
+	     { 0x1.0ecf062c51c3bp+0, -0x1.9023ecda1ccdep-6 },
+	     { 0x1.0db1baa076c8bp+0, -0x1.72cc592bd82dp-6 },
+	     { 0x1.0c96c5bb3048ep+0, -0x1.55939eb1f9c6ep-6 },
+	     { 0x1.0b7e20263e070p+0, -0x1.38797ca6cc5ap-6 },
+	     { 0x1.0a67c2acd0ce3p+0, -0x1.1b7db35c2c072p-6 },
+	     { 0x1.0953a6391e982p+0, -0x1.fd400812ee9a2p-7 },
+	     { 0x1.0841c3caea380p+0, -0x1.c3c05fb4620f1p-7 },
+	     { 0x1.07321489b13eap+0, -0x1.8a7bf3c40e2e3p-7 },
+	     { 0x1.062491aee9904p+0, -0x1.517249c15a75cp-7 },
+	     { 0x1.05193497a7cc5p+0, -0x1.18a2ea5330c91p-7 },
+	     { 0x1.040ff6b5f5e9fp+0, -0x1.c01abc8cdc4e2p-8 },
+	     { 0x1.0308d19aa6127p+0, -0x1.4f6261750dec9p-8 },
+	     { 0x1.0203beedb0c67p+0, -0x1.be37b6612afa7p-9 },
+	     { 0x1.010037d38bcc2p+0, -0x1.bc3a8398ac26p-10 },
+	     { 1.0, 0.0 },
+	     { 0x1.fc06d493cca10p-1, 0x1.bb796219f30a5p-9 },
+	     { 0x1.f81e6ac3b918fp-1, 0x1.b984fdcba61cep-8 },
+	     { 0x1.f44546ef18996p-1, 0x1.49cf12adf8e8cp-7 },
+	     { 0x1.f07b10382c84bp-1, 0x1.b6075b5217083p-7 },
+	     { 0x1.ecbf7070e59d4p-1, 0x1.10b7466fc30ddp-6 },
+	     { 0x1.e91213f715939p-1, 0x1.4603e4db6a3a1p-6 },
+	     { 0x1.e572a9a75f7b7p-1, 0x1.7aeb10e99e105p-6 },
+	     { 0x1.e1e0e2c530207p-1, 0x1.af6e49b0f0e36p-6 },
+	     { 0x1.de5c72d8a8be3p-1, 0x1.e38f064f41179p-6 },
+	     { 0x1.dae50fa5658ccp-1, 0x1.0ba75abbb7623p-5 },
+	     { 0x1.d77a71145a2dap-1, 0x1.25575ee2dba86p-5 },
+	     { 0x1.d41c51166623ep-1, 0x1.3ed83f477f946p-5 },
+	     { 0x1.d0ca6ba0bb29fp-1, 0x1.582aa79af60efp-5 },
+	     { 0x1.cd847e8e59681p-1, 0x1.714f400fa83aep-5 },
+	     { 0x1.ca4a499693e00p-1, 0x1.8a46ad3901cb9p-5 },
+	     { 0x1.c71b8e399e821p-1, 0x1.a311903b6b87p-5 },
+	     { 0x1.c3f80faf19077p-1, 0x1.bbb086f216911p-5 },
+	     { 0x1.c0df92dc2b0ecp-1, 0x1.d4242bdda648ep-5 },
+	     { 0x1.bdd1de3cbb542p-1, 0x1.ec6d167c2af1p-5 },
+	     { 0x1.baceb9e1007a3p-1, 0x1.0245ed8221426p-4 },
+	     { 0x1.b7d5ef543e55ep-1, 0x1.0e40856c74f64p-4 },
+	     { 0x1.b4e749977d953p-1, 0x1.1a269a31120fep-4 },
+	     { 0x1.b20295155478ep-1, 0x1.25f8718fc076cp-4 },
+	     { 0x1.af279f8e82be2p-1, 0x1.31b64ffc95bfp-4 },
+	     { 0x1.ac5638197fdf3p-1, 0x1.3d60787ca5063p-4 },
+	     { 0x1.a98e2f102e087p-1, 0x1.48f72ccd187fdp-4 },
+	     { 0x1.a6cf5606d05c1p-1, 0x1.547aad6602f1cp-4 },
+	     { 0x1.a4197fc04d746p-1, 0x1.5feb3989d3acbp-4 },
+	     { 0x1.a16c80293dc01p-1, 0x1.6b490f3978c79p-4 },
+	     { 0x1.9ec82c4dc5bc9p-1, 0x1.76946b3f5e703p-4 },
+	     { 0x1.9c2c5a491f534p-1, 0x1.81cd895717c83p-4 },
+	     { 0x1.9998e1480b618p-1, 0x1.8cf4a4055c30ep-4 },
+	     { 0x1.970d9977c6c2dp-1, 0x1.9809f4c48c0ebp-4 },
+	     { 0x1.948a5c023d212p-1, 0x1.a30db3f9899efp-4 },
+	     { 0x1.920f0303d6809p-1, 0x1.ae001905458fcp-4 },
+	     { 0x1.8f9b698a98b45p-1, 0x1.b8e15a2e3a2cdp-4 },
+	     { 0x1.8d2f6b81726f6p-1, 0x1.c3b1ace2b0996p-4 },
+	     { 0x1.8acae5bb55badp-1, 0x1.ce71456edfa62p-4 },
+	     { 0x1.886db5d9275b8p-1, 0x1.d9205759882c4p-4 },
+	     { 0x1.8617ba567c13cp-1, 0x1.e3bf1513af0dfp-4 },
+	     { 0x1.83c8d27487800p-1, 0x1.ee4db0412c414p-4 },
+	     { 0x1.8180de3c5dbe7p-1, 0x1.f8cc5998de3a5p-4 },
+	     { 0x1.7f3fbe71cdb71p-1, 0x1.019da085eaeb1p-3 },
+	     { 0x1.7d055498071c1p-1, 0x1.06cd4acdb4e3dp-3 },
+	     { 0x1.7ad182e54f65ap-1, 0x1.0bf542bef813fp-3 },
+	     { 0x1.78a42c3c90125p-1, 0x1.11159f14da262p-3 },
+	     { 0x1.767d342f76944p-1, 0x1.162e761c10d1cp-3 },
+	     { 0x1.745c7ef26b00ap-1, 0x1.1b3fddc60d43ep-3 },
+	     { 0x1.7241f15769d0fp-1, 0x1.2049ebac86aa6p-3 },
+	     { 0x1.702d70d396e41p-1, 0x1.254cb4fb7836ap-3 },
+	     { 0x1.6e1ee3700cd11p-1, 0x1.2a484e8d0d252p-3 },
+	     { 0x1.6c162fc9cbe02p-1, 0x1.2f3ccce1c860bp-3 } }
+};
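
Given the construction described in the comment above (log10c is log10 of
the c implied by invc), the table admits a cheap consistency check. A
hedged sketch, assuming it is linked against this file so that
__v_log10_data resolves:

  #include <assert.h>
  #include <math.h>
  #include "vecmath_config.h"

  /* Round-trip each entry: log10 (1/invc) should agree with log10c far
     more tightly than the 2^-40 tolerance used here.  */
  static void
  check_log10_table (void)
  {
    for (int i = 0; i < (1 << V_LOG10_TABLE_BITS); i++)
      {
        double c = 1.0 / __v_log10_data.table[i].invc;
        assert (fabs (log10 (c) - __v_log10_data.table[i].log10c) < 0x1p-40);
      }
  }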
diff --git a/sysdeps/aarch64/fpu/vecmath_config.h b/sysdeps/aarch64/fpu/vecmath_config.h
index 3aa6c280aa..2c8e243236 100644
--- a/sysdeps/aarch64/fpu/vecmath_config.h
+++ b/sysdeps/aarch64/fpu/vecmath_config.h
@@ -62,4 +62,15 @@ extern const struct v_log2_data
   } table[1 << V_LOG2_TABLE_BITS];
 } __v_log2_data attribute_hidden;
 
+#define V_LOG10_TABLE_BITS 7
+extern const struct v_log10_data
+{
+  double poly[5];
+  double invln10, log10_2;
+  struct
+  {
+    double invc, log10c;
+  } table[1 << V_LOG10_TABLE_BITS];
+} __v_log10_data attribute_hidden;
+
 #endif
diff --git a/sysdeps/aarch64/libm-test-ulps b/sysdeps/aarch64/libm-test-ulps
index 7a5af571e2..6641c7fa0b 100644
--- a/sysdeps/aarch64/libm-test-ulps
+++ b/sysdeps/aarch64/libm-test-ulps
@@ -1180,11 +1180,19 @@ double: 2
 float: 2
 ldouble: 2
 
+Function: "log10_advsimd":
+double: 1
+float: 2
+
 Function: "log10_downward":
 double: 2
 float: 3
 ldouble: 1
 
+Function: "log10_sve":
+double: 1
+float: 2
+
 Function: "log10_towardzero":
 double: 2
 float: 2
diff --git a/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist b/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
index 657edab7ae..f8776a6bea 100644
--- a/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
+++ b/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
@@ -15,13 +15,17 @@ GLIBC_2.38 _ZGVsMxv_logf F
 GLIBC_2.38 _ZGVsMxv_sin F
 GLIBC_2.38 _ZGVsMxv_sinf F
 GLIBC_2.39 _ZGVnN2v_exp2 F
+GLIBC_2.39 _ZGVnN2v_log10 F
 GLIBC_2.39 _ZGVnN2v_log2 F
 GLIBC_2.39 _ZGVnN2v_tan F
 GLIBC_2.39 _ZGVnN4v_exp2f F
+GLIBC_2.39 _ZGVnN4v_log10f F
 GLIBC_2.39 _ZGVnN4v_log2f F
 GLIBC_2.39 _ZGVnN4v_tanf F
 GLIBC_2.39 _ZGVsMxv_exp2 F
 GLIBC_2.39 _ZGVsMxv_exp2f F
+GLIBC_2.39 _ZGVsMxv_log10 F
+GLIBC_2.39 _ZGVsMxv_log10f F
 GLIBC_2.39 _ZGVsMxv_log2 F
 GLIBC_2.39 _ZGVsMxv_log2f F
 GLIBC_2.39 _ZGVsMxv_tan F
-- 
2.27.0



* [PATCH 5/5] aarch64: Add vector implementations of exp10 routines
  2023-10-05 16:10 [PATCH v2 1/5] aarch64: Add vector implementations of tan routines Joe Ramsay
                   ` (2 preceding siblings ...)
  2023-10-05 16:10 ` [PATCH 4/5] aarch64: Add vector implementations of log10 routines Joe Ramsay
@ 2023-10-05 16:10 ` Joe Ramsay
  2023-10-18 16:29 ` [PATCH v2 1/5] aarch64: Add vector implementations of tan routines Szabolcs Nagy
  4 siblings, 0 replies; 7+ messages in thread
From: Joe Ramsay @ 2023-10-05 16:10 UTC (permalink / raw)
  To: libc-alpha; +Cc: Joe Ramsay

Double-precision routines either reuse the exp table (AdvSIMD) or use the
SVE FEXPA instruction.
---
Changes from v1:
* Use overloaded intrinsics, reflecting recent updates in AOR
Thanks,
Joe
 sysdeps/aarch64/fpu/Makefile                  |   1 +
 sysdeps/aarch64/fpu/Versions                  |   4 +
 sysdeps/aarch64/fpu/bits/math-vector.h        |   4 +
 sysdeps/aarch64/fpu/exp10_advsimd.c           | 145 ++++++++++++++++++
 sysdeps/aarch64/fpu/exp10_sve.c               | 127 +++++++++++++++
 sysdeps/aarch64/fpu/exp10f_advsimd.c          | 140 +++++++++++++++++
 sysdeps/aarch64/fpu/exp10f_sve.c              |  91 +++++++++++
 .../fpu/test-double-advsimd-wrappers.c        |   1 +
 .../aarch64/fpu/test-double-sve-wrappers.c    |   1 +
 .../aarch64/fpu/test-float-advsimd-wrappers.c |   1 +
 sysdeps/aarch64/fpu/test-float-sve-wrappers.c |   1 +
 sysdeps/aarch64/libm-test-ulps                |   8 +
 .../unix/sysv/linux/aarch64/libmvec.abilist   |   4 +
 13 files changed, 528 insertions(+)
 create mode 100644 sysdeps/aarch64/fpu/exp10_advsimd.c
 create mode 100644 sysdeps/aarch64/fpu/exp10_sve.c
 create mode 100644 sysdeps/aarch64/fpu/exp10f_advsimd.c
 create mode 100644 sysdeps/aarch64/fpu/exp10f_sve.c

diff --git a/sysdeps/aarch64/fpu/Makefile b/sysdeps/aarch64/fpu/Makefile
index 0a047a150d..1f1ac2a2b8 100644
--- a/sysdeps/aarch64/fpu/Makefile
+++ b/sysdeps/aarch64/fpu/Makefile
@@ -1,5 +1,6 @@
 libmvec-supported-funcs = cos \
                           exp \
+                          exp10 \
                           exp2 \
                           log \
                           log10 \
diff --git a/sysdeps/aarch64/fpu/Versions b/sysdeps/aarch64/fpu/Versions
index 358efed5ee..eb5ad50017 100644
--- a/sysdeps/aarch64/fpu/Versions
+++ b/sysdeps/aarch64/fpu/Versions
@@ -18,6 +18,10 @@ libmvec {
     _ZGVsMxv_sinf;
   }
   GLIBC_2.39 {
+    _ZGVnN4v_exp10f;
+    _ZGVnN2v_exp10;
+    _ZGVsMxv_exp10f;
+    _ZGVsMxv_exp10;
     _ZGVnN4v_exp2f;
     _ZGVnN2v_exp2;
     _ZGVsMxv_exp2f;
diff --git a/sysdeps/aarch64/fpu/bits/math-vector.h b/sysdeps/aarch64/fpu/bits/math-vector.h
index 59f2efa6d7..06587ffa91 100644
--- a/sysdeps/aarch64/fpu/bits/math-vector.h
+++ b/sysdeps/aarch64/fpu/bits/math-vector.h
@@ -51,6 +51,7 @@ typedef __SVBool_t __sv_bool_t;
 
 __vpcs __f32x4_t _ZGVnN4v_cosf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_expf (__f32x4_t);
+__vpcs __f32x4_t _ZGVnN4v_exp10f (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_exp2f (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_logf (__f32x4_t);
 __vpcs __f32x4_t _ZGVnN4v_log10f (__f32x4_t);
@@ -60,6 +61,7 @@ __vpcs __f32x4_t _ZGVnN4v_tanf (__f32x4_t);
 
 __vpcs __f64x2_t _ZGVnN2v_cos (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_exp (__f64x2_t);
+__vpcs __f64x2_t _ZGVnN2v_exp10 (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_exp2 (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_log (__f64x2_t);
 __vpcs __f64x2_t _ZGVnN2v_log10 (__f64x2_t);
@@ -74,6 +76,7 @@ __vpcs __f64x2_t _ZGVnN2v_tan (__f64x2_t);
 
 __sv_f32_t _ZGVsMxv_cosf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_expf (__sv_f32_t, __sv_bool_t);
+__sv_f32_t _ZGVsMxv_exp10f (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_exp2f (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_logf (__sv_f32_t, __sv_bool_t);
 __sv_f32_t _ZGVsMxv_log10f (__sv_f32_t, __sv_bool_t);
@@ -83,6 +86,7 @@ __sv_f32_t _ZGVsMxv_tanf (__sv_f32_t, __sv_bool_t);
 
 __sv_f64_t _ZGVsMxv_cos (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_exp (__sv_f64_t, __sv_bool_t);
+__sv_f64_t _ZGVsMxv_exp10 (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_exp2 (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_log (__sv_f64_t, __sv_bool_t);
 __sv_f64_t _ZGVsMxv_log10 (__sv_f64_t, __sv_bool_t);
diff --git a/sysdeps/aarch64/fpu/exp10_advsimd.c b/sysdeps/aarch64/fpu/exp10_advsimd.c
new file mode 100644
index 0000000000..c584e2a270
--- /dev/null
+++ b/sysdeps/aarch64/fpu/exp10_advsimd.c
@@ -0,0 +1,145 @@
+/* Double-precision vector (AdvSIMD) exp10 function.
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "v_math.h"
+
+/* Value of |x| above which scale overflows without special treatment.  */
+#define SpecialBound 306.0 /* floor (log10 (2^1023)) - 1.  */
+/* Value of n above which scale overflows even with special treatment.  */
+#define ScaleBound 163840.0 /* 1280.0 * N.  */
+
+static const struct data
+{
+  float64x2_t poly[4];
+  float64x2_t log10_2, log2_10_hi, log2_10_lo, shift;
+#if !WANT_SIMD_EXCEPT
+  float64x2_t special_bound, scale_thresh;
+#endif
+} data = {
+  /* Coefficients generated using Remez algorithm.
+     rel error: 0x1.5ddf8f28p-54
+     abs error: 0x1.5ed266c8p-54 in [ -log10(2)/256, log10(2)/256 ]
+     maxerr: 1.14432 +0.5 ulp.  */
+  .poly = { V2 (0x1.26bb1bbb5524p1), V2 (0x1.53524c73cecdap1),
+	    V2 (0x1.047060efb781cp1), V2 (0x1.2bd76040f0d16p0) },
+  .log10_2 = V2 (0x1.a934f0979a371p8),	   /* N/log10(2).  */
+  .log2_10_hi = V2 (0x1.34413509f79ffp-9), /* log10(2)/N.  */
+  .log2_10_lo = V2 (-0x1.9dc1da994fd21p-66),
+  .shift = V2 (0x1.8p+52),
+#if !WANT_SIMD_EXCEPT
+  .scale_thresh = V2 (ScaleBound),
+  .special_bound = V2 (SpecialBound),
+#endif
+};
+
+#define N (1 << V_EXP_TABLE_BITS)
+#define IndexMask v_u64 (N - 1)
+
+#if WANT_SIMD_EXCEPT
+
+# define TinyBound v_u64 (0x2000000000000000) /* asuint64 (0x1p-511).  */
+# define BigBound v_u64 (0x4070000000000000)  /* asuint64 (0x1p8).  */
+# define Thres v_u64 (0x2070000000000000)     /* BigBound - TinyBound.  */
+
+static inline float64x2_t VPCS_ATTR
+special_case (float64x2_t x, float64x2_t y, uint64x2_t cmp)
+{
+  /* If fenv exceptions are to be triggered correctly, fall back to the scalar
+     routine for special lanes.  */
+  return v_call_f64 (exp10, x, y, cmp);
+}
+
+#else
+
+# define SpecialOffset v_u64 (0x6000000000000000) /* 0x1p513.  */
+/* SpecialBias1 - SpecialBias2 = asuint(1.0).  */
+# define SpecialBias1 v_u64 (0x7000000000000000)  /* 0x1p769.  */
+# define SpecialBias2 v_u64 (0x3010000000000000)  /* 0x1p-254.  */
+
+static float64x2_t VPCS_ATTR NOINLINE
+special_case (float64x2_t s, float64x2_t y, float64x2_t n,
+	      const struct data *d)
+{
+  /* 2^(n/N) may overflow, break it up into s1*s2.  */
+  uint64x2_t b = vandq_u64 (vcltzq_f64 (n), SpecialOffset);
+  float64x2_t s1 = vreinterpretq_f64_u64 (vsubq_u64 (SpecialBias1, b));
+  float64x2_t s2 = vreinterpretq_f64_u64 (
+      vaddq_u64 (vsubq_u64 (vreinterpretq_u64_f64 (s), SpecialBias2), b));
+  uint64x2_t cmp = vcagtq_f64 (n, d->scale_thresh);
+  float64x2_t r1 = vmulq_f64 (s1, s1);
+  float64x2_t r0 = vmulq_f64 (vfmaq_f64 (s2, y, s2), s1);
+  return vbslq_f64 (cmp, r1, r0);
+}
+
+#endif
+
+/* Fast vector implementation of exp10.
+   Maximum measured error is 1.64 ulp.
+   _ZGVnN2v_exp10(0x1.ccd1c9d82cc8cp+0) got 0x1.f8dab6d7fed0cp+5
+				       want 0x1.f8dab6d7fed0ap+5.  */
+float64x2_t VPCS_ATTR V_NAME_D1 (exp10) (float64x2_t x)
+{
+  const struct data *d = ptr_barrier (&data);
+  uint64x2_t cmp;
+#if WANT_SIMD_EXCEPT
+  /* If any lanes are special, mask them with 1 and retain a copy of x to allow
+     special_case to fix special lanes later. This is only necessary if fenv
+     exceptions are to be triggered correctly.  */
+  float64x2_t xm = x;
+  uint64x2_t iax = vreinterpretq_u64_f64 (vabsq_f64 (x));
+  cmp = vcgeq_u64 (vsubq_u64 (iax, TinyBound), Thres);
+  if (__glibc_unlikely (v_any_u64 (cmp)))
+    x = vbslq_f64 (cmp, v_f64 (1), x);
+#else
+  cmp = vcageq_f64 (x, d->special_bound);
+#endif
+
+  /* n = round(x/(log10(2)/N)).  */
+  float64x2_t z = vfmaq_f64 (d->shift, x, d->log10_2);
+  uint64x2_t u = vreinterpretq_u64_f64 (z);
+  float64x2_t n = vsubq_f64 (z, d->shift);
+
+  /* r = x - n*log10(2)/N.  */
+  float64x2_t r = x;
+  r = vfmsq_f64 (r, d->log2_10_hi, n);
+  r = vfmsq_f64 (r, d->log2_10_lo, n);
+
+  uint64x2_t e = vshlq_n_u64 (u, 52 - V_EXP_TABLE_BITS);
+  uint64x2_t i = vandq_u64 (u, IndexMask);
+
+  /* y = exp10(r) - 1 ~= C0 r + C1 r^2 + C2 r^3 + C3 r^4.  */
+  float64x2_t r2 = vmulq_f64 (r, r);
+  float64x2_t p = vfmaq_f64 (d->poly[0], r, d->poly[1]);
+  float64x2_t y = vfmaq_f64 (d->poly[2], r, d->poly[3]);
+  p = vfmaq_f64 (p, y, r2);
+  y = vmulq_f64 (r, p);
+
+  /* s = 2^(n/N).  */
+  u = v_lookup_u64 (__v_exp_data, i);
+  float64x2_t s = vreinterpretq_f64_u64 (vaddq_u64 (u, e));
+
+  if (__glibc_unlikely (v_any_u64 (cmp)))
+#if WANT_SIMD_EXCEPT
+    return special_case (xm, vfmaq_f64 (s, y, s), cmp);
+#else
+    return special_case (s, y, n, d);
+#endif
+
+  return vfmaq_f64 (s, y, s);
+}
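
The bias constants above encode an exact factorisation of the scale: the
special path rewrites s = s1 * s2 so that s1 * (s2 + s2 * y) can be
evaluated without intermediate overflow. A scalar sketch of the
!WANT_SIMD_EXCEPT branch (illustrative only, helper name invented):

  #include <math.h>
  #include <stdint.h>
  #include <string.h>

  /* For n >= 0: s1 = 0x1p769, s2 = s * 0x1p-769; for n < 0 the bias b
     flips the split to s1 = 0x1p-767, s2 = s * 0x1p767.  Either way
     s1 * s2 == s exactly; only the exponents move.  */
  static double
  exp10_special_sketch (double s, double y, double n)
  {
    uint64_t b = n < 0 ? 0x6000000000000000ULL : 0;
    uint64_t is1 = 0x7000000000000000ULL - b, is2;
    double s1, s2;
    memcpy (&s1, &is1, sizeof s1);
    memcpy (&is2, &s, sizeof is2);
    is2 = is2 - 0x3010000000000000ULL + b;
    memcpy (&s2, &is2, sizeof s2);
    if (fabs (n) > 163840.0)   /* ScaleBound: scale over/underflows anyway.  */
      return s1 * s1;          /* +inf or 0, with the right exceptions.  */
    return s1 * (s2 + s2 * y); /* == s + s * y, evaluated safely.  */
  }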
diff --git a/sysdeps/aarch64/fpu/exp10_sve.c b/sysdeps/aarch64/fpu/exp10_sve.c
new file mode 100644
index 0000000000..a8cef7b692
--- /dev/null
+++ b/sysdeps/aarch64/fpu/exp10_sve.c
@@ -0,0 +1,127 @@
+/* Double-precision vector (SVE) exp10 function.
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "sv_math.h"
+#include "poly_sve_f64.h"
+
+#define SpecialBound 307.0 /* floor (log10 (2^1023)).  */
+
+static const struct data
+{
+  double poly[5];
+  double shift, log10_2, log2_10_hi, log2_10_lo, scale_thres, special_bound;
+} data = {
+  /* Coefficients generated using Remez algorithm.
+     rel error: 0x1.9fcb9b3p-60
+     abs error: 0x1.a20d9598p-60 in [ -log10(2)/128, log10(2)/128 ]
+     max ulp err 0.52 +0.5.  */
+  .poly = { 0x1.26bb1bbb55516p1, 0x1.53524c73cd32ap1, 0x1.0470591daeafbp1,
+	    0x1.2bd77b1361ef6p0, 0x1.142b5d54e9621p-1 },
+  /* 1.5*2^46 + 1023, a shift value suitable for FEXPA (see below).  */
+  .shift = 0x1.800000000ffc0p+46,
+  .log10_2 = 0x1.a934f0979a371p1,     /* log2(10) = 1/log10(2).  */
+  .log2_10_hi = 0x1.34413509f79ffp-2, /* log10(2).  */
+  .log2_10_lo = -0x1.9dc1da994fd21p-59,
+  .scale_thres = 1280.0,
+  .special_bound = SpecialBound,
+};
+
+#define SpecialOffset 0x6000000000000000 /* 0x1p513.  */
+/* SpecialBias1 - SpecialBias2 = asuint(1.0).  */
+#define SpecialBias1 0x7000000000000000 /* 0x1p769.  */
+#define SpecialBias2 0x3010000000000000 /* 0x1p-254.  */
+
+/* Update of both special and non-special cases, if any special case is
+   detected.  */
+static inline svfloat64_t
+special_case (svbool_t pg, svfloat64_t s, svfloat64_t y, svfloat64_t n,
+	      const struct data *d)
+{
+  /* s=2^n may overflow, break it up into s=s1*s2,
+     such that exp = s + s*y can be computed as s1*(s2+s2*y)
+     and s1*s1 overflows only if n>0.  */
+
+  /* If n <= 0 then set b to SpecialOffset, 0 otherwise.  */
+  svbool_t p_sign = svcmple (pg, n, 0.0); /* n <= 0.  */
+  svuint64_t b = svdup_u64_z (p_sign, SpecialOffset);
+
+  /* Set s1 to generate overflow depending on sign of exponent n.  */
+  svfloat64_t s1 = svreinterpret_f64 (svsubr_x (pg, b, SpecialBias1));
+  /* Offset s to avoid overflow in final result if n is below threshold.  */
+  svfloat64_t s2 = svreinterpret_f64 (
+      svadd_x (pg, svsub_x (pg, svreinterpret_u64 (s), SpecialBias2), b));
+
+  /* |n| > 1280 => 2^(n) overflows.  */
+  svbool_t p_cmp = svacgt (pg, n, d->scale_thres);
+
+  svfloat64_t r1 = svmul_x (pg, s1, s1);
+  svfloat64_t r2 = svmla_x (pg, s2, s2, y);
+  svfloat64_t r0 = svmul_x (pg, r2, s1);
+
+  return svsel (p_cmp, r1, r0);
+}
+
+/* Fast vector implementation of exp10 using FEXPA instruction.
+   Maximum measured error is 1.02 ulp.
+   SV_NAME_D1 (exp10)(-0x1.2862fec805e58p+2) got 0x1.885a89551d782p-16
+					    want 0x1.885a89551d781p-16.  */
+svfloat64_t SV_NAME_D1 (exp10) (svfloat64_t x, svbool_t pg)
+{
+  const struct data *d = ptr_barrier (&data);
+  svbool_t no_big_scale = svacle (pg, x, d->special_bound);
+  svbool_t special = svnot_z (pg, no_big_scale);
+
+  /* n = round(x/(log10(2)/N)).  */
+  svfloat64_t shift = sv_f64 (d->shift);
+  svfloat64_t z = svmla_x (pg, shift, x, d->log10_2);
+  svfloat64_t n = svsub_x (pg, z, shift);
+
+  /* r = x - n*log10(2)/N.  */
+  svfloat64_t log2_10 = svld1rq (svptrue_b64 (), &d->log2_10_hi);
+  svfloat64_t r = x;
+  r = svmls_lane (r, n, log2_10, 0);
+  r = svmls_lane (r, n, log2_10, 1);
+
+  /* scale = 2^(n/N), computed using FEXPA. FEXPA does not propagate NaNs, so
+     for consistent NaN handling we have to manually propagate them. This
+     comes at significant performance cost.  */
+  svuint64_t u = svreinterpret_u64 (z);
+  svfloat64_t scale = svexpa (u);
+
+  /* Approximate exp10(r) using polynomial.  */
+  svfloat64_t r2 = svmul_x (pg, r, r);
+  svfloat64_t y = svmla_x (pg, svmul_x (pg, r, d->poly[0]), r2,
+			   sv_pairwise_poly_3_f64_x (pg, r, r2, d->poly + 1));
+
+  /* Assemble result as exp10(x) = 2^n * exp10(r).  If |x| > SpecialBound
+     multiplication may overflow, so use special case routine.  */
+  if (__glibc_unlikely (svptest_any (pg, special)))
+    {
+      /* FEXPA zeroes the sign bit, however the sign is meaningful to the
+	 special case function so needs to be copied.
+	 e = sign bit of u << 46.  */
+      svuint64_t e = svand_x (pg, svlsl_x (pg, u, 46), 0x8000000000000000);
+      /* Copy sign to scale.  */
+      scale = svreinterpret_f64 (svadd_x (pg, e, svreinterpret_u64 (scale)));
+      return special_case (pg, scale, y, n, d);
+    }
+
+  /* No special case.  */
+  return svmla_x (pg, scale, scale, y);
+}
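
The magic shift packs the round-to-nearest trick and FEXPA's expected input
format into one constant. A sketch of what ends up in the FEXPA operand
(explanatory; the exact field layout is as FEXPA defines it, and the helper
name is invented):

  #include <stdint.h>
  #include <string.h>

  /* With shift = 1.5*2^46 + 1023, adding x/log10(2) quantises it to
     multiples of 1/64 (the ulp at this magnitude), so the low bits of
     asuint64 (z) hold the fixed-point value 64 * (n + 1023) with
     n = round(x/log10(2)): 6 fractional bits of n for FEXPA's table
     index, the biased exponent above them.  */
  static uint64_t
  fexpa_operand (double x)
  {
    double shift = 0x1.800000000ffc0p+46;       /* 1.5*2^46 + 1023.  */
    double z = shift + x * 0x1.a934f0979a371p1; /* + x / log10(2).  */
    uint64_t u;
    memcpy (&u, &z, sizeof u);
    return u;
  }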
diff --git a/sysdeps/aarch64/fpu/exp10f_advsimd.c b/sysdeps/aarch64/fpu/exp10f_advsimd.c
new file mode 100644
index 0000000000..9e754c46fa
--- /dev/null
+++ b/sysdeps/aarch64/fpu/exp10f_advsimd.c
@@ -0,0 +1,140 @@
+/* Single-precision vector (AdvSIMD) exp10 function.
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "v_math.h"
+#include "poly_advsimd_f32.h"
+
+#define ScaleBound 192.0f
+
+static const struct data
+{
+  float32x4_t poly[5];
+  float32x4_t shift, log10_2, log2_10_hi, log2_10_lo;
+#if !WANT_SIMD_EXCEPT
+  float32x4_t scale_thresh;
+#endif
+} data = {
+  /* Coefficients generated using Remez algorithm with minimisation of relative
+     error.
+     rel error: 0x1.89dafa3p-24
+     abs error: 0x1.167d55p-23 in [-log10(2)/2, log10(2)/2]
+     maxerr: 1.85943 +0.5 ulp.  */
+  .poly = { V4 (0x1.26bb16p+1f), V4 (0x1.5350d2p+1f), V4 (0x1.04744ap+1f),
+	    V4 (0x1.2d8176p+0f), V4 (0x1.12b41ap-1f) },
+  .shift = V4 (0x1.8p23f),
+  .log10_2 = V4 (0x1.a934fp+1),
+  .log2_10_hi = V4 (0x1.344136p-2),
+  .log2_10_lo = V4 (-0x1.ec10cp-27),
+#if !WANT_SIMD_EXCEPT
+  .scale_thresh = V4 (ScaleBound)
+#endif
+};
+
+#define ExponentBias v_u32 (0x3f800000)
+
+#if WANT_SIMD_EXCEPT
+
+# define SpecialBound 38.0f	       /* rint(log10(2^127)).  */
+# define TinyBound v_u32 (0x20000000) /* asuint (0x1p-63).  */
+# define BigBound v_u32 (0x42180000)  /* asuint (SpecialBound).  */
+# define Thres v_u32 (0x22180000)     /* BigBound - TinyBound.  */
+
+static float32x4_t VPCS_ATTR NOINLINE
+special_case (float32x4_t x, float32x4_t y, uint32x4_t cmp)
+{
+  /* If fenv exceptions are to be triggered correctly, fall back to the scalar
+     routine for special lanes.  */
+  return v_call_f32 (exp10f, x, y, cmp);
+}
+
+#else
+
+# define SpecialBound 126.0f /* rint (log2 (2^127 / (1 + sqrt (2)))).  */
+# define SpecialOffset v_u32 (0x82000000)
+# define SpecialBias v_u32 (0x7f000000)
+
+static float32x4_t VPCS_ATTR NOINLINE
+special_case (float32x4_t poly, float32x4_t n, uint32x4_t e, uint32x4_t cmp1,
+	      float32x4_t scale, const struct data *d)
+{
+  /* 2^n may overflow, break it up into s1*s2.  */
+  uint32x4_t b = vandq_u32 (vclezq_f32 (n), SpecialOffset);
+  float32x4_t s1 = vreinterpretq_f32_u32 (vaddq_u32 (b, SpecialBias));
+  float32x4_t s2 = vreinterpretq_f32_u32 (vsubq_u32 (e, b));
+  uint32x4_t cmp2 = vcagtq_f32 (n, d->scale_thresh);
+  float32x4_t r2 = vmulq_f32 (s1, s1);
+  float32x4_t r1 = vmulq_f32 (vfmaq_f32 (s2, poly, s2), s1);
+  /* Similar to r1 but avoids double rounding in the subnormal range.  */
+  float32x4_t r0 = vfmaq_f32 (scale, poly, scale);
+  float32x4_t r = vbslq_f32 (cmp1, r1, r0);
+  return vbslq_f32 (cmp2, r2, r);
+}
+
+#endif
+
+/* Fast vector implementation of single-precision exp10.
+   Algorithm is accurate to 2.36 ULP.
+   _ZGVnN4v_exp10f(0x1.be2b36p+1) got 0x1.7e79c4p+11
+				 want 0x1.7e79cp+11.  */
+float32x4_t VPCS_ATTR V_NAME_F1 (exp10) (float32x4_t x)
+{
+  const struct data *d = ptr_barrier (&data);
+#if WANT_SIMD_EXCEPT
+  /* asuint(x) - TinyBound >= BigBound - TinyBound.  */
+  uint32x4_t cmp = vcgeq_u32 (
+      vsubq_u32 (vandq_u32 (vreinterpretq_u32_f32 (x), v_u32 (0x7fffffff)),
+		 TinyBound),
+      Thres);
+  float32x4_t xm = x;
+  /* If any lanes are special, mask them with 1 and retain a copy of x to allow
+     special case handler to fix special lanes later. This is only necessary if
+     fenv exceptions are to be triggered correctly.  */
+  if (__glibc_unlikely (v_any_u32 (cmp)))
+    x = vbslq_f32 (cmp, v_f32 (1), x);
+#endif
+
+  /* exp10(x) = 2^n * 10^r = 2^n * (1 + poly (r)),
+     with 1 + poly (r) in [1/sqrt(2), sqrt(2)] and
+     x = r + n * log10 (2), with r in [-log10(2)/2, log10(2)/2].  */
+  float32x4_t z = vfmaq_f32 (d->shift, x, d->log10_2);
+  float32x4_t n = vsubq_f32 (z, d->shift);
+  float32x4_t r = vfmsq_f32 (x, n, d->log2_10_hi);
+  r = vfmsq_f32 (r, n, d->log2_10_lo);
+  uint32x4_t e = vshlq_n_u32 (vreinterpretq_u32_f32 (z), 23);
+
+  float32x4_t scale = vreinterpretq_f32_u32 (vaddq_u32 (e, ExponentBias));
+
+#if !WANT_SIMD_EXCEPT
+  uint32x4_t cmp = vcagtq_f32 (n, v_f32 (SpecialBound));
+#endif
+
+  float32x4_t r2 = vmulq_f32 (r, r);
+  float32x4_t poly
+      = vfmaq_f32 (vmulq_f32 (r, d->poly[0]),
+		   v_pairwise_poly_3_f32 (r, r2, d->poly + 1), r2);
+
+  if (__glibc_unlikely (v_any_u32 (cmp)))
+#if WANT_SIMD_EXCEPT
+    return special_case (xm, vfmaq_f32 (scale, poly, scale), cmp);
+#else
+    return special_case (poly, n, e, cmp, scale, d);
+#endif
+
+  return vfmaq_f32 (scale, poly, scale);
+}
diff --git a/sysdeps/aarch64/fpu/exp10f_sve.c b/sysdeps/aarch64/fpu/exp10f_sve.c
new file mode 100644
index 0000000000..3df3246e05
--- /dev/null
+++ b/sysdeps/aarch64/fpu/exp10f_sve.c
@@ -0,0 +1,91 @@
+/* Single-precision vector (SVE) exp10 function.
+
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include "sv_math.h"
+#include "poly_sve_f32.h"
+
+/* For x < -SpecialBound, the result is subnormal and not handled correctly by
+   FEXPA.  */
+#define SpecialBound 37.9
+
+static const struct data
+{
+  float poly[5];
+  float shift, log10_2, log2_10_hi, log2_10_lo, special_bound;
+} data = {
+  /* Coefficients generated using Remez algorithm with minimisation of relative
+     error.
+     rel error: 0x1.89dafa3p-24
+     abs error: 0x1.167d55p-23 in [-log10(2)/2, log10(2)/2]
+     maxerr: 0.52 +0.5 ulp.  */
+  .poly = { 0x1.26bb16p+1f, 0x1.5350d2p+1f, 0x1.04744ap+1f, 0x1.2d8176p+0f,
+	    0x1.12b41ap-1f },
+  /* 1.5*2^17 + 127, a shift value suitable for FEXPA.  */
+  .shift = 0x1.903f8p17f,
+  .log10_2 = 0x1.a934fp+1,
+  .log2_10_hi = 0x1.344136p-2,
+  .log2_10_lo = -0x1.ec10cp-27,
+  .special_bound = SpecialBound,
+};
+
+static svfloat32_t NOINLINE
+special_case (svfloat32_t x, svfloat32_t y, svbool_t special)
+{
+  return sv_call_f32 (exp10f, x, y, special);
+}
+
+/* Single-precision SVE exp10f routine.  Implements the same algorithm
+   as AdvSIMD exp10f.
+   Worst-case error is 1.02 ULP:
+   _ZGVsMxv_exp10f(-0x1.040488p-4) got 0x1.ba5f9ep-1
+				  want 0x1.ba5f9cp-1.  */
+svfloat32_t SV_NAME_F1 (exp10) (svfloat32_t x, const svbool_t pg)
+{
+  const struct data *d = ptr_barrier (&data);
+  /* exp10(x) = 2^(n/N) * 10^r = 2^(n/N) * (1 + poly (r)),
+     with poly(r) ~ 10^r - 1 and
+     x = r + n * log10(2) / N, with r in [-log10(2)/2N, log10(2)/2N].  */
+
+  /* Load some constants in quad-word chunks to minimise memory access (last
+     lane is wasted).  */
+  svfloat32_t log10_2_and_inv = svld1rq (svptrue_b32 (), &d->log10_2);
+
+  /* n = round(x/(log10(2)/N)).  */
+  svfloat32_t shift = sv_f32 (d->shift);
+  svfloat32_t z = svmla_lane (shift, x, log10_2_and_inv, 0);
+  svfloat32_t n = svsub_x (pg, z, shift);
+
+  /* r = x - n*log10(2)/N.  */
+  svfloat32_t r = svmls_lane (x, n, log10_2_and_inv, 1);
+  r = svmls_lane (r, n, log10_2_and_inv, 2);
+
+  svbool_t special = svacgt (pg, x, d->special_bound);
+  svfloat32_t scale = svexpa (svreinterpret_u32 (z));
+
+  /* Polynomial evaluation: poly(r) ~ exp10(r)-1.  */
+  svfloat32_t r2 = svmul_x (pg, r, r);
+  svfloat32_t poly
+      = svmla_x (pg, svmul_x (pg, r, d->poly[0]),
+		 sv_pairwise_poly_3_f32_x (pg, r, r2, d->poly + 1), r2);
+
+  if (__glibc_unlikely (svptest_any (pg, special)))
+    return special_case (x, svmla_x (pg, scale, scale, poly), special);
+
+  return svmla_x (pg, scale, scale, poly);
+}
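
Both routines assemble the polynomial with a pairwise scheme rather
than a straight Horner chain: the two inner fused multiply-adds are
independent, so out-of-order cores can overlap them. A scalar sketch of
what the v_pairwise_poly_3_f32 / sv_pairwise_poly_3_f32_x helpers
compute follows; the scalar function names here are hypothetical.

#include <math.h>

/* Pairwise evaluation of c[0] + c[1]*r + c[2]*r^2 + c[3]*r^3 given
   r2 == r*r.  The two inner fmaf calls have no data dependency on each
   other, unlike successive steps of a serial Horner evaluation.  */
static float
pairwise_poly_3 (float r, float r2, const float *c)
{
  float p01 = fmaf (c[1], r, c[0]);
  float p23 = fmaf (c[3], r, c[2]);
  return fmaf (p23, r2, p01);
}

/* poly (r) ~ 10^r - 1, assembled as in the routines above:
   poly[0]*r + r2 * pairwise_poly_3 (r, r2, poly + 1).  */
static float
exp10m1_poly (const float *poly, float r)
{
  float r2 = r * r;
  return fmaf (pairwise_poly_3 (r, r2, poly + 1), r2, r * poly[0]);
}
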
diff --git a/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c b/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
index 8d05498ec9..26d5ecf66f 100644
--- a/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c
@@ -25,6 +25,7 @@
 
 VPCS_VECTOR_WRAPPER (cos_advsimd, _ZGVnN2v_cos)
 VPCS_VECTOR_WRAPPER (exp_advsimd, _ZGVnN2v_exp)
+VPCS_VECTOR_WRAPPER (exp10_advsimd, _ZGVnN2v_exp10)
 VPCS_VECTOR_WRAPPER (exp2_advsimd, _ZGVnN2v_exp2)
 VPCS_VECTOR_WRAPPER (log_advsimd, _ZGVnN2v_log)
 VPCS_VECTOR_WRAPPER (log10_advsimd, _ZGVnN2v_log10)
diff --git a/sysdeps/aarch64/fpu/test-double-sve-wrappers.c b/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
index b65bc6f1e6..86efd60779 100644
--- a/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-double-sve-wrappers.c
@@ -34,6 +34,7 @@
 
 SVE_VECTOR_WRAPPER (cos_sve, _ZGVsMxv_cos)
 SVE_VECTOR_WRAPPER (exp_sve, _ZGVsMxv_exp)
+SVE_VECTOR_WRAPPER (exp10_sve, _ZGVsMxv_exp10)
 SVE_VECTOR_WRAPPER (exp2_sve, _ZGVsMxv_exp2)
 SVE_VECTOR_WRAPPER (log_sve, _ZGVsMxv_log)
 SVE_VECTOR_WRAPPER (log10_sve, _ZGVsMxv_log10)
diff --git a/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c b/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
index 6ced0d4488..8f7ebea1ac 100644
--- a/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c
@@ -25,6 +25,7 @@
 
 VPCS_VECTOR_WRAPPER (cosf_advsimd, _ZGVnN4v_cosf)
 VPCS_VECTOR_WRAPPER (expf_advsimd, _ZGVnN4v_expf)
+VPCS_VECTOR_WRAPPER (exp10f_advsimd, _ZGVnN4v_exp10f)
 VPCS_VECTOR_WRAPPER (exp2f_advsimd, _ZGVnN4v_exp2f)
 VPCS_VECTOR_WRAPPER (logf_advsimd, _ZGVnN4v_logf)
 VPCS_VECTOR_WRAPPER (log10f_advsimd, _ZGVnN4v_log10f)
diff --git a/sysdeps/aarch64/fpu/test-float-sve-wrappers.c b/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
index 2ed8d0659a..885e58ac39 100644
--- a/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
+++ b/sysdeps/aarch64/fpu/test-float-sve-wrappers.c
@@ -34,6 +34,7 @@
 
 SVE_VECTOR_WRAPPER (cosf_sve, _ZGVsMxv_cosf)
 SVE_VECTOR_WRAPPER (expf_sve, _ZGVsMxv_expf)
+SVE_VECTOR_WRAPPER (exp10f_sve, _ZGVsMxv_exp10f)
 SVE_VECTOR_WRAPPER (exp2f_sve, _ZGVsMxv_exp2f)
 SVE_VECTOR_WRAPPER (logf_sve, _ZGVsMxv_logf)
 SVE_VECTOR_WRAPPER (log10f_sve, _ZGVsMxv_log10f)
diff --git a/sysdeps/aarch64/libm-test-ulps b/sysdeps/aarch64/libm-test-ulps
index 6641c7fa0b..d117209c06 100644
--- a/sysdeps/aarch64/libm-test-ulps
+++ b/sysdeps/aarch64/libm-test-ulps
@@ -970,11 +970,19 @@ double: 2
 float: 1
 ldouble: 2
 
+Function: "exp10_advsimd":
+double: 1
+float: 2
+
 Function: "exp10_downward":
 double: 2
 float: 1
 ldouble: 3
 
+Function: "exp10_sve":
+double: 1
+float: 1
+
 Function: "exp10_towardzero":
 double: 2
 float: 1
diff --git a/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist b/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
index f8776a6bea..cad774521a 100644
--- a/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
+++ b/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist
@@ -14,14 +14,18 @@ GLIBC_2.38 _ZGVsMxv_log F
 GLIBC_2.38 _ZGVsMxv_logf F
 GLIBC_2.38 _ZGVsMxv_sin F
 GLIBC_2.38 _ZGVsMxv_sinf F
+GLIBC_2.39 _ZGVnN2v_exp10 F
 GLIBC_2.39 _ZGVnN2v_exp2 F
 GLIBC_2.39 _ZGVnN2v_log10 F
 GLIBC_2.39 _ZGVnN2v_log2 F
 GLIBC_2.39 _ZGVnN2v_tan F
+GLIBC_2.39 _ZGVnN4v_exp10f F
 GLIBC_2.39 _ZGVnN4v_exp2f F
 GLIBC_2.39 _ZGVnN4v_log10f F
 GLIBC_2.39 _ZGVnN4v_log2f F
 GLIBC_2.39 _ZGVnN4v_tanf F
+GLIBC_2.39 _ZGVsMxv_exp10 F
+GLIBC_2.39 _ZGVsMxv_exp10f F
 GLIBC_2.39 _ZGVsMxv_exp2 F
 GLIBC_2.39 _ZGVsMxv_exp2f F
 GLIBC_2.39 _ZGVsMxv_log10 F
-- 
2.27.0



* Re: [PATCH v2 1/5] aarch64: Add vector implementations of tan routines
  2023-10-05 16:10 [PATCH v2 1/5] aarch64: Add vector implementations of tan routines Joe Ramsay
                   ` (3 preceding siblings ...)
  2023-10-05 16:10 ` [PATCH 5/5] aarch64: Add vector implementations of exp10 routines Joe Ramsay
@ 2023-10-18 16:29 ` Szabolcs Nagy
  2023-10-23 13:47   ` Carlos O'Donell
  4 siblings, 1 reply; 7+ messages in thread
From: Szabolcs Nagy @ 2023-10-18 16:29 UTC (permalink / raw)
  To: Joe Ramsay, libc-alpha

The 10/05/2023 17:10, Joe Ramsay wrote:
> * Silence sign-of-zero check for mathvec tan

The aarch64 bits of the patchset look good to me,
but before committing them I wanted to ask for feedback
about the sign-of-zero policy change for mathvec
(it affects testing on other targets too). Maybe we
should document the mathvec requirement somewhere?



* Re: [PATCH v2 1/5] aarch64: Add vector implementations of tan routines
  2023-10-18 16:29 ` [PATCH v2 1/5] aarch64: Add vector implementations of tan routines Szabolcs Nagy
@ 2023-10-23 13:47   ` Carlos O'Donell
  0 siblings, 0 replies; 7+ messages in thread
From: Carlos O'Donell @ 2023-10-23 13:47 UTC (permalink / raw)
  To: Szabolcs Nagy, Joe Ramsay, libc-alpha

On 10/18/23 12:29, Szabolcs Nagy wrote:
> The 10/05/2023 17:10, Joe Ramsay wrote:
>> * Silence sign-of-zero check for mathvec tan
> 
> The aarch64 bits of the patchset look good to me,
> but before committing them I wanted to ask for feedback
> about the sign-of-zero policy change for mathvec
> (it affects testing on other targets too). Maybe we
> should document the mathvec requirement somewhere?
> 

We did this for sin already and I think we should do it for tan
also. Please go ahead.

-- 
Cheers,
Carlos.



end of thread, other threads:[~2023-10-23 13:47 UTC | newest]

Thread overview: 7+ messages
2023-10-05 16:10 [PATCH v2 1/5] aarch64: Add vector implementations of tan routines Joe Ramsay
2023-10-05 16:10 ` [PATCH v2 2/5] aarch64: Add vector implementations of exp2 routines Joe Ramsay
2023-10-05 16:10 ` [PATCH 3/5] aarch64: Add vector implementations of log2 routines Joe Ramsay
2023-10-05 16:10 ` [PATCH 4/5] aarch64: Add vector implementations of log10 routines Joe Ramsay
2023-10-05 16:10 ` [PATCH 5/5] aarch64: Add vector implementations of exp10 routines Joe Ramsay
2023-10-18 16:29 ` [PATCH v2 1/5] aarch64: Add vector implementations of tan routines Szabolcs Nagy
2023-10-23 13:47   ` Carlos O'Donell
