From: Noah Goldstein <goldstein.w.n@gmail.com>
To: "H.J. Lu" <hjl.tools@gmail.com>
Cc: GNU C Library <libc-alpha@sourceware.org>,
"Carlos O'Donell" <carlos@systemhalted.org>
Subject: Re: [PATCH v2 3/7] x86: Improve svml_s_atanhf4_core_sse4.S
Date: Thu, 9 Jun 2022 09:56:41 -0700
Message-ID: <CAFUsyfJ4sVDgSbXzcM0OwF_Ziwz_rp4ZFoNKnSVGgGLYHmzjog@mail.gmail.com>
In-Reply-To: <CAMe9rOqR3DyqoEd4nbVvZSdiPmfDfpEjFB28gJgAPO9x=SxbHw@mail.gmail.com>
On Thu, Jun 9, 2022 at 9:03 AM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Wed, Jun 8, 2022 at 5:05 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > Improvements are:
> > 1. Reduce code size (-62 bytes).
> > 2. Remove redundant move instructions.
> > 3. Slightly improve instruction selection/scheduling where
> > possible.
> > 4. Prefer registers which get short instruction encoding.
> > 5. Reduce rodata usage (-16 bytes).
> >
> > The throughput improvement is not significant as the port 0 bottleneck
> > is unavoidable.
> >
> > Function, New Time, Old Time, New / Old
> > _ZGVbN4v_atanhf, 8.821, 8.903, 0.991
> > ---
> > .../fpu/multiarch/svml_s_atanhf4_core_sse4.S | 378 ++++++++----------
> > 1 file changed, 169 insertions(+), 209 deletions(-)
> >
> > diff --git a/sysdeps/x86_64/fpu/multiarch/svml_s_atanhf4_core_sse4.S b/sysdeps/x86_64/fpu/multiarch/svml_s_atanhf4_core_sse4.S
> > index 2d3ad2617f..e6683785fb 100644
> > --- a/sysdeps/x86_64/fpu/multiarch/svml_s_atanhf4_core_sse4.S
> > +++ b/sysdeps/x86_64/fpu/multiarch/svml_s_atanhf4_core_sse4.S
> > @@ -30,96 +30,80 @@
> > *
> > */
> >
> > -/* Offsets for data table __svml_satanh_data_internal
> > - */
> > -#define SgnMask 0
> > -#define sOne 16
> > -#define sPoly 32
> > -#define iBrkValue 160
> > -#define iOffExpoMask 176
> > -#define sHalf 192
> > -#define sSign 208
> > -#define sTopMask12 224
> > -#define TinyRange 240
> > -#define sLn2 256
> > +/* Offsets for data table __svml_satanh_data_internal. Ordered
> > + by use in the function. On cold starts this might help the
> > + prefetcher. Possibly a better idea is to interleave start/end so
> > + that the prefetcher is less likely to detect a stream and pull
> > + irrelevant lines into cache. */
> > +#define sOne 0
> > +#define SgnMask 16
> > +#define sTopMask12 32
> > +#define iBrkValue 48
> > +#define iOffExpoMask 64
> > +#define sPoly 80
> > +#define sLn2 208
> > +#define TinyRange 224
> >
> > #include <sysdep.h>
> > +#define ATANHF_DATA(x) ((x)+__svml_satanh_data_internal)
> >
> > .section .text.sse4, "ax", @progbits
> > ENTRY(_ZGVbN4v_atanhf_sse4)
> > - subq $72, %rsp
> > - cfi_def_cfa_offset(80)
> > movaps %xmm0, %xmm5
> >
> > /* Load constants including One = 1 */
> > - movups sOne+__svml_satanh_data_internal(%rip), %xmm4
> > + movups ATANHF_DATA(sOne)(%rip), %xmm4
> > movaps %xmm5, %xmm3
> >
> > /* Strip off the sign, so treat X as positive until right at the end */
> > - movups SgnMask+__svml_satanh_data_internal(%rip), %xmm7
> > - movaps %xmm4, %xmm8
> > - andps %xmm5, %xmm7
> > + movups ATANHF_DATA(SgnMask)(%rip), %xmm1
> > + movaps %xmm4, %xmm2
> > + andps %xmm1, %xmm0
> > movaps %xmm4, %xmm10
> > - movups sTopMask12+__svml_satanh_data_internal(%rip), %xmm11
> > + movups ATANHF_DATA(sTopMask12)(%rip), %xmm11
> > movaps %xmm4, %xmm14
> > movaps %xmm11, %xmm9
> >
> > +
> > /*
> > * Compute V = 2 * X trivially, and UHi + U_lo = 1 - X in two pieces,
> > * the upper part UHi being <= 12 bits long. Then we have
> > * atanh(X) = 1/2 * log((1 + X) / (1 - X)) = 1/2 * log1p(V / (UHi + ULo)).
> > */
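(For reference, the identity in scalar C terms. This is just a sketch
of the math, not the exact rounding sequence; the real code splits
1 - X into UHi + ULo instead of leaning on libm:)

    #include <math.h>

    /* Illustrative only: atanh(x) = 1/2 * log1p(2x / (1 - x)).  */
    static float atanhf_ref (float x)
    {
      float ax = fabsf (x);   /* sign is reapplied at the end */
      float v = ax + ax;      /* V = 2 * X */
      float u = 1.0f - ax;    /* UHi + ULo = 1 - X in the real code */
      return copysignf (0.5f * log1pf (v / u), x);
    }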
> > - movaps %xmm7, %xmm12
> > + movaps %xmm0, %xmm6
> > + mulps %xmm5, %xmm3
> > + subps %xmm0, %xmm2
> > + addps %xmm0, %xmm6
> > + subps %xmm2, %xmm10
> > + addps %xmm5, %xmm3
> > + subps %xmm0, %xmm10
> > + andps %xmm2, %xmm9
> > +
> >
> > /*
> > * Check whether |X| < 1, in which case we use the main function.
> > * Otherwise set the rangemask so that the callout will get used.
> > * Note that this will also use the callout for NaNs since not(NaN < 1).
> > */
> > - movaps %xmm7, %xmm6
> > - movaps %xmm7, %xmm2
> > - cmpnltps %xmm4, %xmm6
> > - cmpltps TinyRange+__svml_satanh_data_internal(%rip), %xmm2
> > - mulps %xmm5, %xmm3
> > - subps %xmm7, %xmm8
> > - addps %xmm7, %xmm12
> > - movmskps %xmm6, %edx
> > - subps %xmm8, %xmm10
> > - addps %xmm5, %xmm3
> > - subps %xmm7, %xmm10
> > - andps %xmm8, %xmm9
> > + rcpps %xmm9, %xmm7
> > + subps %xmm9, %xmm2
> > + andps %xmm11, %xmm7
> >
> > - /*
> > - * Now we feed into the log1p code, using H in place of _VARG1 and
> > - * later incorporating L into the reduced argument.
> > - * compute 1+x as high, low parts
> > - */
> > - movaps %xmm4, %xmm7
> > -
> > - /*
> > - * Now compute R = 1/(UHi+ULo) * (1 - E) and the error term E
> > - * The first FMR is exact (we force R to 12 bits just in case it
> > - * isn't already, to make absolutely sure), and since E is ~ 2^-12,
> > - * the rounding error in the other one is acceptable.
> > - */
> > - rcpps %xmm9, %xmm15
> > - subps %xmm9, %xmm8
> > - andps %xmm11, %xmm15
> >
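(The rcpps/sTopMask12 sequence above computes the quotient without a
divide. A scalar sketch of the refinement, with an exact 1.0f/u
standing in for the hardware's ~12-bit rcpps estimate:)

    /* Refine a rough reciprocal r ~= 1/u via 1/(1-E) ~= 1 + E + E^2,
       then form the quotient Q = (r * v) * (1 + D) ~= v / u.  */
    static float quotient_sketch (float v, float u)
    {
      float r = 1.0f / u;      /* stand-in for rcpps + andps sTopMask12 */
      float e = 1.0f - r * u;  /* E: residual error of the estimate */
      float d = e + e * e;     /* D = E + E^2 */
      float q = r * v;         /* preliminary quotient */
      return q + q * d;        /* Q * (1 + D) */
    }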
> > /*
> > * Split V as well into upper 12 bits and lower part, so that we can get
> > * a preliminary quotient estimate without rounding error.
> > */
> > - andps %xmm12, %xmm11
> > - mulps %xmm15, %xmm9
> > - addps %xmm8, %xmm10
> > - subps %xmm11, %xmm12
> > + andps %xmm6, %xmm11
> > + mulps %xmm7, %xmm9
> > + addps %xmm2, %xmm10
> > + subps %xmm11, %xmm6
> >
> > /* Hence get initial quotient estimate QHi + QLo = R * VHi + R * VLo */
> > - mulps %xmm15, %xmm11
> > - mulps %xmm15, %xmm10
> > + mulps %xmm7, %xmm11
> > + mulps %xmm7, %xmm10
> > subps %xmm9, %xmm14
> > - mulps %xmm12, %xmm15
> > + mulps %xmm6, %xmm7
> > subps %xmm10, %xmm14
> >
> > /* Compute D = E + E^2 */
> > @@ -127,8 +111,8 @@ ENTRY(_ZGVbN4v_atanhf_sse4)
> > movaps %xmm4, %xmm8
> > mulps %xmm14, %xmm13
> >
> > - /* reduction: compute r, n */
> > - movdqu iBrkValue+__svml_satanh_data_internal(%rip), %xmm9
> > + /* reduction: compute r, n */
> > + movdqu ATANHF_DATA(iBrkValue)(%rip), %xmm9
> > addps %xmm13, %xmm14
> >
> > /*
> > @@ -136,168 +120,149 @@ ENTRY(_ZGVbN4v_atanhf_sse4)
> > * = R * (VHi + VLo) * (1 + D)
> > * = QHi + (QHi * D + QLo + QLo * D)
> > */
> > - movaps %xmm14, %xmm0
> > - mulps %xmm15, %xmm14
> > - mulps %xmm11, %xmm0
> > - addps %xmm14, %xmm15
> > - movdqu iOffExpoMask+__svml_satanh_data_internal(%rip), %xmm12
> > + movaps %xmm14, %xmm2
> > + mulps %xmm7, %xmm14
> > + mulps %xmm11, %xmm2
> > + addps %xmm14, %xmm7
> > + movdqu ATANHF_DATA(iOffExpoMask)(%rip), %xmm12
> > movaps %xmm4, %xmm14
> >
> > /* Record the sign for eventual reincorporation. */
> > - movups sSign+__svml_satanh_data_internal(%rip), %xmm1
> > - addps %xmm15, %xmm0
> > + addps %xmm7, %xmm2
> > +
> >
> > /*
> > * Now finally accumulate the high and low parts of the
> > * argument to log1p, H + L, with a final compensated summation.
> > */
> > - movaps %xmm0, %xmm6
> > - andps %xmm5, %xmm1
> > -
> > + movaps %xmm2, %xmm6
> > + andnps %xmm5, %xmm1
> > + movaps %xmm4, %xmm7
> > /* Or the sign bit in with the tiny result to handle atanh(-0) correctly */
> > - orps %xmm1, %xmm3
> > addps %xmm11, %xmm6
> > maxps %xmm6, %xmm7
> > minps %xmm6, %xmm8
> > subps %xmm6, %xmm11
> > movaps %xmm7, %xmm10
> > - andps %xmm2, %xmm3
> > addps %xmm8, %xmm10
> > - addps %xmm11, %xmm0
> > + addps %xmm11, %xmm2
> > subps %xmm10, %xmm7
> > psubd %xmm9, %xmm10
> > - addps %xmm7, %xmm8
> > + addps %xmm8, %xmm7
> > pand %xmm10, %xmm12
> > psrad $23, %xmm10
> > cvtdq2ps %xmm10, %xmm13
> > - addps %xmm8, %xmm0
> > + addps %xmm7, %xmm2
> >
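(The maxps/minps/subps block above is the branch-free Fast2Sum idiom.
Roughly, in C; valid here because both summands are non-negative, so
value order is also magnitude order:)

    /* hi + lo == a + b exactly under round-to-nearest when
       |big| >= |small|.  */
    static void two_sum_sketch (float a, float b, float *hi, float *lo)
    {
      float big   = fmaxf (a, b);   /* maxps */
      float small = fminf (a, b);   /* minps */
      float s = big + small;
      *hi = s;
      *lo = small - (s - big);      /* rounding error of the add */
    }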
> > /* final reconstruction */
> > - mulps sLn2+__svml_satanh_data_internal(%rip), %xmm13
> > pslld $23, %xmm10
> > paddd %xmm9, %xmm12
> > psubd %xmm10, %xmm14
> >
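(The integer ops around iBrkValue implement the usual logf argument
reduction. A scalar sketch with illustrative names; the vector code
applies 2^-n multiplicatively rather than returning n directly:)

    #include <stdint.h>
    #include <string.h>

    /* Write y as 2^n * m with m in [2/3, 4/3), biasing by
       iBrkValue = 0x3f2aaaab (single-precision 2/3).  */
    static float reduce_sketch (float y, int *n)
    {
      uint32_t iy, im;
      memcpy (&iy, &y, sizeof iy);
      uint32_t off = iy - 0x3f2aaaab;       /* psubd iBrkValue */
      *n = (int32_t) off >> 23;             /* psrad $23 */
      im = (off & 0x007fffff) + 0x3f2aaaab; /* pand iOffExpoMask; paddd */
      memcpy (&y, &im, sizeof im);
      return y;
    }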
> > /* polynomial evaluation */
> > subps %xmm4, %xmm12
> > - mulps %xmm0, %xmm14
> > - movups sPoly+112+__svml_satanh_data_internal(%rip), %xmm0
> > - addps %xmm12, %xmm14
> > - mulps %xmm14, %xmm0
> > + mulps %xmm14, %xmm2
> > + movups ATANHF_DATA(sPoly+0)(%rip), %xmm7
> > + addps %xmm12, %xmm2
> > + mulps %xmm2, %xmm7
> > +
> >
> > /* Finally, halve the result and reincorporate the sign */
> > - movups sHalf+__svml_satanh_data_internal(%rip), %xmm4
> > - pxor %xmm1, %xmm4
> > - addps sPoly+96+__svml_satanh_data_internal(%rip), %xmm0
> > - mulps %xmm14, %xmm0
> > - addps sPoly+80+__svml_satanh_data_internal(%rip), %xmm0
> > - mulps %xmm14, %xmm0
> > - addps sPoly+64+__svml_satanh_data_internal(%rip), %xmm0
> > - mulps %xmm14, %xmm0
> > - addps sPoly+48+__svml_satanh_data_internal(%rip), %xmm0
> > - mulps %xmm14, %xmm0
> > - addps sPoly+32+__svml_satanh_data_internal(%rip), %xmm0
> > - mulps %xmm14, %xmm0
> > - addps sPoly+16+__svml_satanh_data_internal(%rip), %xmm0
> > - mulps %xmm14, %xmm0
> > - addps sPoly+__svml_satanh_data_internal(%rip), %xmm0
> > - mulps %xmm14, %xmm0
> > - mulps %xmm14, %xmm0
> > - addps %xmm0, %xmm14
> > - movaps %xmm2, %xmm0
> > - addps %xmm13, %xmm14
> > - mulps %xmm14, %xmm4
> > - andnps %xmm4, %xmm0
> > - orps %xmm3, %xmm0
> > - testl %edx, %edx
> > + addps ATANHF_DATA(sPoly+16)(%rip), %xmm7
> > + mulps %xmm2, %xmm7
> > + addps ATANHF_DATA(sPoly+32)(%rip), %xmm7
> > + mulps %xmm2, %xmm7
> > + addps ATANHF_DATA(sPoly+48)(%rip), %xmm7
> > + mulps %xmm2, %xmm7
> > + addps ATANHF_DATA(sPoly+64)(%rip), %xmm7
> > + mulps %xmm2, %xmm7
> > + addps ATANHF_DATA(sPoly+80)(%rip), %xmm7
> > + mulps %xmm2, %xmm7
> > + addps ATANHF_DATA(sPoly+96)(%rip), %xmm7
> > + mulps %xmm2, %xmm7
> > + movaps ATANHF_DATA(sPoly+112)(%rip), %xmm6
> > + addps %xmm6, %xmm7
> > + mulps %xmm2, %xmm7
> > + mulps %xmm2, %xmm7
> > + mulps ATANHF_DATA(sLn2)(%rip), %xmm13
> > + /* We can build `sHalf` with `sPoly & sOne`. */
> > + andps %xmm4, %xmm6
> > + orps %xmm1, %xmm3
> > + xorps %xmm6, %xmm1
> >
> > - /* Go to special inputs processing branch */
> > - jne L(SPECIAL_VALUES_BRANCH)
> > - # LOE rbx rbp r12 r13 r14 r15 edx xmm0 xmm5
> > + addps %xmm2, %xmm7
> > + addps %xmm13, %xmm7
> > + mulps %xmm7, %xmm1
> >
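(With the table now stored P7 first, the evaluation is a straight
Horner chain followed by the standard reconstruction. In outline,
where signed_half is the +-0.5 built from the sPoly/sOne trick above:)

    /* Horner over the reordered sPoly, then + n*ln2 and the final
       halve-and-sign.  ln2f matches the sLn2 table entry.  */
    static float tail_sketch (const float c[8], float r, int n,
                              float signed_half)
    {
      float p = c[0];                    /* P7 */
      for (int i = 1; i < 8; i++)
        p = p * r + c[i];                /* ... down through P0 */
      float log1p_r = p * r * r + r;     /* fold the poly back onto r */
      const float ln2f = 0x1.62e43p-1f;  /* 0x3f317218 */
      return (log1p_r + (float) n * ln2f) * signed_half;
    }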
> > - /* Restore registers
> > - * and exit the function
> > - */
> > + /* Finish check of NaNs. */
> > + cmpleps %xmm0, %xmm4
> > + movmskps %xmm4, %edx
> > + cmpltps ATANHF_DATA(TinyRange)(%rip), %xmm0
> >
> > -L(EXIT):
> > - addq $72, %rsp
> > - cfi_def_cfa_offset(8)
> > + andps %xmm0, %xmm3
> > + andnps %xmm1, %xmm0
> > + orps %xmm3, %xmm0
> > +
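(The andps/andnps/orps triple above is the classic bitwise select; in
C terms on the raw lane bits:)

    #include <stdint.h>

    /* out = (mask & tiny_path) | (~mask & main_path), per lane.  */
    static uint32_t select_bits (uint32_t mask, uint32_t a, uint32_t b)
    {
      return (mask & a) | (~mask & b);
    }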
> > + testl %edx, %edx
> > + /* Go to special inputs processing branch. */
> > + jne L(SPECIAL_VALUES_BRANCH)
> > + # LOE rbx rbp r12 r13 r14 r15 xmm0
> > + /* No registers to restore on fast path. */
> > ret
> > - cfi_def_cfa_offset(80)
> >
> > - /* Branch to process
> > - * special inputs
> > - */
> >
> > + /* Cold case. edx has 1s where there was a special value that
> > + needs to be handled by an atanhf call. Optimize for code size
> > + more so than speed here. */
> > L(SPECIAL_VALUES_BRANCH):
> > - movups %xmm5, 32(%rsp)
> > - movups %xmm0, 48(%rsp)
> > - # LOE rbx rbp r12 r13 r14 r15 edx
> > -
> > - xorl %eax, %eax
> > - movq %r12, 16(%rsp)
> > - cfi_offset(12, -64)
> > - movl %eax, %r12d
> > - movq %r13, 8(%rsp)
> > - cfi_offset(13, -72)
> > - movl %edx, %r13d
> > - movq %r14, (%rsp)
> > - cfi_offset(14, -80)
> > - # LOE rbx rbp r15 r12d r13d
> > -
> > - /* Range mask
> > - * bits check
> > - */
> > -
> > -L(RANGEMASK_CHECK):
> > - btl %r12d, %r13d
> > -
> > - /* Call scalar math function */
> > - jc L(SCALAR_MATH_CALL)
> > - # LOE rbx rbp r15 r12d r13d
> > -
> > - /* Special inputs
> > - * processing loop
> > - */
> > -
> > + # LOE rbx rdx rbp r12 r13 r14 r15 xmm0 xmm5
> > + /* The incoming rsp is 8 bytes off 16-byte alignment (return
> > + address). Subtract 56 so that rsp is 16-byte aligned at the
> > + call below, as the ABI requires. */
> > + subq $56, %rsp
> > + cfi_def_cfa_offset(64)
> > + movups %xmm0, 24(%rsp)
> > + movups %xmm5, 40(%rsp)
> > +
> > + /* Use rbx/rbp as callee-save registers since they get short
> > + encodings for many instructions (as compared with r12/r13). */
> > + movq %rbx, (%rsp)
> > + cfi_offset(rbx, -64)
> > + movq %rbp, 8(%rsp)
> > + cfi_offset(rbp, -56)
> > + /* edx has 1s where there was a special value that needs to be
> > + handled by an atanhf call. */
> > + movl %edx, %ebx
> > L(SPECIAL_VALUES_LOOP):
> > - incl %r12d
> > - cmpl $4, %r12d
> > -
> > - /* Check bits in range mask */
> > - jl L(RANGEMASK_CHECK)
> > - # LOE rbx rbp r15 r12d r13d
> > -
> > - movq 16(%rsp), %r12
> > - cfi_restore(12)
> > - movq 8(%rsp), %r13
> > - cfi_restore(13)
> > - movq (%rsp), %r14
> > - cfi_restore(14)
> > - movups 48(%rsp), %xmm0
> > -
> > - /* Go to exit */
> > - jmp L(EXIT)
> > - cfi_offset(12, -64)
> > - cfi_offset(13, -72)
> > - cfi_offset(14, -80)
> > - # LOE rbx rbp r12 r13 r14 r15 xmm0
> > -
> > - /* Scalar math fucntion call
> > - * to process special input
> > - */
> > -
> > -L(SCALAR_MATH_CALL):
> > - movl %r12d, %r14d
> > - movss 32(%rsp, %r14, 4), %xmm0
> > + # LOE rbx rbp r12 r13 r14 r15
> > + /* Use rbp as the index of the special value that is saved across
> > + calls to atanhf. We technically don't need a callee-save register
> > + here as the offset from rsp is always in [0, 12], so we could
> > + restore rsp by re-aligning to 64. Essentially the tradeoff is 1
> > + extra save/restore vs 2 extra instructions in the loop. */
> > + xorl %ebp, %ebp
> > + bsfl %ebx, %ebp
> > +
> > + /* Scalar math function call to process special input. */
> > + movss 40(%rsp, %rbp, 4), %xmm0
> > call atanhf@PLT
> > - # LOE rbx rbp r14 r15 r12d r13d xmm0
> > -
> > - movss %xmm0, 48(%rsp, %r14, 4)
> > -
> > - /* Process special inputs in loop */
> > - jmp L(SPECIAL_VALUES_LOOP)
> > - # LOE rbx rbp r15 r12d r13d
> > + /* No good way to avoid the store-forwarding fault this will cause
> > + on return. `lfence` avoids the SF fault but at greater cost as it
> > + serializes the stack/callee-save restoration. */
> > + movss %xmm0, 24(%rsp, %rbp, 4)
> > +
> > + leal -1(%rbx), %eax
> > + andl %eax, %ebx
> > + jnz L(SPECIAL_VALUES_LOOP)
> > + # LOE r12 r13 r14 r15
> > + /* All results have been written to 16(%rsp). */
>
> Where does 16 come from?
Incorrect comment carried over from the previous version. Fixed in v3.
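For context, the new special-values loop just walks the set bits of
the mask; the same logic in C (stack offsets omitted):

    #include <math.h>
    #include <stdint.h>

    /* bsf finds the lowest set lane; mask &= mask - 1 clears it.  */
    static void fixup_sketch (uint32_t mask, const float in[4],
                              float out[4])
    {
      while (mask != 0)
        {
          int i = __builtin_ctz (mask);  /* bsfl %ebx, %ebp */
          out[i] = atanhf (in[i]);       /* scalar callout */
          mask &= mask - 1;              /* leal -1(%rbx); andl %eax, %ebx */
        }
    }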
> > + movups 24(%rsp), %xmm0
> > + movq (%rsp), %rbx
> > + cfi_restore(rbx)
> > + movq 8(%rsp), %rbp
> > + cfi_restore(rbp)
> > + addq $56, %rsp
> > + cfi_def_cfa_offset(8)
> > + ret
> > END(_ZGVbN4v_atanhf_sse4)
> >
> > .section .rodata, "a"
> > @@ -305,56 +270,51 @@ END(_ZGVbN4v_atanhf_sse4)
> >
> > #ifdef __svml_satanh_data_internal_typedef
> > typedef unsigned int VUINT32;
> > -typedef struct {
> > - __declspec(align(16)) VUINT32 SgnMask[4][1];
> > +typedef struct {
> > __declspec(align(16)) VUINT32 sOne[4][1];
> > - __declspec(align(16)) VUINT32 sPoly[8][4][1];
> > + __declspec(align(16)) VUINT32 SgnMask[4][1];
> > + __declspec(align(16)) VUINT32 sTopMask12[4][1];
> > __declspec(align(16)) VUINT32 iBrkValue[4][1];
> > __declspec(align(16)) VUINT32 iOffExpoMask[4][1];
> > - __declspec(align(16)) VUINT32 sHalf[4][1];
> > - __declspec(align(16)) VUINT32 sSign[4][1];
> > - __declspec(align(16)) VUINT32 sTopMask12[4][1];
> > - __declspec(align(16)) VUINT32 TinyRange[4][1];
> > + __declspec(align(16)) VUINT32 sPoly[8][4][1];
> > __declspec(align(16)) VUINT32 sLn2[4][1];
> > + __declspec(align(16)) VUINT32 TinyRange[4][1];
> > } __svml_satanh_data_internal;
> > #endif
> > +
> > __svml_satanh_data_internal:
> > - /* SgnMask */
> > - .long 0x7fffffff, 0x7fffffff, 0x7fffffff, 0x7fffffff
> > /* sOne = SP 1.0 */
> > .align 16
> > .long 0x3f800000, 0x3f800000, 0x3f800000, 0x3f800000
> > - /* sPoly[] = SP polynomial */
> > + /* SgnMask */
> > + .long 0x7fffffff, 0x7fffffff, 0x7fffffff, 0x7fffffff
> > + /* sTopMask12 */
> > .align 16
> > - .long 0xbf000000, 0xbf000000, 0xbf000000, 0xbf000000 /* -5.0000000000000000000000000e-01 P0 */
> > - .long 0x3eaaaa94, 0x3eaaaa94, 0x3eaaaa94, 0x3eaaaa94 /* 3.3333265781402587890625000e-01 P1 */
> > - .long 0xbe80058e, 0xbe80058e, 0xbe80058e, 0xbe80058e /* -2.5004237890243530273437500e-01 P2 */
> > - .long 0x3e4ce190, 0x3e4ce190, 0x3e4ce190, 0x3e4ce190 /* 2.0007920265197753906250000e-01 P3 */
> > - .long 0xbe28ad37, 0xbe28ad37, 0xbe28ad37, 0xbe28ad37 /* -1.6472326219081878662109375e-01 P4 */
> > - .long 0x3e0fcb12, 0x3e0fcb12, 0x3e0fcb12, 0x3e0fcb12 /* 1.4042308926582336425781250e-01 P5 */
> > - .long 0xbe1ad9e3, 0xbe1ad9e3, 0xbe1ad9e3, 0xbe1ad9e3 /* -1.5122179687023162841796875e-01 P6 */
> > - .long 0x3e0d84ed, 0x3e0d84ed, 0x3e0d84ed, 0x3e0d84ed /* 1.3820238411426544189453125e-01 P7 */
> > + .long 0xFFFFF000, 0xFFFFF000, 0xFFFFF000, 0xFFFFF000
> > /* iBrkValue = SP 2/3 */
> > .align 16
> > .long 0x3f2aaaab, 0x3f2aaaab, 0x3f2aaaab, 0x3f2aaaab
> > - /* iOffExpoMask = SP significand mask */
> > + /* iOffExpoMask = SP significand mask */
> > .align 16
> > .long 0x007fffff, 0x007fffff, 0x007fffff, 0x007fffff
> > - /* sHalf */
> > - .align 16
> > - .long 0x3F000000, 0x3F000000, 0x3F000000, 0x3F000000
> > - /* sSign */
> > +
> > + /* sPoly[] = SP polynomial */
> > .align 16
> > - .long 0x80000000, 0x80000000, 0x80000000, 0x80000000
> > - /* sTopMask12 */
> > + .long 0x3e0d84ed, 0x3e0d84ed, 0x3e0d84ed, 0x3e0d84ed /* 1.3820238411426544189453125e-01 P7 */
> > + .long 0xbe1ad9e3, 0xbe1ad9e3, 0xbe1ad9e3, 0xbe1ad9e3 /* -1.5122179687023162841796875e-01 P6 */
> > + .long 0x3e0fcb12, 0x3e0fcb12, 0x3e0fcb12, 0x3e0fcb12 /* 1.4042308926582336425781250e-01 P5 */
> > + .long 0xbe28ad37, 0xbe28ad37, 0xbe28ad37, 0xbe28ad37 /* -1.6472326219081878662109375e-01 P4 */
> > + .long 0x3e4ce190, 0x3e4ce190, 0x3e4ce190, 0x3e4ce190 /* 2.0007920265197753906250000e-01 P3 */
> > + .long 0xbe80058e, 0xbe80058e, 0xbe80058e, 0xbe80058e /* -2.5004237890243530273437500e-01 P2 */
> > + .long 0x3eaaaa94, 0x3eaaaa94, 0x3eaaaa94, 0x3eaaaa94 /* 3.3333265781402587890625000e-01 P1 */
> > + .long 0xbf000000, 0xbf000000, 0xbf000000, 0xbf000000 /* -5.0000000000000000000000000e-01 P0 */
> > +
> > + /* sLn2 = SP ln(2) */
> > .align 16
> > - .long 0xFFFFF000, 0xFFFFF000, 0xFFFFF000, 0xFFFFF000
> > + .long 0x3f317218, 0x3f317218, 0x3f317218, 0x3f317218
> > /* TinyRange */
> > .align 16
> > .long 0x0C000000, 0x0C000000, 0x0C000000, 0x0C000000
> > - /* sLn2 = SP ln(2) */
> > - .align 16
> > - .long 0x3f317218, 0x3f317218, 0x3f317218, 0x3f317218
> > .align 16
> > .type __svml_satanh_data_internal, @object
> > .size __svml_satanh_data_internal, .-__svml_satanh_data_internal
> > --
> > 2.34.1
> >
>
>
> --
> H.J.