public inbox for libc-alpha@sourceware.org
* memcpy performance regressions 2.19 -> 2.24(5)
@ 2017-05-05 17:09 Erich Elsen
  2017-05-05 18:09 ` Carlos O'Donell
  0 siblings, 1 reply; 31+ messages in thread
From: Erich Elsen @ 2017-05-05 17:09 UTC (permalink / raw)
  To: libc-alpha

Hi everyone,

I've noticed what appear to be significant performance regressions
for certain processors and certain sizes when moving from 2.19 to
2.24 (and 2.25).

In this spreadsheet
(https://docs.google.com/spreadsheets/d/1Mpu1Kr9CNaa9HQjzKGL0tb2x_Nsx8vtLK3b0QnKesHg/edit?usp=sharing)
the regressions are highlighted in red.  The three benchmarks are:

readwritecache: both read and write locations are cached (if possible)
nocache: neither the read nor the write location will be cached
readcache: only the read location will be cached (if possible)

The regressions on IvyBridge are especially concerning and can be
fixed by using __memcpy_avx_unaligned instead of the current default
(__sse2_unaligned_erms).

The regressions at large sizes on IvyBridge and SandyBridge seem to
be due to the use of non-temporal stores; avoiding them restores
performance to 2.19 levels.
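
(For context: a non-temporal store writes around the cache hierarchy
instead of through it.  The core of such a copy loop, in intrinsics
form, looks roughly like the sketch below; glibc's real implementation
is hand-written assembly, so this is illustrative only.)

#include <immintrin.h>
#include <stddef.h>

/* Illustrative non-temporal copy loop; compile with -mavx.  Assumes
   dst is 32-byte aligned and n is a multiple of 32.  */
static void
nt_copy (char *dst, const char *src, size_t n)
{
  for (size_t i = 0; i < n; i += 32)
    {
      __m256i v = _mm256_loadu_si256 ((const __m256i *) (src + i));
      _mm256_stream_si256 ((__m256i *) (dst + i), v); /* bypasses cache */
    }
  _mm_sfence ();  /* order the weakly-ordered streaming stores */
}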

The regressions on Haswell can be fixed by using
__memcpy_avx_unaligned instead of __memcpy_avx_unaligned_erms in the
region of 32K <= N <= 4MB.

I had a couple of questions:

1) Are the large regressions at large sizes for IvyBridge and
SandyBridge expected?  Is avoiding non-temporal stores a reasonable
solution?

2) Is it possible to fix the IvyBridge regressions by using model
information to force a specific implementation?  I'm not sure how
other cpus (AMD) would be affected if the selection logic was modified
based on feature flags.

Thanks,
Erich


* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-05 17:09 memcpy performance regressions 2.19 -> 2.24(5) Erich Elsen
@ 2017-05-05 18:09 ` Carlos O'Donell
  2017-05-06  0:57   ` Erich Elsen
  0 siblings, 1 reply; 31+ messages in thread
From: Carlos O'Donell @ 2017-05-05 18:09 UTC (permalink / raw)
  To: Erich Elsen, libc-alpha, H.J. Lu

On 05/05/2017 01:09 PM, Erich Elsen wrote:
> I had a couple of questions:
> 
> 1) Are the large regressions at large sizes for IvyBridge and
> SandyBridge expected?  Is avoiding non-temporal stores a reasonable
> solution?

No large regressions are expected.
 
> 2) Is it possible to fix the IvyBridge regressions by using model
> information to force a specific implementation?  I'm not sure how
> other cpus (AMD) would be affected if the selection logic was modified
> based on feature flags.

A different memcpy can be used for any detectable difference in hardware.
What you can't do is select a different memcpy for a different range of
inputs. You have to make the choice upfront with only the knowledge of
the hardware as your input. Though today we could augment that choice
with a glibc tunable set by the shell starting the process.
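
(To make the "upfront" nature of that choice concrete, here is a
hedged sketch of IFUNC-style selection; the resolver runs once at
relocation time and sees only hardware features, never the size
argument of any later call.  cpu_has_avx is a hypothetical probe, not
a real glibc function.)

extern void *memcpy_avx (void *, const void *, unsigned long);
extern void *memcpy_sse2 (void *, const void *, unsigned long);
extern int cpu_has_avx (void);        /* hypothetical feature probe */

static void *
resolve_memcpy (void)
{
  /* Hardware knowledge only; no per-call size is available here.  */
  return cpu_has_avx () ? (void *) memcpy_avx : (void *) memcpy_sse2;
}

void *my_memcpy (void *dst, const void *src, unsigned long n)
  __attribute__ ((ifunc ("resolve_memcpy")));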

I have questions of my own:

(a) How statistically relevant were your results?
- What are your confidence intervals?
- What is your standard deviation?
- How many runs did you average?

(b) Was your machine hardware stable?
- See:
https://developers.redhat.com/blog/2016/03/11/practical-micro-benchmarking-with-ltrace-and-sched/
- What methodology did you use to carry out your tests? Like CPU pinning.

(c) Exactly what hardware did you use?
- You mention IvyBridge and SandyBridge, but what exact hardware did
  you use for the tests, and what exact kernel version?

(d) If you run glibc's own microbenchmarks do you see the same
    performance problems? e.g. make bench, and look at the detailed
    bench-memcpy, bench-memcpy-large, and bench-memcpy-random results.

https://sourceware.org/glibc/wiki/Testing/Builds

(e) Are you willing to publish your microbenchmark sources for others
    to confirm the results?

-- 
Cheers,
Carlos.


* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-05 18:09 ` Carlos O'Donell
@ 2017-05-06  0:57   ` Erich Elsen
  2017-05-06 15:41     ` H.J. Lu
  0 siblings, 1 reply; 31+ messages in thread
From: Erich Elsen @ 2017-05-06  0:57 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: libc-alpha, H.J. Lu

Hi Carlos,

a/b) The number of runs is dependent on the time taken; the number of
iterations was chosen so that each size took at least 500ms in total.
For many of the smaller sizes this means 10-100 million iterations;
for the largest size, 64MB, it was ~60.  10 runs were launched
separately; the difference between the maximum and the minimum average
was never more than 6% for any size, and all of the regressions are
larger than this difference (usually much larger).  The times on the
spreadsheet are from a randomly chosen run - it would be possible to
use a median or average, but given the large effect size, it didn't
seem necessary.

b) The machines were idle (background processes only) except for the
test being run.  Boost was disabled.  The benchmark is single
threaded.  I did not explicitly pin the process - but given that the
machine was otherwise idle - it would be surprising if it was
migrated.  I can add this to see if the results change.

c) The specific processors were E5-2699 (Haswell), E5-2696 (Ivy),
E5-2689 (Sandy); I don't have motherboard or memory info.  The kernel
on the benchmark machines is 3.11.10.

d)  Only bench-memcpy-large would expose the problem at the largest
sizes.  2.19 did not have bench-memcpy-large.  The current benchmarks
will not reveal the regressions on Ivy and Haswell in the intermediate
size range because they only correspond to the readwritecache case on
the spreadsheet.  That is, they loop over the same src and dst buffers
in the timing loop.

nocache means that both the src and dst buffers go through memory with
strides such that nothing will be cached.
readcache means that the src buffer is fixed, but the dst buffer
strides through memory.
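
(For concreteness, the nocache case might be sketched like this: both
buffers sit in a pool much larger than the last-level cache and the
copy offset strides forward so no line is reused.  Hypothetical code,
not the actual benchmark:)

#include <string.h>

#define POOL_SIZE (1UL << 30)   /* 1 GiB pool, far larger than any LLC */

static void
nocache_copy (char *src_pool, char *dst_pool, size_t n, long iters)
{
  size_t off = 0;
  for (long i = 0; i < iters; i++)
    {
      memcpy (dst_pool + off, src_pool + off, n);
      off += n;              /* stride forward; nothing revisited soon */
      if (off + n > POOL_SIZE)
        off = 0;
    }
}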

To see the difference at the largest sizes with bench-memcpy-large,
you can run it twice: once as-is, and once forcing
__x86_shared_non_temporal_threshold to LONG_MAX so the non-temporal
path is never taken.
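
One way to force that, since 2.24 has no tunable for it, is a one-line
change in init_cacheinfo before rebuilding (illustrative; assumes
LONG_MAX from <limits.h> is in scope):

  /* In sysdeps/x86/cacheinfo.c, after the threshold is computed:
     disable the non-temporal path for the comparison run.  */
  __x86_shared_non_temporal_threshold = LONG_MAX;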

e) Yes, I can do this. It needs to go through approval to share
publicly, will take a few days.

Thanks,
Erich



* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-06  0:57   ` Erich Elsen
@ 2017-05-06 15:41     ` H.J. Lu
  2017-05-09 23:48       ` Erich Elsen
  0 siblings, 1 reply; 31+ messages in thread
From: H.J. Lu @ 2017-05-06 15:41 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

On Fri, May 5, 2017 at 5:57 PM, Erich Elsen <eriche@google.com> wrote:
> Hi Carlos,
>
> a/b) The number of runs is dependent on the time taken; the number of
> iterations was chosen so that each size took at least 500ms in total.
> For many of the smaller sizes this means 10-100 million iterations;
> for the largest size, 64MB, it was ~60.  10 runs were launched
> separately; the difference between the maximum and the minimum average
> was never more than 6% for any size, and all of the regressions are
> larger than this difference (usually much larger).  The times on the
> spreadsheet are from a randomly chosen run - it would be possible to
> use a median or average, but given the large effect size, it didn't
> seem necessary.
>
> b) The machines were idle (background processes only) except for the
> test being run.  Boost was disabled.  The benchmark is single
> threaded.  I did not explicitly pin the process - but given that the
> machine was otherwise idle - it would be surprising if it was
> migrated.  I can add this to see if the results change.
>
> c) The specific processors were E5-2699 (Haswell), E5-2696 (Ivy),
> E5-2689 (Sandy); I don't have motherboard or memory info.  The kernel
> on the benchmark machines is 3.11.10.
>
> d)  Only bench-memcpy-large would expose the problem at the largest
> sizes.  2.19 did not have bench-memcpy-large.  The current benchmarks
> will not reveal the regressions on Ivy and Haswell in the intermediate
> size range because they only correspond to the readwritecache case on
> the spreadsheet.  That is, they loop over the same src and dst buffers
> in the timing loop.
>
> nocache means that both the src and dst buffers go through memory with
> strides such that nothing will be cached.
> readcache means that the src buffer is fixed, but the dst buffer
> strides through memory.
>
> To see the difference at the largest sizes with bench-memcpy-large,
> you can run it twice: once as-is, and once forcing
> __x86_shared_non_temporal_threshold to LONG_MAX so the non-temporal
> path is never taken.

The purpose of using non-temporal stores is to avoid cache pollution
so that the cache is also available to other threads.  We can improve
the heuristic for the non-temporal threshold, but we can't give all of
the cache to a single thread by default.

As for Haswell, there are some cases where the SSSE3 memcpy in
glibc 2.19 is faster than the new AVX memcpy, but the new AVX
memcpy is faster in the majority of cases.  The new AVX memcpy in
glibc 2.24 replaces the old AVX memcpy in glibc 2.23, so there is
no regression from 2.23 to 2.24.

I also checked my glibc performance data.  For data > 32K,
__memcpy_avx_unaligned is slower than __memcpy_avx_unaligned_erms.
We have

/* Threshold to use Enhanced REP MOVSB.  Since there is overhead to set
   up REP MOVSB operation, REP MOVSB isn't faster on short data.  The
   memcpy micro benchmark in glibc shows that 2KB is the approximate
   value above which REP MOVSB becomes faster than SSE2 optimization
   on processors with Enhanced REP MOVSB.  Since larger register size
   can move more data with a single load and store, the threshold is
   higher with larger register size.  */
#ifndef REP_MOVSB_THRESHOLD
# define REP_MOVSB_THRESHOLD (2048 * (VEC_SIZE / 16))
#endif
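
(For the AVX variants VEC_SIZE is 32, so this works out to
2048 * (32 / 16) = 4096 bytes, i.e. REP MOVSB only kicks in above
4 KB rather than the 2 KB used for the SSE2 variants.)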

We can change it if there is improvement in glibc benchmarks.


H.J.



* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-06 15:41     ` H.J. Lu
@ 2017-05-09 23:48       ` Erich Elsen
  2017-05-10 17:33         ` H.J. Lu
  0 siblings, 1 reply; 31+ messages in thread
From: Erich Elsen @ 2017-05-09 23:48 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, GNU C Library

I've created a shareable benchmark, available here:
https://gist.github.com/ekelsen/b66cc085eb39f0495b57679cdb1874fa .
This is not the one that generated the numbers on the spreadsheet,
but the results are similar.

I think glibc 2.19 chooses sse2_unaligned for all the CPUs on the spreadsheet.

You can use this to see the difference on Haswell between
avx_unaligned and avx_unaligned_erms on the readcache and nocache
benchmarks.  It's true that for readwritecache, which corresponds to
the libc benchmarks, avx_unaligned_erms is always at least as fast.

You can also use it to see the regression on IvyBridge from 2.19 to 2.24.

Are there standard benchmarks showing that using the non-temporal
store is a net win even though it causes a 2-3x decrease in
single-threaded performance for some processors?  Or how else is the
decision about the threshold made?

Thanks,
Erich



* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-09 23:48       ` Erich Elsen
@ 2017-05-10 17:33         ` H.J. Lu
  2017-05-11  2:17           ` Carlos O'Donell
  0 siblings, 1 reply; 31+ messages in thread
From: H.J. Lu @ 2017-05-10 17:33 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

On Tue, May 9, 2017 at 4:48 PM, Erich Elsen <eriche@google.com> wrote:
> I've created a shareable benchmark, available here:
> https://gist.github.com/ekelsen/b66cc085eb39f0495b57679cdb1874fa .
> This is not the one the numbers on the spreadsheet are generated from,
> but the results are similar.

I will take a look.

> I think glibc 2.19 chooses sse2_unaligned for all the CPUs on the spreadsheet.
>
> You can use this to see the difference on Haswell between
> avx_unaligned and avx_unaligned_erms on the readcache and nocache
> benchmarks.  It's true that for readwritecache, which corresponds to
> the libc benchmarks, avx_unaligned_erms is always at least as fast.

I created an hjl/x86/optimize branch with memcpy-sse2-unaligned.S
from glibc 2.19 so that we can compare its performance against the
others with the glibc benchmarks.

> You can also use it to see the regression on IvyBridge from 2.19 to 2.24.

That is expected, since memcpy-sse2-unaligned.S doesn't use
non-temporal stores.

> Are there standard benchmarks showing that using the non-temporal

How responsive is your glibc 2.19 machine when your memcpy benchmark
is running?  I would expect the glibc 2.24 machine to be more responsive.

> store is a net win even though it causes a 2-3x decrease in
> single-threaded performance for some processors?  Or how else is the
> decision about the threshold made?

There is no perfect number to make everyone happy.  I am open
to suggestions for improving the compromise.

H.J.



* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-10 17:33         ` H.J. Lu
@ 2017-05-11  2:17           ` Carlos O'Donell
  2017-05-12 19:47             ` Erich Elsen
       [not found]             ` <CAOVZoAPp3_T+ourRkNFXHfCSQUOMFn4iBBm9j50==h=VJcGSzw@mail.gmail.com>
  0 siblings, 2 replies; 31+ messages in thread
From: Carlos O'Donell @ 2017-05-11  2:17 UTC (permalink / raw)
  To: H.J. Lu, Erich Elsen; +Cc: GNU C Library

On 05/10/2017 01:33 PM, H.J. Lu wrote:
> On Tue, May 9, 2017 at 4:48 PM, Erich Elsen <eriche@google.com> wrote:
>> store is a net win even though it causes a 2-3x decrease in
>> single-threaded performance for some processors?  Or how else is the
>> decision about the threshold made?
> 
> There is no perfect number to make everyone happy.  I am open
> to suggestions for improving the compromise.
> 
> H.J.

I agree with H.J., there is a compromise to be made here. Having a single
process thrash the box by taking all of the memory bandwidth might be
sensible for a microservice, but glibc has to default to something that
works well on average.

With the new tunables infrastructure we can start talking about ways in
which a tunable could influence IFUNC selection, though, allowing users
some way to tweak for single-threaded or multi-threaded, single-user or
multi-user workloads, etc.

What I would like to see as the output of any discussion is a set of
microbenchmarks (benchtests/) added to glibc that are the distillation
of whatever workload we're talking about here. This is crucial to the
community having a way to test from release-to-release that we don't
regress performance.

Unless you want to sign up to test your workload at every release, we
need this kind of microbenchmark addition.  And microbenchmarks are
dead-easy to integrate with glibc, so most people should have no excuse.

The hardware vendors and distros who want particular performance tests
are putting such tests in place (representative of their users), and
direct end-users who want particular performance are also adding tests.

-- 
Cheers,
Carlos.


* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-11  2:17           ` Carlos O'Donell
@ 2017-05-12 19:47             ` Erich Elsen
       [not found]             ` <CAOVZoAPp3_T+ourRkNFXHfCSQUOMFn4iBBm9j50==h=VJcGSzw@mail.gmail.com>
  1 sibling, 0 replies; 31+ messages in thread
From: Erich Elsen @ 2017-05-12 19:47 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: H.J. Lu, GNU C Library

HJ - yes, the benchmark still shows the same behavior.  I did have to
modify the build to add -std=c++11.

Carlos - Maybe the first step is to add a tunable that allows for
selection of the non-temporal-store size threshold without changing
the implementation that is selected.  I can work on submitting this
patch.



* Re: memcpy performance regressions 2.19 -> 2.24(5)
       [not found]             ` <CAOVZoAPp3_T+ourRkNFXHfCSQUOMFn4iBBm9j50==h=VJcGSzw@mail.gmail.com>
@ 2017-05-12 20:21               ` H.J. Lu
  2017-05-12 21:21                 ` H.J. Lu
  0 siblings, 1 reply; 31+ messages in thread
From: H.J. Lu @ 2017-05-12 20:21 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

On Fri, May 12, 2017 at 12:43 PM, Erich Elsen <eriche@google.com> wrote:
> HJ - yes, the benchmark still shows the same behavior.  I did have to modify
> the build to add -std=c++11.

I updated hjl/x86/optimize branch with memcpy_benchmark2.cc
to change its output for easy comparison.  Please take a look to see
if it is still valid.

H.J.


* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-12 20:21               ` H.J. Lu
@ 2017-05-12 21:21                 ` H.J. Lu
  2017-05-18 20:59                   ` Erich Elsen
  0 siblings, 1 reply; 31+ messages in thread
From: H.J. Lu @ 2017-05-12 21:21 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

On Fri, May 12, 2017 at 1:21 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Fri, May 12, 2017 at 12:43 PM, Erich Elsen <eriche@google.com> wrote:
>> HJ - yes, the benchmark still shows the same behavior.  I did have to modify
>> the build to add -std=c++11.
>
> I updated hjl/x86/optimize branch with memcpy_benchmark2.cc
> to change its output for easy comparison.  Please take a look to see
> if it is still valid.
>
> H.J.
>> Carlos - Maybe the first step is to add a tunable that allows for selection
>> of the non-temporal-store size threshold without changing the implementation
>> that is selected.  I can work on submitting this patch.

We have

  /* The large memcpy micro benchmark in glibc shows that 6 times of
     shared cache size is the approximate value above which non-temporal
     store becomes faster.  */
  __x86_shared_non_temporal_threshold = __x86_shared_cache_size * 6;

I did the measurement on an 8-core processor.  6 / 8 is 0.75 of the
shared cache.  But on processors with 56 cores, 6 / 56 may be too small.
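
(To put illustrative numbers on that: assuming a 20 MB shared L3 on
the 8-core part, the per-core share is 2.5 MB, so the threshold is
15 MB, i.e. 75% of the cache.  With the same 6x formula on a
hypothetical 56-core part with a 38.5 MB L3, the per-core share is
~0.7 MB and the threshold lands at ~4.1 MB, only ~11% of the cache.)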

H.J.


* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-12 21:21                 ` H.J. Lu
@ 2017-05-18 20:59                   ` Erich Elsen
  2017-05-22 19:17                     ` H.J. Lu
  0 siblings, 1 reply; 31+ messages in thread
From: Erich Elsen @ 2017-05-18 20:59 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, GNU C Library

Hi H.J.,

I was on vacation, sorry for the slow reply.  The updated benchmark
still shows the same behavior, thanks.

I'll try my hand at creating a patch that makes that variable
__x86_shared_non_temporal_threshold a tunable.  It will be necessary
to do internal experiments anyway.

Best,
Erich



* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-18 20:59                   ` Erich Elsen
@ 2017-05-22 19:17                     ` H.J. Lu
  2017-05-22 20:22                       ` H.J. Lu
  2017-05-23  1:23                       ` Erich Elsen
  0 siblings, 2 replies; 31+ messages in thread
From: H.J. Lu @ 2017-05-22 19:17 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

[-- Attachment #1: Type: text/plain, Size: 817 bytes --]

On Thu, May 18, 2017 at 1:59 PM, Erich Elsen <eriche@google.com> wrote:
> Hi H.J.,
>
> I was on vacation, sorry for the slow reply.  The updated benchmark
> still shows the same behavior, thanks.
>
> I'll try my hand at creating a patch that makes that variable
> __x86_shared_non_temporal_threshold a tunable.  It will be necessary
> to do internal experiments anyway.
>

__x86_shared_non_temporal_threshold was set to 6 times the per-core
shared cache size, based on the large memcpy micro benchmark in glibc
on an 8-core processor.  For a processor with more than 8 cores, the
threshold is too low.  Set __x86_shared_non_temporal_threshold to
3/4 of the total shared cache size so that it is unchanged on 8-core
processors.  On processors with fewer than 8 cores, the threshold is
lower.

Any comments?

-- 
H.J.

[-- Attachment #2: 0001-x86-Update-__x86_shared_non_temporal_threshold.patch --]
[-- Type: text/x-patch, Size: 1495 bytes --]

From bfb716e07b77f0ed8e0c2689d5cd01e2c8251fc5 Mon Sep 17 00:00:00 2001
From: "H.J. Lu" <hjl.tools@gmail.com>
Date: Fri, 12 May 2017 13:38:04 -0700
Subject: [PATCH] x86: Update __x86_shared_non_temporal_threshold

__x86_shared_non_temporal_threshold was set to 6 times the per-core
shared cache size, based on the large memcpy micro benchmark in glibc
on an 8-core processor.  For a processor with more than 8 cores, the
threshold is too low.  Set __x86_shared_non_temporal_threshold to
3/4 of the total shared cache size so that it is unchanged on 8-core
processors.  On processors with fewer than 8 cores, the threshold is
lower.

	* sysdeps/x86/cacheinfo.c (__x86_shared_non_temporal_threshold):
	Set to the 3/4 of the total shared cache size.
---
 sysdeps/x86/cacheinfo.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
index 1ccbe41..3434d97 100644
--- a/sysdeps/x86/cacheinfo.c
+++ b/sysdeps/x86/cacheinfo.c
@@ -766,6 +766,8 @@ intel_bug_no_cache_info:
 
   /* The large memcpy micro benchmark in glibc shows that 6 times of
      shared cache size is the approximate value above which non-temporal
-     store becomes faster.  */
-  __x86_shared_non_temporal_threshold = __x86_shared_cache_size * 6;
+     store becomes faster on a 8-core processor.  This is the 3/4 of the
+     total shared cache size.  */
+  __x86_shared_non_temporal_threshold
+    = __x86_shared_cache_size * threads * 3 / 4;
 }
-- 
2.9.4



* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-22 19:17                     ` H.J. Lu
@ 2017-05-22 20:22                       ` H.J. Lu
  2017-05-23  1:23                       ` Erich Elsen
  1 sibling, 0 replies; 31+ messages in thread
From: H.J. Lu @ 2017-05-22 20:22 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

[-- Attachment #1: Type: text/plain, Size: 1023 bytes --]

On Mon, May 22, 2017 at 12:17 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Thu, May 18, 2017 at 1:59 PM, Erich Elsen <eriche@google.com> wrote:
>> Hi H.J.,
>>
>> I was on vacation, sorry for the slow reply.  The updated benchmark
>> still shows the same behavior, thanks.
>>
>> I'll try my hand at creating a patch that makes that variable
>> __x86_shared_non_temporal_threshold a tunable.  It will be necessary
>> to do internal experiments anyway.
>>
>
> __x86_shared_non_temporal_threshold was set to 6 times the per-core
> shared cache size, based on the large memcpy micro benchmark in glibc
> on an 8-core processor.  For a processor with more than 8 cores, the
> threshold is too low.  Set __x86_shared_non_temporal_threshold to
> 3/4 of the total shared cache size so that it is unchanged on 8-core
> processors.  On processors with fewer than 8 cores, the threshold is
> lower.
>
> Any comments?
>

Here is a patch to add support for
"glibc.x86_cache.non_temporal_threshold=number"
to GLIBC_TUNABLES.
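
With the patch applied, the threshold can then be overridden per
process along these lines (the value here is illustrative):

GLIBC_TUNABLES=glibc.x86_cache.non_temporal_threshold=16777216 ./app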


-- 
H.J.

[-- Attachment #2: 0001-Add-x86_cache.non_temporal_threshold-to-GLIBC_TUNABL.patch --]
[-- Type: text/x-patch, Size: 2684 bytes --]

From 3e31bc4a930e7b32924befe762014f85d5408692 Mon Sep 17 00:00:00 2001
From: "H.J. Lu" <hjl.tools@gmail.com>
Date: Mon, 22 May 2017 12:00:43 -0700
Subject: [PATCH] Add x86_cache.non_temporal_threshold to GLIBC_TUNABLES

Add support for "glibc.x86_cache.non_temporal_threshold=number" to
GLIBC_TUNABLES.

	* elf/dl-tunables.list (x86_cache): New name space.
	* sysdeps/x86/cacheinfo.c [HAVE_TUNABLES] (TUNABLE_NAMESPACE):
	New.
	[HAVE_TUNABLES]: Include <elf/dl-tunables.h>.
	[HAVE_TUNABLES] (DL_TUNABLE_CALLBACK (set_non_temporal_threshold)):
	New.
	[HAVE_TUNABLES] (init_cacheinfo): Call TUNABLE_SET_VAL_WITH_CALLBACK
	with set_non_temporal_threshold.
---
 elf/dl-tunables.list    |  6 ++++++
 sysdeps/x86/cacheinfo.c | 22 +++++++++++++++++++---
 2 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/elf/dl-tunables.list b/elf/dl-tunables.list
index b9f1488..2c899fe 100644
--- a/elf/dl-tunables.list
+++ b/elf/dl-tunables.list
@@ -77,4 +77,10 @@ glibc {
       security_level: SXID_IGNORE
     }
   }
+  x86_cache {
+    non_temporal_threshold {
+      type: SIZE_T
+      security_level: SXID_IGNORE
+    }
+  }
 }
diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
index 3434d97..1b195eb 100644
--- a/sysdeps/x86/cacheinfo.c
+++ b/sysdeps/x86/cacheinfo.c
@@ -23,6 +23,20 @@
 #include <cpuid.h>
 #include <init-arch.h>
 
+/* Threshold to use non temporal store.  */
+long int __x86_shared_non_temporal_threshold attribute_hidden;
+
+#if HAVE_TUNABLES
+# define TUNABLE_NAMESPACE x86_cache
+# include <elf/dl-tunables.h>
+
+void
+DL_TUNABLE_CALLBACK (set_non_temporal_threshold) (tunable_val_t *valp)
+{
+  __x86_shared_non_temporal_threshold = (long int) valp->numval;
+}
+#endif
+
 #define is_intel GLRO(dl_x86_cpu_features).kind == arch_kind_intel
 #define is_amd GLRO(dl_x86_cpu_features).kind == arch_kind_amd
 #define max_cpuid GLRO(dl_x86_cpu_features).max_cpuid
@@ -466,9 +480,6 @@ long int __x86_raw_shared_cache_size_half attribute_hidden = 1024 * 1024 / 2;
 /* Similar to __x86_shared_cache_size, but not rounded.  */
 long int __x86_raw_shared_cache_size attribute_hidden = 1024 * 1024;
 
-/* Threshold to use non temporal store.  */
-long int __x86_shared_non_temporal_threshold attribute_hidden;
-
 #ifndef DISABLE_PREFETCHW
 /* PREFETCHW support flag for use in memory and string routines.  */
 int __x86_prefetchw attribute_hidden;
@@ -770,4 +781,9 @@ intel_bug_no_cache_info:
      total shared cache size.  */
   __x86_shared_non_temporal_threshold
     = __x86_shared_cache_size * threads * 3 / 4;
+
+#if HAVE_TUNABLES
+  TUNABLE_SET_VAL_WITH_CALLBACK (non_temporal_threshold, NULL,
+				 set_non_temporal_threshold);
+#endif
 }
-- 
2.9.4



* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-22 19:17                     ` H.J. Lu
  2017-05-22 20:22                       ` H.J. Lu
@ 2017-05-23  1:23                       ` Erich Elsen
  2017-05-23  2:25                         ` H.J. Lu
  1 sibling, 1 reply; 31+ messages in thread
From: Erich Elsen @ 2017-05-23  1:23 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, GNU C Library

I definitely think increasing the threshold on processors with a
large number of cores makes sense.  Hopefully with some testing we
can confirm it is a net win and/or find a more empirical number.

Thanks for that patch with the tunable support.  I've just put a
similar patch in review for sharing right now.  It adds support for
the case where HAVE_TUNABLES isn't defined, like the similar code in
arena.c, and also makes a minor change that turns init_cacheinfo into
an init_cacheinfo_impl (a hidden callable).  init_cacheinfo is now a
constructor that just calls the impl and passes it the cpu_features
struct.  This is useful in that it makes the code a bit more modular
(something we'll need in order to test this internally).
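
In sketch form (names follow this mail; the actual patch may differ,
and glibc's real struct cpu_features and its global are simplified
here):

struct cpu_features { unsigned int family, model; /* ...  */ };
extern const struct cpu_features _dl_x86_cpu_features;

static void
init_cacheinfo_impl (const struct cpu_features *cpu_features)
{
  /* All of the existing cache-detection logic, reading from the
     passed-in cpu_features instead of a global.  */
}

static void __attribute__ ((constructor))
init_cacheinfo (void)
{
  init_cacheinfo_impl (&_dl_x86_cpu_features);
}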



* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-23  1:23                       ` Erich Elsen
@ 2017-05-23  2:25                         ` H.J. Lu
  2017-05-23  3:19                           ` Erich Elsen
  0 siblings, 1 reply; 31+ messages in thread
From: H.J. Lu @ 2017-05-23  2:25 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

On Mon, May 22, 2017 at 6:23 PM, Erich Elsen <eriche@google.com> wrote:
> I definitely think increasing the threshold on processors with a
> large number of cores makes sense.  Hopefully with some testing we
> can confirm it is a net win and/or find a more empirical number.
>
> Thanks for that patch with the tunable support.  I've just put a
> similar patch in review for sharing right now.  It adds support for
> the case where HAVE_TUNABLES isn't defined, like the similar code in
> arena.c, and also makes a minor change that turns init_cacheinfo into
> an init_cacheinfo_impl (a hidden callable).  init_cacheinfo is now a
> constructor that just calls the impl and passes it the cpu_features
> struct.  This is useful in that it makes the code a bit more modular
> (something we'll need in order to test this internally).

This sounds like a good idea.  I'd also like to add tunable support in
init_cpu_features to turn CPU features on and off.  non_temporal_threshold
will be one of them.


-- 
H.J.


* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-23  2:25                         ` H.J. Lu
@ 2017-05-23  3:19                           ` Erich Elsen
  2017-05-23 20:39                             ` Erich Elsen
  2017-05-24 21:36                             ` H.J. Lu
  0 siblings, 2 replies; 31+ messages in thread
From: Erich Elsen @ 2017-05-23  3:19 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, GNU C Library

[-- Attachment #1: Type: text/plain, Size: 1175 bytes --]

Here is the patch that slightly refactors how init_cacheinfo is called.


[-- Attachment #2: 0001-add-tunable-for-non-temporal-store.-slightly-refacto.patch --]
[-- Type: text/x-patch, Size: 7054 bytes --]

From 87b133a3df55e4e444f893a354f01e10e7557ac6 Mon Sep 17 00:00:00 2001
From: Erich Elsen <eriche@google.com>
Date: Mon, 22 May 2017 18:08:58 -0700
Subject: [PATCH 1/2] add tunable for non-temporal store; slightly refactor
 cache info code to allow for the possibility of calling the implementation.

---
 elf/dl-tunables.list    |  7 ++++
 sysdeps/x86/cacheinfo.c | 95 +++++++++++++++++++++++++++++++++++++++----------
 2 files changed, 84 insertions(+), 18 deletions(-)

diff --git a/elf/dl-tunables.list b/elf/dl-tunables.list
index b9f1488798..d19fb0f175 100644
--- a/elf/dl-tunables.list
+++ b/elf/dl-tunables.list
@@ -30,6 +30,13 @@
 # 	     NONE: Read all the time.
 
 glibc {
+  x86_cache {
+    x86_shared_non_temporal_threshold {
+      type: SIZE_T
+      env_alias: SHARED_NON_TEMPORAL_THRESHOLD
+      security_level: SXID_IGNORE
+    }
+  }
   malloc {
     check {
       type: INT_32
diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
index 1ccbe41b8f..2619c5a83c 100644
--- a/sysdeps/x86/cacheinfo.c
+++ b/sysdeps/x86/cacheinfo.c
@@ -23,6 +23,15 @@
 #include <cpuid.h>
 #include <init-arch.h>
 
+#if HAVE_TUNABLES
+# define TUNABLE_NAMESPACE x86_cache
+#else
+# include <string.h>
+extern char **_environ;
+#endif
+#include <elf/dl-tunables.h>
+
+
 #define is_intel GLRO(dl_x86_cpu_features).kind == arch_kind_intel
 #define is_amd GLRO(dl_x86_cpu_features).kind == arch_kind_amd
 #define max_cpuid GLRO(dl_x86_cpu_features).max_cpuid
@@ -128,7 +137,7 @@ intel_02_known_compare (const void *p1, const void *p2)
 static long int
 __attribute__ ((noinline))
 intel_check_word (int name, unsigned int value, bool *has_level_2,
-		  bool *no_level_2_or_3)
+		  bool *no_level_2_or_3, const struct cpu_features *x86_cpu_features)
 {
   if ((value & 0x80000000) != 0)
     /* The register value is reserved.  */
@@ -206,8 +215,8 @@ intel_check_word (int name, unsigned int value, bool *has_level_2,
 	      /* Intel reused this value.  For family 15, model 6 it
 		 specifies the 3rd level cache.  Otherwise the 2nd
 		 level cache.  */
-	      unsigned int family = GLRO(dl_x86_cpu_features).family;
-	      unsigned int model = GLRO(dl_x86_cpu_features).model;
+	      unsigned int family = x86_cpu_features->family;
+	      unsigned int model = x86_cpu_features->model;
 
 	      if (family == 15 && model == 6)
 		{
@@ -257,7 +266,8 @@ intel_check_word (int name, unsigned int value, bool *has_level_2,
 
 
 static long int __attribute__ ((noinline))
-handle_intel (int name, unsigned int maxidx)
+handle_intel (int name, unsigned int maxidx,
+              const struct cpu_features *x86_cpu_features)
 {
   /* Return -1 for older CPUs.  */
   if (maxidx < 2)
@@ -289,19 +299,23 @@ handle_intel (int name, unsigned int maxidx)
 	}
 
       /* Process the individual registers' value.  */
-      result = intel_check_word (name, eax, &has_level_2, &no_level_2_or_3);
+      result = intel_check_word (name, eax, &has_level_2, &no_level_2_or_3,
+                                 x86_cpu_features);
       if (result != 0)
 	return result;
 
-      result = intel_check_word (name, ebx, &has_level_2, &no_level_2_or_3);
+      result = intel_check_word (name, ebx, &has_level_2, &no_level_2_or_3,
+                                 x86_cpu_features);
       if (result != 0)
 	return result;
 
-      result = intel_check_word (name, ecx, &has_level_2, &no_level_2_or_3);
+      result = intel_check_word (name, ecx, &has_level_2, &no_level_2_or_3,
+                                 x86_cpu_features);
       if (result != 0)
 	return result;
 
-      result = intel_check_word (name, edx, &has_level_2, &no_level_2_or_3);
+      result = intel_check_word (name, edx, &has_level_2, &no_level_2_or_3,
+                                 x86_cpu_features);
       if (result != 0)
 	return result;
     }
@@ -437,7 +451,7 @@ attribute_hidden
 __cache_sysconf (int name)
 {
   if (is_intel)
-    return handle_intel (name, max_cpuid);
+    return handle_intel (name, max_cpuid, &GLRO(dl_x86_cpu_features));
 
   if (is_amd)
     return handle_amd (name);
@@ -475,9 +489,9 @@ int __x86_prefetchw attribute_hidden;
 #endif
 
 
-static void
-__attribute__((constructor))
-init_cacheinfo (void)
+void
+attribute_hidden
+__init_cacheinfo_impl (const struct cpu_features *x86_cpu_features)
 {
   /* Find out what brand of processor.  */
   unsigned int eax;
@@ -492,14 +506,17 @@ init_cacheinfo (void)
 
   if (is_intel)
     {
-      data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, max_cpuid);
+      data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, max_cpuid,
+                           x86_cpu_features);
 
-      long int core = handle_intel (_SC_LEVEL2_CACHE_SIZE, max_cpuid);
+      long int core = handle_intel (_SC_LEVEL2_CACHE_SIZE, max_cpuid,
+                                    x86_cpu_features);
       bool inclusive_cache = true;
 
       /* Try L3 first.  */
       level  = 3;
-      shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, max_cpuid);
+      shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, max_cpuid,
+                             x86_cpu_features);
 
       /* Number of logical processors sharing L2 cache.  */
       int threads_l2;
@@ -529,8 +546,8 @@ init_cacheinfo (void)
 	     highest cache level.  */
 	  if (max_cpuid >= 4)
 	    {
-	      unsigned int family = GLRO(dl_x86_cpu_features).family;
-	      unsigned int model = GLRO(dl_x86_cpu_features).model;
+	      unsigned int family = x86_cpu_features->family;
+	      unsigned int model = x86_cpu_features->model;
 
 	      int i = 0;
 
@@ -673,7 +690,7 @@ intel_bug_no_cache_info:
 		 level.  */
 
 	      threads
-		= ((GLRO(dl_x86_cpu_features).cpuid[COMMON_CPUID_INDEX_1].ebx
+		= ((x86_cpu_features->cpuid[COMMON_CPUID_INDEX_1].ebx
 		    >> 16) & 0xff);
 	    }
 
@@ -768,4 +785,46 @@ intel_bug_no_cache_info:
      shared cache size is the approximate value above which non-temporal
      store becomes faster.  */
   __x86_shared_non_temporal_threshold = __x86_shared_cache_size * 6;
+
+#if HAVE_TUNABLES
+  TUNABLE_SET_VAL(x86_shared_non_temporal_threshold,
+                  &__x86_shared_non_temporal_threshold);
+#else
+  if (__glibc_likely (_environ != NULL))
+    {
+      char **runp = _environ;
+      char *envline;
+
+      while (*runp != NULL)
+        {
+          envline = *runp;
+          runp++;
+          size_t len = strcspn (envline, "=");
+
+          if (envline[len] != '=')
+            continue;
+
+          switch (len)
+            {
+            case 29:
+              if (!__builtin_expect (__libc_enable_secure, 0))
+                {
+                  if (memcmp (envline,
+                              "SHARED_NON_TEMPORAL_THRESHOLD", 29) == 0)
+                    __x86_shared_non_temporal_threshold = atoi (&envline[30]);
+                }
+              break;
+            default:
+              break;
+            }
+        }
+    }
+#endif
+}
+
+static void
+__attribute__((constructor))
+init_cacheinfo (void)
+{
+  __init_cacheinfo_impl (&GLRO(dl_x86_cpu_features));
 }
-- 
2.13.0.219.gdb65acc882-goog
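
A tunable like this is easy to exercise from outside glibc: run a
large-copy benchmark once with the defaults and once with the threshold
forced very high (e.g. SHARED_NON_TEMPORAL_THRESHOLD=1073741824) so the
stores stay temporal.  A minimal standalone harness along those lines --
not part of the patch, and with an arbitrary size and iteration count --
might look like:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int
main (void)
{
  size_t n = 64UL << 20;	/* 64 MiB, above typical thresholds.  */
  char *src = malloc (n);
  char *dst = malloc (n);
  if (src == NULL || dst == NULL)
    return 1;
  memset (src, 1, n);

  struct timespec t0, t1;
  clock_gettime (CLOCK_MONOTONIC, &t0);
  for (int i = 0; i < 16; i++)
    memcpy (dst, src, n);
  clock_gettime (CLOCK_MONOTONIC, &t1);

  double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
  printf ("%.2f GB/s\n", 16 * (double) n / secs / 1e9);
  return 0;
}

Comparing the two runs should make the large-size non-temporal behavior
discussed earlier in the thread visible.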


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-23  3:19                           ` Erich Elsen
@ 2017-05-23 20:39                             ` Erich Elsen
  2017-05-23 20:46                               ` H.J. Lu
  2017-05-24 21:36                             ` H.J. Lu
  1 sibling, 1 reply; 31+ messages in thread
From: Erich Elsen @ 2017-05-23 20:39 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, GNU C Library

[-- Attachment #1: Type: text/plain, Size: 1657 bytes --]

I was also thinking that it might be nice to have a TUNABLE that sets
the implementation of memcpy directly.  It would be easier to do this
if memcpy.S was memcpy.c.  Attached is a patch that does the
conversion but doesn't add the tunables.  How would you feel about
this?  It has no runtime impact, probably increases the size slightly,
and makes the code easier to read / modify.

On Mon, May 22, 2017 at 8:19 PM, Erich Elsen <eriche@google.com> wrote:
> Here is the patch that slightly refactors how init_cacheinfo is called.
>
> On Mon, May 22, 2017 at 7:24 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>> On Mon, May 22, 2017 at 6:23 PM, Erich Elsen <eriche@google.com> wrote:
>>> I definitely think increasing the size in the case of processors with
>>> a large number of cores makes sense.  Hopefully with some testing we
>>> can confirm it is a net win and/or find a more empirical number.
>>>
>>> Thanks for that patch with the tunable support.  I've just put a
>>> similar patch in review for sharing right now.  It adds support in the
>>> case that HAVE_TUNABLES isn't defined like the similar code in arena.c
>>>  and also makes a minor change that turns init_cacheinfo into a
>>> init_cacheinfo_impl (a hidden callable).  init_cacheinfo is now a
>>> constructor that just calls the impl and passes the cpu_features
>>> struct.  This is useful in that it makes the code a bit more modular
>>> (something that we'll need to be able to test this internally).
>>
>> This sounds a good idea.  I'd also like to add tunable support in
>> init_cpu_features to turn on/off CPU features.   non_temporal_threshold
>> will be one of them.
>>
>>
>> --
>> H.J.

[-- Attachment #2: 0001-add-memcpy.c.patch --]
[-- Type: text/x-patch, Size: 3213 bytes --]

From a2957f5a0b21f9588e8756228b11b86f886b0f4c Mon Sep 17 00:00:00 2001
From: Erich Elsen <eriche@google.com>
Date: Tue, 23 May 2017 12:29:24 -0700
Subject: [PATCH] add memcpy.c

---
 sysdeps/x86_64/multiarch/memcpy.c | 66 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)
 create mode 100644 sysdeps/x86_64/multiarch/memcpy.c

diff --git a/sysdeps/x86_64/multiarch/memcpy.c b/sysdeps/x86_64/multiarch/memcpy.c
new file mode 100644
index 0000000000..b0ff8c71fd
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/memcpy.c
@@ -0,0 +1,66 @@
+#include "cpu-features.h"
+#include "init-arch.h"
+#include "shlib-compat.h"
+#include <stdlib.h>
+
+typedef void * (*memcpy_fn)(void *, const void *, size_t);
+
+extern void * __memcpy_erms(void *dest, const void *src, size_t n);
+extern void * __memcpy_sse2_unaligned(void *dest, const void *src, size_t n);
+extern void * __memcpy_sse2_unaligned_erms(void *dest, const void *src, size_t n);
+extern void * __memcpy_ssse3(void *dest, const void *src, size_t n);
+extern void * __memcpy_ssse3_back(void *dest, const void *src, size_t n);
+extern void * __memcpy_avx_unaligned(void *dest, const void *src, size_t n);
+extern void * __memcpy_avx_unaligned_erms(void *dest, const void *src, size_t n);
+extern void * __memcpy_avx512_unaligned(void *dest, const void *src, size_t n);
+extern void * __memcpy_avx512_unaligned_erms(void *dest, const void *src, size_t n);
+
+/* Defined in cacheinfo.c */
+extern long int __x86_shared_cache_size attribute_hidden;
+extern long int __x86_shared_cache_size_half attribute_hidden;
+extern long int __x86_data_cache_size attribute_hidden;
+extern long int __x86_data_cache_size_half attribute_hidden;
+extern long int __x86_shared_non_temporal_threshold attribute_hidden;
+
+static memcpy_fn select_memcpy_impl(void) {
+  const struct cpu_features* cpu_features_struct_p = __get_cpu_features ();
+
+  if (CPU_FEATURES_ARCH_P(cpu_features_struct_p, Prefer_ERMS)) {
+    return __memcpy_erms;
+  }
+
+  if (CPU_FEATURES_ARCH_P(cpu_features_struct_p, AVX512F_Usable)) {
+    if (CPU_FEATURES_ARCH_P(cpu_features_struct_p, Prefer_No_VZEROUPPER))
+      return __memcpy_avx512_unaligned_erms;
+    return __memcpy_avx512_unaligned;
+  }
+
+  if (CPU_FEATURES_ARCH_P(cpu_features_struct_p, AVX_Fast_Unaligned_Load)) {
+    if (CPU_FEATURES_CPU_P(cpu_features_struct_p, ERMS)) {
+      return __memcpy_avx_unaligned_erms;
+    }
+    return __memcpy_avx_unaligned;
+  }
+  else {
+    if (CPU_FEATURES_ARCH_P(cpu_features_struct_p, Fast_Unaligned_Copy)) {
+      if (CPU_FEATURES_CPU_P(cpu_features_struct_p, ERMS)) {
+        return __memcpy_sse2_unaligned_erms;
+      }
+      return __memcpy_sse2_unaligned;
+    }
+    else {
+      if (!CPU_FEATURES_CPU_P(cpu_features_struct_p, SSSE3)) {
+        return __memcpy_sse2_unaligned;
+      }
+      if (CPU_FEATURES_ARCH_P(cpu_features_struct_p, Fast_Copy_Backward)) {
+        return __memcpy_ssse3_back;
+      }
+      return __memcpy_ssse3;
+    }
+  }
+}
+
+void *__new_memcpy(void *dest, const void *src, size_t n)
+  __attribute__ ((ifunc ("select_memcpy_impl")));
+
+versioned_symbol(libc, __new_memcpy, memcpy, GLIBC_2_14);
-- 
2.13.0.219.gdb65acc882-goog
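
For readers unfamiliar with the mechanism the attached patch relies on:
the GCC ifunc attribute names a resolver that the dynamic linker calls
once, during relocation, and whose return value becomes the real target
of the symbol.  A self-contained toy (all names invented; this is not
from the patch, and it needs a reasonably recent GCC on an ELF target):

#include <stdio.h>

static int impl_avx (void) { return 1; }
static int impl_generic (void) { return 2; }

/* The resolver runs before main; __builtin_cpu_init must be called
   before __builtin_cpu_supports can be used there.  */
static int (*resolve_which (void)) (void)
{
  __builtin_cpu_init ();
  return __builtin_cpu_supports ("avx") ? impl_avx : impl_generic;
}

int which (void) __attribute__ ((ifunc ("resolve_which")));

int
main (void)
{
  printf ("selected: %d\n", which ());
  return 0;
}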


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-23 20:39                             ` Erich Elsen
@ 2017-05-23 20:46                               ` H.J. Lu
  2017-05-23 20:57                                 ` Erich Elsen
  0 siblings, 1 reply; 31+ messages in thread
From: H.J. Lu @ 2017-05-23 20:46 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

On Tue, May 23, 2017 at 1:39 PM, Erich Elsen <eriche@google.com> wrote:
> I was also thinking that it might be nice to have a TUNABLE that sets
> the implementation of memcpy directly.  It would be easier to do this
> if memcpy.S was memcpy.c.  Attached is a patch that does the
> conversion but doesn't add the tunables.  How would you feel about
> this?  It has no runtime impact, probably increases the size slightly,
> and makes the code easier to read / modify.
>

It depends on how far you want to go.  We can add TUNABLE support
to each IFUNC implementation or we can add TUNABLE support to
cpu_features to update processor features.  I prefer the latter.


-- 
H.J.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-23 20:46                               ` H.J. Lu
@ 2017-05-23 20:57                                 ` Erich Elsen
  2017-05-23 22:08                                   ` H.J. Lu
  0 siblings, 1 reply; 31+ messages in thread
From: Erich Elsen @ 2017-05-23 20:57 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, GNU C Library

Maybe there's room for both?

Setting the cpu_features would affect everything; it would be useful
to be able to target only specific (and very important) routines.
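
One hypothetical shape for such a per-routine knob, reduced to a toy
(every name here is invented, and a real version would go through the
tunables framework rather than getenv, which is not safe inside an
IFUNC resolver):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef void *(*memcpy_fn) (void *, const void *, size_t);

/* Stand-ins for e.g. __memcpy_avx_unaligned and __memcpy_erms.  */
static void *copy_variant_a (void *d, const void *s, size_t n)
{ return memcpy (d, s, n); }
static void *copy_variant_b (void *d, const void *s, size_t n)
{ return memcpy (d, s, n); }

static memcpy_fn chosen;

/* Lazy selection instead of an IFUNC resolver, so the environment is
   guaranteed to be set up by the time it is read.  */
static void *
toy_memcpy (void *d, const void *s, size_t n)
{
  if (chosen == NULL)
    {
      const char *p = getenv ("TOY_MEMCPY_IMPL");  /* Invented name.  */
      chosen = (p != NULL && strcmp (p, "b") == 0)
	       ? copy_variant_b : copy_variant_a;
    }
  return chosen (d, s, n);
}

int
main (void)
{
  char buf[4];
  toy_memcpy (buf, "abc", 4);
  printf ("%s\n", buf);
  return 0;
}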

On Tue, May 23, 2017 at 1:46 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Tue, May 23, 2017 at 1:39 PM, Erich Elsen <eriche@google.com> wrote:
>> I was also thinking that it might be nice to have a TUNABLE that sets
>> the implementation of memcpy directly.  It would be easier to do this
>> if memcpy.S was memcpy.c.  Attached is a patch that does the
>> conversion but doesn't add the tunables.  How would you feel about
>> this?  It has no runtime impact, probably increases the size slightly,
>> and makes the code easier to read / modify.
>>
>
> It depends on how far you want to go.  We can add TUNABLE support
> to each IFUNC implementation or we can add TUNABLE support to
> cpu_features to update processor features.  I prefer latter.
>
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-23 20:57                                 ` Erich Elsen
@ 2017-05-23 22:08                                   ` H.J. Lu
  2017-05-23 22:12                                     ` Erich Elsen
  0 siblings, 1 reply; 31+ messages in thread
From: H.J. Lu @ 2017-05-23 22:08 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

On Tue, May 23, 2017 at 1:57 PM, Erich Elsen <eriche@google.com> wrote:
> Maybe there's room for both?
>
> Setting the cpu_features would affect everything; it would be useful
> to be able to target only specific (and very important) routines.

I prefer to do the cpu_features approach first.  If it turns out not to
be sufficient, we can then do the IFUNC implementation.

> On Tue, May 23, 2017 at 1:46 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>> On Tue, May 23, 2017 at 1:39 PM, Erich Elsen <eriche@google.com> wrote:
>>> I was also thinking that it might be nice to have a TUNABLE that sets
>>> the implementation of memcpy directly.  It would be easier to do this
>>> if memcpy.S was memcpy.c.  Attached is a patch that does the
>>> conversion but doesn't add the tunables.  How would you feel about
>>> this?  It has no runtime impact, probably increases the size slightly,
>>> and makes the code easier to read / modify.
>>>
>>
>> It depends on how far you want to go.  We can add TUNABLE support
>> to each IFUNC implementation or we can add TUNABLE support to
>> cpu_features to update processor features.  I prefer latter.
>>
>>
>> --
>> H.J.



-- 
H.J.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-23 22:08                                   ` H.J. Lu
@ 2017-05-23 22:12                                     ` Erich Elsen
  2017-05-23 22:55                                       ` H.J. Lu
  0 siblings, 1 reply; 31+ messages in thread
From: Erich Elsen @ 2017-05-23 22:12 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, GNU C Library

Sounds good to me.  Even if tunables aren't added, does memcpy.S ->
memcpy.c seem reasonable?

On Tue, May 23, 2017 at 3:07 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Tue, May 23, 2017 at 1:57 PM, Erich Elsen <eriche@google.com> wrote:
>> Maybe there's room for both?
>>
>> Setting the cpu_features would affect everything; it would be useful
>> to be able to target only specific (and very important) routines.
>
> I prefer to do the cpu_features first.  If it turns out not
> sufficient, we then do
> the IFUNC implementation.
>
>> On Tue, May 23, 2017 at 1:46 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>>> On Tue, May 23, 2017 at 1:39 PM, Erich Elsen <eriche@google.com> wrote:
>>>> I was also thinking that it might be nice to have a TUNABLE that sets
>>>> the implementation of memcpy directly.  It would be easier to do this
>>>> if memcpy.S was memcpy.c.  Attached is a patch that does the
>>>> conversion but doesn't add the tunables.  How would you feel about
>>>> this?  It has no runtime impact, probably increases the size slightly,
>>>> and makes the code easier to read / modify.
>>>>
>>>
>>> It depends on how far you want to go.  We can add TUNABLE support
>>> to each IFUNC implementation or we can add TUNABLE support to
>>> cpu_features to update processor features.  I prefer latter.
>>>
>>>
>>> --
>>> H.J.
>
>
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-23 22:12                                     ` Erich Elsen
@ 2017-05-23 22:55                                       ` H.J. Lu
  2017-05-24  0:56                                         ` Erich Elsen
  0 siblings, 1 reply; 31+ messages in thread
From: H.J. Lu @ 2017-05-23 22:55 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

On Tue, May 23, 2017 at 3:12 PM, Erich Elsen <eriche@google.com> wrote:
> Sounds good to me.  Even if tunables aren't added, does memcpy.S ->
> memcpy.c seem reasonable?

I prefer not to do it for now.  We can revisit it later after tunable
support is added to cpu_features.

BTW,  REP MOV is expected to have lower bandwidth on multi-socket
systems, but has the benefit of lower cache disruption throughout the
cache hierarchy.  This is a trade-off between overall system throughput
and single program performance.
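
For anyone who wants to measure that trade-off directly, the two copy
strategies are easy to isolate outside glibc.  A sketch (x86-64 GCC
only; this is for experiments, not glibc's actual implementation --
copy_nt assumes 16-byte-aligned buffers and a size that is a multiple
of 16):

#include <emmintrin.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* REP MOVSB, the instruction behind the *_erms variants.  */
static void *
copy_rep_movsb (void *dst, const void *src, size_t n)
{
  void *ret = dst;
  __asm__ __volatile__ ("rep movsb"
			: "+D" (dst), "+S" (src), "+c" (n)
			: : "memory");
  return ret;
}

/* Non-temporal stores, as used above the threshold.  */
static void *
copy_nt (void *dst, const void *src, size_t n)
{
  __m128i *d = dst;
  const __m128i *s = src;
  for (size_t i = 0; i < n / 16; i++)
    _mm_stream_si128 (d + i, _mm_loadu_si128 (s + i));
  _mm_sfence ();
  return dst;
}

int
main (void)
{
  size_t n = 1UL << 20;
  char *a = aligned_alloc (16, n);
  char *b = aligned_alloc (16, n);
  if (a == NULL || b == NULL)
    return 1;
  memset (a, 7, n);
  copy_rep_movsb (b, a, n);
  copy_nt (b, a, n);
  printf ("%d\n", b[0]);
  return 0;
}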


> On Tue, May 23, 2017 at 3:07 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>> On Tue, May 23, 2017 at 1:57 PM, Erich Elsen <eriche@google.com> wrote:
>>> Maybe there's room for both?
>>>
>>> Setting the cpu_features would affect everything; it would be useful
>>> to be able to target only specific (and very important) routines.
>>
>> I prefer to do the cpu_features first.  If it turns out not
>> sufficient, we then do
>> the IFUNC implementation.
>>
>>> On Tue, May 23, 2017 at 1:46 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>>>> On Tue, May 23, 2017 at 1:39 PM, Erich Elsen <eriche@google.com> wrote:
>>>>> I was also thinking that it might be nice to have a TUNABLE that sets
>>>>> the implementation of memcpy directly.  It would be easier to do this
>>>>> if memcpy.S was memcpy.c.  Attached is a patch that does the
>>>>> conversion but doesn't add the tunables.  How would you feel about
>>>>> this?  It has no runtime impact, probably increases the size slightly,
>>>>> and makes the code easier to read / modify.
>>>>>
>>>>
>>>> It depends on how far you want to go.  We can add TUNABLE support
>>>> to each IFUNC implementation or we can add TUNABLE support to
>>>> cpu_features to update processor features.  I prefer latter.
>>>>
>>>>
>>>> --
>>>> H.J.
>>
>>
>>
>> --
>> H.J.



-- 
H.J.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-23 22:55                                       ` H.J. Lu
@ 2017-05-24  0:56                                         ` Erich Elsen
  2017-05-24  3:42                                           ` H.J. Lu
  0 siblings, 1 reply; 31+ messages in thread
From: Erich Elsen @ 2017-05-24  0:56 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, GNU C Library

Ok.  Do you have any specific concerns?  Switching to memcpy.c would
make it easier for us to do the testing internally.

Interesting, thanks for the info.  More reason for being able to
select the implementation!

On Tue, May 23, 2017 at 3:55 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Tue, May 23, 2017 at 3:12 PM, Erich Elsen <eriche@google.com> wrote:
>> Sounds good to me.  Even if tunables aren't added, does memcpy.S ->
>> memcpy.c seem reasonable?
>
> I prefer not to do it for now.  We can revisit it later after tunable is added
> to cpu_features.
>
> BTW,  REP MOV is expected to have lower bandwidth on multi-socket
> systems, but has the benefit of lower cache disruption throughout the
> cache hierarchy.   This is trade off of between overall system throughput
> and single program performance.
>
>
>> On Tue, May 23, 2017 at 3:07 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>>> On Tue, May 23, 2017 at 1:57 PM, Erich Elsen <eriche@google.com> wrote:
>>>> Maybe there's room for both?
>>>>
>>>> Setting the cpu_features would affect everything; it would be useful
>>>> to be able to target only specific (and very important) routines.
>>>
>>> I prefer to do the cpu_features first.  If it turns out not
>>> sufficient, we then do
>>> the IFUNC implementation.
>>>
>>>> On Tue, May 23, 2017 at 1:46 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>>>>> On Tue, May 23, 2017 at 1:39 PM, Erich Elsen <eriche@google.com> wrote:
>>>>>> I was also thinking that it might be nice to have a TUNABLE that sets
>>>>>> the implementation of memcpy directly.  It would be easier to do this
>>>>>> if memcpy.S was memcpy.c.  Attached is a patch that does the
>>>>>> conversion but doesn't add the tunables.  How would you feel about
>>>>>> this?  It has no runtime impact, probably increases the size slightly,
>>>>>> and makes the code easier to read / modify.
>>>>>>
>>>>>
>>>>> It depends on how far you want to go.  We can add TUNABLE support
>>>>> to each IFUNC implementation or we can add TUNABLE support to
>>>>> cpu_features to update processor features.  I prefer latter.
>>>>>
>>>>>
>>>>> --
>>>>> H.J.
>>>
>>>
>>>
>>> --
>>> H.J.
>
>
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-24  0:56                                         ` Erich Elsen
@ 2017-05-24  3:42                                           ` H.J. Lu
  2017-05-24 21:03                                             ` Erich Elsen
  0 siblings, 1 reply; 31+ messages in thread
From: H.J. Lu @ 2017-05-24  3:42 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

On Tue, May 23, 2017 at 5:56 PM, Erich Elsen <eriche@google.com> wrote:
> Ok.  Do you have any specific concerns?  It would help make it easier
> for us to do the testing internally to switch to memcpy.c.

We use libc_ifunc to implement IFUNC, like x86_64/multiarch/strstr.c.  It
may be a good idea to switch to a different format and require all IFUNCs
for x86-64 to be in C if compilers with the IFUNC attribute are required
to build glibc.  But this is independent of tunables.
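
The macro style is easy to picture with a self-contained toy that
imitates the pattern (the real libc_ifunc in include/libc-symbols.h
differs in detail; every name below is invented):

#include <stdio.h>

/* The macro emits the resolver and attaches the ifunc attribute.  */
#define toy_ifunc(name, expr)					\
  static __typeof (name) *name##_resolver (void)		\
  { return (expr); }						\
  __typeof (name) name __attribute__ ((ifunc (#name "_resolver")))

static int add_fast (int a, int b) { return a + b; }
static int add_slow (int a, int b) { return a + b; }

int add (int a, int b);
toy_ifunc (add, 1 ? add_fast : add_slow); /* Real code tests CPU flags.  */

int
main (void)
{
  printf ("%d\n", add (2, 3));
  return 0;
}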

> Interesting, thanks for the info.  More reason for being able to
> select the implementation!
> On Tue, May 23, 2017 at 3:55 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>> On Tue, May 23, 2017 at 3:12 PM, Erich Elsen <eriche@google.com> wrote:
>>> Sounds good to me.  Even if tunables aren't added, does memcpy.S ->
>>> memcpy.c seem reasonable?
>>
>> I prefer not to do it for now.  We can revisit it later after tunable is added
>> to cpu_features.
>>
>> BTW,  REP MOV is expected to have lower bandwidth on multi-socket
>> systems, but has the benefit of lower cache disruption throughout the
>> cache hierarchy.   This is trade off of between overall system throughput
>> and single program performance.
>>
>>
>>> On Tue, May 23, 2017 at 3:07 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>>>> On Tue, May 23, 2017 at 1:57 PM, Erich Elsen <eriche@google.com> wrote:
>>>>> Maybe there's room for both?
>>>>>
>>>>> Setting the cpu_features would affect everything; it would be useful
>>>>> to be able to target only specific (and very important) routines.
>>>>
>>>> I prefer to do the cpu_features first.  If it turns out not
>>>> sufficient, we then do
>>>> the IFUNC implementation.
>>>>
>>>>> On Tue, May 23, 2017 at 1:46 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>>>>>> On Tue, May 23, 2017 at 1:39 PM, Erich Elsen <eriche@google.com> wrote:
>>>>>>> I was also thinking that it might be nice to have a TUNABLE that sets
>>>>>>> the implementation of memcpy directly.  It would be easier to do this
>>>>>>> if memcpy.S was memcpy.c.  Attached is a patch that does the
>>>>>>> conversion but doesn't add the tunables.  How would you feel about
>>>>>>> this?  It has no runtime impact, probably increases the size slightly,
>>>>>>> and makes the code easier to read / modify.
>>>>>>>
>>>>>>
>>>>>> It depends on how far you want to go.  We can add TUNABLE support
>>>>>> to each IFUNC implementation or we can add TUNABLE support to
>>>>>> cpu_features to update processor features.  I prefer latter.
>>>>>>
>>>>>>
>>>>>> --
>>>>>> H.J.
>>>>
>>>>
>>>>
>>>> --
>>>> H.J.
>>
>>
>>
>> --
>> H.J.



-- 
H.J.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-24  3:42                                           ` H.J. Lu
@ 2017-05-24 21:03                                             ` Erich Elsen
  0 siblings, 0 replies; 31+ messages in thread
From: Erich Elsen @ 2017-05-24 21:03 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, GNU C Library

Sorry, yes I meant independent of the tunables discussion.  Thanks for
pointing that macro out; I hadn't realized it existed, but it makes sense
for supporting older compilers that didn't have the IFUNC attribute.

I see you added the original ifunc implementation back in 2009!
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40528

It seems like GCC 4.7 is needed to build now, so it should be OK to
switch?  I'm happy to volunteer to do the conversions for the x86_64
routines if you think it makes sense.



On Tue, May 23, 2017 at 8:42 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Tue, May 23, 2017 at 5:56 PM, Erich Elsen <eriche@google.com> wrote:
>> Ok.  Do you have any specific concerns?  It would help make it easier
>> for us to do the testing internally to switch to memcpy.c.
>
> We use libc_ifunc to implement IFUNC, like x86_64/multiarch/strstr.c. It may be
> a good idea to switch to a different format and require all IFUNCs in
> C for x86-64
> if compilers with IFUNC attribute are required to build glibc. But this is
> independent to tunables.
>
>> Interesting, thanks for the info.  More reason for being able to
>> select the implementation!
>> On Tue, May 23, 2017 at 3:55 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>>> On Tue, May 23, 2017 at 3:12 PM, Erich Elsen <eriche@google.com> wrote:
>>>> Sounds good to me.  Even if tunables aren't added, does memcpy.S ->
>>>> memcpy.c seem reasonable?
>>>
>>> I prefer not to do it for now.  We can revisit it later after tunable is added
>>> to cpu_features.
>>>
>>> BTW,  REP MOV is expected to have lower bandwidth on multi-socket
>>> systems, but has the benefit of lower cache disruption throughout the
>>> cache hierarchy.   This is trade off of between overall system throughput
>>> and single program performance.
>>>
>>>
>>>> On Tue, May 23, 2017 at 3:07 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>>>>> On Tue, May 23, 2017 at 1:57 PM, Erich Elsen <eriche@google.com> wrote:
>>>>>> Maybe there's room for both?
>>>>>>
>>>>>> Setting the cpu_features would affect everything; it would be useful
>>>>>> to be able to target only specific (and very important) routines.
>>>>>
>>>>> I prefer to do the cpu_features first.  If it turns out not
>>>>> sufficient, we then do
>>>>> the IFUNC implementation.
>>>>>
>>>>>> On Tue, May 23, 2017 at 1:46 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>>>>>>> On Tue, May 23, 2017 at 1:39 PM, Erich Elsen <eriche@google.com> wrote:
>>>>>>>> I was also thinking that it might be nice to have a TUNABLE that sets
>>>>>>>> the implementation of memcpy directly.  It would be easier to do this
>>>>>>>> if memcpy.S was memcpy.c.  Attached is a patch that does the
>>>>>>>> conversion but doesn't add the tunables.  How would you feel about
>>>>>>>> this?  It has no runtime impact, probably increases the size slightly,
>>>>>>>> and makes the code easier to read / modify.
>>>>>>>>
>>>>>>>
>>>>>>> It depends on how far you want to go.  We can add TUNABLE support
>>>>>>> to each IFUNC implementation or we can add TUNABLE support to
>>>>>>> cpu_features to update processor features.  I prefer latter.
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> H.J.
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> H.J.
>>>
>>>
>>>
>>> --
>>> H.J.
>
>
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-23  3:19                           ` Erich Elsen
  2017-05-23 20:39                             ` Erich Elsen
@ 2017-05-24 21:36                             ` H.J. Lu
  2017-05-25 21:23                               ` Erich Elsen
  1 sibling, 1 reply; 31+ messages in thread
From: H.J. Lu @ 2017-05-24 21:36 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

On Mon, May 22, 2017 at 8:19 PM, Erich Elsen <eriche@google.com> wrote:
> Here is the patch that slightly refactors how init_cacheinfo is called.
>

Please take a look at hjl/tunables/master branch.  You can add
non_temporal_threshold support on top of it.


-- 
H.J.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-24 21:36                             ` H.J. Lu
@ 2017-05-25 21:23                               ` Erich Elsen
  2017-05-25 21:57                                 ` Erich Elsen
  0 siblings, 1 reply; 31+ messages in thread
From: Erich Elsen @ 2017-05-25 21:23 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, GNU C Library

Ok, will do.

On Wed, May 24, 2017 at 2:36 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Mon, May 22, 2017 at 8:19 PM, Erich Elsen <eriche@google.com> wrote:
>> Here is the patch that slightly refactors how init_cacheinfo is called.
>>
>
> Please take a look at hjl/tunables/master branch.  You can add
> non_temporal_threshold support on top of it.
>
>
> --
> H.J.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-25 21:23                               ` Erich Elsen
@ 2017-05-25 21:57                                 ` Erich Elsen
  2017-05-25 22:03                                   ` H.J. Lu
  0 siblings, 1 reply; 31+ messages in thread
From: Erich Elsen @ 2017-05-25 21:57 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, GNU C Library

[-- Attachment #1: Type: text/plain, Size: 736 bytes --]

It looks like you already added the non_temporal_threshold as part of
the cpu_features tunables?  Here's a small patch that allows the
cpu_features struct to be passed in.  This is useful if you need to be
able to call init_cacheinfo with cpu_features other than the global
ones.
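
The testing angle is easy to see in miniature: once the initializer
takes the feature struct as a parameter, a test can feed it synthetic
family/model values regardless of the host CPU.  A toy model of the
idea (all names and sizes invented; inside glibc the struct is
cpu_features and the initializer is the one in the patch below):

#include <stdio.h>

struct toy_features { unsigned int family, model; };

static long toy_shared_cache_size;

/* Parameterized initializer: no hidden read of a global.  */
static void
toy_init_cacheinfo_impl (const struct toy_features *f)
{
  /* Pretend the family/model pair drives the size choice.  */
  toy_shared_cache_size = (f->family == 6 && f->model == 0x3e)
			  ? 25L << 20 : 8L << 20;
}

int
main (void)
{
  /* Synthetic IvyBridge-EP-like input, whatever the host is.  */
  struct toy_features fake = { 6, 0x3e };
  toy_init_cacheinfo_impl (&fake);
  printf ("shared cache guess: %ld bytes\n", toy_shared_cache_size);
  return 0;
}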



On Thu, May 25, 2017 at 2:23 PM, Erich Elsen <eriche@google.com> wrote:
> Ok, will do.
>
> On Wed, May 24, 2017 at 2:36 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>> On Mon, May 22, 2017 at 8:19 PM, Erich Elsen <eriche@google.com> wrote:
>>> Here is the patch that slightly refactors how init_cacheinfo is called.
>>>
>>
>> Please take a look at hjl/tunables/master branch.  You can add
>> non_temporal_threshold support on top of it.
>>
>>
>> --
>> H.J.

[-- Attachment #2: 0001-init_cacheinfo-init_cacheinfo_impl-which-it-is-possi.patch --]
[-- Type: text/x-patch, Size: 1410 bytes --]

From e3c27309fa45a6c50d0ed0e541dd82c406d4485a Mon Sep 17 00:00:00 2001
From: Erich Elsen <eriche@google.com>
Date: Thu, 25 May 2017 14:49:15 -0700
Subject: [PATCH] init_cacheinfo -> init_cacheinfo_impl, to which it is
 possible to pass a cpu_features struct, for modularity.

---
 sysdeps/x86/cacheinfo.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
index a46dd4dc30..d1fcc9fb4b 100644
--- a/sysdeps/x86/cacheinfo.c
+++ b/sysdeps/x86/cacheinfo.c
@@ -482,9 +482,9 @@ int __x86_prefetchw attribute_hidden;
 #endif
 
 
-static void
-__attribute__((constructor))
-init_cacheinfo (void)
+void
+attribute_hidden
+init_cacheinfo_impl (const struct cpu_features *cpu_features)
 {
   /* Find out what brand of processor.  */
   unsigned int eax;
@@ -496,7 +496,6 @@ init_cacheinfo (void)
   long int shared = -1;
   unsigned int level;
   unsigned int threads = 0;
-  const struct cpu_features *cpu_features = __get_cpu_features ();
   int max_cpuid = cpu_features->max_cpuid;
 
   if (cpu_features->kind == arch_kind_intel)
@@ -787,4 +786,10 @@ intel_bug_no_cache_info:
        : __x86_shared_cache_size * 6);
 }
 
+static void
+__attribute__((constructor))
+init_cacheinfo (void) {
+  const struct cpu_features *cpu_features = __get_cpu_features ();
+  init_cacheinfo_impl (cpu_features);
+}
 #endif
-- 
2.13.0.219.gdb65acc882-goog


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-25 21:57                                 ` Erich Elsen
@ 2017-05-25 22:03                                   ` H.J. Lu
  2017-05-27  0:31                                     ` Erich Elsen
  0 siblings, 1 reply; 31+ messages in thread
From: H.J. Lu @ 2017-05-25 22:03 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

On Thu, May 25, 2017 at 2:57 PM, Erich Elsen <eriche@google.com> wrote:
> It looks like you already added the non_temporal_threshold as part of
> the cpu_features tunables?  Here's a small patch that allows the

No, I didn't.  I only added cache info to CPU features.

> cpu_features struct to be passed in.  This is useful if you need to be
> able to call init_cacheinfo with cpu_features other than the global
> ones.

I need to see the complete working patch.

>
>
> On Thu, May 25, 2017 at 2:23 PM, Erich Elsen <eriche@google.com> wrote:
>> Ok, will do.
>>
>> On Wed, May 24, 2017 at 2:36 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>>> On Mon, May 22, 2017 at 8:19 PM, Erich Elsen <eriche@google.com> wrote:
>>>> Here is the patch that slightly refactors how init_cacheinfo is called.
>>>>
>>>
>>> Please take a look at hjl/tunables/master branch.  You can add
>>> non_temporal_threshold support on top of it.
>>>
>>>
>>> --
>>> H.J.



-- 
H.J.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-25 22:03                                   ` H.J. Lu
@ 2017-05-27  0:31                                     ` Erich Elsen
  2017-05-27 21:35                                       ` H.J. Lu
  0 siblings, 1 reply; 31+ messages in thread
From: Erich Elsen @ 2017-05-27  0:31 UTC (permalink / raw)
  To: H.J. Lu; +Cc: Carlos O'Donell, GNU C Library

[-- Attachment #1: Type: text/plain, Size: 1108 bytes --]

Sorry for misinterpreting.  Here is the full patch.

On Thu, May 25, 2017 at 3:03 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Thu, May 25, 2017 at 2:57 PM, Erich Elsen <eriche@google.com> wrote:
>> It looks like you already added the non_temporal_threshold as part of
>> the cpu_features tunables?  Here's a small patch that allows the
>
> No, I didn't.  I only added cache info to CPU features.
>
>> cpu_features struct to be passed in.  This is useful if you need to be
>> able to call init_cacheinfo with cpu_features other than the global
>> ones.
>
> I need to see the complete working patch.
>
>>
>>
>> On Thu, May 25, 2017 at 2:23 PM, Erich Elsen <eriche@google.com> wrote:
>>> Ok, will do.
>>>
>>> On Wed, May 24, 2017 at 2:36 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
>>>> On Mon, May 22, 2017 at 8:19 PM, Erich Elsen <eriche@google.com> wrote:
>>>>> Here is the patch that slightly refactors how init_cacheinfo is called.
>>>>>
>>>>
>>>> Please take a look at hjl/tunables/master branch.  You can add
>>>> non_temporal_threshold support on top of it.
>>>>
>>>>
>>>> --
>>>> H.J.
>
>
>
> --
> H.J.

[-- Attachment #2: 0001-add-tunables-for-x86-cache-info.patch --]
[-- Type: text/x-patch, Size: 3534 bytes --]

From bdbc243d9da3f5d59dc495970ef9572e7c446e94 Mon Sep 17 00:00:00 2001
From: Erich Elsen <eriche@google.com>
Date: Fri, 26 May 2017 17:28:06 -0700
Subject: [PATCH 1/1] add tunables for x86 cache info

---
 sysdeps/x86/cacheinfo.c      | 60 +++++++++++++++++++++++++++++++++++++++++---
 sysdeps/x86/dl-tunables.list | 15 +++++++++++
 2 files changed, 71 insertions(+), 4 deletions(-)

diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
index a46dd4dc30..ac98a951b0 100644
--- a/sysdeps/x86/cacheinfo.c
+++ b/sysdeps/x86/cacheinfo.c
@@ -25,6 +25,15 @@
 #include <cpuid.h>
 #include <init-arch.h>
 
+#if HAVE_TUNABLES
+# define TUNABLE_NAMESPACE x86
+# include <elf/dl-tunables.h>
+#else
+# include <string.h>
+# include <stdlib.h>
+extern char **_environ;
+#endif
+
 static const struct intel_02_cache_info
 {
   unsigned char idx;
@@ -482,9 +491,9 @@ int __x86_prefetchw attribute_hidden;
 #endif
 
 
-static void
-__attribute__((constructor))
-init_cacheinfo (void)
+void
+attribute_hidden
+init_cacheinfo_impl (const struct cpu_features *cpu_features)
 {
   /* Find out what brand of processor.  */
   unsigned int eax;
@@ -496,7 +505,6 @@ init_cacheinfo (void)
   long int shared = -1;
   unsigned int level;
   unsigned int threads = 0;
-  const struct cpu_features *cpu_features = __get_cpu_features ();
   int max_cpuid = cpu_features->max_cpuid;
 
   if (cpu_features->kind == arch_kind_intel)
@@ -787,4 +795,48 @@ intel_bug_no_cache_info:
        : __x86_shared_cache_size * 6);
 }
 
+static void
+update_cpufeature_cache_info (struct cpu_features *cpu_features)
+{
+#if HAVE_TUNABLES
+  TUNABLE_SET_VAL (non_temporal_threshold,
+                   &(cpu_features->cache.non_temporal_threshold));
+  TUNABLE_SET_VAL (data_size,
+                   &(cpu_features->cache.data_size));
+  TUNABLE_SET_VAL (shared_size,
+                   &(cpu_features->cache.shared_size));
+#else
+  if (__glibc_likely (_environ != NULL)
+      && !__builtin_expect (__libc_enable_secure, 0))
+    {
+      char **runp = _environ;
+      char *envline;
+
+      while (*runp != NULL)
+	{
+	  envline = *runp;
+	  if (!DEFAULT_MEMCMP (envline, "GLIBC_NON_TEMPORAL_THRESHOLD=", 29))
+	    cpu_features->cache.non_temporal_threshold = atoi (&envline[29]);
+	  else if (!DEFAULT_MEMCMP (envline, "GLIBC_DATA_SIZE=", 16))
+	    cpu_features->cache.data_size = atoi (&envline[16]);
+	  else if (!DEFAULT_MEMCMP (envline, "GLIBC_SHARED_SIZE=", 18))
+	    cpu_features->cache.shared_size = atoi (&envline[18]);
+
+	  runp++;
+	}
+    }
+#endif
+}
+
+static void
+__attribute__((constructor))
+init_cacheinfo (void)
+{
+  const struct cpu_features *cpu_features_const = __get_cpu_features ();
+  struct cpu_features cpu_features = *cpu_features_const;
+
+  update_cpufeature_cache_info (&cpu_features);
+
+  init_cacheinfo_impl (&cpu_features);
+}
 #endif
diff --git a/sysdeps/x86/dl-tunables.list b/sysdeps/x86/dl-tunables.list
index 0c9acc085c..136e455bcf 100644
--- a/sysdeps/x86/dl-tunables.list
+++ b/sysdeps/x86/dl-tunables.list
@@ -5,5 +5,20 @@ glibc {
       env_alias: GLIBC_IFUNC
       security_level: SXID_IGNORE
     }
+    non_temporal_threshold {
+      type: SIZE_T
+      env_alias: GLIBC_NON_TEMPORAL_THRESHOLD
+      security_level: SXID_IGNORE
+    }
+    data_size {
+      type: SIZE_T
+      env_alias: GLIBC_DATA_SIZE
+      security_level: SXID_IGNORE
+    }
+    shared_size {
+      type: SIZE_T
+      env_alias: GLIBC_SHARED_SIZE
+      security_level: SXID_IGNORE
+    }
   }
 }
-- 
2.13.0.219.gdb65acc882-goog


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: memcpy performance regressions 2.19 -> 2.24(5)
  2017-05-27  0:31                                     ` Erich Elsen
@ 2017-05-27 21:35                                       ` H.J. Lu
  0 siblings, 0 replies; 31+ messages in thread
From: H.J. Lu @ 2017-05-27 21:35 UTC (permalink / raw)
  To: Erich Elsen; +Cc: Carlos O'Donell, GNU C Library

[-- Attachment #1: Type: text/plain, Size: 260 bytes --]

On Fri, May 26, 2017 at 5:31 PM, Erich Elsen <eriche@google.com> wrote:
> Sorry for misinterpreting.  Here is the full patch.
>

This is what I have on hjl/tunables/master, which depends on

https://sourceware.org/ml/libc-alpha/2017-05/msg00573.html

-- 
H.J.

[-- Attachment #2: 0001-Add-TUNABLES-to-control-IFUNC-selection-and-cache-si.patch --]
[-- Type: text/x-patch, Size: 19257 bytes --]

From bd6d7b299575a88d69246c70fc3ad8a68235deb5 Mon Sep 17 00:00:00 2001
From: "H.J. Lu" <hjl.tools@gmail.com>
Date: Tue, 23 May 2017 20:22:13 -0700
Subject: [PATCH] Add TUNABLES to control IFUNC selection and cache sizes

The current IFUNC selection is based on microbenchmarks in glibc.  It
should give the best performance for most workloads.  But other choices
may have better performance for a particular workload or on the hardware
which wasn't available when the selection was made.  The environment
variable, GLIBC_IFUNC=-xxx,yyy,-zzz...., can be used to enable CPU/ARCH
feature yyy and disable CPU/ARCH features xxx and zzz, where the feature
name is case-sensitive and has to match the ones in cpu-features.h.  It
can be used by glibc developers to override the IFUNC selection to tune
for a new processor or improve performance for a particular workload.
It isn't intended for normal end users.

NOTE: the IFUNC selection may change over time.  Please check all
multiarch implementations when experimenting.

2017-05-27  H.J. Lu  <hongjiu.lu@intel.com>
	    Erich Elsen  <eriche@google.com>

	* elf/dl-tunables.list: Add a "tune" namespace.
	* sysdeps/unix/sysv/linux/x86/dl-sysdep.c: New file.
	* sysdeps/x86/cpu-tunables.c: Likewise.
	* sysdeps/x86/cacheinfo.c (TUNABLE_NAMESPACE): New.
	Include <elf/dl-tunables.h> if TUNABLES is on.
	Include <string.h> and <stdlib.h> if TUNABLES is off.
	(__environ): New.
	(init_cacheinfo): Use TUNABLES to set data cache size, shared
	cache size and non temporal threshold if TUNABLES is on.  Search
	the environment strings to set data cache size, shared cache
	size and non temporal threshold if TUNABLES is off.
	* sysdeps/x86/cpu-features.c (TUNABLE_NAMESPACE): New.
	(DL_TUNABLE_CALLBACK (set_ifunc)): Likewise.
	Include <elf/dl-tunables.h> if TUNABLES is on.
	Include <string.h> and <unistd.h> if TUNABLES is off.
	(__environ): New.
	(_dl_x86_set_ifunc): Likewise.
	(init_cpu_features): Use TUNABLE_SET_VAL_WITH_CALLBACK if
	TUNABLES is on.  Call _dl_x86_set_ifunc for GLIBC_IFUNC= if
	TUNABLES is off.
	* sysdeps/x86/cpu-features.h (DEFAULT_MEMCMP): New.
---
 elf/dl-tunables.list                    |  22 +++
 sysdeps/unix/sysv/linux/x86/dl-sysdep.c |  21 +++
 sysdeps/x86/cacheinfo.c                 |  54 +++++-
 sysdeps/x86/cpu-features.c              |  36 ++++
 sysdeps/x86/cpu-features.h              |  12 ++
 sysdeps/x86/cpu-tunables.c              | 322 ++++++++++++++++++++++++++++++++
 6 files changed, 466 insertions(+), 1 deletion(-)
 create mode 100644 sysdeps/unix/sysv/linux/x86/dl-sysdep.c
 create mode 100644 sysdeps/x86/cpu-tunables.c

diff --git a/elf/dl-tunables.list b/elf/dl-tunables.list
index b9f1488..0401087 100644
--- a/elf/dl-tunables.list
+++ b/elf/dl-tunables.list
@@ -77,4 +77,26 @@ glibc {
       security_level: SXID_IGNORE
     }
   }
+  tune {
+    ifunc {
+      type: STRING
+      env_alias: GLIBC_IFUNC
+      security_level: SXID_IGNORE
+    }
+    non_temporal_threshold {
+      type: SIZE_T
+      env_alias: GLIBC_NON_TEMPORAL_THRESHOLD
+      security_level: SXID_IGNORE
+    }
+    data_cache_size {
+      type: SIZE_T
+      env_alias: GLIBC_DATA_CACHE_SIZE
+      security_level: SXID_IGNORE
+    }
+    shared_cache_size {
+      type: SIZE_T
+      env_alias: GLIBC_SHARED_CACHE_SIZE
+      security_level: SXID_IGNORE
+    }
+  }
 }
diff --git a/sysdeps/unix/sysv/linux/x86/dl-sysdep.c b/sysdeps/unix/sysv/linux/x86/dl-sysdep.c
new file mode 100644
index 0000000..64eb0d7
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/x86/dl-sysdep.c
@@ -0,0 +1,21 @@
+/* Operating system support for run-time dynamic linker.  X86 version.
+   Copyright (C) 2017 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <config.h>
+#include <sysdeps/x86/cpu-tunables.c>
+#include <sysdeps/unix/sysv/linux/dl-sysdep.c>
diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c
index 2d84af1..0a512ba 100644
--- a/sysdeps/x86/cacheinfo.c
+++ b/sysdeps/x86/cacheinfo.c
@@ -25,6 +25,15 @@
 #include <cpuid.h>
 #include <init-arch.h>
 
+#if HAVE_TUNABLES
+# define TUNABLE_NAMESPACE tune
+# include <elf/dl-tunables.h>
+#else
+# include <string.h>
+# include <stdlib.h>
+extern char **__environ;
+#endif
+
 static const struct intel_02_cache_info
 {
   unsigned char idx;
@@ -752,6 +761,43 @@ intel_bug_no_cache_info:
 #endif
     }
 
+  /* Data cache size for use in memory and string routines, typically
+     L1 size.  */
+  long int data_cache_size = 0;
+  /* Shared cache size for use in memory and string routines, typically
+     L2 or L3 size.  */
+  long int shared_cache_size = 0;
+  /* Threshold to use non temporal store.  */
+  long int non_temporal_threshold = 0;
+
+#if HAVE_TUNABLES
+  TUNABLE_SET_VAL (non_temporal_threshold, &non_temporal_threshold);
+  TUNABLE_SET_VAL (data_cache_size, &data_cache_size);
+  TUNABLE_SET_VAL (shared_cache_size, &shared_cache_size);
+#else
+  if (__glibc_likely (__environ != NULL)
+      && !__builtin_expect (__libc_enable_secure, 0))
+    {
+      char **runp = __environ;
+      char *envline;
+
+      while (*runp != NULL)
+	{
+	  envline = *runp;
+	  if (!memcmp (envline, "GLIBC_NON_TEMPORAL_THRESHOLD=", 29))
+	    non_temporal_threshold = strtol (&envline[29], NULL, 0);
+	  else if (!memcmp (envline, "GLIBC_DATA_CACHE_SIZE=", 22))
+	    data_cache_size = strtol (&envline[22], NULL, 0);
+	  else if (!memcmp (envline, "GLIBC_SHARED_CACHE_SIZE=", 24))
+	    shared_cache_size = strtol (&envline[24], NULL, 0);
+	  runp++;
+	}
+    }
+#endif
+
+  if (data_cache_size != 0)
+    data = data_cache_size;
+
   if (data > 0)
     {
       __x86_raw_data_cache_size_half = data / 2;
@@ -762,6 +808,9 @@ intel_bug_no_cache_info:
       __x86_data_cache_size = data;
     }
 
+  if (shared_cache_size != 0)
+    shared = shared_cache_size;
+
   if (shared > 0)
     {
       __x86_raw_shared_cache_size_half = shared / 2;
@@ -775,7 +824,10 @@ intel_bug_no_cache_info:
   /* The large memcpy micro benchmark in glibc shows that 6 times of
      shared cache size is the approximate value above which non-temporal
      store becomes faster.  */
-  __x86_shared_non_temporal_threshold = __x86_shared_cache_size * 6;
+  __x86_shared_non_temporal_threshold
+    = (non_temporal_threshold != 0
+       ? non_temporal_threshold
+       : __x86_shared_cache_size * 6);
 }
 
 #endif
diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index b481f50..97695a2 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -20,6 +20,19 @@
 #include <cpu-features.h>
 #include <dl-hwcap.h>
 
+#if HAVE_TUNABLES
+# define TUNABLE_NAMESPACE tune
+# include <elf/dl-tunables.h>
+
+extern void DL_TUNABLE_CALLBACK (set_ifunc) (tunable_val_t *)
+  attribute_hidden;
+#else
+# include <string.h>
+# include <unistd.h>
+extern char **__environ attribute_hidden;
+extern void _dl_x86_set_ifunc (const char *) attribute_hidden;
+#endif
+
 static void
 get_common_indeces (struct cpu_features *cpu_features,
 		    unsigned int *family, unsigned int *model,
@@ -312,6 +325,29 @@ no_cpuid:
   cpu_features->model = model;
   cpu_features->kind = kind;
 
+#if HAVE_TUNABLES
+  TUNABLE_SET_VAL_WITH_CALLBACK (ifunc, NULL, set_ifunc);
+#else
+  if (__glibc_likely (__environ != NULL)
+      && !__builtin_expect (__libc_enable_secure, 0))
+    {
+      char **runp = __environ;
+      char *envline;
+
+      while (*runp != NULL)
+	{
+	  envline = *runp;
+	  if (!DEFAULT_MEMCMP (envline, "GLIBC_IFUNC=",
+			       sizeof ("GLIBC_IFUNC=") - 1))
+	    {
+	      _dl_x86_set_ifunc (envline + sizeof ("GLIBC_IFUNC=") - 1);
+	      break;
+	    }
+	  runp++;
+	}
+    }
+#endif
+
 #if IS_IN (rtld)
   /* Reuse dl_platform, dl_hwcap and dl_hwcap_mask for x86.  */
   GLRO(dl_platform) = NULL;
diff --git a/sysdeps/x86/cpu-features.h b/sysdeps/x86/cpu-features.h
index 31c7c80..851d137 100644
--- a/sysdeps/x86/cpu-features.h
+++ b/sysdeps/x86/cpu-features.h
@@ -227,6 +227,18 @@ extern const struct cpu_features *__get_cpu_features (void)
 #  define __get_cpu_features()	(&GLRO(dl_x86_cpu_features))
 # endif
 
+/* We can't use IFUNC memcmp in init_cpu_features from libc.a since
+   IFUNC must be set up by init_cpu_features.  */
+# if defined USE_MULTIARCH && !defined SHARED
+#  ifdef __x86_64__
+#   define DEFAULT_MEMCMP	__memcmp_sse2
+#  else
+#   define DEFAULT_MEMCMP	__memcmp_ia32
+#  endif
+extern __typeof (memcmp) DEFAULT_MEMCMP;
+# else
+#  define DEFAULT_MEMCMP	memcmp
+# endif
 
 /* Only used directly in cpu-features.c.  */
 # define CPU_FEATURES_CPU_P(ptr, name) \
diff --git a/sysdeps/x86/cpu-tunables.c b/sysdeps/x86/cpu-tunables.c
new file mode 100644
index 0000000..b98f8e7
--- /dev/null
+++ b/sysdeps/x86/cpu-tunables.c
@@ -0,0 +1,322 @@
+/* CPU feature tuning.
+   Copyright (C) 2017 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <string.h>
+#if HAVE_TUNABLES
+# define TUNABLE_NAMESPACE tune
+# include <elf/dl-tunables.h>
+#endif
+#include <cpu-features.h>
+#include <ldsodefs.h>
+
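+/* Disable a CPU feature NAME.  CPU features can only be turned off
+   via this tunable; a feature the processor doesn't report is never
+   turned on.  */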
+#define CHECK_GLIBC_IFUNC_CPU_OFF(f, cpu_features, name, len)		\
+  _Static_assert (sizeof (#name) - 1 == len, #name " != " #len);	\
+  if (!DEFAULT_MEMCMP (f, #name, len))					\
+    {									\
+      cpu_features->cpuid[index_cpu_##name].reg_##name			\
+	&= ~bit_cpu_##name;						\
+      break;								\
+    }
+
+/* Disable an ARCH feature NAME.  We don't enable an ARCH feature which
+   isn't available.  */
+#define CHECK_GLIBC_IFUNC_ARCH_OFF(f, cpu_features, name, len)		\
+  _Static_assert (sizeof (#name) - 1 == len, #name " != " #len);	\
+  if (!DEFAULT_MEMCMP (f, #name, len))					\
+    {									\
+      cpu_features->feature[index_arch_##name]				\
+	&= ~bit_arch_##name;						\
+      break;								\
+    }
+
+/* Enable/disable an ARCH feature NAME.  */
+#define CHECK_GLIBC_IFUNC_ARCH_BOTH(f, cpu_features, name, disable,	\
+				    len)				\
+  _Static_assert (sizeof (#name) - 1 == len, #name " != " #len);	\
+  if (!DEFAULT_MEMCMP (f, #name, len))					\
+    {									\
+      if (disable)							\
+	cpu_features->feature[index_arch_##name]			\
+	  &= ~bit_arch_##name;						\
+      else								\
+	cpu_features->feature[index_arch_##name]			\
+	  |= bit_arch_##name;						\
+      break;								\
+    }
+
+/* Enable/disable an ARCH feature NAME.  Enable an ARCH feature only
+   if the ARCH feature NEED is also enabled.  */
+#define CHECK_GLIBC_IFUNC_ARCH_NEED_ARCH_BOTH(f, cpu_features, name,	\
+					      need, disable, len)	\
+  _Static_assert (sizeof (#name) - 1 == len, #name " != " #len);	\
+  if (!DEFAULT_MEMCMP (f, #name, len))					\
+    {									\
+      if (disable)							\
+	cpu_features->feature[index_arch_##name]			\
+	  &= ~bit_arch_##name;						\
+      else if (CPU_FEATURES_ARCH_P (cpu_features, need))		\
+	cpu_features->feature[index_arch_##name]			\
+	  |= bit_arch_##name;						\
+      break;								\
+    }
+
+/* Enable/disable an ARCH feature NAME.  Enable an ARCH feature only
+   if the CPU feature NEED is also enabled.  */
+#define CHECK_GLIBC_IFUNC_ARCH_NEED_CPU_BOTH(f, cpu_features, name,	\
+					     need, disable, len)	\
+  _Static_assert (sizeof (#name) - 1 == len, #name " != " #len);	\
+  if (!DEFAULT_MEMCMP (f, #name, len))					\
+    {									\
+      if (disable)							\
+	cpu_features->feature[index_arch_##name]			\
+	  &= ~bit_arch_##name;						\
+      else if (CPU_FEATURES_CPU_P (cpu_features, need))			\
+	cpu_features->feature[index_arch_##name]			\
+	  |= bit_arch_##name;						\
+      break;								\
+    }
+
+#if HAVE_TUNABLES
+static
+#else
+attribute_hidden
+#endif
+void
+_dl_x86_set_ifunc (const char *p)
+{
+  /* The current IFUNC selection is based on microbenchmarks in glibc.
+     It should give the best performance for most workloads.  But other
+     choices may have better performance for a particular workload or on
+     the hardware which wasn't available when the selection was made.
+     The environment variable, GLIBC_IFUNC=-xxx,yyy,-zzz...., can be
+     used to enable CPU/ARCH feature yyy and disable CPU/ARCH features
+     xxx and zzz, where the feature name is case-sensitive and has to match
+     the ones in cpu-features.h.  It can be used by glibc developers to
+     tune for a new processor or override the IFUNC selection to improve
+     performance for a particular workload.
+
+     Since all CPU/ARCH features are hardware optimizations without
+     security implication, except for Prefer_MAP_32BIT_EXEC, which can
+     only be disabled, we check GLIBC_IFUNC for programs, including
+     set*id ones.
+
+     NOTE: the IFUNC selection may change over time.  Please check all
+     multiarch implementations when experimenting.  */
+
+  struct cpu_features *cpu_features = &GLRO(dl_x86_cpu_features);
+  const char *end = p + strlen (p);
+  size_t len;
+
+  do
+    {
+      const char *c, *n;
+      bool disable;
+      size_t nl;
+
+      for (c = p; *c != ','; c++)
+	if (c >= end)
+	  break;
+
+      len = c - p;
+      disable = *p == '-';
+      if (disable)
+	{
+	  n = p + 1;
+	  nl = len - 1;
+	}
+      else
+	{
+	  n = p;
+	  nl = len;
+	}
+      switch (nl)
+	{
+	default:
+	  break;
+	case 3:
+	  if (disable)
+	    {
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, AVX, 3);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, CX8, 3);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, FMA, 3);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, HTT, 3);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, RTM, 3);
+	    }
+	  break;
+	case 4:
+	  if (disable)
+	    {
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, AVX2, 4);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, BMI1, 4);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, BMI2, 4);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, CMOV, 4);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, ERMS, 4);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, FMA4, 4);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, SSE2, 4);
+	      CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, I586, 4);
+	      CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, I686, 4);
+	    }
+	  break;
+	case 5:
+	  if (disable)
+	    {
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, LZCNT, 5);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, MOVBE, 5);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, SSSE3, 5);
+	    }
+	  break;
+	case 6:
+	  if (disable)
+	    {
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, POPCNT, 6);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, SSE4_1, 6);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, SSE4_2, 6);
+	    }
+	  break;
+	case 7:
+	  if (disable)
+	    {
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, AVX512F, 7);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, OSXSAVE, 7);
+	    }
+	  break;
+	case 8:
+	  if (disable)
+	    {
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, AVX512CD, 8);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, AVX512BW, 8);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, AVX512DQ, 8);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, AVX512ER, 8);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, AVX512PF, 8);
+	      CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, AVX512VL, 8);
+	    }
+	  CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features, Slow_BSF,
+				       disable, 8);
+	  break;
+	case 10:
+	  if (disable)
+	    {
+	      CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, AVX_Usable,
+					  10);
+	      CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, FMA_Usable,
+					  10);
+	    }
+	  break;
+	case 11:
+	  if (disable)
+	    {
+	      CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, AVX2_Usable,
+					  11);
+	      CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, FMA4_Usable,
+					  11);
+	    }
+	  CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features, Prefer_ERMS,
+				       disable, 11);
+	  CHECK_GLIBC_IFUNC_ARCH_NEED_CPU_BOTH (n, cpu_features,
+						Slow_SSE4_2, SSE4_2,
+						disable, 11);
+	  break;
+	case 14:
+	  if (disable)
+	    {
+	      CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features,
+					  AVX512F_Usable, 14);
+	    }
+	  break;
+	case 15:
+	  if (disable)
+	    {
+	      CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features,
+					  AVX512DQ_Usable, 15);
+	    }
+	  CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features, Fast_Rep_String,
+				       disable, 15);
+	  break;
+	case 16:
+	    {
+	      CHECK_GLIBC_IFUNC_ARCH_NEED_ARCH_BOTH
+		(n, cpu_features, Prefer_No_AVX512, AVX512F_Usable,
+		 disable, 16);
+	    }
+	  break;
+	case 18:
+	    {
+	      CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features,
+					   Fast_Copy_Backward, disable,
+					   18);
+	    }
+	  break;
+	case 19:
+	    {
+	      CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features,
+					   Fast_Unaligned_Load, disable,
+					   19);
+	      CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features,
+					   Fast_Unaligned_Copy, disable,
+					   19);
+	    }
+	  break;
+	case 20:
+	    {
+	      CHECK_GLIBC_IFUNC_ARCH_NEED_ARCH_BOTH
+		(n, cpu_features, Prefer_No_VZEROUPPER, AVX_Usable,
+		 disable, 20);
+	    }
+	  break;
+	case 21:
+	    {
+	      CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features,
+					   Prefer_MAP_32BIT_EXEC, disable,
+					   21);
+	    }
+	  break;
+	case 23:
+	    {
+	      CHECK_GLIBC_IFUNC_ARCH_NEED_ARCH_BOTH
+		(n, cpu_features, AVX_Fast_Unaligned_Load, AVX_Usable,
+		 disable, 23);
+	    }
+	  break;
+	case 26:
+	    {
+	      CHECK_GLIBC_IFUNC_ARCH_NEED_CPU_BOTH
+		(n, cpu_features, Prefer_PMINUB_for_stringop, SSE2,
+		 disable, 26);
+	    }
+	  break;
+	case 27:
+	    {
+	      CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features,
+					   Use_dl_runtime_resolve_slow,
+					   disable, 27);
+	    }
+	  break;
+	}
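+      /* Skip past the token and the ',' separator.  */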
+      p += len + 1;
+    }
+  while (p < end);
+}
+
+#if HAVE_TUNABLES
+attribute_hidden
+void
+DL_TUNABLE_CALLBACK (set_ifunc) (tunable_val_t *valp)
+{
+  _dl_x86_set_ifunc (valp->strval);
+}
+#endif
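
A minimal usage sketch, assuming this patch is applied and GLIBC_IFUNC
is taken from the environment as described in the comment in
_dl_x86_set_ifunc; ./bench-memcpy below is a hypothetical test binary:

  # Disable the ERMS and AVX2-based implementations for this run
  # (bench-memcpy is a stand-in for any benchmark binary).
  GLIBC_IFUNC=-ERMS,-AVX2_Usable ./bench-memcpy

  # Prefer the ERMS-based implementation instead of the default.
  GLIBC_IFUNC=Prefer_ERMS ./bench-memcpy
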
-- 
2.9.4



end of thread

Thread overview: 31+ messages
2017-05-05 17:09 memcpy performance regressions 2.19 -> 2.24(5) Erich Elsen
2017-05-05 18:09 ` Carlos O'Donell
2017-05-06  0:57   ` Erich Elsen
2017-05-06 15:41     ` H.J. Lu
2017-05-09 23:48       ` Erich Elsen
2017-05-10 17:33         ` H.J. Lu
2017-05-11  2:17           ` Carlos O'Donell
2017-05-12 19:47             ` Erich Elsen
     [not found]             ` <CAOVZoAPp3_T+ourRkNFXHfCSQUOMFn4iBBm9j50==h=VJcGSzw@mail.gmail.com>
2017-05-12 20:21               ` H.J. Lu
2017-05-12 21:21                 ` H.J. Lu
2017-05-18 20:59                   ` Erich Elsen
2017-05-22 19:17                     ` H.J. Lu
2017-05-22 20:22                       ` H.J. Lu
2017-05-23  1:23                       ` Erich Elsen
2017-05-23  2:25                         ` H.J. Lu
2017-05-23  3:19                           ` Erich Elsen
2017-05-23 20:39                             ` Erich Elsen
2017-05-23 20:46                               ` H.J. Lu
2017-05-23 20:57                                 ` Erich Elsen
2017-05-23 22:08                                   ` H.J. Lu
2017-05-23 22:12                                     ` Erich Elsen
2017-05-23 22:55                                       ` H.J. Lu
2017-05-24  0:56                                         ` Erich Elsen
2017-05-24  3:42                                           ` H.J. Lu
2017-05-24 21:03                                             ` Erich Elsen
2017-05-24 21:36                             ` H.J. Lu
2017-05-25 21:23                               ` Erich Elsen
2017-05-25 21:57                                 ` Erich Elsen
2017-05-25 22:03                                   ` H.J. Lu
2017-05-27  0:31                                     ` Erich Elsen
2017-05-27 21:35                                       ` H.J. Lu
