public inbox for libc-alpha@sourceware.org
* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
@ 2017-10-05 14:55 Wilco Dijkstra
  2017-10-05 21:39 ` Victor Rodriguez
  0 siblings, 1 reply; 15+ messages in thread
From: Wilco Dijkstra @ 2017-10-05 14:55 UTC (permalink / raw)
  To: vm.rod25; +Cc: Carlos O'Donell, siddhesh, nd, libc-alpha

Victor Rodriguez wrote:

> Quick question: do you think it might be a good idea to add this test
> into the Phoronix glibc bench?
>
> https://openbenchmarking.org/test/pts/glibc-bench
> https://openbenchmarking.org/innhold/cac2836cd5dbb8ae279f8a5e7b0896272e82dc76
>
> If so, let me know so I can work on adding it.

Currently this seems to run:

ffsll
ffs
pthread_once
ffsll
tanh
sqrt

So half(!) of the tests are find first set bit, which is not frequently used. When used it is inlined as a builtin so the performance of the GLIBC call (which is unoptimized for all but a few targets) doesn't matter at all. For some targets it ends up as a single instruction, so it really measures the overhead of the plt call... Sqrt is similar, it's just a plt call + sqrt throughput comparison. I'm not sure about the usefulness of pthread_once.
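
A minimal sketch of the inlining point above, assuming a default GCC or Clang build (the helper names are illustrative only):

#define _GNU_SOURCE
#include <string.h>		/* ffsll */

int
uses_builtin (long long x)
{
  /* Normally expanded inline as a compiler builtin: no libc call is
     emitted, so glibc's ffsll is never measured.  */
  return ffsll (x);
}

/* Calling through a volatile function pointer (or compiling with
   -fno-builtin-ffsll) forces a real call, so a benchmark of it mostly
   measures the PLT call overhead on targets where the operation is a
   single instruction anyway.  */
static int (*volatile ffsll_ptr) (long long) = ffsll;

int
uses_plt_call (long long x)
{
  return ffsll_ptr (x);
}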

tanh is reasonable - however I would suggest changing to the powf benchmark instead as GLIBC has an optimized generic implementation and now runs an actual trace (instead of repeating the same input again and again). Running that against older GLIBCs or other C libraries would show great gains!
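
A short sketch of the trace-driven idea; the inputs below are invented placeholders rather than the actual powf workload trace:

#include <math.h>
#include <stddef.h>

/* Hypothetical (x, y) argument pairs recorded from a real workload.  */
static const struct { float x, y; } trace[] =
{
  { 1.5f, 2.25f }, { 0.75f, -1.0f }, { 3.0f, 0.5f }, { 12.0f, 1.75f },
};

float
run_powf_trace (void)
{
  float sum = 0.0f;
  /* Varying inputs exercise the different paths of the implementation,
     unlike repeating one input, which settles into a single predicted
     fast path.  */
  for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++)
    sum += powf (trace[i].x, trace[i].y);
  return sum;
}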

Overall I think the idea of running GLIBC benchmarks elsewhere is great, and improving it and adding memcpy benchmarks would be useful. But like with all benchmarking it's important to understand what you're trying to measure and why in order to get meaningful results. So I think the first step is to decide what the goal of this benchmark is.

For example, is it tracking performance of frequently used performance critical functions across GLIBC versions or other C libraries? Or about comparing target-specific optimizations of rarely used functions?

Wilco


* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
  2017-10-05 14:55 [PATCH 1/2] benchtests: Memory walking benchmark for memcpy Wilco Dijkstra
@ 2017-10-05 21:39 ` Victor Rodriguez
  0 siblings, 0 replies; 15+ messages in thread
From: Victor Rodriguez @ 2017-10-05 21:39 UTC (permalink / raw)
  To: Wilco Dijkstra; +Cc: Carlos O'Donell, siddhesh, nd, libc-alpha

On Thu, Oct 5, 2017 at 9:55 AM, Wilco Dijkstra <Wilco.Dijkstra@arm.com> wrote:
> Victor Rodriguez wrote:
>
>> Quick question: do you think it might be a good idea to add this test
>> into the Phoronix glibc bench?
>>
>> https://openbenchmarking.org/test/pts/glibc-bench
>> https://openbenchmarking.org/innhold/cac2836cd5dbb8ae279f8a5e7b0896272e82dc76
>>
>> If so, let me know so I can work on adding it.
>
> Currently this seems to run:
>
> ffsll
> ffs
> pthread_once
> ffsll
> tanh
> sqrt
>
> So half(!) of the tests are find first set bit, which is not frequently used. When used it is inlined as a builtin so the performance of the GLIBC call (which is unoptimized for all but a few targets) doesn't matter at all. For some targets it ends up as a single instruction, so it really measures the overhead of the plt call... Sqrt is similar, it's just a plt call + sqrt throughput comparison. I'm not sure about the usefulness of pthread_once.
>
> tanh is reasonable - however I would suggest changing to the powf benchmark instead as GLIBC has an optimized generic implementation and now runs an actual trace (instead of repeating the same input again and again). Running that against older GLIBCs or other C libraries would show great gains!
>
> Overall I think the idea of running GLIBC benchmarks elsewhere is great, and improving it and adding memcpy benchmarks would be useful. But like with all benchmarking it's important to understand what you're trying to measure and why in order to get meaningful results. So I think the first step is to decide what the goal of this benchmark is.
>
Yes, you are right; we need to do a deep analysis of what we want to
have in the glibc bench.

I agree with all the feedback you provided. I am going to start a new
thread specifically about the glibc bench.

Now that the GLIBC bench is getting more traction, it is time to improve it.

> For example, is it tracking performance of frequently used performance critical functions across GLIBC versions or other C libraries? Or about comparing target-specific optimizations of rarely used functions?
>
The idea is to compare the performance of critical functions across
GLIBC versions.

> Wilco

Thanks a lot for the interest and feedback


* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
  2017-10-04 23:12             ` Victor Rodriguez
  2017-10-05  3:20               ` Carlos O'Donell
@ 2017-10-05  4:58               ` Siddhesh Poyarekar
  1 sibling, 0 replies; 15+ messages in thread
From: Siddhesh Poyarekar @ 2017-10-05  4:58 UTC (permalink / raw)
  To: Victor Rodriguez, Carlos O'Donell; +Cc: GNU C Library

On Thursday 05 October 2017 04:42 AM, Victor Rodriguez wrote:
> The section "Recent Results With This Test" shows that it has been
> used to measure things like:
> 
> Linux 4.14-rc1 vs. Linux 4.13 Kernel Benchmarks
> https://openbenchmarking.org/result/1709186-TY-LINUX414R23
> 
> as well as other core CPU systems
> 
> So in my humble opinion, I think it is getting a lot of traction.
> 
> There is still work that needs to be done, but it is good to have a way to
> measure the performance with the Phoronix framework.

That's great!  I would recommend keeping the benchmark in sync because
the glibc benchtests are still evolving.  This is especially true for
string benchmarks, because we are only beginning to do some serious research
on what makes sense for measurements and what doesn't.

Siddhesh


* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
  2017-10-04 22:19       ` Carlos O'Donell
  2017-10-04 22:45         ` Victor Rodriguez
@ 2017-10-05  4:55         ` Siddhesh Poyarekar
  1 sibling, 0 replies; 15+ messages in thread
From: Siddhesh Poyarekar @ 2017-10-05  4:55 UTC (permalink / raw)
  To: Carlos O'Donell, libc-alpha

On Thursday 05 October 2017 03:49 AM, Carlos O'Donell wrote:
> As the subsystem maintainer I defer to your choice here. I don't have a
> strong opinion, other than a desire for conformity of measurements to
> avoid confusion. If I could say anything, consider the consumer and make
> sure the data is tagged such that a consumer can determine if it is time
> or throughput.

OK, I'll take the conservative route and stick to measuring time here
instead of rate.  If I feel strongly enough about it I'll start a
separate discussion on making all data routines (i.e. string/memory
routines) rate based so that there's no confusion.

Siddhesh


* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
  2017-10-04 23:12             ` Victor Rodriguez
@ 2017-10-05  3:20               ` Carlos O'Donell
  2017-10-05  4:58               ` Siddhesh Poyarekar
  1 sibling, 0 replies; 15+ messages in thread
From: Carlos O'Donell @ 2017-10-05  3:20 UTC (permalink / raw)
  To: Victor Rodriguez; +Cc: siddhesh, GNU C Library

On 10/04/2017 04:12 PM, Victor Rodriguez wrote:
> On Wed, Oct 4, 2017 at 5:49 PM, Carlos O'Donell <carlos@redhat.com> wrote:
>> On 10/04/2017 03:45 PM, Victor Rodriguez wrote:
>>> On Wed, Oct 4, 2017 at 5:19 PM, Carlos O'Donell <carlos@redhat.com> wrote:
>>>> On 10/03/2017 11:53 PM, Siddhesh Poyarekar wrote:
>>>>> On Friday 22 September 2017 05:29 AM, Siddhesh Poyarekar wrote:
>>>>>> On Thursday 21 September 2017 11:59 PM, Carlos O'Donell wrote:
>>>>>>> I like the idea, and the point that the other benchmark eventually degrades
>>>>>>> into measuring L1 performance is an interesting insight.
>>>>>>>
>>>>>>> I do not like that it produces total data rate not time taken per execution.
>>>>>>> Why the change? If time taken per execution was OK before, why not here?
>>>>>>
>>>>>> That is because it seems more natural to express string function
>>>>>> performance by the rate at which it processes data than the time it
>>>>>> takes to execute.  It also makes comparison across sizes a bit
>>>>>> interesting, i.e. the data rate for processing 1MB 32 bytes at a time vs
>>>>>> 128 bytes at a time.
>>>>>>
>>>>>> The fact that "twice as fast" sounds better than "takes half the time"
>>>>>> is an added bonus :)
>>>>>
>>>>> Carlos, do you think this is a reasonable enough explanation?  I'll fix
>>>>> up the output in a subsequent patch so that it has a 'throughput'
>>>>> property that the post-processing scripts can read without needing the
>>>>> additional argument in 2/2.
>>>>
>>>> As the subsystem maintainer I defer to your choice here. I don't have a
>>>> strong opinion, other than a desire for conformity of measurements to
>>>> avoid confusion. If I could say anything, consider the consumer and make
>>>> sure the data is tagged such that a consumer can determine if it is time
>>>> or throughput.
>>>>
>>>> --
>>>> Cheers,
>>>> Carlos.
>>>
>>> Quick question: do you think it might be a good idea to add this test
>>> into the Phoronix glibc bench?
>>>
>>> https://openbenchmarking.org/test/pts/glibc-bench
>>> https://openbenchmarking.org/innhold/cac2836cd5dbb8ae279f8a5e7b0896272e82dc76
>>>
>>> If so, let me know so I can work on adding it.
>>
>> As a volunteer I appreciate any work you may wish to do for the project.
>>
>> Certainly, if you find it valuable to keep the pts/glibc-bench in sync
>> with glibc benchtests/ then it sounds like a good idea to update it
>> regularly based on the glibc changes.
> 
> Sure, happy to help the community.
>>
>> What is your impression of how pts/glibc-bench is being used?
>>
> 
> The section "Recent Results With This Test" shows that it has been
> used to measure things like:
> 
> Linux 4.14-rc1 vs. Linux 4.13 Kernel Benchmarks
> https://openbenchmarking.org/result/1709186-TY-LINUX414R23
> 
> as well as other core CPU systems
> 
> So in my humble opinion, I think it is getting a lot of traction.
> 
> There is still work that needs to be done, but it is good to have a way to
> measure the performance with the Phoronix framework.

That sounds great!

-- 
Cheers,
Carlos.


* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
  2017-10-04 22:50           ` Carlos O'Donell
@ 2017-10-04 23:12             ` Victor Rodriguez
  2017-10-05  3:20               ` Carlos O'Donell
  2017-10-05  4:58               ` Siddhesh Poyarekar
  0 siblings, 2 replies; 15+ messages in thread
From: Victor Rodriguez @ 2017-10-04 23:12 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: siddhesh, GNU C Library

On Wed, Oct 4, 2017 at 5:49 PM, Carlos O'Donell <carlos@redhat.com> wrote:
> On 10/04/2017 03:45 PM, Victor Rodriguez wrote:
>> On Wed, Oct 4, 2017 at 5:19 PM, Carlos O'Donell <carlos@redhat.com> wrote:
>>> On 10/03/2017 11:53 PM, Siddhesh Poyarekar wrote:
>>>> On Friday 22 September 2017 05:29 AM, Siddhesh Poyarekar wrote:
>>>>> On Thursday 21 September 2017 11:59 PM, Carlos O'Donell wrote:
>>>>>> I like the idea, and the point that the other benchmark eventually degrades
>>>>>> into measuring L1 performance is an interesting insight.
>>>>>>
>>>>>> I do not like that it produces total data rate not time taken per execution.
>>>>>> Why the change? If time taken per execution was OK before, why not here?
>>>>>
>>>>> That is because it seems more natural to express string function
>>>>> performance by the rate at which it processes data than the time it
>>>>> takes to execute.  It also makes comparison across sizes a bit
>>>>> interesting, i.e. the data rate for processing 1MB 32 bytes at a time vs
>>>>> 128 bytes at a time.
>>>>>
>>>>> The fact that "twice as fast" sounds better than "takes half the time"
>>>>> is an added bonus :)
>>>>
>>>> Carlos, do you think this is a reasonable enough explanation?  I'll fix
>>>> up the output in a subsequent patch so that it has a 'throughput'
>>>> property that the post-processing scripts can read without needing the
>>>> additional argument in 2/2.
>>>
>>> As the subsystem maintainer I defer to your choice here. I don't have a
>>> strong opinion, other than a desire for conformity of measurements to
>>> avoid confusion. If I could say anything, consider the consumer and make
>>> sure the data is tagged such that a consumer can determine if it is time
>>> or throughput.
>>>
>>> --
>>> Cheers,
>>> Carlos.
>>
>> Quick question: do you think it might be a good idea to add this test
>> into the Phoronix glibc bench?
>>
>> https://openbenchmarking.org/test/pts/glibc-bench
>> https://openbenchmarking.org/innhold/cac2836cd5dbb8ae279f8a5e7b0896272e82dc76
>>
>> If so, let me know so I can work on adding it.
>
> As a volunteer I appreciate any work you may wish to do for the project.
>
> Certainly, if you find it valuable to keep the pts/glibc-bench in sync
> with glibc benchtests/ then it sounds like a good idea to update it
> regularly based on the glibc changes.

Sure, happy to help the community.
>
> What is your impression of how pts/glibc-bench is being used?
>

The section "Recent Results With This Test" shows that it has been
used to measure things like:

Linux 4.14-rc1 vs. Linux 4.13 Kernel Benchmarks
https://openbenchmarking.org/result/1709186-TY-LINUX414R23

as well as other core CPU systems

So in my humble opinion, I think it is getting a lot of traction.

There is still work that needs to be done, but it is good to have a way to
measure the performance with the Phoronix framework.

Regards

Victor R
> --
> Cheers,
> Carlos.


* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
  2017-10-04 22:45         ` Victor Rodriguez
@ 2017-10-04 22:50           ` Carlos O'Donell
  2017-10-04 23:12             ` Victor Rodriguez
  0 siblings, 1 reply; 15+ messages in thread
From: Carlos O'Donell @ 2017-10-04 22:50 UTC (permalink / raw)
  To: Victor Rodriguez; +Cc: siddhesh, GNU C Library

On 10/04/2017 03:45 PM, Victor Rodriguez wrote:
> On Wed, Oct 4, 2017 at 5:19 PM, Carlos O'Donell <carlos@redhat.com> wrote:
>> On 10/03/2017 11:53 PM, Siddhesh Poyarekar wrote:
>>> On Friday 22 September 2017 05:29 AM, Siddhesh Poyarekar wrote:
>>>> On Thursday 21 September 2017 11:59 PM, Carlos O'Donell wrote:
>>>>> I like the idea, and the point that the other benchmark eventually degrades
>>>>> into measuring L1 performance is an interesting insight.
>>>>>
>>>>> I do not like that it produces total data rate not time taken per execution.
>>>>> Why the change? If time taken per execution was OK before, why not here?
>>>>
>>>> That is because it seems more natural to express string function
>>>> performance by the rate at which it processes data than the time it
>>>> takes to execute.  It also makes comparison across sizes a bit
>>>> interesting, i.e. the data rate for processing 1MB 32 bytes at a time vs
>>>> 128 bytes at a time.
>>>>
>>>> The fact that "twice as fast" sounds better than "takes half the time"
>>>> is an added bonus :)
>>>
>>> Carlos, do you think this is a reasonable enough explanation?  I'll fix
>>> up the output in a subsequent patch so that it has a 'throughput'
>>> property that the post-processing scripts can read without needing the
>>> additional argument in 2/2.
>>
>> As the subsystem maintainer I defer to your choice here. I don't have a
>> strong opinion, other than a desire for conformity of measurements to
>> avoid confusion. If I could say anything, consider the consumer and make
>> sure the data is tagged such that a consumer can determine if it is time
>> or throughput.
>>
>> --
>> Cheers,
>> Carlos.
> 
> Quick question: do you think it might be a good idea to add this test
> into the Phoronix glibc bench?
> 
> https://openbenchmarking.org/test/pts/glibc-bench
> https://openbenchmarking.org/innhold/cac2836cd5dbb8ae279f8a5e7b0896272e82dc76
> 
> If so, let me know so I can work on adding it.

As a volunteer I appreciate any work you may wish to do for the project.

Certainly, if you find it valuable to keep the pts/glibc-bench in sync
with glibc benchtests/ then it sounds like a good idea to update it 
regularly based on the glibc changes.

What is your impression of how pts/glibc-bench is being used?

-- 
Cheers,
Carlos.


* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
  2017-10-04 22:19       ` Carlos O'Donell
@ 2017-10-04 22:45         ` Victor Rodriguez
  2017-10-04 22:50           ` Carlos O'Donell
  2017-10-05  4:55         ` Siddhesh Poyarekar
  1 sibling, 1 reply; 15+ messages in thread
From: Victor Rodriguez @ 2017-10-04 22:45 UTC (permalink / raw)
  To: Carlos O'Donell; +Cc: siddhesh, GNU C Library

On Wed, Oct 4, 2017 at 5:19 PM, Carlos O'Donell <carlos@redhat.com> wrote:
> On 10/03/2017 11:53 PM, Siddhesh Poyarekar wrote:
>> On Friday 22 September 2017 05:29 AM, Siddhesh Poyarekar wrote:
>>> On Thursday 21 September 2017 11:59 PM, Carlos O'Donell wrote:
>>>> I like the idea, and the point that the other benchmark eventually degrades
>>>> into measuring L1 performance is an interesting insight.
>>>>
>>>> I do not like that it produces total data rate not time taken per execution.
>>>> Why the change? If time taken per execution was OK before, why not here?
>>>
>>> That is because it seems more natural to express string function
>>> performance by the rate at which it processes data than the time it
>>> takes to execute.  It also makes comparison across sizes a bit
>>> interesting, i.e. the data rate for processing 1MB 32 bytes at a time vs
>>> 128 bytes at a time.
>>>
>>> The fact that "twice as fast" sounds better than "takes half the time"
>>> is an added bonus :)
>>
>> Carlos, do you think this is a reasonable enough explanation?  I'll fix
>> up the output in a subsequent patch so that it has a 'throughput'
>> property that the post-processing scripts can read without needing the
>> additional argument in 2/2.
>
> As the subsystem maintainer I defer to your choice here. I don't have a
> strong opinion, other than a desire for conformity of measurements to
> avoid confusion. If I could say anything, consider the consumer and make
> sure the data is tagged such that a consumer can determine if it is time
> or throughput.
>
> --
> Cheers,
> Carlos.

Quick question: do you think it might be a good idea to add this test
into the Phoronix glibc bench?

https://openbenchmarking.org/test/pts/glibc-bench
https://openbenchmarking.org/innhold/cac2836cd5dbb8ae279f8a5e7b0896272e82dc76

If so, let me know so I can work on adding it.

regards

Victor Rodriguez


* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
  2017-10-04  6:53     ` Siddhesh Poyarekar
@ 2017-10-04 22:19       ` Carlos O'Donell
  2017-10-04 22:45         ` Victor Rodriguez
  2017-10-05  4:55         ` Siddhesh Poyarekar
  0 siblings, 2 replies; 15+ messages in thread
From: Carlos O'Donell @ 2017-10-04 22:19 UTC (permalink / raw)
  To: siddhesh, libc-alpha

On 10/03/2017 11:53 PM, Siddhesh Poyarekar wrote:
> On Friday 22 September 2017 05:29 AM, Siddhesh Poyarekar wrote:
>> On Thursday 21 September 2017 11:59 PM, Carlos O'Donell wrote:
>>> I like the idea, and the point that the other benchmark eventually degrades
>>> into measuring L1 performance is an interesting insight.
>>>
>>> I do not like that it produces total data rate not time taken per execution.
>>> Why the change? If time taken per execution was OK before, why not here?
>>
>> That is because it seems more natural to express string function
>> performance by the rate at which it processes data than the time it
>> takes to execute.  It also makes comparison across sizes a bit
>> interesting, i.e. the data rate for processing 1MB 32 bytes at a time vs
>> 128 bytes at a time.
>>
>> The fact that "twice as fast" sounds better than "takes half the time"
>> is an added bonus :)
> 
> Carlos, do you think this is a reasonable enough explanation?  I'll fix
> up the output in a subsequent patch so that it has a 'throughput'
> property that the post-processing scripts can read without needing the
> additional argument in 2/2.

As the subsystem maintainer I defer to your choice here. I don't have a
strong opinion, other than a desire for conformity of measurements to
avoid confusion. If I could say anything, consider the consumer and make
sure the data is tagged such that a consumer can determine if it is time
or throughput.
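
One possible shape for such a tag, loosely based on the json-lib calls already used in the patch; the "metric" attribute name is illustrative, not an existing benchtests field:

/* Emitted alongside the existing "bench-variant" attribute so that a
   consumer can tell a throughput figure from a timing.  */
json_attr_string (&json_ctx, "bench-variant", "walk");
json_attr_string (&json_ctx, "metric", "throughput");	/* or "time" */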

-- 
Cheers,
Carlos.


* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
  2017-09-22  0:00   ` Siddhesh Poyarekar
@ 2017-10-04  6:53     ` Siddhesh Poyarekar
  2017-10-04 22:19       ` Carlos O'Donell
  0 siblings, 1 reply; 15+ messages in thread
From: Siddhesh Poyarekar @ 2017-10-04  6:53 UTC (permalink / raw)
  To: Carlos O'Donell, libc-alpha

On Friday 22 September 2017 05:29 AM, Siddhesh Poyarekar wrote:
> On Thursday 21 September 2017 11:59 PM, Carlos O'Donell wrote:
>> I like the idea, and the point that the other benchmark eventually degrades
>> into measuring L1 performance is an interesting insight.
>>
>> I do not like that it produces total data rate not time taken per execution.
>> Why the change? If time taken per execution was OK before, why not here?
> 
> That is because it seems more natural to express string function
> performance by the rate at which it processes data than the time it
> takes to execute.  It also makes comparison across sizes a bit
> interesting, i.e. the data rate for processing 1MB 32 bytes at a time vs
> 128 bytes at a time.
> 
> The fact that "twice as fast" sounds better than "takes half the time"
> is an added bonus :)

Carlos, do you think this is a reasonable enough explanation?  I'll fix
up the output in a subsequent patch so that it has a 'throughput'
property that the post-processing scripts can read without needing the
additional argument in 2/2.

Siddhesh


* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
  2017-09-21 18:29 ` Carlos O'Donell
@ 2017-09-22  0:00   ` Siddhesh Poyarekar
  2017-10-04  6:53     ` Siddhesh Poyarekar
  0 siblings, 1 reply; 15+ messages in thread
From: Siddhesh Poyarekar @ 2017-09-22  0:00 UTC (permalink / raw)
  To: Carlos O'Donell, libc-alpha

On Thursday 21 September 2017 11:59 PM, Carlos O'Donell wrote:
> I like the idea, and the point that the other benchmark eventually degrades
> into measuring L1 performance is an interesting insight.
> 
> I do not like that it produces total data rate not time taken per execution.
> Why the change? If time taken per execution was OK before, why not here?

That is because it seems more natural to express string function
performance by the rate at which it processes data than the time it
takes to execute.  It also makes comparison across sizes a bit
interesting, i.e. the data rate for processing 1MB 32 bytes at a time vs
128 bytes at a time.

The fact that "twice as fast" sounds better than "takes half the time"
is an added bonus :)
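
Concretely, a minimal sketch of the reported quantity (not the exact benchtests code):

/* If a walk moves total_bytes in elapsed timing units, the benchmark
   reports total_bytes / elapsed; doubling this rate and halving the
   elapsed time describe the same improvement.  */
static double
walk_rate (double total_bytes, double elapsed)
{
  return total_bytes / elapsed;
}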

Siddhesh


* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
  2017-09-18 17:40 Siddhesh Poyarekar
  2017-09-21  6:29 ` Siddhesh Poyarekar
@ 2017-09-21 18:29 ` Carlos O'Donell
  2017-09-22  0:00   ` Siddhesh Poyarekar
  1 sibling, 1 reply; 15+ messages in thread
From: Carlos O'Donell @ 2017-09-21 18:29 UTC (permalink / raw)
  To: Siddhesh Poyarekar, libc-alpha

On 09/18/2017 11:40 AM, Siddhesh Poyarekar wrote:
> This benchmark is an attempt to eliminate cache effects from string
> benchmarks.  The benchmark walks both ways through a large memory area
> and copies different sizes of memory and alignments one at a time
> instead of looping around in the same memory area.  This is a good
> metric to have alongside the other memcpy benchmarks, especially for
> larger sizes where the likelihood of the call being done only once is
> pretty high.
> 
> The benchmark is unlike other string benchmarks in that it prints the
> total data rate achieved during a walk across the memory and not the
> time taken per execution.
> 
> 	* benchtests/bench-memcpy-walk.c: New file.
> 	* benchtests/Makefile (string-benchset): Add it.
I like the idea, and the point that the other benchmark eventually degrades
into measuring L1 performance is an interesting insight.

I do not like that it produces total data rate not time taken per execution.
Why the change? If time taken per execution was OK before, why not here?

-- 
Cheers,
Carlos.


* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
  2017-09-21  6:29 ` Siddhesh Poyarekar
@ 2017-09-21  7:41   ` Rajalakshmi Srinivasaraghavan
  0 siblings, 0 replies; 15+ messages in thread
From: Rajalakshmi Srinivasaraghavan @ 2017-09-21  7:41 UTC (permalink / raw)
  To: libc-alpha



On 09/21/2017 11:59 AM, Siddhesh Poyarekar wrote:
> The benchmark is unlike other string benchmarks in that it prints the
> total data rate achieved during a walk across the memory and not the
> time taken per execution.
> 
> 	* benchtests/bench-memcpy-walk.c: New file.
> 	* benchtests/Makefile (string-benchset): Add it.

LGTM.

-- 
Thanks
Rajalakshmi S


* Re: [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
  2017-09-18 17:40 Siddhesh Poyarekar
@ 2017-09-21  6:29 ` Siddhesh Poyarekar
  2017-09-21  7:41   ` Rajalakshmi Srinivasaraghavan
  2017-09-21 18:29 ` Carlos O'Donell
  1 sibling, 1 reply; 15+ messages in thread
From: Siddhesh Poyarekar @ 2017-09-21  6:29 UTC (permalink / raw)
  To: libc-alpha

Ping, any comments on this new benchmark?

Siddhesh

On Monday 18 September 2017 11:10 PM, Siddhesh Poyarekar wrote:
> This benchmark is an attempt to eliminate cache effects from string
> benchmarks.  The benchmark walks both ways through a large memory area
> and copies different sizes of memory and alignments one at a time
> instead of looping around in the same memory area.  This is a good
> metric to have alongside the other memcpy benchmarks, especially for
> larger sizes where the likelihood of the call being done only once is
> pretty high.
> 
> The benchmark is unlike other string benchmarks in that it prints the
> total data rate achieved during a walk across the memory and not the
> time taken per execution.
> 
> 	* benchtests/bench-memcpy-walk.c: New file.
> 	* benchtests/Makefile (string-benchset): Add it.
> 
> ---
>  benchtests/Makefile            |   3 +-
>  benchtests/bench-memcpy-walk.c | 126 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 128 insertions(+), 1 deletion(-)
>  create mode 100644 benchtests/bench-memcpy-walk.c
> 
> diff --git a/benchtests/Makefile b/benchtests/Makefile
> index a0c3470..fbdeadf 100644
> --- a/benchtests/Makefile
> +++ b/benchtests/Makefile
> @@ -37,7 +37,8 @@ string-benchset := bcopy bzero memccpy memchr memcmp memcpy memmem memmove \
>  		   strcat strchr strchrnul strcmp strcpy strcspn strlen \
>  		   strncasecmp strncat strncmp strncpy strnlen strpbrk strrchr \
>  		   strspn strstr strcpy_chk stpcpy_chk memrchr strsep strtok \
> -		   strcoll memcpy-large memcpy-random memmove-large memset-large
> +		   strcoll memcpy-large memcpy-random memmove-large memset-large \
> +		   memcpy-walk
>  
>  # Build and run locale-dependent benchmarks only if we're building natively.
>  ifeq (no,$(cross-compiling))
> diff --git a/benchtests/bench-memcpy-walk.c b/benchtests/bench-memcpy-walk.c
> new file mode 100644
> index 0000000..df6aa33
> --- /dev/null
> +++ b/benchtests/bench-memcpy-walk.c
> @@ -0,0 +1,126 @@
> +/* Measure memcpy function combined throughput for different alignments.
> +   Copyright (C) 2017 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +/* This microbenchmark measures the throughput of memcpy for various sizes from
> +   1 byte to 32MiB, doubling every iteration and then misaligning by 0-15
> +   bytes.  The copies are done from source to destination and then back and the
> +   source walks forward across the array and the destination walks backward by
> +   one byte each, thus measuring misaligned accesses as well.  The idea is to
> +   avoid caching effects by copying a different string and far enough from each
> +   other, walking in different directions so that we can measure prefetcher
> +   efficiency (software or hardware) more closely than with a loop copying the
> +   same data over and over, which eventually only gives us L1 cache
> +   performance.  */
> +
> +#ifndef MEMCPY_RESULT
> +# define MEMCPY_RESULT(dst, len) dst
> +# define START_SIZE 1
> +# define MIN_PAGE_SIZE (getpagesize () + 32 * 1024 * 1024)
> +# define TEST_MAIN
> +# define TEST_NAME "memcpy"
> +# define TIMEOUT (20 * 60)
> +# include "bench-string.h"
> +
> +IMPL (memcpy, 1)
> +#endif
> +
> +#include "json-lib.h"
> +
> +typedef char *(*proto_t) (char *, const char *, size_t);
> +
> +static void
> +do_one_test (json_ctx_t *json_ctx, impl_t *impl, char *dst, char *src,
> +	     size_t len)
> +{
> +  size_t i, iters = MIN_PAGE_SIZE;
> +  timing_t start, stop, cur;
> +
> +  char *dst_end = dst + MIN_PAGE_SIZE - len;
> +  char *src_end = src + MIN_PAGE_SIZE - len;
> +
> +  TIMING_NOW (start);
> +  /* Copy the entire buffer back and forth, LEN at a time.  */
> +  for (i = 0; i < iters && dst_end >= dst && src <= src_end; src++, dst_end--)
> +    {
> +      CALL (impl, dst_end, src, len);
> +      CALL (impl, src, dst_end, len);
> +      i += (len << 1);
> +    }
> +  TIMING_NOW (stop);
> +
> +  TIMING_DIFF (cur, start, stop);
> +
> +  json_element_double (json_ctx, (double) iters / (double) cur);
> +}
> +
> +static void
> +do_test (json_ctx_t *json_ctx, size_t len)
> +{
> +  json_element_object_begin (json_ctx);
> +  json_attr_uint (json_ctx, "length", (double) len);
> +  json_array_begin (json_ctx, "timings");
> +
> +  FOR_EACH_IMPL (impl, 0)
> +    do_one_test (json_ctx, impl, (char *) buf2, (char *) buf1, len);
> +
> +  json_array_end (json_ctx);
> +  json_element_object_end (json_ctx);
> +}
> +
> +int
> +test_main (void)
> +{
> +  json_ctx_t json_ctx;
> +  size_t i;
> +
> +  test_init ();
> +
> +  json_init (&json_ctx, 0, stdout);
> +
> +  json_document_begin (&json_ctx);
> +  json_attr_string (&json_ctx, "timing_type", TIMING_TYPE);
> +
> +  json_attr_object_begin (&json_ctx, "functions");
> +  json_attr_object_begin (&json_ctx, "memcpy");
> +  json_attr_string (&json_ctx, "bench-variant", "throughput");

I've changed this to "walk" since this may not be the only throughput
benchmark.

> +
> +  json_array_begin (&json_ctx, "ifuncs");
> +  FOR_EACH_IMPL (impl, 0)
> +    json_element_string (&json_ctx, impl->name);
> +  json_array_end (&json_ctx);
> +
> +  json_array_begin (&json_ctx, "results");
> +  for (i = START_SIZE; i <= MIN_PAGE_SIZE; i <<= 1)
> +    {
> +      /* Test length alignments from 0-16 bytes.  */
> +      for (int j = 0; j < 8; j++)
> +	{
> +	  do_test (&json_ctx, i + j);
> +	  do_test (&json_ctx, i + 16 - j);
> +	}
> +    }
> +
> +  json_array_end (&json_ctx);
> +  json_attr_object_end (&json_ctx);
> +  json_attr_object_end (&json_ctx);
> +  json_document_end (&json_ctx);
> +
> +  return ret;
> +}
> +
> +#include <support/test-driver.c>
> 


* [PATCH 1/2] benchtests: Memory walking benchmark for memcpy
@ 2017-09-18 17:40 Siddhesh Poyarekar
  2017-09-21  6:29 ` Siddhesh Poyarekar
  2017-09-21 18:29 ` Carlos O'Donell
  0 siblings, 2 replies; 15+ messages in thread
From: Siddhesh Poyarekar @ 2017-09-18 17:40 UTC (permalink / raw)
  To: libc-alpha

This benchmark is an attempt to eliminate cache effects from string
benchmarks.  The benchmark walks both ways through a large memory area
and copies different sizes of memory and alignments one at a time
instead of looping around in the same memory area.  This is a good
metric to have alongside the other memcpy benchmarks, especially for
larger sizes where the likelihood of the call being done only once is
pretty high.

The benchmark is unlike other string benchmarks in that it prints the
total data rate achieved during a walk across the memory and not the
time taken per execution.

	* benchtests/bench-memcpy-walk.c: New file.
	* benchtests/Makefile (string-benchset): Add it.

---
 benchtests/Makefile            |   3 +-
 benchtests/bench-memcpy-walk.c | 126 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 128 insertions(+), 1 deletion(-)
 create mode 100644 benchtests/bench-memcpy-walk.c

diff --git a/benchtests/Makefile b/benchtests/Makefile
index a0c3470..fbdeadf 100644
--- a/benchtests/Makefile
+++ b/benchtests/Makefile
@@ -37,7 +37,8 @@ string-benchset := bcopy bzero memccpy memchr memcmp memcpy memmem memmove \
 		   strcat strchr strchrnul strcmp strcpy strcspn strlen \
 		   strncasecmp strncat strncmp strncpy strnlen strpbrk strrchr \
 		   strspn strstr strcpy_chk stpcpy_chk memrchr strsep strtok \
-		   strcoll memcpy-large memcpy-random memmove-large memset-large
+		   strcoll memcpy-large memcpy-random memmove-large memset-large \
+		   memcpy-walk
 
 # Build and run locale-dependent benchmarks only if we're building natively.
 ifeq (no,$(cross-compiling))
diff --git a/benchtests/bench-memcpy-walk.c b/benchtests/bench-memcpy-walk.c
new file mode 100644
index 0000000..df6aa33
--- /dev/null
+++ b/benchtests/bench-memcpy-walk.c
@@ -0,0 +1,126 @@
+/* Measure memcpy function combined throughput for different alignments.
+   Copyright (C) 2017 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+/* This microbenchmark measures the throughput of memcpy for various sizes from
+   1 byte to 32MiB, doubling every iteration and then misaligning by 0-15
+   bytes.  The copies are done from source to destination and then back and the
+   source walks forward across the array and the destination walks backward by
+   one byte each, thus measuring misaligned accesses as well.  The idea is to
+   avoid caching effects by copying a different string and far enough from each
+   other, walking in different directions so that we can measure prefetcher
+   efficiency (software or hardware) more closely than with a loop copying the
+   same data over and over, which eventually only gives us L1 cache
+   performance.  */
+
+#ifndef MEMCPY_RESULT
+# define MEMCPY_RESULT(dst, len) dst
+# define START_SIZE 1
+# define MIN_PAGE_SIZE (getpagesize () + 32 * 1024 * 1024)
+# define TEST_MAIN
+# define TEST_NAME "memcpy"
+# define TIMEOUT (20 * 60)
+# include "bench-string.h"
+
+IMPL (memcpy, 1)
+#endif
+
+#include "json-lib.h"
+
+typedef char *(*proto_t) (char *, const char *, size_t);
+
+static void
+do_one_test (json_ctx_t *json_ctx, impl_t *impl, char *dst, char *src,
+	     size_t len)
+{
+  size_t i, iters = MIN_PAGE_SIZE;
+  timing_t start, stop, cur;
+
+  char *dst_end = dst + MIN_PAGE_SIZE - len;
+  char *src_end = src + MIN_PAGE_SIZE - len;
+
+  TIMING_NOW (start);
+  /* Copy the entire buffer back and forth, LEN at a time.  */
+  for (i = 0; i < iters && dst_end >= dst && src <= src_end; src++, dst_end--)
+    {
+      CALL (impl, dst_end, src, len);
+      CALL (impl, src, dst_end, len);
+      i += (len << 1);
+    }
+  TIMING_NOW (stop);
+
+  TIMING_DIFF (cur, start, stop);
+
+  json_element_double (json_ctx, (double) iters / (double) cur);
+}
+
+static void
+do_test (json_ctx_t *json_ctx, size_t len)
+{
+  json_element_object_begin (json_ctx);
+  json_attr_uint (json_ctx, "length", (double) len);
+  json_array_begin (json_ctx, "timings");
+
+  FOR_EACH_IMPL (impl, 0)
+    do_one_test (json_ctx, impl, (char *) buf2, (char *) buf1, len);
+
+  json_array_end (json_ctx);
+  json_element_object_end (json_ctx);
+}
+
+int
+test_main (void)
+{
+  json_ctx_t json_ctx;
+  size_t i;
+
+  test_init ();
+
+  json_init (&json_ctx, 0, stdout);
+
+  json_document_begin (&json_ctx);
+  json_attr_string (&json_ctx, "timing_type", TIMING_TYPE);
+
+  json_attr_object_begin (&json_ctx, "functions");
+  json_attr_object_begin (&json_ctx, "memcpy");
+  json_attr_string (&json_ctx, "bench-variant", "throughput");
+
+  json_array_begin (&json_ctx, "ifuncs");
+  FOR_EACH_IMPL (impl, 0)
+    json_element_string (&json_ctx, impl->name);
+  json_array_end (&json_ctx);
+
+  json_array_begin (&json_ctx, "results");
+  for (i = START_SIZE; i <= MIN_PAGE_SIZE; i <<= 1)
+    {
+      /* Test length alignments from 0-16 bytes.  */
+      for (int j = 0; j < 8; j++)
+	{
+	  do_test (&json_ctx, i + j);
+	  do_test (&json_ctx, i + 16 - j);
+	}
+    }
+
+  json_array_end (&json_ctx);
+  json_attr_object_end (&json_ctx);
+  json_attr_object_end (&json_ctx);
+  json_document_end (&json_ctx);
+
+  return ret;
+}
+
+#include <support/test-driver.c>
-- 
2.7.4


end of thread, other threads:[~2017-10-05 21:39 UTC | newest]

Thread overview: 15+ messages
-- links below jump to the message on this page --
2017-10-05 14:55 [PATCH 1/2] benchtests: Memory walking benchmark for memcpy Wilco Dijkstra
2017-10-05 21:39 ` Victor Rodriguez
  -- strict thread matches above, loose matches on Subject: below --
2017-09-18 17:40 Siddhesh Poyarekar
2017-09-21  6:29 ` Siddhesh Poyarekar
2017-09-21  7:41   ` Rajalakshmi Srinivasaraghavan
2017-09-21 18:29 ` Carlos O'Donell
2017-09-22  0:00   ` Siddhesh Poyarekar
2017-10-04  6:53     ` Siddhesh Poyarekar
2017-10-04 22:19       ` Carlos O'Donell
2017-10-04 22:45         ` Victor Rodriguez
2017-10-04 22:50           ` Carlos O'Donell
2017-10-04 23:12             ` Victor Rodriguez
2017-10-05  3:20               ` Carlos O'Donell
2017-10-05  4:58               ` Siddhesh Poyarekar
2017-10-05  4:55         ` Siddhesh Poyarekar
