public inbox for libc-alpha@sourceware.org
* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
@ 2019-10-21 14:25 Zhangxuelei (Derek)
  2019-10-22  9:50 ` Yikun Jiang
  2019-10-29 14:34 ` Wilco Dijkstra
  0 siblings, 2 replies; 18+ messages in thread
From: Zhangxuelei (Derek) @ 2019-10-21 14:25 UTC (permalink / raw)
  To: Wilco Dijkstra, Yikun Jiang
  Cc: libc-alpha, nd, Siddhesh Poyarekar, jiangyikun, Szabolcs Nagy

Hi Wilco, thanks for your reply and suggestions.

> So this makes it highly desirable to improve the generic versions
> of string functions.

We completely agree; we would also like to contribute our changes to the generic versions, because most of our changes are based on the generic code.

And we had a misunderstanding: we thought ifuncs were the general implementations in glibc. :)

However, there are two types of patches:
1. Improvements based on the generic version. There is no doubt that we should contribute these to the generic code.
2. Kunpeng-specific implementations, just like the memcpy patch, which address specifics of the Kunpeng CPU; we hope these can be added as ifuncs.

In addition, is there any other work to cover if we contribute to the generic versions?

> Note that memchr_strlen significantly outperforms the fastest strlen
> on sizes larger than 256, so I don't think that using uminv to test
> for zeroes is the fastest approach.

Indeed, but memchr_strlen has poor performance below 256 bytes, and if we mix this method into the current version, we may need a length counter and a check in each loop iteration for whether we are past 256 bytes; is that cheap? And we think small sizes are more important for strlen.
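
(For context, the memchr_strlen variant in the benchmarks is essentially
the idea below; this is a minimal C sketch of that idea, not the exact
benchmark code.)

#include <stdint.h>
#include <string.h>

/* strlen built on top of memchr: search for the terminating NUL with
   an effectively unbounded length.  */
static size_t
memchr_strlen (const char *s)
{
  const char *nul = memchr (s, '\0', PTRDIFF_MAX);
  return (size_t) (nul - s);
}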

Finally, we will submit the other generic implementations as soon as possible, and it would be good if you could review these two patches first. :)

[1]. memrchr: it has already been submitted as a generic version; see link:
https://sourceware.org/ml/libc-alpha/2019-10/msg00526.html

[2]. memcpy/memmove: the Kunpeng-specific implementation; see link:
https://sourceware.org/ml/libc-alpha/2019-10/msg00522.html

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
  2019-10-21 14:25 [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor Zhangxuelei (Derek)
@ 2019-10-22  9:50 ` Yikun Jiang
  2019-10-24 14:57   ` Carlos O'Donell
  2019-10-29 14:34 ` Wilco Dijkstra
  1 sibling, 1 reply; 18+ messages in thread
From: Yikun Jiang @ 2019-10-22  9:50 UTC (permalink / raw)
  To: Zhangxuelei (Derek)
  Cc: Wilco Dijkstra, libc-alpha, nd, Siddhesh Poyarekar, jiangyikun,
	Szabolcs Nagy

> Finally, we will submit the other generic implementations as soon as possible, and it would be good if you could review these two patches first. :)

All patches have been submitted and updated:

[1]. [PATCH] aarch64: Optimized implementation of memrchr
https://sourceware.org/ml/libc-alpha/2019-10/msg00526.html

[2]. [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
https://sourceware.org/ml/libc-alpha/2019-10/msg00522.html

[3] [PATCH v2] aarch64: Optimized implementation of memcmp
https://sourceware.org/ml/libc-alpha/2019-10/msg00637.html

[4] [PATCH v2] aarch64: Optimized implementation of strcpy
https://sourceware.org/ml/libc-alpha/2019-10/msg00639.html

[5] [PATCH v2] aarch64: Optimized implementation of strnlen
https://sourceware.org/ml/libc-alpha/2019-10/msg00640.html

[6] [PATCH v2] aarch64: Optimized strlen for strlen_asimd
https://sourceware.org/ml/libc-alpha/2019-10/msg00641.html


Regards,
Yikun
----------------------------------------
Jiang Yikun(Kero)
Mail: yikunkero@gmail.com

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
  2019-10-22  9:50 ` Yikun Jiang
@ 2019-10-24 14:57   ` Carlos O'Donell
  2019-10-26  9:57     ` Florian Weimer
  2019-10-29  1:20     ` Carlos O'Donell
  0 siblings, 2 replies; 18+ messages in thread
From: Carlos O'Donell @ 2019-10-24 14:57 UTC (permalink / raw)
  To: Yikun Jiang, Zhangxuelei (Derek)
  Cc: Wilco Dijkstra, libc-alpha, nd, Siddhesh Poyarekar, jiangyikun,
	Szabolcs Nagy

On 10/22/19 5:50 AM, Yikun Jiang wrote:
>> Finally, we will submit the other generic implementations as soon as possible, and it would be good if you could review these two patches first. :)
> 
> All patches have been submitted and updated:
> 
> [1]. [PATCH] aarch64: Optimized implementation of memrchr
> https://sourceware.org/ml/libc-alpha/2019-10/msg00526.html
> 
> [2]. [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
> https://sourceware.org/ml/libc-alpha/2019-10/msg00522.html
> 
> [3] [PATCH v2] aarch64: Optimized implementation of memcmp
> https://sourceware.org/ml/libc-alpha/2019-10/msg00637.html
> 
> [4] [PATCH v2] aarch64: Optimized implementation of strcpy
> https://sourceware.org/ml/libc-alpha/2019-10/msg00639.html
> 
> [5] [PATCH v2] aarch64: Optimized implementation of strnlen
> https://sourceware.org/ml/libc-alpha/2019-10/msg00640.html
> 
> [6] [PATCH v2] aarch64: Optimized strlen for strlen_asimd
> https://sourceware.org/ml/libc-alpha/2019-10/msg00641.html

What is the current status of Huawei's copyright assignment
with the FSF?

-- 
Cheers,
Carlos.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
  2019-10-24 14:57   ` Carlos O'Donell
@ 2019-10-26  9:57     ` Florian Weimer
  2019-10-26 13:40       ` Carlos O'Donell
  2019-10-29  1:20     ` Carlos O'Donell
  1 sibling, 1 reply; 18+ messages in thread
From: Florian Weimer @ 2019-10-26  9:57 UTC (permalink / raw)
  To: Carlos O'Donell
  Cc: Yikun Jiang, Zhangxuelei (Derek),
	Wilco Dijkstra, libc-alpha, nd, Siddhesh Poyarekar, jiangyikun,
	Szabolcs Nagy

* Carlos O'Donell:

> On 10/22/19 5:50 AM, Yikun Jiang wrote:
>>> Finally, we will submit the other generic implementations as soon
>>> as possible, and it would be good if you could review these two
>>> patches first. :)
>> 
>> All patches have been submitted and updated:
>> 
>> [1]. [PATCH] aarch64: Optimized implementation of memrchr
>> https://sourceware.org/ml/libc-alpha/2019-10/msg00526.html
>> 
>> [2]. [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for
>> Kunpeng processor
>> https://sourceware.org/ml/libc-alpha/2019-10/msg00522.html
>> 
>> [3] [PATCH v2] aarch64: Optimized implementation of memcmp
>> https://sourceware.org/ml/libc-alpha/2019-10/msg00637.html
>> 
>> [4] [PATCH v2] aarch64: Optimized implementation of strcpy
>> https://sourceware.org/ml/libc-alpha/2019-10/msg00639.html
>> 
>> [5] [PATCH v2] aarch64: Optimized implementation of strnlen
>> https://sourceware.org/ml/libc-alpha/2019-10/msg00640.html
>> 
>> [6] [PATCH v2] aarch64: Optimized strlen for strlen_asimd
>> https://sourceware.org/ml/libc-alpha/2019-10/msg00641.html
>
> What is the current status of Huawei's copyright assignment
> with the FSF?

Has there been a reply to this question?  If yes, it didn't make it to
the list.  Thanks.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
  2019-10-26  9:57     ` Florian Weimer
@ 2019-10-26 13:40       ` Carlos O'Donell
  0 siblings, 0 replies; 18+ messages in thread
From: Carlos O'Donell @ 2019-10-26 13:40 UTC (permalink / raw)
  To: Florian Weimer
  Cc: Yikun Jiang, Zhangxuelei (Derek),
	Wilco Dijkstra, libc-alpha, nd, Siddhesh Poyarekar, jiangyikun,
	Szabolcs Nagy

On 10/26/19 5:57 AM, Florian Weimer wrote:
> * Carlos O'Donell:
> 
>> On 10/22/19 5:50 AM, Yikun Jiang wrote:
>>>> Finally, we will submit the other generic implementations as soon
>>>> as possible, and it would be good if you could review these two
>>>> patches first. :)
>>>
>>> All patches have been submitted and updated:
>>>
>>> [1]. [PATCH] aarch64: Optimized implementation of memrchr
>>> https://sourceware.org/ml/libc-alpha/2019-10/msg00526.html
>>>
>>> [2]. [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for
>>> Kunpeng processor
>>> https://sourceware.org/ml/libc-alpha/2019-10/msg00522.html
>>>
>>> [3] [PATCH v2] aarch64: Optimized implementation of memcmp
>>> https://sourceware.org/ml/libc-alpha/2019-10/msg00637.html
>>>
>>> [4] [PATCH v2] aarch64: Optimized implementation of strcpy
>>> https://sourceware.org/ml/libc-alpha/2019-10/msg00639.html
>>>
>>> [5] [PATCH v2] aarch64: Optimized implementation of strnlen
>>> https://sourceware.org/ml/libc-alpha/2019-10/msg00640.html
>>>
>>> [6] [PATCH v2] aarch64: Optimized strlen for strlen_asimd
>>> https://sourceware.org/ml/libc-alpha/2019-10/msg00641.html
>>
>> What is the current status of Huawei's copyright assignment
>> with the FSF?
> 
> Has there been a reply to this question?  If yes, it didn't make it to
> the list.  Thanks.

I've asked fsf-records@gnu.org to clarify the pending status.

-- 
Cheers,
Carlos.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
  2019-10-24 14:57   ` Carlos O'Donell
  2019-10-26  9:57     ` Florian Weimer
@ 2019-10-29  1:20     ` Carlos O'Donell
  1 sibling, 0 replies; 18+ messages in thread
From: Carlos O'Donell @ 2019-10-29  1:20 UTC (permalink / raw)
  To: Yikun Jiang, Zhangxuelei (Derek),
	Siddhesh Poyarekar, Szabolcs Nagy, Florian Weimer, DJ Delorie
  Cc: Wilco Dijkstra, libc-alpha, nd, jiangyikun

On 10/24/19 10:50 AM, Carlos O'Donell wrote:
> On 10/22/19 5:50 AM, Yikun Jiang wrote:
>>> Finally, we will submit the other generic implementations as soon as possible, and it would be good if you could review these two patches first. :)
>>
>> All patches have been submitted and updated:
>>
>> [1]. [PATCH] aarch64: Optimized implementation of memrchr
>> https://sourceware.org/ml/libc-alpha/2019-10/msg00526.html
>>
>> [2]. [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
>> https://sourceware.org/ml/libc-alpha/2019-10/msg00522.html
>>
>> [3] [PATCH v2] aarch64: Optimized implementation of memcmp
>> https://sourceware.org/ml/libc-alpha/2019-10/msg00637.html
>>
>> [4] [PATCH v2] aarch64: Optimized implementation of strcpy
>> https://sourceware.org/ml/libc-alpha/2019-10/msg00639.html
>>
>> [5] [PATCH v2] aarch64: Optimized implementation of strnlen
>> https://sourceware.org/ml/libc-alpha/2019-10/msg00640.html
>>
>> [6] [PATCH v2] aarch64: Optimized strlen for strlen_asimd
>> https://sourceware.org/ml/libc-alpha/2019-10/msg00641.html
> 
> What is the current status of Huawei's copyright assignment
> with the FSF?
> 

At present Huawei does not have an employer disclaimer.

If any of this work can be claimed by Huawei then it cannot be
integrated into glibc.

The FSF is following up with Huawei about this. There was some
confusion over exactly what kind of disclaimer, personal vs.
employer, was being requested.

My sincerest apologies for the delay as we sort this out.

-- 
Cheers,
Carlos.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
  2019-10-21 14:25 [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor Zhangxuelei (Derek)
  2019-10-22  9:50 ` Yikun Jiang
@ 2019-10-29 14:34 ` Wilco Dijkstra
  1 sibling, 0 replies; 18+ messages in thread
From: Wilco Dijkstra @ 2019-10-29 14:34 UTC (permalink / raw)
  To: Zhangxuelei (Derek), Yikun Jiang
  Cc: libc-alpha, nd, Siddhesh Poyarekar, jiangyikun, Szabolcs Nagy

Hi Derek,

>> Note that memchr_strlen significantly outperforms the fastest strlen
>> on sizes larger than 256, so I don't think that using uminv to test
>> for zeroes is the fastest approach.
>
> Indeed, but memchr_strlen has poor performance below 256 bytes,

Well that means memchr can be sped up for small sizes. While it is more
complex than strlen, it shouldn't be significantly slower.

> and if we mix this method into the current version, we may need a length counter
> and a check in each loop iteration for whether we are past 256 bytes; is that cheap?

That may be possible, eg. by unrolling the first 64-128 bytes and using a loop
optimized for throughput for anything larger (on the assumption that if a
string is larger than 128, it is likely much larger).
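
(As a rough illustration, that structure could look like the C sketch
below; only the control flow is the point here, scan_for_nul is a
hypothetical stand-in for the vectorised zero search, and the real code
would be AArch64 assembly.)

#include <stddef.h>

/* Hypothetical helper: index of the first NUL within the BYTES bytes
   at S, or BYTES if none is found.  */
static size_t scan_for_nul (const char *s, size_t bytes);

static size_t
strlen_sketch (const char *s)
{
  /* Fast path for short strings: cover the first 128 bytes in fixed
     32-byte chunks, with no loop-carried length counter.  */
  for (size_t off = 0; off < 128; off += 32)
    {
      size_t i = scan_for_nul (s + off, 32);
      if (i < 32)
        return off + i;
    }

  /* Throughput-oriented loop for anything larger, on the assumption
     that strings longer than 128 bytes are likely much longer.  */
  for (size_t off = 128; ; off += 64)
    {
      size_t i = scan_for_nul (s + off, 64);
      if (i < 64)
        return off + i;
    }
}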

However, my point was that while the uminv sequence is simple and small, it's not
the fastest, so ultimately we need to find an alternative sequence which works
better for all the generic string functions which search for a character (strlen, strnlen,
memchr, memrchr, rawmemchr, strchr, strnchr, strchrnul, strcpy, strncpy).
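
(For reference, the generic C string routines in glibc find a zero byte
within a word using the classic bit trick below; the open question here
is what the fastest equivalent SIMD sequence is on AArch64. Standalone
sketch for 64-bit words.)

#include <stdint.h>

/* Nonzero iff the 64-bit word X contains a zero byte (the "haszero"
   trick used by glibc's generic C string code): a zero byte underflows
   when 1 is subtracted, setting its top bit, and "& ~x & himagic"
   masks away bytes whose own top bit was already set.  */
static inline int
has_zero_byte (uint64_t x)
{
  const uint64_t lomagic = 0x0101010101010101ULL;
  const uint64_t himagic = 0x8080808080808080ULL;
  return ((x - lomagic) & ~x & himagic) != 0;
}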

> And we think small sizes are more important for strlen.

Absolutely, handling small cases quickly is essential for all string functions.

Wilco

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
  2019-10-30  6:42   ` Yikun Jiang
@ 2019-11-01 12:55     ` Carlos O'Donell
  0 siblings, 0 replies; 18+ messages in thread
From: Carlos O'Donell @ 2019-11-01 12:55 UTC (permalink / raw)
  To: Yikun Jiang
  Cc: Zhangxuelei (Derek),
	Siddhesh Poyarekar, Szabolcs Nagy, Florian Weimer, DJ Delorie,
	Wilco Dijkstra, libc-alpha, jiangyikun

On 10/30/19 2:42 AM, Yikun Jiang wrote:
> Hi Carlos, or anyone who can reply. :) Sorry to bother you here.
> 
> @Craig mentioned the corporate copyright assignment from
> copyright-clerk@fsf.org, but there has been no more detailed
> information in the follow-up.
> 
> It looks like we have two choices:
> a. employer disclaimer + individual assignment
> b. corporate assignment
> 
> We are employees of Huawei, and we have obtained the company's
> approval to contribute the code to GNU glibc.
> 
> We have also already signed the individual assignments with the FSF
> (Yikun Jiang (1435635), Xuelei Zhang (1436346)).

Having individual assignments with the FSF is very good.

> Could you tell us which copyright assignment we should use? What
> should we do as the next step?

You have a choice. Which one you choose depends on what Huawei,
as a company, wishes to do.

a. Employer disclaimer
- The employer signs an agreement with the FSF saying they disclaim
  the ownership of the work you are doing on glibc.

b. Corporate assignment
- The employer signs an assignment to assign copyright of employee
  work to the FSF for work done on glibc.

They are legally distinct processes, and this decision should usually
be taken to Huawei's legal representatives, who can decide which
is best for the company.

You should ask the FSF for copies of both of these documents that
would need to be signed by your employer, so that you can take them
into any internal meetings.

Next steps:
- Get documents from the FSF.
- Have an internal meeting at Huawei with company legal representatives
  and explain the choices, and get support for one.
- Work with the FSF to complete one of the two documents.

-- 
Cheers,
Carlos.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
  2019-10-29  3:26 ` Carlos O'Donell
@ 2019-10-30  6:42   ` Yikun Jiang
  2019-11-01 12:55     ` Carlos O'Donell
  0 siblings, 1 reply; 18+ messages in thread
From: Yikun Jiang @ 2019-10-30  6:42 UTC (permalink / raw)
  To: Carlos O'Donell
  Cc: Zhangxuelei (Derek),
	Siddhesh Poyarekar, Szabolcs Nagy, Florian Weimer, DJ Delorie,
	Wilco Dijkstra, libc-alpha, jiangyikun

Hi Carlos, or anyone who can reply. :) Sorry to bother you here.

@Craig mentioned the corporate copyright assignment from
copyright-clerk@fsf.org, but there has been no more detailed
information in the follow-up.

It looks like we have two choices:
a. employer disclaimer + individual assignment
b. corporate assignment

We are employees of Huawei, and we have obtained the company's
approval to contribute the code to GNU glibc.

We have also already signed the individual assignments with the FSF
(Yikun Jiang (1435635), Xuelei Zhang (1436346)).

Could you tell us which copyright assignment we should use? What
should we do as the next step?

Regards,
Yikun

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
  2019-10-29  3:22 Zhangxuelei (Derek)
@ 2019-10-29  3:26 ` Carlos O'Donell
  2019-10-30  6:42   ` Yikun Jiang
  0 siblings, 1 reply; 18+ messages in thread
From: Carlos O'Donell @ 2019-10-29  3:26 UTC (permalink / raw)
  To: Zhangxuelei (Derek),
	Yikun Jiang, Siddhesh Poyarekar, Szabolcs Nagy, Florian Weimer,
	DJ Delorie
  Cc: Wilco Dijkstra, libc-alpha, nd, jiangyikun

On 10/28/19 11:22 PM, Zhangxuelei (Derek) wrote:
> Hi Carlos,
> 
>> At present Huawei does not have an employer disclaimer.
>>
>> If any of this work can be claimed by Huawei then it cannot be 
>> integrated into glibc.
>>
>> The FSF is following up with Huawei about this. There was some 
>> confusion over exactly what kind of disclaimer, personal vs.
>> employer, was being requested.
>>
>> My sincerest apologies for the delay as we sort this out.
> 
> We have received and replied to the mail from @Craig; it seems we need to request a corporate assignment.
> 
> I will sync our results ASAP after @Craig replies.
> 
> Thanks for your attention and reminder.

Thank you very much for the prompt reply to the FSF!

-- 
Cheers,
Carlos.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
@ 2019-10-29  3:22 Zhangxuelei (Derek)
  2019-10-29  3:26 ` Carlos O'Donell
  0 siblings, 1 reply; 18+ messages in thread
From: Zhangxuelei (Derek) @ 2019-10-29  3:22 UTC (permalink / raw)
  To: Carlos O'Donell, Yikun Jiang, Siddhesh Poyarekar,
	Szabolcs Nagy, Florian Weimer, DJ Delorie
  Cc: Wilco Dijkstra, libc-alpha, nd, jiangyikun

Hi Carlos,

> At present Huawei does not have an employer disclaimer.
>
> If any of this work can be claimed by Huawei then it cannot be 
> integrated into glibc.
>
> The FSF is following up with Huawei about this. There was some 
> confusion over exactly what kind of disclaimer, personal vs.
> employer, was being requested.
>
> My sincerest apologies for the delay as we sort this out.

We have received and replied to the mail from @Craig; it seems we need to request a corporate assignment.

I will sync our results ASAP after @Craig replies.

Thanks for your attention and reminder.

-
Regards
Xuelei

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
@ 2019-10-26 13:46 Zhangxuelei (Derek)
  0 siblings, 0 replies; 18+ messages in thread
From: Zhangxuelei (Derek) @ 2019-10-26 13:46 UTC (permalink / raw)
  To: Wilco Dijkstra, libc-alpha, nd, siddhesh, jiangyikun, yikunkero; +Cc: nd

Hi Wilco, sorry for the delay in replying; we did a lot of testing after modifying memmove.

> In order to select the right memmove implementation,
> multiarch/memmove.c needs changes similar to those in multiarch/memcpy.c.

That's true, we missed this and will submit it in the next patch.

> Also, since the memmove entry sequence checks for both the medium
> and large cases, the full overlap check should be done in both.
> Currently only sizes 96-512 benefit, not the move_long case:

Yes, we will add the full overlap check to the move_long case in the next patch.

What confuses us now is that we removed the dst_unaligned code in memcpy according to the previous comments, and this did not affect performance in the memcpy test cases. But when memmove is called and falls through to the memcpy part, the unaligned cases are significantly slower than the aligned cases, according to the results of the first half of memmove-walk shown at the bottom. So do you think we should still remove the dst_unaligned code?

Our analysis is that the extra checks at the beginning of memmove may weaken the processor's ability to handle this case, and so the dst_unaligned code makes a difference.

> Well it looks the dst_unaligned code (which deals with a specific
> issue on ThunderX2) ...

I remember you mentioned the specific issue on ThunderX2 before; could you tell us more about it?

Function: memmove
Variant: walk
                    __memmove_thunderx	__memmove_thunderx2	__memmove_falkor	__memmove_kunpeng2	__memmove_generic
========================================================================================================================
      length=128:        33.99 (-73.69%)	       18.65 (  4.67%)	       17.75 (  9.29%)	       19.21 (  1.80%)	       19.57	
      length=129:        35.41 (  2.08%)	       37.43 ( -3.51%)	       35.87 (  0.79%)	       34.71 (  4.01%)	       36.16	
      length=256:        45.55 (-37.95%)	       32.61 (  1.23%)	       35.59 ( -7.79%)	       32.95 (  0.20%)	       33.02	
      length=257:        66.36 (  4.20%)	       69.50 ( -0.33%)	       68.03 (  1.80%)	       68.53 (  1.08%)	       69.27	
      length=512:        82.77 (-34.10%)	       65.67 ( -6.41%)	       65.61 ( -6.30%)	       60.13 (  2.57%)	       61.72	
      length=513:       146.19 (  3.90%)	      132.98 ( 12.59%)	      132.28 ( 13.05%)	      151.50 (  0.41%)	      152.12	
     length=1024:       155.75 (-26.13%)	      142.74 (-15.60%)	      126.97 ( -2.83%)	      121.58 (  1.53%)	      123.48	
     length=1025:       289.15 (  4.72%)	      318.71 ( -5.02%)	      262.97 ( 13.35%)	      307.00 ( -1.16%)	      303.48	
     length=2048:       298.85 (-22.16%)	      233.98 (  4.35%)	      249.71 ( -2.08%)	      245.37 ( -0.30%)	      244.63	
     length=2049:       409.46 ( 14.62%)	      399.08 ( 16.78%)	      508.64 ( -6.07%)	      465.79 (  2.87%)	      479.54	
     length=4096:       543.10 (-11.30%)	      445.35 (  8.73%)	      491.40 ( -0.71%)	      435.61 ( 10.73%)	      487.95	
     length=4097:       680.95 ( 18.96%)	      593.99 ( 29.31%)	      990.52 (-17.89%)	      882.91 ( -5.08%)	      840.23	
     length=8192:      1047.46 ( -8.01%)	      867.03 ( 10.59%)	      977.80 ( -0.83%)	      850.57 ( 12.29%)	      969.74	
     length=8193:      1224.46 ( 21.97%)	      979.34 ( 37.59%)	     1981.71 (-26.29%)	     1714.96 ( -9.29%)	     1569.12	
    length=16384:      2055.73 ( -5.42%)	     1701.01 ( 12.77%)	     1944.38 (  0.29%)	     1683.51 ( 13.67%)	     1950.11	
    length=16385:      2314.62 ( 23.38%)	     1774.44 ( 41.26%)	     3967.45 (-31.34%)	     3385.52 (-12.07%)	     3020.82	
    length=32768:      5153.99 (-32.25%)	     3426.50 ( 12.08%)	     3875.16 (  0.56%)	     3338.91 ( 14.32%)	     3897.16	
    length=32769:      5343.41 (  9.64%)	     3375.50 ( 42.92%)	     7925.06 (-34.01%)	     6716.28 (-13.57%)	     5913.72	
    length=65536:     10361.70 (-35.90%)	     6768.32 ( 11.23%)	     7759.75 ( -1.78%)	     6658.73 ( 12.66%)	     7624.32	
    length=65537:     10284.00 ( 12.00%)	     6528.85 ( 44.13%)	    15844.40 (-35.58%)	    13437.90 (-14.98%)	    11686.80	
   length=131072:     20539.30 (-34.71%)	    13672.50 ( 10.33%)	    15567.10 ( -2.10%)	    13325.60 ( 12.60%)	    15247.50	
   length=131073:     20868.20 ( 10.97%)	    12807.80 ( 45.36%)	    31605.90 (-34.83%)	    26788.20 (-14.28%)	    23440.70	
   length=262144:     41304.50 (-35.25%)	    26883.30 ( 11.97%)	    31038.70 ( -1.63%)	    26533.40 ( 13.12%)	    30539.40	
   length=262145:     41157.90 ( 12.84%)	    25568.20 ( 45.85%)	    63229.00 (-33.90%)	    53525.00 (-13.35%)	    47220.50	
   length=524288:     81777.00 (-32.88%)	    54133.00 ( 12.04%)	    61853.30 ( -0.51%)	    52869.40 ( 14.09%)	    61542.20	
   length=524289:     81986.90 ( 14.71%)	    50562.00 ( 47.40%)	   126255.00 (-31.33%)	   105969.00 (-10.23%)	    96132.70	
  length=1048576:    163628.00 (-33.00%)	   107776.00 ( 12.00%)	   123819.00 ( -1.00%)	   105831.00 ( 14.00%)	   123170.00	
  length=1048577:    177503.00 ( 12.00%)	    98680.60 ( 51.09%)	   253068.00 (-26.00%)	   211155.00 ( -5.00%)	   201763.00	
  length=2097152:    336756.00 (-34.00%)	   224097.00 ( 11.00%)	   254575.00 ( -1.00%)	   219864.00 ( 13.00%)	   253124.00	
  length=2097153:    373590.00 (  9.00%)	   214822.00 ( 48.00%)	   506479.00 (-23.00%)	   426299.00 ( -3.00%)	   414899.00	
  length=4194304:    662606.00 (-35.00%)	   437195.00 ( 11.00%)	   497288.00 ( -2.00%)	   427614.00 ( 13.00%)	   491729.00	
  length=4194305:    697910.00 (  9.00%)	   417656.00 ( 45.00%)	  1020670.00 (-32.62%)	   856051.00 (-12.00%)	   769599.00	
  length=8388608:   1307990.00 (-34.88%)	   852030.00 ( 12.00%)	   983092.00 ( -2.00%)	   834918.00 ( 13.00%)	   969712.00	
  length=8388609:   1416420.00 (  8.70%)	   821262.00 ( 47.06%)	  2030660.00 (-30.89%)	  1708360.00 (-10.11%)	  1551450.00	
 length=16777216:   2586380.00 (-33.02%)	  1702120.00 ( 12.46%)	  1970000.00 ( -1.32%)	  1676900.00 ( 13.76%)	  1944360.00	
 length=16777217:   2796060.00 ( 13.29%)	  1627720.00 ( 49.52%)	  4079100.00 (-26.51%)	  3410640.00 ( -5.77%)	  3224440.00	
 length=33554432:   5241680.00 (-33.96%)	  3488860.00 ( 10.84%)	  4890730.00 (-24.99%)	  3474630.00 ( 11.20%)	  3912900.00	
 length=33554433:   5666550.00 ( 14.71%)	  3357520.00 ( 49.46%)	  8039630.00 (-21.01%)	  6824230.00 ( -2.72%)	  6643780.00

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
  2019-10-26 13:22 Zhangxuelei (Derek)
@ 2019-10-26 13:40 ` Carlos O'Donell
  0 siblings, 0 replies; 18+ messages in thread
From: Carlos O'Donell @ 2019-10-26 13:40 UTC (permalink / raw)
  To: Zhangxuelei (Derek), Florian Weimer, libc-alpha
  Cc: Yikun Jiang, Wilco Dijkstra, nd, Siddhesh Poyarekar, jiangyikun,
	Szabolcs Nagy

On 10/26/19 9:22 AM, Zhangxuelei (Derek) wrote:
> Hi Florian,
> 
>> Has there been a reply to this question?  
>> If yes, it didn't make it to the list.  Thanks.
> 
> Yes, I replied 2 days ago, as follows.
> 
>>> What is the current status of Huawei's copyright assignment with the 
>>> FSF?
> 
> Yes, we have already signed the copyright assignment with the FSF on
> 09 Oct. 2019. My CLA ID is 1436346.

This hasn't yet appeared in the global list of copyright assignments.

I have sent the FSF Records clerk a question to verify and get the list
updated.

Thank you for your patience.

-- 
Cheers,
Carlos.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
@ 2019-10-26 13:22 Zhangxuelei (Derek)
  2019-10-26 13:40 ` Carlos O'Donell
  0 siblings, 1 reply; 18+ messages in thread
From: Zhangxuelei (Derek) @ 2019-10-26 13:22 UTC (permalink / raw)
  To: Florian Weimer, Carlos O'Donell, libc-alpha
  Cc: Yikun Jiang, Wilco Dijkstra, nd, Siddhesh Poyarekar, jiangyikun,
	Szabolcs Nagy

Hi Florian,

> Has there been a reply to this question?  
> If yes, it didn't make it to the list.  Thanks.

Yes, I replied 2 days ago, as follows.

>> What is the current status of Huawei's copyright assignment with the 
>> FSF?

Yes, we have already signed the copyright assignment with the FSF on
09 Oct. 2019. My CLA ID is 1436346.

Cheers,
Xuelei

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
  2019-10-17 13:16 Xuelei Zhang
  2019-10-17 14:57 ` Yikun Jiang
@ 2019-10-22 18:29 ` Wilco Dijkstra
  1 sibling, 0 replies; 18+ messages in thread
From: Wilco Dijkstra @ 2019-10-22 18:29 UTC (permalink / raw)
  To: Xuelei Zhang, libc-alpha, nd, siddhesh, jiangyikun, yikunkero; +Cc: nd

Hi Xuelei,

In order to select the right memmove implementation, multiarch/memmove.c needs
changes similar to those in multiarch/memcpy.c.
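
(For illustration only: the memmove.c hunk would presumably mirror the
memcpy.c change in this patch. A rough sketch of the resolver is below;
the __memmove_kunpeng name comes from the patch, while the surrounding
selector structure is assumed from the posted memcpy.c diff and the
existing multiarch/memmove.c, not taken from a submitted hunk.)

extern __typeof (__redirect_memmove) __memmove_kunpeng attribute_hidden;

libc_ifunc (__libc_memmove,
            IS_KUNPENG (midr)
            ? __memmove_kunpeng
            : (IS_THUNDERX (midr)
               ? __memmove_thunderx
               : (IS_FALKOR (midr)
                  ? __memmove_falkor
                  : __memmove_generic)));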

Also, since the memmove entry sequence checks for both the medium and large cases, the
full overlap check should be done in both. Currently only sizes 96-512 benefit, not the
move_long case:

+       /* long move: more than 512 bytes align the dstend */
+       .p2align 4
+L(move_long):
+1:
+       add     srcend, src, count
+       add     dstend, dstin, count

This should do the same as move_middle:

+L(move_middle):
+       cbz     tmp1, 3f

Wilco

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
  2019-10-17 14:57 ` Yikun Jiang
@ 2019-10-18 15:50   ` Wilco Dijkstra
  0 siblings, 0 replies; 18+ messages in thread
From: Wilco Dijkstra @ 2019-10-18 15:50 UTC (permalink / raw)
  To: Yikun Jiang, Xuelei Zhang
  Cc: libc-alpha, nd, Siddhesh Poyarekar, jiangyikun, Szabolcs Nagy

Hi Yikun,

>> Btw do you have any plans to post other string functions that you can discuss here? If so, would these
>> add more ifuncs or improve the generic versions?
>
> Yes, memcmp, strlen, strnlen, strcpy, memrchr will be included, we will summited the patch and test results as soon as possible.
>
> We have submitted the patches for the string functions; see below:

Thanks, that makes it easier to discuss in more detail. So in almost all cases these
patches add new ifuncs. There are general issues with ifuncs which make adding
lots of similar ifuncs a bad idea. The key problem is that ifuncs are not used inside
GLIBC itself. For example the strstr implementation benefits from a fast memcmp
but it always uses the generic memcmp, so it won't get any gains from the Kunpeng
optimized one.

So this makes it highly desirable to improve the generic versions of string functions.
From what I see, all of the changes are fairly simple and generic improvements, so
they can easily be applied to the generic versions. I think it would be a very bad idea to add
lots of ifunc variants which are almost identical to existing versions and differ in
minor details like unrolling.

For example strlen and memcmp add unrolling to existing code. Note that memchr_strlen
significantly outperforms the fastest strlen on sizes larger than 256, so I don't think that
using uminv to test for zeroes is the fastest approach.

Cheers,
Wilco

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
  2019-10-17 13:16 Xuelei Zhang
@ 2019-10-17 14:57 ` Yikun Jiang
  2019-10-18 15:50   ` Wilco Dijkstra
  2019-10-22 18:29 ` Wilco Dijkstra
  1 sibling, 1 reply; 18+ messages in thread
From: Yikun Jiang @ 2019-10-17 14:57 UTC (permalink / raw)
  To: Xuelei Zhang
  Cc: libc-alpha, nd, Siddhesh Poyarekar, Wilco.Dijkstra, jiangyikun

> Btw do you have any plans to post other string functions that you can discuss here? If so, would these
> add more ifuncs or improve the generic versions?

> Yes, memcmp, strlen, strnlen, strcpy and memrchr will be included; we will submit the patches and test results as soon as possible.

We have submitted the patches for the string functions; see below:

[PATCH] aarch64: Optimized strnlen for Kunpeng processor
https://sourceware.org/ml/libc-alpha/2019-10/msg00528.html

[PATCH] aarch64: Optimized strlen for Kunpeng processor
https://sourceware.org/ml/libc-alpha/2019-10/msg00527.html

[PATCH] aarch64: Optimized implementation of memrchr
https://sourceware.org/ml/libc-alpha/2019-10/msg00526.html

[PATCH] aarch64: Optimized strcpy for Kunpeng processor.
https://sourceware.org/ml/libc-alpha/2019-10/msg00525.html

[PATCH] aarch64: Optimized memcmp for Kunpeng processor.
https://sourceware.org/ml/libc-alpha/2019-10/msg00524.html

Xuelei Zhang <zhangxuelei4@huawei.com> wrote on Thursday, 17 October 2019 at 21:16:
>
> This is an optimized implementation of memcpy and memmove for the
> Huawei Kunpeng processor.
>
> Based on the prefetch mechanism of the Kunpeng architecture, the
> branch handling 96 bytes to 2K bytes in memcpy is written without
> the prfm instruction. Hence, memcpy shows an improvement above 128
> bytes: 18% for copies above 2K bytes, and 38% for larger sizes,
> such as around 32M bytes.
>
> And for memmove, there are two main changes: i) Q registers are used
> instead of X registers; ii) the dst address is aligned instead of the
> src address, to improve store operations. Hence, the memmove
> implementation also improves above 128 bytes: about 30% for 2K to 8M
> bytes, and about 50% for 32M or more.
> ---
>  sysdeps/aarch64/multiarch/Makefile          |   2 +-
>  sysdeps/aarch64/multiarch/ifunc-impl-list.c |   4 +-
>  sysdeps/aarch64/multiarch/memcpy.c          |   5 +-
>  sysdeps/aarch64/multiarch/memcpy_kunpeng.S  | 468 ++++++++++++++++++++++++++++
>  4 files changed, 476 insertions(+), 3 deletions(-)
>  create mode 100644 sysdeps/aarch64/multiarch/memcpy_kunpeng.S
>
> diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
> index 4150b89a90..37ed49982d 100644
> --- a/sysdeps/aarch64/multiarch/Makefile
> +++ b/sysdeps/aarch64/multiarch/Makefile
> @@ -1,6 +1,6 @@
>  ifeq ($(subdir),string)
>  sysdep_routines += memcpy_generic memcpy_thunderx memcpy_thunderx2 \
> -                  memcpy_falkor memmove_falkor \
> +                  memcpy_falkor memcpy_kunpeng memmove_falkor \
>                    memset_generic memset_falkor memset_emag \
>                    memchr_generic memchr_nosimd \
>                    strlen_generic strlen_asimd
> diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
> index be13b916e5..dbbe19096a 100644
> --- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c
> +++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
> @@ -25,7 +25,7 @@
>  #include <stdio.h>
>
>  /* Maximum number of IFUNC implementations.  */
> -#define MAX_IFUNC      4
> +#define MAX_IFUNC      5
>
>  size_t
>  __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
> @@ -42,11 +42,13 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
>               IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_thunderx)
>               IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_thunderx2)
>               IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_falkor)
> +             IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_kunpeng)
>               IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_generic))
>    IFUNC_IMPL (i, name, memmove,
>               IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_thunderx)
>               IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_thunderx2)
>               IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_falkor)
> +             IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_kunpeng)
>               IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_generic))
>    IFUNC_IMPL (i, name, memset,
>               /* Enable this on non-falkor processors too so that other cores
> diff --git a/sysdeps/aarch64/multiarch/memcpy.c b/sysdeps/aarch64/multiarch/memcpy.c
> index 13796f987f..b5929e2718 100644
> --- a/sysdeps/aarch64/multiarch/memcpy.c
> +++ b/sysdeps/aarch64/multiarch/memcpy.c
> @@ -32,9 +32,12 @@ extern __typeof (__redirect_memcpy) __memcpy_generic attribute_hidden;
>  extern __typeof (__redirect_memcpy) __memcpy_thunderx attribute_hidden;
>  extern __typeof (__redirect_memcpy) __memcpy_thunderx2 attribute_hidden;
>  extern __typeof (__redirect_memcpy) __memcpy_falkor attribute_hidden;
> +extern __typeof (__redirect_memcpy) __memcpy_kunpeng attribute_hidden;
>
>  libc_ifunc (__libc_memcpy,
> -            (IS_THUNDERX (midr)
> +            IS_KUNPENG(midr)
> +           ?__memcpy_kunpeng
> +           : (IS_THUNDERX (midr)
>              ? __memcpy_thunderx
>              : (IS_FALKOR (midr) || IS_PHECDA (midr) || IS_ARES (midr)
>                 ? __memcpy_falkor
> diff --git a/sysdeps/aarch64/multiarch/memcpy_kunpeng.S b/sysdeps/aarch64/multiarch/memcpy_kunpeng.S
> new file mode 100644
> index 0000000000..385f282224
> --- /dev/null
> +++ b/sysdeps/aarch64/multiarch/memcpy_kunpeng.S
> @@ -0,0 +1,468 @@
> +/* Optimized memcpy and memmove for Huawei Kunpeng processor.
> +   Copyright (C) 2018-2019 Free Software Foundation, Inc.
> +
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <https://www.gnu.org/licenses/>.  */
> +
> +#include <sysdep.h>
> +
> +/* Assumptions:
> + *
> + * ARMv8-a, AArch64, unaligned accesses.
> + *
> + */
> +
> +#define dstin  x0
> +#define src    x1
> +#define count  x2
> +#define dst    x3
> +#define srcend x4
> +#define dstend x5
> +#define tmp2   x6
> +#define tmp3   x7
> +#define tmp3w   w7
> +#define A_l    x6
> +#define A_lw   w6
> +#define A_h    x7
> +#define A_hw   w7
> +#define B_l    x8
> +#define B_lw   w8
> +#define B_h    x9
> +#define C_l    x10
> +#define C_h    x11
> +#define D_l    x12
> +#define D_h    x13
> +#define E_l    src
> +#define E_h    count
> +#define F_l    srcend
> +#define F_h    dst
> +#define G_l    count
> +#define G_h    dst
> +#define tmp1   x14
> +
> +#define A_q    q0
> +#define B_q    q1
> +#define C_q    q2
> +#define D_q    q3
> +#define E_q    q4
> +#define F_q    q5
> +#define G_q    q6
> +#define H_q    q7
> +#define I_q    q16
> +#define J_q    q17
> +
> +#define A_v    v0
> +#define B_v    v1
> +#define C_v    v2
> +#define D_v    v3
> +#define E_v    v4
> +#define F_v    v5
> +#define G_v    v6
> +#define H_v    v7
> +#define I_v    v16
> +#define J_v    v17
> +
> +#ifndef MEMMOVE
> +# define MEMMOVE memmove
> +#endif
> +#ifndef MEMCPY
> +# define MEMCPY memcpy
> +#endif
> +
> +#if IS_IN (libc)
> +
> +#undef MEMCPY
> +#define MEMCPY __memcpy_kunpeng
> +#undef MEMMOVE
> +#define MEMMOVE __memmove_kunpeng
> +
> +
> +/* Overlapping large forward memmoves use a loop that copies backwards.
> +   Otherwise memcpy is used. Small moves branch to memcopy16 directly.
> +   The longer memcpy cases fall through to the memcpy head.
> +*/
> +
> +ENTRY_ALIGN (MEMMOVE, 6)
> +
> +       DELOUSE (0)
> +       DELOUSE (1)
> +       DELOUSE (2)
> +
> +       sub     tmp1, dstin, src
> +       cmp     count, 512
> +       ccmp    tmp1, count, 2, hi
> +       b.lo    L(move_long)
> +       cmp     count, 96
> +       ccmp    tmp1, count, 2, hi
> +       b.lo    L(move_middle)
> +
> +END (MEMMOVE)
> +libc_hidden_builtin_def (MEMMOVE)
> +
> +
> +/* Copies are split into 4 main cases: small copies of up to 16 bytes,
> +   medium copies of 17..96 bytes which are fully unrolled. Long copies
> +   of 97..2048 align dst address without prefetching. Large copies
> +   of more than 2048 bytes align the destination and use load-and-merge
> +   approach in the case src and dst addresses are unaligned not evenly,
> +   so that, actual loads and stores are always aligned.
> +   Large copies use the loops processing 64 bytes per iteration for
> +   unaligned case and 128 bytes per iteration for aligned ones.
> +*/
> +
> +#define MEMCPY_PREFETCH_LDR 640
> +
> +       .p2align 4
> +ENTRY (MEMCPY)
> +
> +       DELOUSE (0)
> +       DELOUSE (1)
> +       DELOUSE (2)
> +
> +       add     srcend, src, count
> +       cmp     count, 16
> +       b.ls    L(memcopy16)
> +       add     dstend, dstin, count
> +       cmp     count, 96
> +       b.hi    L(memcopy_long)
> +
> +       /* Medium copies: 17..96 bytes.  */
> +       ldr     A_q, [src], #16
> +       and     tmp1, src, 15
> +       ldr     E_q, [srcend, -16]
> +       cmp     count, 64
> +       b.gt    L(memcpy_copy96)
> +       cmp     count, 48
> +       b.le    L(bytes_17_to_48)
> +       /* 49..64 bytes */
> +       ldp     B_q, C_q, [src]
> +       str     E_q, [dstend, -16]
> +       stp     A_q, B_q, [dstin]
> +       str     C_q, [dstin, 32]
> +       ret
> +
> +L(bytes_17_to_48):
> +       /* 17..48 bytes*/
> +       cmp     count, 32
> +       b.gt    L(bytes_32_to_48)
> +       /* 17..32 bytes*/
> +       str     A_q, [dstin]
> +       str     E_q, [dstend, -16]
> +       ret
> +
> +L(bytes_32_to_48):
> +       /* 32..48 */
> +       ldr     B_q, [src]
> +       str     A_q, [dstin]
> +       str     E_q, [dstend, -16]
> +       str     B_q, [dstin, 16]
> +       ret
> +
> +       .p2align 4
> +       /* Small copies: 0..16 bytes.  */
> +L(memcopy16):
> +       cmp     count, 8
> +       b.lo    L(bytes_0_to_8)
> +       ldr     A_l, [src]
> +       ldr     A_h, [srcend, -8]
> +       add     dstend, dstin, count
> +       str     A_l, [dstin]
> +       str     A_h, [dstend, -8]
> +       ret
> +       .p2align 4
> +
> +L(bytes_0_to_8):
> +       tbz     count, 2, L(bytes_0_to_3)
> +       ldr     A_lw, [src]
> +       ldr     A_hw, [srcend, -4]
> +       add     dstend, dstin, count
> +       str     A_lw, [dstin]
> +       str     A_hw, [dstend, -4]
> +       ret
> +
> +       /* Copy 0..3 bytes.  Use a branchless sequence that copies the same
> +          byte 3 times if count==1, or the 2nd byte twice if count==2.  */
> +L(bytes_0_to_3):
> +       cbz     count, 1f
> +       lsr     tmp1, count, 1
> +       ldrb    A_lw, [src]
> +       ldrb    A_hw, [srcend, -1]
> +       add     dstend, dstin, count
> +       ldrb    B_lw, [src, tmp1]
> +       strb    B_lw, [dstin, tmp1]
> +       strb    A_hw, [dstend, -1]
> +       strb    A_lw, [dstin]
> +1:
> +       ret
> +
> +       .p2align 4
> +
> +L(memcpy_copy96):
> +       /* Copying 65..96 bytes. A_q (first 16 bytes) and
> +          E_q(last 16 bytes) are already loaded. The size
> +          is large enough to benefit from aligned loads */
> +       bic     src, src, 15
> +       ldp     B_q, C_q, [src]
> +       /* Loaded 64 bytes, second 16-bytes chunk can be
> +          overlapping with the first chunk by tmp1 bytes.
> +          Stored 16 bytes. */
> +       sub     dst, dstin, tmp1
> +       add     count, count, tmp1
> +       /* The range of count being [65..96] becomes [65..111]
> +          after tmp [0..15] gets added to it,
> +          count now is <bytes-left-to-load>+48 */
> +       cmp     count, 80
> +       b.gt    L(copy96_medium)
> +       ldr     D_q, [src, 32]
> +       stp     B_q, C_q, [dst, 16]
> +       str     D_q, [dst, 48]
> +       str     A_q, [dstin]
> +       str     E_q, [dstend, -16]
> +       ret
> +
> +       .p2align 4
> +L(copy96_medium):
> +       ldp     D_q, G_q, [src, 32]
> +       cmp     count, 96
> +       b.gt    L(copy96_large)
> +       stp     B_q, C_q, [dst, 16]
> +       stp     D_q, G_q, [dst, 48]
> +       str     A_q, [dstin]
> +       str     E_q, [dstend, -16]
> +       ret
> +
> +L(copy96_large):
> +       ldr     F_q, [src, 64]
> +       str     B_q, [dst, 16]
> +       stp     C_q, D_q, [dst, 32]
> +       stp     G_q, F_q, [dst, 64]
> +       str     A_q, [dstin]
> +       str     E_q, [dstend, -16]
> +       ret
> +
> +       .p2align 4
> +L(memcopy_long):
> +       cmp count, 2048
> +       b.ls L(copy2048_large)
> +       ldr     A_q, [src], #16
> +       and     tmp1, src, 15
> +       bic     src, src, 15
> +       ldp     B_q, C_q, [src], #32
> +       sub     dst, dstin, tmp1
> +       add     count, count, tmp1
> +       add     dst, dst, 16
> +       ldp     D_q, E_q, [src], #32
> +       str     A_q, [dstin]
> +
> +       /* Already loaded 64+16 bytes. Check if at
> +          least 64 more bytes left */
> +       subs    count, count, 64+64+16
> +       b.lt    L(loop128_exit0)
> +       cmp     count, MEMCPY_PREFETCH_LDR + 64 + 32
> +       b.lt    L(loop128)
> +       sub     count, count, MEMCPY_PREFETCH_LDR + 64 + 32
> +
> +       .p2align 4
> +
> +L(loop128_prefetch):
> +       prfm    pldl1strm, [src, MEMCPY_PREFETCH_LDR]
> +       ldp     F_q, G_q, [src], #32
> +       stp     B_q, C_q, [dst], #32
> +       ldp     H_q, I_q, [src], #32
> +       prfm    pldl1strm, [src, MEMCPY_PREFETCH_LDR]
> +       ldp     B_q, C_q, [src], #32
> +       stp     D_q, E_q, [dst], #32
> +       ldp     D_q, E_q, [src], #32
> +       stp     F_q, G_q, [dst], #32
> +       stp     H_q, I_q, [dst], #32
> +       subs    count, count, 128
> +       b.ge    L(loop128_prefetch)
> +
> +       add     count, count, MEMCPY_PREFETCH_LDR + 64 + 32
> +       .p2align 4
> +L(loop128):
> +       ldp     F_q, G_q, [src], #32
> +       ldp     H_q, I_q, [src], #32
> +       stp     B_q, C_q, [dst], #32
> +       stp     D_q, E_q, [dst], #32
> +       subs    count, count, 64
> +       b.lt    L(loop128_exit1)
> +       ldp     B_q, C_q, [src], #32
> +       ldp     D_q, E_q, [src], #32
> +       stp     F_q, G_q, [dst], #32
> +       stp     H_q, I_q, [dst], #32
> +       subs    count, count, 64
> +       b.ge    L(loop128)
> +L(loop128_exit0):
> +       ldp     F_q, G_q, [srcend, -64]
> +       ldp     H_q, I_q, [srcend, -32]
> +       stp     B_q, C_q, [dst], #32
> +       stp     D_q, E_q, [dst]
> +       stp     F_q, G_q, [dstend, -64]
> +       stp     H_q, I_q, [dstend, -32]
> +       ret
> +L(loop128_exit1):
> +       ldp     B_q, C_q, [srcend, -64]
> +       ldp     D_q, E_q, [srcend, -32]
> +       stp     F_q, G_q, [dst], #32
> +       stp     H_q, I_q, [dst]
> +       stp     B_q, C_q, [dstend, -64]
> +       stp     D_q, E_q, [dstend, -32]
> +       ret
> +
> +       /* long copies: 96..2048 bytes */
> +L(copy2048_large):
> +       and     tmp1, dstin, 15
> +       bic     dst, dstin, 15
> +       ldp     D_l, D_h, [src]
> +       sub     src, src, tmp1
> +       add     count, count, tmp1      /* Count is now 16 too large.  */
> +       ldp     A_l, A_h, [src, 16]
> +       stp     D_l, D_h, [dstin]
> +       ldp     B_l, B_h, [src, 32]
> +       ldp     C_l, C_h, [src, 48]
> +       ldp     D_l, D_h, [src, 64]!
> +       subs    count, count, 128 + 16  /* Test and readjust count.  */
> +       b.ls    L(last64)
> +
> +L(loop64):
> +       stp     A_l, A_h, [dst, 16]
> +       ldp     A_l, A_h, [src, 16]
> +       stp     B_l, B_h, [dst, 32]
> +       ldp     B_l, B_h, [src, 32]
> +       stp     C_l, C_h, [dst, 48]
> +       ldp     C_l, C_h, [src, 48]
> +       stp     D_l, D_h, [dst, 64]
> +       ldp     D_l, D_h, [src, 64]
> +       add dst, dst, 64
> +       add src, src, 64
> +       subs    count, count, 64
> +       b.hi    L(loop64)
> +
> +       /* Write the last full set of 64 bytes.  The remainder is at most 64
> +          bytes, so it is safe to always copy 64 bytes from the end even if
> +          there is just 1 byte left.  */
> +L(last64):
> +       ldp     E_l, E_h, [srcend, -64]
> +       stp     A_l, A_h, [dst, 16]
> +       ldp     A_l, A_h, [srcend, -48]
> +       stp     B_l, B_h, [dst, 32]
> +       ldp     B_l, B_h, [srcend, -32]
> +       stp     C_l, C_h, [dst, 48]
> +       ldp     C_l, C_h, [srcend, -16]
> +       stp     D_l, D_h, [dst, 64]
> +       stp     E_l, E_h, [dstend, -64]
> +       stp     A_l, A_h, [dstend, -48]
> +       stp     B_l, B_h, [dstend, -32]
> +       stp     C_l, C_h, [dstend, -16]
> +       ret
> +
> +       /* long move: more than 512 bytes align the dstend */
> +       .p2align 4
> +L(move_long):
> +1:
> +       add     srcend, src, count
> +       add     dstend, dstin, count
> +
> +       and     tmp1, dstend, 15
> +       ldr     D_q, [srcend, -16]
> +       sub     srcend, srcend, tmp1
> +       sub     count, count, tmp1
> +       ldp     A_q, B_q, [srcend, -32]
> +       str     D_q, [dstend, -16]
> +       ldp     C_q, D_q, [srcend, -64]!
> +       sub     dstend, dstend, tmp1
> +       subs    count, count, 128
> +       b.ls    2f
> +
> +       .p2align 4
> +1:
> +       subs    count, count, 64
> +       stp     A_q, B_q, [dstend, -32]
> +       ldp     A_q, B_q, [srcend, -32]
> +       stp     C_q, D_q, [dstend, -64]!
> +       ldp     C_q, D_q, [srcend, -64]!
> +       b.hi    1b
> +
> +       /* Write the last full set of 64 bytes.  The remainder is at most 64
> +          bytes, so it is safe to always copy 64 bytes from the start even if
> +          there is just 1 byte left.  */
> +2:
> +       ldp     E_q, F_q, [src, 32]
> +       ldp     G_q, H_q, [src]
> +       stp     A_q, B_q, [dstend, -32]
> +       stp     C_q, D_q, [dstend, -64]
> +       stp     E_q, F_q, [dstin, 32]
> +       stp     G_q, H_q, [dstin]
> +3:     ret
> +
> +       /* midlle move: 96..512 bytes */
> +       .p2align 4
> +L(move_middle):
> +       cbz     tmp1, 3f
> +       add     srcend, src, count
> +       prfm    PLDL1STRM, [srcend, -64]
> +       add     dstend, dstin, count
> +       and     tmp1, dstend, 15
> +       ldr D_q, [srcend, -16]
> +       sub     srcend, srcend, tmp1
> +       sub     count, count, tmp1
> +       ldr     A_q, [srcend, -16]
> +       str     D_q, [dstend, -16]
> +       ldr     B_q, [srcend, -32]
> +       ldr     C_q, [srcend, -48]
> +       ldr     D_q, [srcend, -64]!
> +       sub     dstend, dstend, tmp1
> +       subs    count, count, 128
> +       b.ls    2f
> +
> +1:
> +       str     A_q, [dstend, -16]
> +       ldr     A_q, [srcend, -16]
> +       str     B_q, [dstend, -32]
> +       ldr     B_q, [srcend, -32]
> +       str     C_q, [dstend, -48]
> +       ldr     C_q, [srcend, -48]
> +       str     D_q, [dstend, -64]!
> +       ldr     D_q, [srcend, -64]!
> +       subs    count, count, 64
> +       b.hi    1b
> +
> +       /* Write the last full set of 64 bytes.  The remainder is at most 64
> +          bytes, so it is safe to always copy 64 bytes from the start even if
> +          there is just 1 byte left.  */
> +2:
> +       ldr     G_q, [src, 48]
> +       str     A_q, [dstend, -16]
> +       ldr     A_q, [src, 32]
> +       str     B_q, [dstend, -32]
> +       ldr     B_q, [src, 16]
> +       str     C_q, [dstend, -48]
> +       ldr     C_q, [src]
> +       str     D_q, [dstend, -64]
> +       str     G_q, [dstin, 48]
> +       str     A_q, [dstin, 32]
> +       str     B_q, [dstin, 16]
> +       str     C_q, [dstin]
> +3:     ret
> +
> +
> +END (MEMCPY)
> +       .section        .rodata
> +       .p2align        4
> +
> +libc_hidden_builtin_def (MEMCPY)
> +#endif
> --
> 2.14.1.windows.1
>
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor
@ 2019-10-17 13:16 Xuelei Zhang
  2019-10-17 14:57 ` Yikun Jiang
  2019-10-22 18:29 ` Wilco Dijkstra
  0 siblings, 2 replies; 18+ messages in thread
From: Xuelei Zhang @ 2019-10-17 13:16 UTC (permalink / raw)
  To: libc-alpha, nd, siddhesh, Wilco.Dijkstra, jiangyikun, yikunkero

This is an optimized implementation of memcpy and memmove for the
Huawei Kunpeng processor.

Based on the prefetch mechanism of the Kunpeng architecture, the
branch handling 96 bytes to 2K bytes in memcpy is written without
the prfm instruction. Hence, memcpy shows an improvement above 128
bytes: 18% for copies above 2K bytes, and 38% for larger sizes,
such as around 32M bytes.

And for memmove, there are two main changes: i) Q registers are used
instead of X registers; ii) the dst address is aligned instead of the
src address, to improve store operations. Hence, the memmove
implementation also improves above 128 bytes: about 30% for 2K to 8M
bytes, and about 50% for 32M or more.
---
 sysdeps/aarch64/multiarch/Makefile          |   2 +-
 sysdeps/aarch64/multiarch/ifunc-impl-list.c |   4 +-
 sysdeps/aarch64/multiarch/memcpy.c          |   5 +-
 sysdeps/aarch64/multiarch/memcpy_kunpeng.S  | 468 ++++++++++++++++++++++++++++
 4 files changed, 476 insertions(+), 3 deletions(-)
 create mode 100644 sysdeps/aarch64/multiarch/memcpy_kunpeng.S

diff --git a/sysdeps/aarch64/multiarch/Makefile b/sysdeps/aarch64/multiarch/Makefile
index 4150b89a90..37ed49982d 100644
--- a/sysdeps/aarch64/multiarch/Makefile
+++ b/sysdeps/aarch64/multiarch/Makefile
@@ -1,6 +1,6 @@
 ifeq ($(subdir),string)
 sysdep_routines += memcpy_generic memcpy_thunderx memcpy_thunderx2 \
-		   memcpy_falkor memmove_falkor \
+		   memcpy_falkor memcpy_kunpeng memmove_falkor \
 		   memset_generic memset_falkor memset_emag \
 		   memchr_generic memchr_nosimd \
 		   strlen_generic strlen_asimd
diff --git a/sysdeps/aarch64/multiarch/ifunc-impl-list.c b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
index be13b916e5..dbbe19096a 100644
--- a/sysdeps/aarch64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/aarch64/multiarch/ifunc-impl-list.c
@@ -25,7 +25,7 @@
 #include <stdio.h>
 
 /* Maximum number of IFUNC implementations.  */
-#define MAX_IFUNC	4
+#define MAX_IFUNC	5
 
 size_t
 __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
@@ -42,11 +42,13 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_thunderx)
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_thunderx2)
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_falkor)
+	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_kunpeng)
 	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_generic))
   IFUNC_IMPL (i, name, memmove,
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_thunderx)
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_thunderx2)
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_falkor)
+	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_kunpeng)
 	      IFUNC_IMPL_ADD (array, i, memmove, 1, __memmove_generic))
   IFUNC_IMPL (i, name, memset,
 	      /* Enable this on non-falkor processors too so that other cores
diff --git a/sysdeps/aarch64/multiarch/memcpy.c b/sysdeps/aarch64/multiarch/memcpy.c
index 13796f987f..b5929e2718 100644
--- a/sysdeps/aarch64/multiarch/memcpy.c
+++ b/sysdeps/aarch64/multiarch/memcpy.c
@@ -32,9 +32,12 @@ extern __typeof (__redirect_memcpy) __memcpy_generic attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_thunderx attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_thunderx2 attribute_hidden;
 extern __typeof (__redirect_memcpy) __memcpy_falkor attribute_hidden;
+extern __typeof (__redirect_memcpy) __memcpy_kunpeng attribute_hidden;
 
 libc_ifunc (__libc_memcpy,
-            (IS_THUNDERX (midr)
+            IS_KUNPENG(midr)
+	    ?__memcpy_kunpeng
+	    : (IS_THUNDERX (midr)
 	     ? __memcpy_thunderx
 	     : (IS_FALKOR (midr) || IS_PHECDA (midr) || IS_ARES (midr)
 		? __memcpy_falkor
diff --git a/sysdeps/aarch64/multiarch/memcpy_kunpeng.S b/sysdeps/aarch64/multiarch/memcpy_kunpeng.S
new file mode 100644
index 0000000000..385f282224
--- /dev/null
+++ b/sysdeps/aarch64/multiarch/memcpy_kunpeng.S
@@ -0,0 +1,468 @@
+/* Optimized memcpy and memmove for Huawei Kunpeng processor.
+   Copyright (C) 2018-2019 Free Software Foundation, Inc.
+
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+
+/* Assumptions:
+ *
+ * ARMv8-a, AArch64, unaligned accesses.
+ *
+ */
+
+#define dstin	x0
+#define src	x1
+#define count	x2
+#define dst	x3
+#define srcend	x4
+#define dstend	x5
+#define tmp2	x6
+#define tmp3	x7
+#define tmp3w   w7
+#define A_l	x6
+#define A_lw	w6
+#define A_h	x7
+#define A_hw	w7
+#define B_l	x8
+#define B_lw	w8
+#define B_h	x9
+#define C_l	x10
+#define C_h	x11
+#define D_l	x12
+#define D_h	x13
+#define E_l	src
+#define E_h	count
+#define F_l	srcend
+#define F_h	dst
+#define G_l	count
+#define G_h	dst
+#define tmp1	x14
+
+#define A_q	q0
+#define B_q	q1
+#define C_q	q2
+#define D_q	q3
+#define E_q	q4
+#define F_q	q5
+#define G_q	q6
+#define H_q	q7
+#define I_q	q16
+#define J_q	q17
+
+#define A_v	v0
+#define B_v	v1
+#define C_v	v2
+#define D_v	v3
+#define E_v	v4
+#define F_v	v5
+#define G_v	v6
+#define H_v	v7
+#define I_v	v16
+#define J_v	v17
+
+#ifndef MEMMOVE
+# define MEMMOVE memmove
+#endif
+#ifndef MEMCPY
+# define MEMCPY memcpy
+#endif
+
+#if IS_IN (libc)
+
+#undef MEMCPY
+#define MEMCPY __memcpy_kunpeng
+#undef MEMMOVE
+#define MEMMOVE __memmove_kunpeng
+
+
+/* Overlapping large forward memmoves use a loop that copies backwards.
+   Otherwise memcpy is used. Small moves branch to memcopy16 directly.
+   The longer memcpy cases fall through to the memcpy head.
+*/
+
+ENTRY_ALIGN (MEMMOVE, 6)
+
+	DELOUSE (0)
+	DELOUSE (1)
+	DELOUSE (2)
+
+	sub	tmp1, dstin, src
+	cmp	count, 512
+	ccmp	tmp1, count, 2, hi
+	b.lo	L(move_long)
+	cmp	count, 96
+	ccmp	tmp1, count, 2, hi
+	b.lo	L(move_middle)
+
+END (MEMMOVE)
+libc_hidden_builtin_def (MEMMOVE)
+
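In rough C terms, the overlap test behind this dispatch looks like the sketch
below.  The helper name is illustrative and not part of the patch; the point is
that a single unsigned comparison of (dst - src) against count catches the only
case where a plain forward copy is unsafe.

  #include <stddef.h>
  #include <stdint.h>

  /* Illustrative model of the memmove dispatch: forward copying is
     unsafe only if dst starts inside [src, src + count), i.e. the
     unsigned difference dst - src is smaller than count.  */
  static int
  must_copy_backwards (const void *dst, const void *src, size_t count)
  {
    return (uintptr_t) dst - (uintptr_t) src < count;
  }

  /* The assembly combines this with the size checks:
       count > 512 && must_copy_backwards ()  ->  L(move_long)
       count >  96 && must_copy_backwards ()  ->  L(move_middle)
     anything else falls through to the memcpy entry.  */
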
+
+/* Copies are split into 4 main cases: small copies of up to 16 bytes,
+   medium copies of 17..96 bytes, which are fully unrolled, long copies
+   of 97..2048 bytes, which align the dst address without prefetching,
+   and large copies of more than 2048 bytes.  Large copies align the
+   destination and, when src and dst are not evenly aligned, use a
+   load-and-merge approach so that the actual loads and stores are
+   always aligned.  The large-copy loops process 64 bytes per iteration
+   in the unaligned case and 128 bytes per iteration in the aligned
+   case.  */
+
+#define MEMCPY_PREFETCH_LDR 640
+
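The size classes described in the comment above amount to the following C
sketch; the helper names are placeholders, not symbols from the patch.

  #include <stddef.h>

  /* Placeholders for the four cases described above.  */
  void *copy_up_to_16 (void *, const void *, size_t);    /* L(memcopy16) */
  void *copy_17_to_96 (void *, const void *, size_t);    /* fully unrolled */
  void *copy_97_to_2048 (void *, const void *, size_t);  /* L(copy2048_large) */
  void *copy_over_2048 (void *, const void *, size_t);   /* prefetching loop */

  void *
  memcpy_sketch (void *dst, const void *src, size_t n)
  {
    if (n <= 16)
      return copy_up_to_16 (dst, src, n);
    if (n <= 96)
      return copy_17_to_96 (dst, src, n);
    if (n <= 2048)
      return copy_97_to_2048 (dst, src, n);
    return copy_over_2048 (dst, src, n);
  }
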
+	.p2align 4
+ENTRY (MEMCPY)
+
+	DELOUSE (0)
+	DELOUSE (1)
+	DELOUSE (2)
+
+	add	srcend, src, count
+	cmp	count, 16
+	b.ls	L(memcopy16)
+	add	dstend, dstin, count
+	cmp	count, 96
+	b.hi	L(memcopy_long)
+
+	/* Medium copies: 17..96 bytes.  */
+	ldr	A_q, [src], #16
+	and	tmp1, src, 15
+	ldr	E_q, [srcend, -16]
+	cmp	count, 64
+	b.gt	L(memcpy_copy96)
+	cmp	count, 48
+	b.le	L(bytes_17_to_48)
+	/* 49..64 bytes */
+	ldp	B_q, C_q, [src]
+	str	E_q, [dstend, -16]
+	stp	A_q, B_q, [dstin]
+	str	C_q, [dstin, 32]
+	ret
+
+L(bytes_17_to_48):
+	/* 17..48 bytes.  */
+	cmp	count, 32
+	b.gt	L(bytes_32_to_48)
+	/* 17..32 bytes.  */
+	str	A_q, [dstin]
+	str	E_q, [dstend, -16]
+	ret
+
+L(bytes_32_to_48):
+	/* 33..48 bytes.  */
+	ldr	B_q, [src]
+	str	A_q, [dstin]
+	str	E_q, [dstend, -16]
+	str	B_q, [dstin, 16]
+	ret
+
+	.p2align 4
+	/* Small copies: 0..16 bytes.  */
+L(memcopy16):
+	cmp	count, 8
+	b.lo	L(bytes_0_to_8)
+	ldr	A_l, [src]
+	ldr	A_h, [srcend, -8]
+	add	dstend, dstin, count
+	str	A_l, [dstin]
+	str	A_h, [dstend, -8]
+	ret
+	.p2align 4
+
+L(bytes_0_to_8):
+	tbz	count, 2, L(bytes_0_to_3)
+	ldr	A_lw, [src]
+	ldr	A_hw, [srcend, -4]
+	add	dstend, dstin, count
+	str	A_lw, [dstin]
+	str	A_hw, [dstend, -4]
+	ret
+
+	/* Copy 0..3 bytes.  Use a branchless sequence that copies the same
+	   byte 3 times if count==1, or the 2nd byte twice if count==2.  */
+L(bytes_0_to_3):
+	cbz	count, 1f
+	lsr	tmp1, count, 1
+	ldrb	A_lw, [src]
+	ldrb	A_hw, [srcend, -1]
+	add	dstend, dstin, count
+	ldrb	B_lw, [src, tmp1]
+	strb	B_lw, [dstin, tmp1]
+	strb	A_hw, [dstend, -1]
+	strb	A_lw, [dstin]
+1:
+	ret
+
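The branchless 0..3 byte sequence above can be rendered in C as follows
(function name is illustrative; assumes n <= 3):

  #include <stddef.h>

  /* Mirrors the ldrb/strb sequence: first, last and middle bytes are all
     read before any store, so the same code handles n == 1 (all three
     loads hit byte 0), n == 2 (middle == last) and n == 3.  */
  static void
  copy_0_to_3 (unsigned char *dst, const unsigned char *src, size_t n)
  {
    if (n == 0)
      return;
    size_t mid = n >> 1;            /* lsr tmp1, count, 1 */
    unsigned char first = src[0];
    unsigned char last  = src[n - 1];
    unsigned char midb  = src[mid];
    dst[mid]   = midb;              /* strb B_lw, [dstin, tmp1] */
    dst[n - 1] = last;
    dst[0]     = first;
  }
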
+	.p2align 4
+
+L(memcpy_copy96):
+	/* Copying 65..96 bytes.  A_q (first 16 bytes) and
+	   E_q (last 16 bytes) are already loaded.  The size
+	   is large enough to benefit from aligned loads.  */
+	bic	src, src, 15
+	ldp	B_q, C_q, [src]
+	/* Loaded 64 bytes; the second 16-byte chunk may
+	   overlap the first chunk by tmp1 bytes.
+	   Stored 16 bytes.  */
+	sub	dst, dstin, tmp1
+	add	count, count, tmp1
+	/* The count range [65..96] becomes [65..111] after
+	   tmp1 [0..15] is added to it; count is now
+	   <bytes-left-to-load> + 48.  */
+	cmp	count, 80
+	b.gt	L(copy96_medium)
+	ldr	D_q, [src, 32]
+	stp	B_q, C_q, [dst, 16]
+	str	D_q, [dst, 48]
+	str	A_q, [dstin]
+	str	E_q, [dstend, -16]
+	ret
+
+	.p2align 4
+L(copy96_medium):
+	ldp	D_q, G_q, [src, 32]
+	cmp	count, 96
+	b.gt	L(copy96_large)
+	stp	B_q, C_q, [dst, 16]
+	stp	D_q, G_q, [dst, 48]
+	str	A_q, [dstin]
+	str	E_q, [dstend, -16]
+	ret
+
+L(copy96_large):
+	ldr	F_q, [src, 64]
+	str	B_q, [dst, 16]
+	stp	C_q, D_q, [dst, 32]
+	stp	G_q, F_q, [dst, 64]
+	str	A_q, [dstin]
+	str	E_q, [dstend, -16]
+	ret
+
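The pointer bookkeeping used in the 65..96 byte path above is roughly the
following sketch (it ignores the 16 bytes already copied via A_q, which the
assembly accounts for with the extra 16-byte offsets; the function name is
illustrative):

  #include <stddef.h>
  #include <stdint.h>

  /* Round src down to a 16-byte boundary so later loads are aligned,
     pull dst back by the same offset to stay in lockstep, and grow
     count by that offset (tmp1 in 0..15).  The next 16-byte store may
     overlap the chunk already written, which is harmless.  */
  static size_t
  realign_for_aligned_loads (const unsigned char **src,
                             unsigned char **dst, size_t count)
  {
    size_t off = (uintptr_t) *src & 15;   /* and tmp1, src, 15 */
    *src -= off;                          /* bic src, src, 15 */
    *dst -= off;                          /* sub dst, dstin, tmp1 */
    return count + off;                   /* add count, count, tmp1 */
  }
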
+	.p2align 4
+L(memcopy_long):
+	cmp	count, 2048
+	b.ls	L(copy2048_large)
+	ldr	A_q, [src], #16
+	and	tmp1, src, 15
+	bic	src, src, 15
+	ldp	B_q, C_q, [src], #32
+	sub	dst, dstin, tmp1
+	add	count, count, tmp1
+	add	dst, dst, 16
+	ldp	D_q, E_q, [src], #32
+	str	A_q, [dstin]
+
+	/* Already loaded 64+16 bytes.  Check whether at
+	   least 64 more bytes are left.  */
+	subs	count, count, 64+64+16
+	b.lt	L(loop128_exit0)
+	cmp	count, MEMCPY_PREFETCH_LDR + 64 + 32
+	b.lt	L(loop128)
+	sub	count, count, MEMCPY_PREFETCH_LDR + 64 + 32
+
+	.p2align 4
+
+L(loop128_prefetch):
+	prfm	pldl1strm, [src, MEMCPY_PREFETCH_LDR]
+	ldp	F_q, G_q, [src], #32
+	stp	B_q, C_q, [dst], #32
+	ldp	H_q, I_q, [src], #32
+	prfm	pldl1strm, [src, MEMCPY_PREFETCH_LDR]
+	ldp	B_q, C_q, [src], #32
+	stp	D_q, E_q, [dst], #32
+	ldp	D_q, E_q, [src], #32
+	stp	F_q, G_q, [dst], #32
+	stp	H_q, I_q, [dst], #32
+	subs	count, count, 128
+	b.ge	L(loop128_prefetch)
+
+	add	count, count, MEMCPY_PREFETCH_LDR + 64 + 32
+	.p2align 4
+L(loop128):
+	ldp	F_q, G_q, [src], #32
+	ldp	H_q, I_q, [src], #32
+	stp	B_q, C_q, [dst], #32
+	stp	D_q, E_q, [dst], #32
+	subs	count, count, 64
+	b.lt	L(loop128_exit1)
+	ldp	B_q, C_q, [src], #32
+	ldp	D_q, E_q, [src], #32
+	stp	F_q, G_q, [dst], #32
+	stp	H_q, I_q, [dst], #32
+	subs	count, count, 64
+	b.ge	L(loop128)
+L(loop128_exit0):
+	ldp	F_q, G_q, [srcend, -64]
+	ldp	H_q, I_q, [srcend, -32]
+	stp	B_q, C_q, [dst], #32
+	stp	D_q, E_q, [dst]
+	stp	F_q, G_q, [dstend, -64]
+	stp	H_q, I_q, [dstend, -32]
+	ret
+L(loop128_exit1):
+	ldp	B_q, C_q, [srcend, -64]
+	ldp	D_q, E_q, [srcend, -32]
+	stp	F_q, G_q, [dst], #32
+	stp	H_q, I_q, [dst]
+	stp	B_q, C_q, [dstend, -64]
+	stp	D_q, E_q, [dstend, -32]
+	ret
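The structure of the prefetching loop above is roughly the following C sketch.
MEMCPY_PREFETCH_LDR is 640 in this file; the memcpy call merely stands in for
the four ldp/stp quadword pairs per iteration, and the real loop also keeps its
loads one stage ahead of the stores, which the sketch does not show.

  #include <stddef.h>
  #include <string.h>

  /* Bulk copy with software prefetch: fetch well ahead of the loads and
     move 128 bytes per iteration.  The remaining 0..127 byte tail is
     handled by the exit paths in the assembly and omitted here.  */
  static void
  bulk_copy_prefetch (unsigned char *dst, const unsigned char *src,
                      size_t count)
  {
    while (count >= 128)
      {
        __builtin_prefetch (src + 640, 0, 0);   /* prfm pldl1strm */
        memcpy (dst, src, 128);
        src += 128;
        dst += 128;
        count -= 128;
      }
  }
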
+
+	/* Long copies: 97..2048 bytes.  */
+L(copy2048_large):
+	and	tmp1, dstin, 15
+	bic	dst, dstin, 15
+	ldp	D_l, D_h, [src]
+	sub	src, src, tmp1
+	add	count, count, tmp1	/* Count is now 16 too large.  */
+	ldp	A_l, A_h, [src, 16]
+	stp	D_l, D_h, [dstin]
+	ldp	B_l, B_h, [src, 32]
+	ldp	C_l, C_h, [src, 48]
+	ldp	D_l, D_h, [src, 64]!
+	subs	count, count, 128 + 16	/* Test and readjust count.  */
+	b.ls	L(last64)
+
+L(loop64):
+	stp	A_l, A_h, [dst, 16]
+	ldp	A_l, A_h, [src, 16]
+	stp	B_l, B_h, [dst, 32]
+	ldp	B_l, B_h, [src, 32]
+	stp	C_l, C_h, [dst, 48]
+	ldp	C_l, C_h, [src, 48]
+	stp	D_l, D_h, [dst, 64]
+	ldp	D_l, D_h, [src, 64]
+	add	dst, dst, 64
+	add	src, src, 64
+	subs	count, count, 64
+	b.hi	L(loop64)
+
+	/* Write the last full set of 64 bytes.  The remainder is at most 64
+	   bytes, so it is safe to always copy 64 bytes from the end even if
+	   there is just 1 byte left.  */
+L(last64):
+	ldp	E_l, E_h, [srcend, -64]
+	stp	A_l, A_h, [dst, 16]
+	ldp	A_l, A_h, [srcend, -48]
+	stp	B_l, B_h, [dst, 32]
+	ldp	B_l, B_h, [srcend, -32]
+	stp	C_l, C_h, [dst, 48]
+	ldp	C_l, C_h, [srcend, -16]
+	stp	D_l, D_h, [dst, 64]
+	stp	E_l, E_h, [dstend, -64]
+	stp	A_l, A_h, [dstend, -48]
+	stp	B_l, B_h, [dstend, -32]
+	stp	C_l, C_h, [dstend, -16]
+	ret
+
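The last64 trick above follows the usual pattern sketched below: copy 64-byte
blocks from the front, then finish with one fixed 64-byte copy anchored at the
end, which may harmlessly rewrite already-written bytes with the same values.
A sketch only, assuming n >= 64 and non-overlapping buffers:

  #include <stddef.h>
  #include <string.h>

  static void
  copy_forward_64 (unsigned char *dst, const unsigned char *src, size_t n)
  {
    unsigned char *dstend = dst + n;
    const unsigned char *srcend = src + n;
    while (n > 64)                /* main loop: 64 bytes at a time */
      {
        memcpy (dst, src, 64);
        dst += 64;
        src += 64;
        n -= 64;
      }
    /* Final 1..64 bytes: always copy a full 64 bytes from the end.  */
    memcpy (dstend - 64, srcend - 64, 64);
  }
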
+	/* Long overlapping moves of more than 512 bytes: align dstend
+	   and copy backwards.  */
+	.p2align 4
+L(move_long):
+1:
+	add	srcend, src, count
+	add	dstend, dstin, count
+
+	and	tmp1, dstend, 15
+	ldr	D_q, [srcend, -16]
+	sub	srcend, srcend, tmp1
+	sub	count, count, tmp1
+	ldp	A_q, B_q, [srcend, -32]
+	str	D_q, [dstend, -16]
+	ldp	C_q, D_q, [srcend, -64]!
+	sub	dstend, dstend, tmp1
+	subs	count, count, 128
+	b.ls	2f
+
+	.p2align 4
+1:
+	subs	count, count, 64
+	stp	A_q, B_q, [dstend, -32]
+	ldp	A_q, B_q, [srcend, -32]
+	stp	C_q, D_q, [dstend, -64]!
+	ldp	C_q, D_q, [srcend, -64]!
+	b.hi	1b
+
+	/* Write the last full set of 64 bytes.  The remainder is at most 64
+	   bytes, so it is safe to always copy 64 bytes from the start even if
+	   there is just 1 byte left.  */
+2:
+	ldp	E_q, F_q, [src, 32]
+	ldp	G_q, H_q, [src]
+	stp	A_q, B_q, [dstend, -32]
+	stp	C_q, D_q, [dstend, -64]
+	stp	E_q, F_q, [dstin, 32]
+	stp	G_q, H_q, [dstin]
+3:	ret
+
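The backward direction of move_long (and move_middle below) keeps overlapping
moves safe because each block is fully loaded into registers before anything is
stored over it.  A C sketch under that invariant, with a 64-byte buffer
standing in for the q-registers; unlike the assembly, the tail copies the exact
remainder rather than a fixed 64 bytes from the start:

  #include <stddef.h>
  #include <string.h>

  /* Overlap-safe backward copy, assuming n >= 64.  Works when dst is
     ahead of src (the case dispatched here) as well as when the
     buffers do not overlap at all.  */
  static void
  copy_backward_64 (unsigned char *dst, const unsigned char *src, size_t n)
  {
    unsigned char tmp[64];                /* stands in for A_q..D_q */
    unsigned char *dstend = dst + n;
    const unsigned char *srcend = src + n;
    while (n > 64)
      {
        srcend -= 64;
        dstend -= 64;
        n -= 64;
        memcpy (tmp, srcend, 64);         /* load the block first...  */
        memcpy (dstend, tmp, 64);         /* ...then store it */
      }
    memcpy (tmp, src, n);                 /* final 1..64 bytes, from the start */
    memcpy (dst, tmp, n);
  }
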
+	/* Middle overlapping moves: 97..512 bytes; copy backwards.  */
+	.p2align 4
+L(move_middle):
+	cbz	tmp1, 3f
+	add	srcend, src, count
+	prfm	PLDL1STRM, [srcend, -64]
+	add	dstend, dstin, count
+	and	tmp1, dstend, 15
+	ldr	D_q, [srcend, -16]
+	sub	srcend, srcend, tmp1
+	sub	count, count, tmp1
+	ldr	A_q, [srcend, -16]
+	str	D_q, [dstend, -16]
+	ldr	B_q, [srcend, -32]
+	ldr	C_q, [srcend, -48]
+	ldr	D_q, [srcend, -64]!
+	sub	dstend, dstend, tmp1
+	subs	count, count, 128
+	b.ls	2f
+
+1:
+	str	A_q, [dstend, -16]
+	ldr	A_q, [srcend, -16]
+	str	B_q, [dstend, -32]
+	ldr	B_q, [srcend, -32]
+	str	C_q, [dstend, -48]
+	ldr	C_q, [srcend, -48]
+	str	D_q, [dstend, -64]!
+	ldr	D_q, [srcend, -64]!
+	subs	count, count, 64
+	b.hi	1b
+
+	/* Write the last full set of 64 bytes.  The remainder is at most 64
+	   bytes, so it is safe to always copy 64 bytes from the start even if
+	   there is just 1 byte left.  */
+2:
+	ldr	G_q, [src, 48]
+	str	A_q, [dstend, -16]
+	ldr	A_q, [src, 32]
+	str	B_q, [dstend, -32]
+	ldr	B_q, [src, 16]
+	str	C_q, [dstend, -48]
+	ldr	C_q, [src]
+	str	D_q, [dstend, -64]
+	str	G_q, [dstin, 48]
+	str	A_q, [dstin, 32]
+	str	B_q, [dstin, 16]
+	str	C_q, [dstin]
+3:	ret
+
+
+END (MEMCPY)
+	.section	.rodata
+	.p2align	4
+
+libc_hidden_builtin_def (MEMCPY)
+#endif
-- 
2.14.1.windows.1


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2019-11-01 12:55 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-21 14:25 [PATCH v2 2/2] aarch64: Optimized memcpy and memmove for Kunpeng processor Zhangxuelei (Derek)
2019-10-22  9:50 ` Yikun Jiang
2019-10-24 14:57   ` Carlos O'Donell
2019-10-26  9:57     ` Florian Weimer
2019-10-26 13:40       ` Carlos O'Donell
2019-10-29  1:20     ` Carlos O'Donell
2019-10-29 14:34 ` Wilco Dijkstra
  -- strict thread matches above, loose matches on Subject: below --
2019-10-29  3:22 Zhangxuelei (Derek)
2019-10-29  3:26 ` Carlos O'Donell
2019-10-30  6:42   ` Yikun Jiang
2019-11-01 12:55     ` Carlos O'Donell
2019-10-26 13:46 Zhangxuelei (Derek)
2019-10-26 13:22 Zhangxuelei (Derek)
2019-10-26 13:40 ` Carlos O'Donell
2019-10-17 13:16 Xuelei Zhang
2019-10-17 14:57 ` Yikun Jiang
2019-10-18 15:50   ` Wilco Dijkstra
2019-10-22 18:29 ` Wilco Dijkstra

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).