public inbox for libc-alpha@sourceware.org
* [PATCH x86_64] Fix for wrong selector in x86_64/multiarch/memcpy.S BZ #18880
@ 2016-03-03  3:42 Pawar, Amit
  2016-03-03 13:14 ` H.J. Lu
  0 siblings, 1 reply; 4+ messages in thread
From: Pawar, Amit @ 2016-03-03  3:42 UTC (permalink / raw)
  To: libc-alpha

[-- Attachment #1: Type: text/plain, Size: 110 bytes --]

Please find attached a patch with the fix for BZ #18880, as per the suggestion. If it is OK, please commit it.

Thanks,
Amit Pawar

[-- Attachment #2: 0001-x86_64-fixing-and-updating-memcpy-IFUNC-selection-or.patch --]
[-- Type: application/octet-stream, Size: 1877 bytes --]

From 03461383d422a84d42a43ea5a8e9f02980e64beb Mon Sep 17 00:00:00 2001
From: Amit Pawar <Amit.Pawar@amd.com>
Date: Thu, 3 Mar 2016 08:55:47 +0530
Subject: [PATCH] x86_64: Fix and update the memcpy IFUNC selection order

As per the bug report, fix and update the memcpy IFUNC selection order:
remove the unneeded Slow_BSF check and include Fast_Copy_Backward for the
generic case.
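
For reference, the new order implemented by the diff below corresponds
roughly to the following C sketch.  The __memcpy_* variant names and the
feature-flag names are the real ones used in the patch; the bitmask
encoding and the select_memcpy() helper are illustrative only, not
glibc's actual cpu-features API.

#include <stdio.h>

/* Illustrative feature bits; the names mirror glibc's cpu-features
   flags checked in the assembly below.  */
enum
{
  AVX_FAST_UNALIGNED_LOAD = 1 << 0,
  FAST_UNALIGNED_LOAD     = 1 << 1,
  SSSE3_AVAILABLE         = 1 << 2,
  FAST_COPY_BACKWARD      = 1 << 3,
};

/* Mirror of the new selection order: return the name of the memcpy
   variant that would be chosen for a given feature set.  */
static const char *
select_memcpy (unsigned int features)
{
  if (features & AVX_FAST_UNALIGNED_LOAD)
    return "__memcpy_avx_unaligned";
  if (features & FAST_UNALIGNED_LOAD)
    return "__memcpy_sse2_unaligned";
  if (!(features & SSSE3_AVAILABLE))
    return "__memcpy_sse2";
  if (features & FAST_COPY_BACKWARD)
    return "__memcpy_ssse3_back";
  return "__memcpy_ssse3";
}

int
main (void)
{
  /* A CPU with SSSE3 and Fast_Copy_Backward but no fast unaligned
     loads now gets __memcpy_ssse3_back.  */
  printf ("%s\n", select_memcpy (SSSE3_AVAILABLE | FAST_COPY_BACKWARD));
  return 0;
}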

	[BZ #18880]
	* sysdeps/x86_64/multiarch/memcpy.S: Update the selection order.
	Remove the Slow_BSF check and add __memcpy_ssse3_back.
---
 ChangeLog                         |  6 ++++++
 sysdeps/x86_64/multiarch/memcpy.S | 19 ++++++++++---------
 2 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/ChangeLog b/ChangeLog
index 787fef1..1d09b3a 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,9 @@
+2016-03-03  Amit Pawar  <amit.pawar@amd.com>
+
+	[BZ #18880]
+	* sysdeps/x86_64/multiarch/memcpy.S: Update the selection order.
+	Remove the Slow_BSF check and add __memcpy_ssse3_back.
+
 2016-03-01  H.J. Lu  <hongjiu.lu@intel.com>
 
 	* sysdeps/x86_64/_mcount.S (C_LABEL(_mcount)): Call
diff --git a/sysdeps/x86_64/multiarch/memcpy.S b/sysdeps/x86_64/multiarch/memcpy.S
index 64a1bcd..79fd9ff 100644
--- a/sysdeps/x86_64/multiarch/memcpy.S
+++ b/sysdeps/x86_64/multiarch/memcpy.S
@@ -40,17 +40,18 @@ ENTRY(__new_memcpy)
 #endif
 1:	leaq	__memcpy_avx_unaligned(%rip), %rax
 	HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
-	jz 2f
-	ret
-2:	leaq	__memcpy_sse2(%rip), %rax
-	HAS_ARCH_FEATURE (Slow_BSF)
-	jnz	3f
+	jnz	2f
 	leaq	__memcpy_sse2_unaligned(%rip), %rax
-	ret
-3:	HAS_CPU_FEATURE (SSSE3)
-	jz 4f
+	HAS_ARCH_FEATURE (Fast_Unaligned_Load)
+	jnz	2f
+	leaq	__memcpy_sse2(%rip), %rax
+	HAS_CPU_FEATURE (SSSE3)
+	jz	2f
+	leaq    __memcpy_ssse3_back(%rip), %rax
+	HAS_CPU_FEATURE (Fast_Copy_Backward)
+	jnz	2f
 	leaq    __memcpy_ssse3(%rip), %rax
-4:	ret
+2:	ret
 END(__new_memcpy)
 
 # undef ENTRY
-- 
2.1.4



* Re: [PATCH x86_64] Fix for wrong selector in x86_64/multiarch/memcpy.S BZ #18880
  2016-03-03  3:42 [PATCH x86_64] Fix for wrong selector in x86_64/multiarch/memcpy.S BZ #18880 Pawar, Amit
@ 2016-03-03 13:14 ` H.J. Lu
  2016-03-03 17:04   ` Pawar, Amit
  0 siblings, 1 reply; 4+ messages in thread
From: H.J. Lu @ 2016-03-03 13:14 UTC (permalink / raw)
  To: Pawar, Amit; +Cc: libc-alpha

On Wed, Mar 2, 2016 at 7:42 PM, Pawar, Amit <Amit.Pawar@amd.com> wrote:
> Please find attached a patch with the fix for BZ #18880, as per the suggestion. If it is OK, please commit it.
>

Change looks good.  If you can't commit it yourself, please improve commit
log:

1. Don't add your ChangeLog entry to ChangeLog directly, since other
people may change ChangeLog.
2. In the ChangeLog entry, describe what you did, e.g. check
Fast_Unaligned_Load instead of Slow_BSF and check Fast_Copy_Backward
for __memcpy_ssse3_back.


-- 
H.J.


* RE: [PATCH x86_64] Fix for wrong selector in x86_64/multiarch/memcpy.S BZ #18880
  2016-03-03 13:14 ` H.J. Lu
@ 2016-03-03 17:04   ` Pawar, Amit
  2016-03-04 16:07     ` H.J. Lu
  0 siblings, 1 reply; 4+ messages in thread
From: Pawar, Amit @ 2016-03-03 17:04 UTC (permalink / raw)
  To: H.J. Lu; +Cc: libc-alpha

[-- Attachment #1: Type: text/plain, Size: 552 bytes --]

>Change looks good.  If you can't commit it yourself, please improve commit
>log:
>
>1. Don't add your ChangeLog entry to ChangeLog directly, since other people may change ChangeLog.
>2. In the ChangeLog entry, describe what you did, e.g. check Fast_Unaligned_Load instead of Slow_BSF and check Fast_Copy_Backward for __memcpy_ssse3_back.

As per your suggestion, I have fixed the patch with an improved commit log and am also providing a separate ChangeLog patch. If it is OK, please commit it; otherwise let me know what changes are required.

Thanks,
Amit Pawar



[-- Attachment #2: 0001-x86_64-fixing-and-updating-memcpy-IFUNC-selection-or.patch --]
[-- Type: application/octet-stream, Size: 1844 bytes --]

From 0c880df7214510c0368e1b8b58093c337e058e09 Mon Sep 17 00:00:00 2001
From: Amit Pawar <Amit.Pawar@amd.com>
Date: Thu, 3 Mar 2016 22:24:21 +0530
Subject: [PATCH] x86_64: Fix and update the memcpy IFUNC selection order

This fix improves the memcpy IFUNC selection order.  The existing order is
replaced with the following selection order for memcpy:
1. __memcpy_avx_unaligned if AVX_Fast_Unaligned_Load bit is set.
2. __memcpy_sse2_unaligned if Fast_Unaligned_Load bit is set.
3. __memcpy_sse2 if SSSE3 isn't available.
4. __memcpy_ssse3_back if Fast_Copy_Backward bit is set.
5. __memcpy_ssse3

The Fast_Unaligned_Load check replaces Slow_BSF, and Fast_Copy_Backward is
also checked to enable selection of __memcpy_ssse3_back.
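
For context, memcpy here is a GNU indirect function (IFUNC): the assembly
below is its resolver, run by the dynamic linker to pick one implementation
per process.  A minimal, self-contained sketch of the same mechanism using
GCC's ifunc attribute follows; my_memcpy, resolve_my_memcpy and the
PREFER_BYTEWISE switch are illustrative names, not part of glibc, and the
example assumes GCC/binutils on an ELF target.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Two toy implementations sharing memcpy's signature.  */
static void *
copy_libc (void *dst, const void *src, size_t n)
{
  return memcpy (dst, src, n);
}

static void *
copy_bytewise (void *dst, const void *src, size_t n)
{
  char *d = dst;
  const char *s = src;
  while (n--)
    *d++ = *s++;
  return dst;
}

/* The resolver: returns the implementation to bind my_memcpy to.  A real
   glibc resolver would test CPU feature bits here instead of a
   compile-time switch.  */
static void *(*resolve_my_memcpy (void)) (void *, const void *, size_t)
{
#ifdef PREFER_BYTEWISE		/* stand-in for a CPU-feature check */
  return copy_bytewise;
#else
  return copy_libc;
#endif
}

/* my_memcpy is an IFUNC; callers are bound to whatever the resolver
   returned.  */
void *my_memcpy (void *dst, const void *src, size_t n)
  __attribute__ ((ifunc ("resolve_my_memcpy")));

int
main (void)
{
  char buf[6];
  my_memcpy (buf, "hello", sizeof "hello");
  puts (buf);
  return 0;
}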

	[BZ #18880]
	* sysdeps/x86_64/multiarch/memcpy.S: Check Fast_Unaligned_Load
	instead of Slow_BSF, and also check Fast_Copy_Backward to enable
	__memcpy_ssse3_back.
---
 sysdeps/x86_64/multiarch/memcpy.S | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/sysdeps/x86_64/multiarch/memcpy.S b/sysdeps/x86_64/multiarch/memcpy.S
index 64a1bcd..79fd9ff 100644
--- a/sysdeps/x86_64/multiarch/memcpy.S
+++ b/sysdeps/x86_64/multiarch/memcpy.S
@@ -40,17 +40,18 @@ ENTRY(__new_memcpy)
 #endif
 1:	leaq	__memcpy_avx_unaligned(%rip), %rax
 	HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
-	jz 2f
-	ret
-2:	leaq	__memcpy_sse2(%rip), %rax
-	HAS_ARCH_FEATURE (Slow_BSF)
-	jnz	3f
+	jnz	2f
 	leaq	__memcpy_sse2_unaligned(%rip), %rax
-	ret
-3:	HAS_CPU_FEATURE (SSSE3)
-	jz 4f
+	HAS_ARCH_FEATURE (Fast_Unaligned_Load)
+	jnz	2f
+	leaq	__memcpy_sse2(%rip), %rax
+	HAS_CPU_FEATURE (SSSE3)
+	jz	2f
+	leaq    __memcpy_ssse3_back(%rip), %rax
+	HAS_CPU_FEATURE (Fast_Copy_Backward)
+	jnz	2f
 	leaq    __memcpy_ssse3(%rip), %rax
-4:	ret
+2:	ret
 END(__new_memcpy)
 
 # undef ENTRY
-- 
2.1.4


[-- Attachment #3: ChangeLog.patch --]
[-- Type: application/octet-stream, Size: 428 bytes --]

diff --git a/ChangeLog b/ChangeLog
index a31f95a..e4cdcf4 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,9 @@
+2016-03-03  Amit Pawar  <amit.pawar@amd.com>
+
+	* sysdeps/x86_64/multiarch/memcpy.S: Check Fast_Unaligned_Load
+	instead of Slow_BSF, and also check Fast_Copy_Backward to enable
+	__memcpy_ssse3_back.
+
 2016-03-03  H.J. Lu  <hongjiu.lu@intel.com>
 
 	* gmon/Makefile (noprof): Add $(sysdep_noprof).


* Re: [PATCH x86_64] Fix for wrong selector in x86_64/multiarch/memcpy.S BZ #18880
  2016-03-03 17:04   ` Pawar, Amit
@ 2016-03-04 16:07     ` H.J. Lu
  0 siblings, 0 replies; 4+ messages in thread
From: H.J. Lu @ 2016-03-04 16:07 UTC (permalink / raw)
  To: Pawar, Amit; +Cc: libc-alpha

[-- Attachment #1: Type: text/plain, Size: 680 bytes --]

On Thu, Mar 3, 2016 at 9:04 AM, Pawar, Amit <Amit.Pawar@amd.com> wrote:
>>Change looks good.  If you can't commit it yourself, please improve commit
>>log:
>>
>>1. Don't add your ChangeLog entry to ChangeLog directly, since other people may change ChangeLog.
>>2. In the ChangeLog entry, describe what you did, e.g. check Fast_Unaligned_Load instead of Slow_BSF and check Fast_Copy_Backward for __memcpy_ssse3_back.
>
> As per your suggestion, I have fixed the patch with an improved commit log and am also providing a separate ChangeLog patch. If it is OK, please commit it; otherwise let me know what changes are required.
>
> Thanks,
> Amit Pawar
>
>

This is the patch I am going to check in.

-- 
H.J.

[-- Attachment #2: 0001-x86-64-Fix-memcpy-IFUNC-selection.patch --]
[-- Type: text/x-patch, Size: 2044 bytes --]

From 2b4fee345d53eb8fc81461f2aefae74e9f3604ae Mon Sep 17 00:00:00 2001
From: Amit Pawar <Amit.Pawar@amd.com>
Date: Thu, 3 Mar 2016 22:24:21 +0530
Subject: [PATCH] x86-64: Fix memcpy IFUNC selection

Check Fast_Unaligned_Load, instead of Slow_BSF, and also check for
Fast_Copy_Backward to enable __memcpy_ssse3_back.  The existing selection
order is replaced with the following order:

1. __memcpy_avx_unaligned if AVX_Fast_Unaligned_Load bit is set.
2. __memcpy_sse2_unaligned if Fast_Unaligned_Load bit is set.
3. __memcpy_sse2 if SSSE3 isn't available.
4. __memcpy_ssse3_back if Fast_Copy_Backward bit is set.
5. __memcpy_ssse3
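
For contrast, the order being replaced (the removed lines in the diff
below) corresponds roughly to this sketch; the feature names mirror
glibc's cpu-features flags, while the bitmask encoding and the helper
function are illustrative only.

#include <stdio.h>

/* Illustrative feature bits; names mirror the glibc flags involved.  */
enum
{
  AVX_FAST_UNALIGNED_LOAD = 1 << 0,
  SLOW_BSF                = 1 << 1,
  SSSE3_AVAILABLE         = 1 << 2,
};

/* The old selection order that this patch removes.  */
static const char *
select_memcpy_old (unsigned int features)
{
  if (features & AVX_FAST_UNALIGNED_LOAD)
    return "__memcpy_avx_unaligned";
  if (!(features & SLOW_BSF))
    return "__memcpy_sse2_unaligned";
  return (features & SSSE3_AVAILABLE) ? "__memcpy_ssse3" : "__memcpy_sse2";
}

int
main (void)
{
  /* A CPU with SSSE3 but with neither Slow_BSF nor fast unaligned loads
     got __memcpy_sse2_unaligned under this old order; under steps 1-5
     above (with Fast_Copy_Backward set) it now gets __memcpy_ssse3_back.  */
  printf ("old choice: %s\n", select_memcpy_old (SSSE3_AVAILABLE));
  return 0;
}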

	[BZ #18880]
	* sysdeps/x86_64/multiarch/memcpy.S: Check Fast_Unaligned_Load
	instead of Slow_BSF and also check for Fast_Copy_Backward to
	enable __memcpy_ssse3_back.
---
 sysdeps/x86_64/multiarch/memcpy.S | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/sysdeps/x86_64/multiarch/memcpy.S b/sysdeps/x86_64/multiarch/memcpy.S
index 64a1bcd..8882590 100644
--- a/sysdeps/x86_64/multiarch/memcpy.S
+++ b/sysdeps/x86_64/multiarch/memcpy.S
@@ -35,22 +35,23 @@ ENTRY(__new_memcpy)
 	jz	1f
 	HAS_ARCH_FEATURE (Prefer_No_VZEROUPPER)
 	jz	1f
-	leaq    __memcpy_avx512_no_vzeroupper(%rip), %rax
+	lea    __memcpy_avx512_no_vzeroupper(%rip), %RAX_LP
 	ret
 #endif
-1:	leaq	__memcpy_avx_unaligned(%rip), %rax
+1:	lea	__memcpy_avx_unaligned(%rip), %RAX_LP
 	HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
-	jz 2f
-	ret
-2:	leaq	__memcpy_sse2(%rip), %rax
-	HAS_ARCH_FEATURE (Slow_BSF)
-	jnz	3f
-	leaq	__memcpy_sse2_unaligned(%rip), %rax
-	ret
-3:	HAS_CPU_FEATURE (SSSE3)
-	jz 4f
-	leaq    __memcpy_ssse3(%rip), %rax
-4:	ret
+	jnz	2f
+	lea	__memcpy_sse2_unaligned(%rip), %RAX_LP
+	HAS_ARCH_FEATURE (Fast_Unaligned_Load)
+	jnz	2f
+	lea	__memcpy_sse2(%rip), %RAX_LP
+	HAS_CPU_FEATURE (SSSE3)
+	jz	2f
+	lea    __memcpy_ssse3_back(%rip), %RAX_LP
+	HAS_ARCH_FEATURE (Fast_Copy_Backward)
+	jnz	2f
+	lea	__memcpy_ssse3(%rip), %RAX_LP
+2:	ret
 END(__new_memcpy)
 
 # undef ENTRY
-- 
2.5.0

