public inbox for gcc-patches@gcc.gnu.org
* Re: please verify my mail to community.
       [not found]             ` <5405CBBF.5010202@samsung.com>
@ 2014-09-02 15:03               ` Marat Zakirov
  2014-09-02 15:09                 ` [PATCH] Asan optimization for aligned accesses Marat Zakirov
  0 siblings, 1 reply; 8+ messages in thread
From: Marat Zakirov @ 2014-09-02 15:03 UTC (permalink / raw)
  To: gcc-patches, kcc, jakub, Yury Gribov

[-- Attachment #1: Type: text/plain, Size: 503 bytes --]

Hi all!

Here's a simple optimization patch for Asan. It stores alignment
information in ASAN_CHECK, which sanopt then extracts to reduce the
number of "and 0x7" instructions for sufficiently aligned accesses. I
checked it on the Linux kernel by comparing the output of objdump -d
-j .text vmlinux | grep "and.*0x7," for the optimized and regular
builds. It eliminates 12% of the "and 0x7" instructions.

No regressions. A sanitized GCC was successfully Asan-bootstrapped, and
no false positives were found in the kernel.
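
For context, the slow-path test that sanopt expands can be modeled in plain C. This is a sketch of the shadow-memory check for sub-8-byte accesses under the usual granule encoding, not the code GCC actually emits; the function names and addresses are illustrative only:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of ASan's slow path for an access of size < 8 starting at
   base_addr.  "shadow" is the shadow byte of the 8-byte granule:
   0 means all 8 bytes are addressable, k in 1..7 means only the
   first k bytes are.  Returns nonzero for a bad access.  */
static int bad_access (uintptr_t base_addr, int size, int shadow)
{
  return shadow != 0 && (int) (base_addr & 7) + size - 1 >= shadow;
}

/* If base_addr is known to be at least 8-byte aligned, base_addr & 7
   is always 0, so the "and 0x7" instruction can be folded away --
   which is what passing the alignment through ASAN_CHECK enables.  */
static int bad_access_aligned (uintptr_t base_addr, int size, int shadow)
{
  (void) base_addr;
  return shadow != 0 && size - 1 >= shadow;
}
```

For 8-byte-aligned addresses the two forms are equivalent; the aligned form merely saves the mask instruction that the grep above counts.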

--Marat


[-- Attachment #2: vdt627.diff --]
[-- Type: text/x-patch, Size: 3888 bytes --]

gcc/ChangeLog:

2014-09-02  Marat Zakirov  <m.zakirov@samsung.com>

	* asan.c (build_check_stmt): Add alignment argument.
	(asan_expand_check_ifn): Optimize the check for alignment >= 8.

gcc/testsuite/ChangeLog:

2014-09-02  Marat Zakirov  <m.zakirov@samsung.com>

	* c-c++-common/asan/red-align-1.c: New test.
	* c-c++-common/asan/red-align-2.c: New test.

diff --git a/gcc/asan.c b/gcc/asan.c
index 58e7719..aed5ede 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -1639,9 +1639,11 @@ build_check_stmt (location_t loc, tree base, tree len,
   if (end_instrumented)
     flags |= ASAN_CHECK_END_INSTRUMENTED;
 
-  g = gimple_build_call_internal (IFN_ASAN_CHECK, 3,
+  g = gimple_build_call_internal (IFN_ASAN_CHECK, 4,
 				  build_int_cst (integer_type_node, flags),
-				  base, len);
+				  base, len,
+				  build_int_cst (integer_type_node,
+						 align/BITS_PER_UNIT));
   gimple_set_location (g, loc);
   if (before_p)
     gsi_insert_before (&gsi, g, GSI_SAME_STMT);
@@ -2434,6 +2436,7 @@ asan_expand_check_ifn (gimple_stmt_iterator *iter, bool use_calls)
 
   tree base = gimple_call_arg (g, 1);
   tree len = gimple_call_arg (g, 2);
+  HOST_WIDE_INT align = tree_to_shwi (gimple_call_arg (g, 3));
 
   HOST_WIDE_INT size_in_bytes
     = is_scalar_access && tree_fits_shwi_p (len) ? tree_to_shwi (len) : -1;
@@ -2547,7 +2550,10 @@ asan_expand_check_ifn (gimple_stmt_iterator *iter, bool use_calls)
 	  gimple shadow_test = build_assign (NE_EXPR, shadow, 0);
 	  gimple_seq seq = NULL;
 	  gimple_seq_add_stmt (&seq, shadow_test);
-	  gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR, base_addr, 7));
+	  /* Aligned (>= 8 bytes) access do not need & 7.  */
+	  if (align < 8)
+	    gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR,
+						     base_addr, 7));
 	  gimple_seq_add_stmt (&seq, build_type_cast (shadow_type,
 						      gimple_seq_last (seq)));
 	  if (real_size_in_bytes > 1)
diff --git a/gcc/internal-fn.def b/gcc/internal-fn.def
index 7ae60f3..54ade9f 100644
--- a/gcc/internal-fn.def
+++ b/gcc/internal-fn.def
@@ -55,4 +55,4 @@ DEF_INTERNAL_FN (UBSAN_CHECK_SUB, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
 DEF_INTERNAL_FN (UBSAN_CHECK_MUL, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
 DEF_INTERNAL_FN (ABNORMAL_DISPATCHER, ECF_NORETURN, NULL)
 DEF_INTERNAL_FN (BUILTIN_EXPECT, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
-DEF_INTERNAL_FN (ASAN_CHECK, ECF_TM_PURE | ECF_LEAF | ECF_NOTHROW, ".W..")
+DEF_INTERNAL_FN (ASAN_CHECK, ECF_TM_PURE | ECF_LEAF | ECF_NOTHROW, ".W...")
diff --git a/gcc/testsuite/c-c++-common/asan/red-align-1.c b/gcc/testsuite/c-c++-common/asan/red-align-1.c
new file mode 100644
index 0000000..1edb3a2
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/red-align-1.c
@@ -0,0 +1,20 @@
+/* This tests aligment propagation to structure elem and
+   abcense of redudant & 7.  */
+
+/* { dg-options "-fdump-tree-sanopt" } */
+/* { dg-do compile } */
+/* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
+
+struct st {
+  int a;
+  int b;
+  int c;
+} __attribute__((aligned(16)));
+
+int foo (struct st * s_p)
+{
+  return s_p->a;
+}
+
+/* { dg-final { scan-tree-dump-times "& 7" 0 "sanopt" } } */
+/* { dg-final { cleanup-tree-dump "sanopt" } } */
diff --git a/gcc/testsuite/c-c++-common/asan/red-align-2.c b/gcc/testsuite/c-c++-common/asan/red-align-2.c
new file mode 100644
index 0000000..161fe3c
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/red-align-2.c
@@ -0,0 +1,20 @@
+/* This tests aligment propagation to structure elem and
+   abcense of redudant & 7.  */
+
+/* { dg-options "-fdump-tree-sanopt" } */
+/* { dg-do compile } */
+/* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
+
+struct st {
+  int a;
+  int b;
+  int c;
+} __attribute__((aligned(16)));
+
+int foo (struct st * s_p)
+{
+  return s_p->b;
+}
+
+/* { dg-final { scan-tree-dump-times "& 7" 1 "sanopt" } } */
+/* { dg-final { cleanup-tree-dump "sanopt" } } */

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH] Asan optimization for aligned accesses.
  2014-09-02 15:03               ` please verify my mail to community Marat Zakirov
@ 2014-09-02 15:09                 ` Marat Zakirov
  2014-09-10 12:30                   ` [PING][PATCH] " Marat Zakirov
  2014-09-24  9:22                   ` [PATCH] Fix asan optimization for aligned accesses. (PR sanitizer/63316) Jakub Jelinek
  0 siblings, 2 replies; 8+ messages in thread
From: Marat Zakirov @ 2014-09-02 15:09 UTC (permalink / raw)
  To: gcc-patches, kcc, jakub, Yury Gribov

[-- Attachment #1: Type: text/plain, Size: 599 bytes --]

Sorry for the wrong subject!

On 09/02/2014 07:03 PM, Marat Zakirov wrote:
> Hi all!
>
> Here's a simple optimization patch for Asan. It stores alignment
> information in ASAN_CHECK, which sanopt then extracts to reduce the
> number of "and 0x7" instructions for sufficiently aligned accesses. I
> checked it on the Linux kernel by comparing the output of objdump -d
> -j .text vmlinux | grep "and.*0x7," for the optimized and regular
> builds. It eliminates 12% of the "and 0x7" instructions.
>
> No regressions. A sanitized GCC was successfully Asan-bootstrapped, and
> no false positives were found in the kernel.
>
> --Marat
>


[-- Attachment #2: vdt627.diff --]
[-- Type: text/x-patch, Size: 3888 bytes --]

gcc/ChangeLog:

2014-09-02  Marat Zakirov  <m.zakirov@samsung.com>

	* asan.c (build_check_stmt): Add alignment argument.
	(asan_expand_check_ifn): Optimize the check for alignment >= 8.

gcc/testsuite/ChangeLog:

2014-09-02  Marat Zakirov  <m.zakirov@samsung.com>

	* c-c++-common/asan/red-align-1.c: New test.
	* c-c++-common/asan/red-align-2.c: New test.

diff --git a/gcc/asan.c b/gcc/asan.c
index 58e7719..aed5ede 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -1639,9 +1639,11 @@ build_check_stmt (location_t loc, tree base, tree len,
   if (end_instrumented)
     flags |= ASAN_CHECK_END_INSTRUMENTED;
 
-  g = gimple_build_call_internal (IFN_ASAN_CHECK, 3,
+  g = gimple_build_call_internal (IFN_ASAN_CHECK, 4,
 				  build_int_cst (integer_type_node, flags),
-				  base, len);
+				  base, len,
+				  build_int_cst (integer_type_node,
+						 align/BITS_PER_UNIT));
   gimple_set_location (g, loc);
   if (before_p)
     gsi_insert_before (&gsi, g, GSI_SAME_STMT);
@@ -2434,6 +2436,7 @@ asan_expand_check_ifn (gimple_stmt_iterator *iter, bool use_calls)
 
   tree base = gimple_call_arg (g, 1);
   tree len = gimple_call_arg (g, 2);
+  HOST_WIDE_INT align = tree_to_shwi (gimple_call_arg (g, 3));
 
   HOST_WIDE_INT size_in_bytes
     = is_scalar_access && tree_fits_shwi_p (len) ? tree_to_shwi (len) : -1;
@@ -2547,7 +2550,10 @@ asan_expand_check_ifn (gimple_stmt_iterator *iter, bool use_calls)
 	  gimple shadow_test = build_assign (NE_EXPR, shadow, 0);
 	  gimple_seq seq = NULL;
 	  gimple_seq_add_stmt (&seq, shadow_test);
-	  gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR, base_addr, 7));
+	  /* Aligned (>= 8 bytes) access do not need & 7.  */
+	  if (align < 8)
+	    gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR,
+						     base_addr, 7));
 	  gimple_seq_add_stmt (&seq, build_type_cast (shadow_type,
 						      gimple_seq_last (seq)));
 	  if (real_size_in_bytes > 1)
diff --git a/gcc/internal-fn.def b/gcc/internal-fn.def
index 7ae60f3..54ade9f 100644
--- a/gcc/internal-fn.def
+++ b/gcc/internal-fn.def
@@ -55,4 +55,4 @@ DEF_INTERNAL_FN (UBSAN_CHECK_SUB, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
 DEF_INTERNAL_FN (UBSAN_CHECK_MUL, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
 DEF_INTERNAL_FN (ABNORMAL_DISPATCHER, ECF_NORETURN, NULL)
 DEF_INTERNAL_FN (BUILTIN_EXPECT, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
-DEF_INTERNAL_FN (ASAN_CHECK, ECF_TM_PURE | ECF_LEAF | ECF_NOTHROW, ".W..")
+DEF_INTERNAL_FN (ASAN_CHECK, ECF_TM_PURE | ECF_LEAF | ECF_NOTHROW, ".W...")
diff --git a/gcc/testsuite/c-c++-common/asan/red-align-1.c b/gcc/testsuite/c-c++-common/asan/red-align-1.c
new file mode 100644
index 0000000..1edb3a2
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/red-align-1.c
@@ -0,0 +1,20 @@
+/* This tests aligment propagation to structure elem and
+   abcense of redudant & 7.  */
+
+/* { dg-options "-fdump-tree-sanopt" } */
+/* { dg-do compile } */
+/* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
+
+struct st {
+  int a;
+  int b;
+  int c;
+} __attribute__((aligned(16)));
+
+int foo (struct st * s_p)
+{
+  return s_p->a;
+}
+
+/* { dg-final { scan-tree-dump-times "& 7" 0 "sanopt" } } */
+/* { dg-final { cleanup-tree-dump "sanopt" } } */
diff --git a/gcc/testsuite/c-c++-common/asan/red-align-2.c b/gcc/testsuite/c-c++-common/asan/red-align-2.c
new file mode 100644
index 0000000..161fe3c
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/red-align-2.c
@@ -0,0 +1,20 @@
+/* This tests aligment propagation to structure elem and
+   abcense of redudant & 7.  */
+
+/* { dg-options "-fdump-tree-sanopt" } */
+/* { dg-do compile } */
+/* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
+
+struct st {
+  int a;
+  int b;
+  int c;
+} __attribute__((aligned(16)));
+
+int foo (struct st * s_p)
+{
+  return s_p->b;
+}
+
+/* { dg-final { scan-tree-dump-times "& 7" 1 "sanopt" } } */
+/* { dg-final { cleanup-tree-dump "sanopt" } } */


* [PING][PATCH] Asan optimization for aligned accesses.
  2014-09-02 15:09                 ` [PATCH] Asan optimization for aligned accesses Marat Zakirov
@ 2014-09-10 12:30                   ` Marat Zakirov
  2014-09-16 15:00                     ` [PINGv2][PATCH] " Marat Zakirov
  2014-09-24  9:22                   ` [PATCH] Fix asan optimization for aligned accesses. (PR sanitizer/63316) Jakub Jelinek
  1 sibling, 1 reply; 8+ messages in thread
From: Marat Zakirov @ 2014-09-10 12:30 UTC (permalink / raw)
  To: gcc-patches, kcc, jakub, Yury Gribov

[-- Attachment #1: Type: text/plain, Size: 587 bytes --]

On 09/02/2014 07:09 PM, Marat Zakirov wrote:
>> Hi all!
>>
>> Here's a simple optimization patch for Asan. It stores alignment
>> information in ASAN_CHECK, which sanopt then extracts to reduce the
>> number of "and 0x7" instructions for sufficiently aligned accesses. I
>> checked it on the Linux kernel by comparing the output of objdump -d
>> -j .text vmlinux | grep "and.*0x7," for the optimized and regular
>> builds. It eliminates 12% of the "and 0x7" instructions.
>>
>> No regressions. A sanitized GCC was successfully Asan-bootstrapped, and
>> no false positives were found in the kernel.
>>
>> --Marat
>>


[-- Attachment #2: vdt627.diff --]
[-- Type: text/x-patch, Size: 3889 bytes --]

gcc/ChangeLog:

2014-09-02  Marat Zakirov  <m.zakirov@samsung.com>

	* asan.c (build_check_stmt): Add alignment argument.
	(asan_expand_check_ifn): Optimize the check for alignment >= 8.

gcc/testsuite/ChangeLog:

2014-09-02  Marat Zakirov  <m.zakirov@samsung.com>

	* c-c++-common/asan/red-align-1.c: New test.
	* c-c++-common/asan/red-align-2.c: New test.

diff --git a/gcc/asan.c b/gcc/asan.c
index 58e7719..aed5ede 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -1639,9 +1639,11 @@ build_check_stmt (location_t loc, tree base, tree len,
   if (end_instrumented)
     flags |= ASAN_CHECK_END_INSTRUMENTED;
 
-  g = gimple_build_call_internal (IFN_ASAN_CHECK, 3,
+  g = gimple_build_call_internal (IFN_ASAN_CHECK, 4,
 				  build_int_cst (integer_type_node, flags),
-				  base, len);
+				  base, len,
+				  build_int_cst (integer_type_node,
+						 align/BITS_PER_UNIT));
   gimple_set_location (g, loc);
   if (before_p)
     gsi_insert_before (&gsi, g, GSI_SAME_STMT);
@@ -2434,6 +2436,7 @@ asan_expand_check_ifn (gimple_stmt_iterator *iter, bool use_calls)
 
   tree base = gimple_call_arg (g, 1);
   tree len = gimple_call_arg (g, 2);
+  HOST_WIDE_INT align = tree_to_shwi (gimple_call_arg (g, 3));
 
   HOST_WIDE_INT size_in_bytes
     = is_scalar_access && tree_fits_shwi_p (len) ? tree_to_shwi (len) : -1;
@@ -2547,7 +2550,10 @@ asan_expand_check_ifn (gimple_stmt_iterator *iter, bool use_calls)
 	  gimple shadow_test = build_assign (NE_EXPR, shadow, 0);
 	  gimple_seq seq = NULL;
 	  gimple_seq_add_stmt (&seq, shadow_test);
-	  gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR, base_addr, 7));
+	  /* Aligned (>= 8 bytes) access do not need & 7.  */
+	  if (align < 8)
+	    gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR,
+						     base_addr, 7));
 	  gimple_seq_add_stmt (&seq, build_type_cast (shadow_type,
 						      gimple_seq_last (seq)));
 	  if (real_size_in_bytes > 1)
diff --git a/gcc/internal-fn.def b/gcc/internal-fn.def
index 7ae60f3..54ade9f 100644
--- a/gcc/internal-fn.def
+++ b/gcc/internal-fn.def
@@ -55,4 +55,4 @@ DEF_INTERNAL_FN (UBSAN_CHECK_SUB, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
 DEF_INTERNAL_FN (UBSAN_CHECK_MUL, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
 DEF_INTERNAL_FN (ABNORMAL_DISPATCHER, ECF_NORETURN, NULL)
 DEF_INTERNAL_FN (BUILTIN_EXPECT, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
-DEF_INTERNAL_FN (ASAN_CHECK, ECF_TM_PURE | ECF_LEAF | ECF_NOTHROW, ".W..")
+DEF_INTERNAL_FN (ASAN_CHECK, ECF_TM_PURE | ECF_LEAF | ECF_NOTHROW, ".W...")
diff --git a/gcc/testsuite/c-c++-common/asan/red-align-1.c b/gcc/testsuite/c-c++-common/asan/red-align-1.c
new file mode 100644
index 0000000..1edb3a2
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/red-align-1.c
@@ -0,0 +1,20 @@
+/* This tests aligment propagation to structure elem and
+   abcense of redudant & 7.  */
+
+/* { dg-options "-fdump-tree-sanopt" } */
+/* { dg-do compile } */
+/* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
+
+struct st {
+  int a;
+  int b;
+  int c;
+} __attribute__((aligned(16)));
+
+int foo (struct st * s_p)
+{
+  return s_p->a;
+}
+
+/* { dg-final { scan-tree-dump-times "& 7" 0 "sanopt" } } */
+/* { dg-final { cleanup-tree-dump "sanopt" } } */
diff --git a/gcc/testsuite/c-c++-common/asan/red-align-2.c b/gcc/testsuite/c-c++-common/asan/red-align-2.c
new file mode 100644
index 0000000..161fe3c
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/red-align-2.c
@@ -0,0 +1,20 @@
+/* This tests aligment propagation to structure elem and
+   abcense of redudant & 7.  */
+
+/* { dg-options "-fdump-tree-sanopt" } */
+/* { dg-do compile } */
+/* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
+
+struct st {
+  int a;
+  int b;
+  int c;
+} __attribute__((aligned(16)));
+
+int foo (struct st * s_p)
+{
+  return s_p->b;
+}
+
+/* { dg-final { scan-tree-dump-times "& 7" 1 "sanopt" } } */
+/* { dg-final { cleanup-tree-dump "sanopt" } } */



* [PINGv2][PATCH] Asan optimization for aligned accesses.
  2014-09-10 12:30                   ` [PING][PATCH] " Marat Zakirov
@ 2014-09-16 15:00                     ` Marat Zakirov
  2014-09-16 15:19                       ` Jakub Jelinek
  0 siblings, 1 reply; 8+ messages in thread
From: Marat Zakirov @ 2014-09-16 15:00 UTC (permalink / raw)
  To: gcc-patches, kcc, jakub, Yury Gribov

[-- Attachment #1: Type: text/plain, Size: 651 bytes --]


On 09/10/2014 04:30 PM, Marat Zakirov wrote:
> On 09/02/2014 07:09 PM, Marat Zakirov wrote:
>>> Hi all!
>>>
>>> Here's a simple optimization patch for Asan. It stores alignment
>>> information in ASAN_CHECK, which sanopt then extracts to reduce the
>>> number of "and 0x7" instructions for sufficiently aligned accesses. I
>>> checked it on the Linux kernel by comparing the output of objdump -d
>>> -j .text vmlinux | grep "and.*0x7," for the optimized and regular
>>> builds. It eliminates 12% of the "and 0x7" instructions.
>>>
>>> No regressions. A sanitized GCC was successfully Asan-bootstrapped, and
>>> no false positives were found in the kernel.
>>>
>>> --Marat
>>>
>


[-- Attachment #2: vdt627.diff --]
[-- Type: text/x-patch, Size: 3888 bytes --]

gcc/ChangeLog:

2014-09-02  Marat Zakirov  <m.zakirov@samsung.com>

	* asan.c (build_check_stmt): Add alignment argument.
	(asan_expand_check_ifn): Optimize the check for alignment >= 8.

gcc/testsuite/ChangeLog:

2014-09-02  Marat Zakirov  <m.zakirov@samsung.com>

	* c-c++-common/asan/red-align-1.c: New test.
	* c-c++-common/asan/red-align-2.c: New test.

diff --git a/gcc/asan.c b/gcc/asan.c
index 58e7719..aed5ede 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -1639,9 +1639,11 @@ build_check_stmt (location_t loc, tree base, tree len,
   if (end_instrumented)
     flags |= ASAN_CHECK_END_INSTRUMENTED;
 
-  g = gimple_build_call_internal (IFN_ASAN_CHECK, 3,
+  g = gimple_build_call_internal (IFN_ASAN_CHECK, 4,
 				  build_int_cst (integer_type_node, flags),
-				  base, len);
+				  base, len,
+				  build_int_cst (integer_type_node,
+						 align/BITS_PER_UNIT));
   gimple_set_location (g, loc);
   if (before_p)
     gsi_insert_before (&gsi, g, GSI_SAME_STMT);
@@ -2434,6 +2436,7 @@ asan_expand_check_ifn (gimple_stmt_iterator *iter, bool use_calls)
 
   tree base = gimple_call_arg (g, 1);
   tree len = gimple_call_arg (g, 2);
+  HOST_WIDE_INT align = tree_to_shwi (gimple_call_arg (g, 3));
 
   HOST_WIDE_INT size_in_bytes
     = is_scalar_access && tree_fits_shwi_p (len) ? tree_to_shwi (len) : -1;
@@ -2547,7 +2550,10 @@ asan_expand_check_ifn (gimple_stmt_iterator *iter, bool use_calls)
 	  gimple shadow_test = build_assign (NE_EXPR, shadow, 0);
 	  gimple_seq seq = NULL;
 	  gimple_seq_add_stmt (&seq, shadow_test);
-	  gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR, base_addr, 7));
+	  /* Aligned (>= 8 bytes) access do not need & 7.  */
+	  if (align < 8)
+	    gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR,
+						     base_addr, 7));
 	  gimple_seq_add_stmt (&seq, build_type_cast (shadow_type,
 						      gimple_seq_last (seq)));
 	  if (real_size_in_bytes > 1)
diff --git a/gcc/internal-fn.def b/gcc/internal-fn.def
index 7ae60f3..54ade9f 100644
--- a/gcc/internal-fn.def
+++ b/gcc/internal-fn.def
@@ -55,4 +55,4 @@ DEF_INTERNAL_FN (UBSAN_CHECK_SUB, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
 DEF_INTERNAL_FN (UBSAN_CHECK_MUL, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
 DEF_INTERNAL_FN (ABNORMAL_DISPATCHER, ECF_NORETURN, NULL)
 DEF_INTERNAL_FN (BUILTIN_EXPECT, ECF_CONST | ECF_LEAF | ECF_NOTHROW, NULL)
-DEF_INTERNAL_FN (ASAN_CHECK, ECF_TM_PURE | ECF_LEAF | ECF_NOTHROW, ".W..")
+DEF_INTERNAL_FN (ASAN_CHECK, ECF_TM_PURE | ECF_LEAF | ECF_NOTHROW, ".W...")
diff --git a/gcc/testsuite/c-c++-common/asan/red-align-1.c b/gcc/testsuite/c-c++-common/asan/red-align-1.c
new file mode 100644
index 0000000..1edb3a2
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/red-align-1.c
@@ -0,0 +1,20 @@
+/* This tests aligment propagation to structure elem and
+   abcense of redudant & 7.  */
+
+/* { dg-options "-fdump-tree-sanopt" } */
+/* { dg-do compile } */
+/* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
+
+struct st {
+  int a;
+  int b;
+  int c;
+} __attribute__((aligned(16)));
+
+int foo (struct st * s_p)
+{
+  return s_p->a;
+}
+
+/* { dg-final { scan-tree-dump-times "& 7" 0 "sanopt" } } */
+/* { dg-final { cleanup-tree-dump "sanopt" } } */
diff --git a/gcc/testsuite/c-c++-common/asan/red-align-2.c b/gcc/testsuite/c-c++-common/asan/red-align-2.c
new file mode 100644
index 0000000..161fe3c
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/red-align-2.c
@@ -0,0 +1,20 @@
+/* This tests aligment propagation to structure elem and
+   abcense of redudant & 7.  */
+
+/* { dg-options "-fdump-tree-sanopt" } */
+/* { dg-do compile } */
+/* { dg-skip-if "" { *-*-* } { "-flto" } { "" } } */
+
+struct st {
+  int a;
+  int b;
+  int c;
+} __attribute__((aligned(16)));
+
+int foo (struct st * s_p)
+{
+  return s_p->b;
+}
+
+/* { dg-final { scan-tree-dump-times "& 7" 1 "sanopt" } } */
+/* { dg-final { cleanup-tree-dump "sanopt" } } */


* Re: [PINGv2][PATCH] Asan optimization for aligned accesses.
  2014-09-16 15:00                     ` [PINGv2][PATCH] " Marat Zakirov
@ 2014-09-16 15:19                       ` Jakub Jelinek
  0 siblings, 0 replies; 8+ messages in thread
From: Jakub Jelinek @ 2014-09-16 15:19 UTC (permalink / raw)
  To: Marat Zakirov; +Cc: gcc-patches, kcc, Yury Gribov

On Tue, Sep 16, 2014 at 06:59:57PM +0400, Marat Zakirov wrote:
> --- a/gcc/asan.c
> +++ b/gcc/asan.c
> @@ -1639,9 +1639,11 @@ build_check_stmt (location_t loc, tree base, tree len,
>    if (end_instrumented)
>      flags |= ASAN_CHECK_END_INSTRUMENTED;
>  
> -  g = gimple_build_call_internal (IFN_ASAN_CHECK, 3,
> +  g = gimple_build_call_internal (IFN_ASAN_CHECK, 4,
>  				  build_int_cst (integer_type_node, flags),
> -				  base, len);
> +				  base, len,
> +				  build_int_cst (integer_type_node,
> +						 align/BITS_PER_UNIT));

Formatting.  Spaces should be around / (both before and after).

> --- /dev/null
> +++ b/gcc/testsuite/c-c++-common/asan/red-align-1.c
> @@ -0,0 +1,20 @@
> +/* This tests aligment propagation to structure elem and
> +   abcense of redudant & 7.  */

absence of redundant

> --- /dev/null
> +++ b/gcc/testsuite/c-c++-common/asan/red-align-2.c
> @@ -0,0 +1,20 @@
> +/* This tests aligment propagation to structure elem and
> +   abcense of redudant & 7.  */

Likewise.

Otherwise, LGTM.

	Jakub


* [PATCH] Fix asan optimization for aligned accesses. (PR sanitizer/63316)
  2014-09-02 15:09                 ` [PATCH] Asan optimization for aligned accesses Marat Zakirov
  2014-09-10 12:30                   ` [PING][PATCH] " Marat Zakirov
@ 2014-09-24  9:22                   ` Jakub Jelinek
  2014-09-24 13:50                     ` ygribov
  1 sibling, 1 reply; 8+ messages in thread
From: Jakub Jelinek @ 2014-09-24  9:22 UTC (permalink / raw)
  To: Marat Zakirov; +Cc: gcc-patches, kcc, Yury Gribov

On Tue, Sep 02, 2014 at 07:09:50PM +0400, Marat Zakirov wrote:
> >Here's a simple optimization patch for Asan. It stores alignment
> >information in ASAN_CHECK, which sanopt then extracts to reduce the
> >number of "and 0x7" instructions for sufficiently aligned accesses. I
> >checked it on the Linux kernel by comparing the output of objdump -d
> >-j .text vmlinux | grep "and.*0x7," for the optimized and regular
> >builds. It eliminates 12% of the "and 0x7" instructions.
> >
> >No regressions. A sanitized GCC was successfully Asan-bootstrapped, and
> >no false positives were found in the kernel.

Unfortunately it broke PR63316.  The problem is that you've just replaced
base_addr & 7 with base_addr in the
(base_addr & 7) + (real_size_in_bytes - 1) >= shadow
computation.  & 7 is of course not useless there, & ~7 would be.
For a sufficiently aligned base_addr we instead know that
(base_addr & 7) is always 0 and can thus simplify the test to
(real_size_in_bytes - 1) >= shadow,
where (real_size_in_bytes - 1) is a constant.
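
The failure can be sketched numerically in plain C (a model, not GCC internals; the 0xff mask standing in for the narrowing cast to the shadow type is an assumption):

```c
#include <assert.h>
#include <stdint.h>

/* "shadow" is the granule's shadow byte: 0 = fully addressable,
   k in 1..7 = only the first k bytes addressable.  The model uses
   small non-negative shadow values only.  */

/* Correct slow-path test.  */
static int check (uintptr_t base_addr, int size, int shadow)
{
  return shadow != 0 && (int) (base_addr & 7) + size - 1 >= shadow;
}

/* What the reverted patch effectively computed: the "& 7" statement
   was dropped, leaving the (narrowed) address itself in the sum.  */
static int check_broken (uintptr_t base_addr, int size, int shadow)
{
  return shadow != 0 && (int) (base_addr & 0xff) + size - 1 >= shadow;
}

/* The fix: for an 8-byte-aligned base_addr, (base_addr & 7) folds to
   0, so the left-hand side becomes the constant size - 1.  */
static int check_fixed (uintptr_t base_addr, int size, int shadow)
{
  (void) base_addr;
  return shadow != 0 && size - 1 >= shadow;
}
```

An 8-byte-aligned 4-byte access to a granule whose shadow byte is 7 (first 7 bytes addressable) is valid; the broken form still flags it, which is the kind of false positive reported in PR63316.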

Fixed thusly, committed to trunk.

BTW, I've noticed that perhaps using BIT_AND_EXPR for the
(shadow != 0) & ((base_addr & 7) + (real_size_in_bytes - 1) >= shadow)
tests isn't best; maybe we could get better code if we expanded it as
(shadow != 0) && ((base_addr & 7) + (real_size_in_bytes - 1) >= shadow)
(i.e. an extra basic block containing the second half of the test,
with a fast path for the shadow == 0 case if it is sufficiently common
(probably it is)).  I'll try to code this up unless somebody beats me
to it, but if somebody volunteered to benchmark such a change, it would
be very much appreciated.
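
The two expansions being compared can be sketched in C (illustrative only; the actual change would be at the gimple level):

```c
#include <assert.h>

/* Single-block form: BIT_AND_EXPR evaluates both halves of the
   predicate unconditionally -- branchless, but the comparison is
   always paid for.  */
static int slow_path_bit_and (int shadow, int off_plus_size_minus_1)
{
  return (shadow != 0) & (off_plus_size_minus_1 >= shadow);
}

/* Short-circuit form: an extra basic block, giving the common
   shadow == 0 case (fully addressable granule) a fast path that
   skips the second comparison.  */
static int slow_path_branch (int shadow, int off_plus_size_minus_1)
{
  if (shadow == 0)
    return 0;
  return off_plus_size_minus_1 >= shadow;
}
```

Both compute the same predicate; which is faster depends on how often shadow == 0 and on whether the target has conditional-compare instructions.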

2014-09-24  Jakub Jelinek  <jakub@redhat.com>

	PR sanitizer/63316
	* asan.c (asan_expand_check_ifn): Fix up align >= 8 optimization.

	* c-c++-common/asan/pr63316.c: New test.

--- gcc/asan.c.jj	2014-09-24 08:26:49.000000000 +0200
+++ gcc/asan.c	2014-09-24 11:00:59.380298362 +0200
@@ -2585,19 +2585,26 @@ asan_expand_check_ifn (gimple_stmt_itera
 	  gimple shadow_test = build_assign (NE_EXPR, shadow, 0);
 	  gimple_seq seq = NULL;
 	  gimple_seq_add_stmt (&seq, shadow_test);
-	  /* Aligned (>= 8 bytes) access do not need & 7.  */
+	  /* Aligned (>= 8 bytes) can test just
+	     (real_size_in_bytes - 1 >= shadow), as base_addr & 7 is known
+	     to be 0.  */
 	  if (align < 8)
-	    gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR,
-						     base_addr, 7));
-	  gimple_seq_add_stmt (&seq, build_type_cast (shadow_type,
-						      gimple_seq_last (seq)));
-	  if (real_size_in_bytes > 1)
-	    gimple_seq_add_stmt (&seq,
-				 build_assign (PLUS_EXPR, gimple_seq_last (seq),
-					       real_size_in_bytes - 1));
-	  gimple_seq_add_stmt (&seq, build_assign (GE_EXPR,
+	    {
+	      gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR,
+						       base_addr, 7));
+	      gimple_seq_add_stmt (&seq,
+				   build_type_cast (shadow_type,
+						    gimple_seq_last (seq)));
+	      if (real_size_in_bytes > 1)
+		gimple_seq_add_stmt (&seq,
+				     build_assign (PLUS_EXPR,
 						   gimple_seq_last (seq),
-						   shadow));
+						   real_size_in_bytes - 1));
+	      t = gimple_assign_lhs (gimple_seq_last_stmt (seq));
+	    }
+	  else
+	    t = build_int_cst (shadow_type, real_size_in_bytes - 1);
+	  gimple_seq_add_stmt (&seq, build_assign (GE_EXPR, t, shadow));
 	  gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR, shadow_test,
 						   gimple_seq_last (seq)));
 	  t = gimple_assign_lhs (gimple_seq_last (seq));
--- gcc/testsuite/c-c++-common/asan/pr63316.c.jj	2014-09-24 10:57:21.879454411 +0200
+++ gcc/testsuite/c-c++-common/asan/pr63316.c	2014-09-24 11:04:16.773241665 +0200
@@ -0,0 +1,22 @@
+/* PR sanitizer/63316 */
+/* { dg-do run } */
+/* { dg-options "-fsanitize=address -O2" } */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+extern void *malloc (__SIZE_TYPE__);
+extern void free (void *);
+#ifdef __cplusplus
+}
+#endif
+
+int
+main ()
+{
+  int *p = (int *) malloc (sizeof (int));
+  *p = 3;
+  asm volatile ("" : : "r" (p) : "memory");
+  free (p);
+  return 0;
+}


	Jakub


* Re: [PATCH] Fix asan optimization for aligned accesses. (PR sanitizer/63316)
  2014-09-24  9:22                   ` [PATCH] Fix asan optimization for aligned accesses. (PR sanitizer/63316) Jakub Jelinek
@ 2014-09-24 13:50                     ` ygribov
  2014-09-24 13:52                       ` ygribov
  0 siblings, 1 reply; 8+ messages in thread
From: ygribov @ 2014-09-24 13:50 UTC (permalink / raw)
  To: gcc-patches

> BTW, I've noticed that perhaps using BIT_AND_EXPR for the 
> (shadow != 0) & ((base_addr & 7) + (real_size_in_bytes - 1) >= shadow) 
> tests isn't best, maybe we could get better code if we expanded it as 
> (shadow != 0) && ((base_addr & 7) + (real_size_in_bytes - 1) >= shadow) 
> (i.e. an extra basic block containing the second half of the test 
> and fastpath for the shadow == 0 case if it is sufficiently common 
> (probably it is)).

BIT_AND_EXPR allows an efficient branchless implementation on platforms
that support chained conditional compares (e.g. ARM). You can't
reproduce this on current trunk, though, because I'm still waiting for
the ccmp patches from Zhenqiang Chen to be approved :(

> Will try to code this up unless somebody beats me to 
> that, but if somebody volunteered to benchmark such a change, it would 
> be very much appreciated.

AFAIK the LLVM team recently got about 1% on SPEC from this.

-Y



--
View this message in context: http://gcc.1065356.n5.nabble.com/Re-please-verify-my-mail-to-community-tp1066917p1073370.html
Sent from the gcc - patches mailing list archive at Nabble.com.


* Re: [PATCH] Fix asan optimization for aligned accesses. (PR sanitizer/63316)
  2014-09-24 13:50                     ` ygribov
@ 2014-09-24 13:52                       ` ygribov
  0 siblings, 0 replies; 8+ messages in thread
From: ygribov @ 2014-09-24 13:52 UTC (permalink / raw)
  To: gcc-patches

> AFAIK the LLVM team recently got about 1% on SPEC from this.

On x64 that is.



--
View this message in context: http://gcc.1065356.n5.nabble.com/Re-please-verify-my-mail-to-community-tp1066917p1073371.html
Sent from the gcc - patches mailing list archive at Nabble.com.


end of thread, other threads:[~2014-09-24 13:52 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <54047383.8000709@samsung.com>
     [not found] ` <54048C70.6000802@samsung.com>
     [not found]   ` <540495BD.20303@samsung.com>
     [not found]     ` <54059420.9070907@samsung.com>
     [not found]       ` <5405A74E.2070001@samsung.com>
     [not found]         ` <5405B5E5.9030904@samsung.com>
     [not found]           ` <5405BC13.5070504@samsung.com>
     [not found]             ` <5405CBBF.5010202@samsung.com>
2014-09-02 15:03               ` please verify my mail to community Marat Zakirov
2014-09-02 15:09                 ` [PATCH] Asan optimization for aligned accesses Marat Zakirov
2014-09-10 12:30                   ` [PING][PATCH] " Marat Zakirov
2014-09-16 15:00                     ` [PINGv2][PATCH] " Marat Zakirov
2014-09-16 15:19                       ` Jakub Jelinek
2014-09-24  9:22                   ` [PATCH] Fix asan optimization for aligned accesses. (PR sanitizer/63316) Jakub Jelinek
2014-09-24 13:50                     ` ygribov
2014-09-24 13:52                       ` ygribov

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).