public inbox for gcc-patches@gcc.gnu.org
* [PATCH v2][ASAN] Implement dynamic allocas/VLAs sanitization.​
       [not found] <CGME20170626124925eucas1p18c56742a07db5bb2dabbedd0e894aa0e@eucas1p1.samsung.com>
@ 2017-06-26 12:49 ` Maxim Ostapenko
  2017-06-29 12:35   ` Jakub Jelinek
  0 siblings, 1 reply; 3+ messages in thread
From: Maxim Ostapenko @ 2017-06-26 12:49 UTC (permalink / raw)
  To: GCC Patches; +Cc: Jakub Jelinek, Richard Biener, Yuri Gribov

[-- Attachment #1: Type: text/plain, Size: 1342 bytes --]

Hi,

I'm sorry for the long delay. Here is an updated patch.
Following Jakub's suggestion from the previous review, I've added 
get_nonzero_bits handling to handle_builtin_alloca in order to avoid 
redundant redzone size calculations when we know that the alloca size 
is already a multiple of ASAN_RED_ZONE_SIZE. Thus, for the following code:

struct __attribute__((aligned (N))) S { char s[N]; };

void bar (struct S *, struct S *);

void
foo (int x)
{
   struct S a;
   {
     struct S b[x];
     bar (&a, &b[0]);
   }
   {
     struct S b[x + 4];
     bar (&a, &b[0]);
   }
}

void
baz (int x)
{
   struct S a;
   struct S b[x];
   bar (&a, &b[0]);
}

compiled with -O2 -fsanitize=address -DN=64, we now get the expected

   _2 = (sizetype) x_1(D);
   _8 = _2 * 64;
   _14 = _8 + 96;
   _15 = __builtin_alloca_with_align (_14, 512);
   _16 = _15 + 64;
   __builtin___asan_alloca_poison (_16, _8);

instead of the previous

   _1 = (sizetype) x_4(D);
   _2 = _1 * 64;
   _24 = _2 & 31;
   _19 = _2 + 128;
   _27 = _19 - _24;
   _28 = __builtin_alloca_with_align (_27, 512);
   _29 = _28 + 64;
   __builtin___asan_alloca_poison (_29, _2);
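
(Here the 96 in "_14 = _8 + 96" is align / BITS_PER_UNIT (64) plus
ASAN_RED_ZONE_SIZE (32), and the "+ 64" skips the left redzone.  Since
_8 is a multiple of 64, get_nonzero_bits proves that no partial redzone
is needed, so the mask/subtract sequence from the old code disappears.)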


Also, I've added a simple match.pd pattern for X & C -> 0 when we know that the corresponding bits of X are zero, but I'm not sure this pattern has practical value.
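
For illustration only (this snippet is not part of the patch), with the new
rule a mask of a value whose low bits are provably zero folds to a constant,
assuming the nonzero-bits information has been recorded for y:

   unsigned int
   f (unsigned int x)
   {
     unsigned int y = x * 64;  /* the low six bits of y are known to be zero */
     return y & 31;            /* the new pattern folds this to 0 */
   }
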
Tested and bootstrapped on x86_64-unknown-linux-gnu. Could you take a look?

-Maxim


[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: alloca-gcc-10.diff --]
[-- Type: text/x-diff; name="alloca-gcc-10.diff", Size: 25136 bytes --]

gcc/ChangeLog:

2017-06-26  Maxim Ostapenko  <m.ostapenko@samsung.com>

	* asan.c: Include gimple-fold.h.
	(get_last_alloca_addr): New function.
	(handle_builtin_stackrestore): Likewise.
	(handle_builtin_alloca): Likewise.
	(asan_emit_allocas_unpoison): Likewise.
	(get_mem_refs_of_builtin_call): Add new parameter, remove const
	qualifier from first parameter. Handle BUILT_IN_ALLOCA,
	BUILT_IN_ALLOCA_WITH_ALIGN and BUILT_IN_STACK_RESTORE builtins.
	(instrument_builtin_call): Pass gimple iterator to
	get_mem_refs_of_builtin_call.
	(last_alloca_addr): New global.
	* asan.h (asan_emit_allocas_unpoison): Declare.
	* builtins.c (expand_asan_emit_allocas_unpoison): New function.
	(expand_builtin): Handle BUILT_IN_ASAN_ALLOCAS_UNPOISON.
	* cfgexpand.c (expand_used_vars): Call asan_emit_allocas_unpoison
	if function calls alloca.
	* gimple-fold.c (replace_call_with_value): Remove static keyword.
	* gimple-fold.h (replace_call_with_value): Declare.
	* internal-fn.c: Include asan.h.
	* sanitizer.def (BUILT_IN_ASAN_ALLOCA_POISON,
	BUILT_IN_ASAN_ALLOCAS_UNPOISON): New builtins.
	* match.pd: Add new pattern.

gcc/testsuite/ChangeLog:

2017-06-26  Maxim Ostapenko  <m.ostapenko@samsung.com>

	* c-c++-common/asan/alloca_big_alignment.c: New test.
	* c-c++-common/asan/alloca_detect_custom_size.c: Likewise.
	* c-c++-common/asan/alloca_instruments_all_paddings.c: Likewise.
	* c-c++-common/asan/alloca_loop_unpoisoning.c: Likewise.
	* c-c++-common/asan/alloca_overflow_partial.c: Likewise.
	* c-c++-common/asan/alloca_overflow_right.c: Likewise.
	* c-c++-common/asan/alloca_safe_access.c: Likewise.
	* c-c++-common/asan/alloca_underflow_left.c: Likewise.

diff --git a/gcc/asan.c b/gcc/asan.c
index e730530..6d7a5ec 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -55,6 +55,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "langhooks.h"
 #include "cfgloop.h"
 #include "gimple-builder.h"
+#include "gimple-fold.h"
 #include "ubsan.h"
 #include "params.h"
 #include "builtins.h"
@@ -245,6 +246,7 @@ along with GCC; see the file COPYING3.  If not see
 static unsigned HOST_WIDE_INT asan_shadow_offset_value;
 static bool asan_shadow_offset_computed;
 static vec<char *> sanitized_sections;
+static tree last_alloca_addr = NULL_TREE;
 
 /* Set of variable declarations that are going to be guarded by
    use-after-scope sanitizer.  */
@@ -529,11 +531,183 @@ get_mem_ref_of_assignment (const gassign *assignment,
   return true;
 }
 
+/* Return address of last allocated dynamic alloca.  */
+
+static tree
+get_last_alloca_addr ()
+{
+  if (last_alloca_addr)
+    return last_alloca_addr;
+
+  gimple_seq seq = NULL;
+  gassign *g;
+
+  last_alloca_addr = create_tmp_reg (ptr_type_node, "last_alloca_addr");
+  g = gimple_build_assign (last_alloca_addr, NOP_EXPR,
+			   build_int_cst (ptr_type_node, 0));
+  gimple_seq_add_stmt_without_update (&seq, g);
+
+  edge e = single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun));
+  gsi_insert_seq_on_edge_immediate (e, seq);
+  return last_alloca_addr;
+}
+
+/* Insert __asan_allocas_unpoison (top, bottom) call after
+   __builtin_stack_restore (new_sp) call.
+   The pseudocode of this routine should look like this:
+     __builtin_stack_restore (new_sp);
+     top = last_alloca_addr;
+     bot = virtual_dynamic_stack_rtx;
+     __asan_allocas_unpoison (top, bottom);
+     last_alloca_addr = new_sp;
+   We don't use new_sp as bot parameter because on some architectures
+   SP has a non-zero offset from the dynamic stack area.  Moreover, on some
+   architectures this offset (STACK_DYNAMIC_OFFSET) becomes known for each
+   particular function only after all callees have been expanded to RTL.
+   The most noticeable example is PowerPC{,64}, see
+   http://refspecs.linuxfoundation.org/ELF/ppc64/PPC-elf64abi.html#DYNAM-STACK.
+*/
+
+static void
+handle_builtin_stack_restore (gcall *call, gimple_stmt_iterator *iter)
+{
+  if (!iter)
+    return;
+
+  gimple_stmt_iterator gsi = *iter;
+  gimple_seq seq = NULL;
+  tree last_alloca_addr = get_last_alloca_addr ();
+  tree restored_stack = gimple_call_arg (call, 0);
+  tree fn = builtin_decl_implicit (BUILT_IN_ASAN_ALLOCAS_UNPOISON);
+  gimple *g = gimple_build_call (fn, 2, last_alloca_addr, restored_stack);
+  gimple_seq_add_stmt_without_update (&seq, g);
+  g = gimple_build_assign (last_alloca_addr, NOP_EXPR, restored_stack);
+  gimple_seq_add_stmt_without_update (&seq, g);
+  gsi_insert_seq_after (&gsi, seq, GSI_SAME_STMT);
+}
+
+/* Deploy and poison redzones around __builtin_alloca call.  To do this, we
+   should replace this call with another one with changed parameters and
+   replace all its uses with new address, so
+     addr = __builtin_alloca (old_size, align);
+   is replaced by
+     new_size = old_size + additional_size;
+     tmp = __builtin_alloca (new_size, max (align, 32))
+     addr = tmp + 32 (first 32 bytes are for the left redzone);
+   ADDITIONAL_SIZE is added to make the new memory allocation contain not only
+   the requested memory, but also the left, partial and right redzones as well
+   as some additional space required by alignment.  */
+
+static void
+handle_builtin_alloca (gcall *call, gimple_stmt_iterator *iter)
+{
+  if (!iter)
+    return;
+
+  gimple_seq seq = NULL;
+  gassign *g;
+  gcall *gg;
+  gimple_stmt_iterator gsi = *iter;
+  const HOST_WIDE_INT redzone_mask = ASAN_RED_ZONE_SIZE - 1;
+
+  tree last_alloca_addr = get_last_alloca_addr ();
+  tree callee = gimple_call_fndecl (call);
+  tree old_size = gimple_call_arg (call, 0);
+  tree ptr_type = gimple_call_lhs (call) ? TREE_TYPE (gimple_call_lhs (call))
+					 : ptr_type_node;
+  tree partial_size = NULL_TREE;
+  bool alloca_with_align
+    = DECL_FUNCTION_CODE (callee) == BUILT_IN_ALLOCA_WITH_ALIGN;
+  unsigned int align
+    = alloca_with_align ? tree_to_uhwi (gimple_call_arg (call, 1)) : 0;
+
+  /* If ALIGN > ASAN_RED_ZONE_SIZE, we embed left redzone into first ALIGN
+     bytes of allocated space.  Otherwise, align alloca to ASAN_RED_ZONE_SIZE
+     manually.  */
+  align = MAX (align, ASAN_RED_ZONE_SIZE * BITS_PER_UNIT);
+
+  tree alloca_rz_mask = build_int_cst (size_type_node, redzone_mask);
+  tree redzone_size = build_int_cst (size_type_node, ASAN_RED_ZONE_SIZE);
+
+  /* Extract lower bits from old_size.  */
+  wide_int size_nonzero_bits = get_nonzero_bits (old_size);
+  wide_int rz_mask
+    = wi::uhwi (redzone_mask, wi::get_precision (size_nonzero_bits));
+  wide_int old_size_lower_bits = wi::bit_and (size_nonzero_bits, rz_mask);
+
+  /* If alloca size is aligned to ASAN_RED_ZONE_SIZE, we don't need partial
+     redzone.  Otherwise, compute its size here.  */
+  if (wi::ne_p (old_size_lower_bits, 0))
+    {
+      /* misalign = size & (ASAN_RED_ZONE_SIZE - 1)
+         partial_size = ASAN_RED_ZONE_SIZE - misalign.  */
+      g = gimple_build_assign (make_ssa_name (size_type_node, NULL),
+			       BIT_AND_EXPR, old_size, alloca_rz_mask);
+      gimple_seq_add_stmt_without_update (&seq, g);
+      tree misalign = gimple_assign_lhs (g);
+      g = gimple_build_assign (make_ssa_name (size_type_node, NULL), MINUS_EXPR,
+			       redzone_size, misalign);
+      gimple_seq_add_stmt_without_update (&seq, g);
+      partial_size = gimple_assign_lhs (g);
+    }
+
+  /* additional_size = align + ASAN_RED_ZONE_SIZE.  */
+  tree additional_size = build_int_cst (size_type_node, align / BITS_PER_UNIT
+							+ ASAN_RED_ZONE_SIZE);
+  /* If alloca has a partial redzone, include it in additional_size too.  */
+  if (partial_size)
+    {
+      /* additional_size += partial_size.  */
+      g = gimple_build_assign (make_ssa_name (size_type_node), PLUS_EXPR,
+			       partial_size, additional_size);
+      gimple_seq_add_stmt_without_update (&seq, g);
+      additional_size = gimple_assign_lhs (g);
+    }
+
+  /* new_size = old_size + additional_size.  */
+  g = gimple_build_assign (make_ssa_name (size_type_node), PLUS_EXPR, old_size,
+			   additional_size);
+  gimple_seq_add_stmt_without_update (&seq, g);
+  tree new_size = gimple_assign_lhs (g);
+
+  /* Build new __builtin_alloca call:
+       new_alloca_with_rz = __builtin_alloca (new_size, align).  */
+  tree fn = builtin_decl_implicit (BUILT_IN_ALLOCA_WITH_ALIGN);
+  gg = gimple_build_call (fn, 2, new_size,
+			  build_int_cst (size_type_node, align));
+  tree new_alloca_with_rz = make_ssa_name (ptr_type, gg);
+  gimple_call_set_lhs (gg, new_alloca_with_rz);
+  gimple_seq_add_stmt_without_update (&seq, gg);
+
+  /* new_alloca = new_alloca_with_rz + align.  */
+  g = gimple_build_assign (make_ssa_name (ptr_type), POINTER_PLUS_EXPR,
+			   new_alloca_with_rz,
+			   build_int_cst (size_type_node,
+					  align / BITS_PER_UNIT));
+  gimple_seq_add_stmt_without_update (&seq, g);
+  tree new_alloca = gimple_assign_lhs (g);
+
+  /* Replace old alloca ptr with NEW_ALLOCA.  */
+  replace_call_with_value (&gsi, new_alloca);
+
+  /* Poison newly created alloca redzones:
+      __asan_alloca_poison (new_alloca, old_size).  */
+  fn = builtin_decl_implicit (BUILT_IN_ASAN_ALLOCA_POISON);
+  gg = gimple_build_call (fn, 2, new_alloca, old_size);
+  gimple_seq_add_stmt_without_update (&seq, gg);
+
+  /* Save new_alloca_with_rz value into last_alloca_addr to use it during
+     allocas unpoisoning.  */
+  g = gimple_build_assign (last_alloca_addr, NOP_EXPR, new_alloca_with_rz);
+  gimple_seq_add_stmt_without_update (&seq, g);
+  gsi_insert_seq_before (&gsi, seq, GSI_SAME_STMT);
+}
+
 /* Return the memory references contained in a gimple statement
    representing a builtin call that has to do with memory access.  */
 
 static bool
-get_mem_refs_of_builtin_call (const gcall *call,
+get_mem_refs_of_builtin_call (gcall *call,
 			      asan_mem_ref *src0,
 			      tree *src0_len,
 			      bool *src0_is_store,
@@ -544,7 +718,8 @@ get_mem_refs_of_builtin_call (const gcall *call,
 			      tree *dst_len,
 			      bool *dst_is_store,
 			      bool *dest_is_deref,
-			      bool *intercepted_p)
+			      bool *intercepted_p,
+			      gimple_stmt_iterator *iter = NULL)
 {
   gcc_checking_assert (gimple_call_builtin_p (call, BUILT_IN_NORMAL));
 
@@ -603,6 +778,14 @@ get_mem_refs_of_builtin_call (const gcall *call,
       len = gimple_call_lhs (call);
       break;
 
+    case BUILT_IN_STACK_RESTORE:
+      handle_builtin_stack_restore (call, iter);
+      break;
+
+    case BUILT_IN_ALLOCA_WITH_ALIGN:
+    case BUILT_IN_ALLOCA:
+      handle_builtin_alloca (call, iter);
+      break;
     /* And now the __atomic* and __sync builtins.
        These are handled differently from the classical memory memory
        access builtins above.  */
@@ -1366,6 +1549,28 @@ asan_emit_stack_protection (rtx base, rtx pbase, unsigned int alignb,
   return insns;
 }
 
+/* Emit __asan_allocas_unpoison (TOP, BOT) call.  If BEFORE is non-NULL, the
+   new instructions are appended to it, otherwise a new sequence is started.
+   Return the resulting instruction sequence.  */
+
+rtx_insn *
+asan_emit_allocas_unpoison (rtx top, rtx bot, rtx_insn *before)
+{
+  if (before)
+    push_to_sequence (before);
+  else
+    start_sequence ();
+  rtx ret = init_one_libfunc ("__asan_allocas_unpoison");
+  ret = emit_library_call_value (ret, NULL_RTX, LCT_NORMAL, ptr_mode, 2, top,
+				 TYPE_MODE (pointer_sized_int_node), bot,
+				 TYPE_MODE (pointer_sized_int_node));
+
+  do_pending_stack_adjust ();
+  rtx_insn *insns = get_insns ();
+  end_sequence ();
+  return insns;
+}
+
 /* Return true if DECL, a global var, might be overridden and needs
    therefore a local alias.  */
 
@@ -2000,7 +2205,7 @@ instrument_builtin_call (gimple_stmt_iterator *iter)
 				    &src0, &src0_len, &src0_is_store,
 				    &src1, &src1_len, &src1_is_store,
 				    &dest, &dest_len, &dest_is_store,
-				    &dest_is_deref, &intercepted_p))
+				    &dest_is_deref, &intercepted_p, iter))
     {
       if (dest_is_deref)
 	{
@@ -3186,6 +3391,7 @@ asan_instrument (void)
   if (shadow_ptr_types[0] == NULL_TREE)
     asan_init_shadow_ptr_types ();
   transform_statements ();
+  last_alloca_addr = NULL_TREE;
   return 0;
 }
 
diff --git a/gcc/asan.h b/gcc/asan.h
index 95bb89e..4e8120e 100644
--- a/gcc/asan.h
+++ b/gcc/asan.h
@@ -25,6 +25,7 @@ extern void asan_function_start (void);
 extern void asan_finish_file (void);
 extern rtx_insn *asan_emit_stack_protection (rtx, rtx, unsigned int,
 					     HOST_WIDE_INT *, tree *, int);
+extern rtx_insn *asan_emit_allocas_unpoison (rtx, rtx, rtx_insn *);
 extern bool asan_protect_global (tree);
 extern void initialize_sanitizer_builtins (void);
 extern tree asan_dynamic_init_call (bool);
diff --git a/gcc/builtins.c b/gcc/builtins.c
index 7e829ef..a9700a1 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -4962,6 +4962,20 @@ expand_builtin_alloca (tree exp)
   return result;
 }
 
+static rtx
+expand_asan_emit_allocas_unpoison (tree exp)
+{
+  tree arg0 = CALL_EXPR_ARG (exp, 0);
+  rtx top = expand_expr (arg0, NULL_RTX, GET_MODE (virtual_stack_dynamic_rtx),
+			 EXPAND_NORMAL);
+  rtx ret = init_one_libfunc ("__asan_allocas_unpoison");
+  ret = emit_library_call_value (ret, NULL_RTX, LCT_NORMAL, ptr_mode, 2, top,
+				 TYPE_MODE (pointer_sized_int_node),
+				 virtual_stack_dynamic_rtx,
+				 TYPE_MODE (pointer_sized_int_node));
+  return ret;
+}
+
 /* Expand a call to bswap builtin in EXP.
    Return NULL_RTX if a normal call should be emitted rather than expanding the
    function in-line.  If convenient, the result should be placed in TARGET.
@@ -6763,6 +6777,12 @@ expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
 	return target;
       break;
 
+    case BUILT_IN_ASAN_ALLOCAS_UNPOISON:
+      target = expand_asan_emit_allocas_unpoison (exp);
+      if (target)
+	return target;
+      break;
+
     case BUILT_IN_STACK_SAVE:
       return expand_stack_save ();
 
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index c1f8072..b4450d4 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -2241,6 +2241,11 @@ expand_used_vars (void)
       expand_stack_vars (NULL, &data);
     }
 
+  if ((flag_sanitize & SANITIZE_ADDRESS) && cfun->calls_alloca)
+    var_end_seq = asan_emit_allocas_unpoison (virtual_stack_dynamic_rtx,
+					      virtual_stack_vars_rtx,
+					      var_end_seq);
+
   fini_vars_expansion ();
 
   /* If there were any artificial non-ignored vars without rtl
diff --git a/gcc/gimple-fold.c b/gcc/gimple-fold.c
index a00c2c8..7577013 100644
--- a/gcc/gimple-fold.c
+++ b/gcc/gimple-fold.c
@@ -571,7 +571,7 @@ gimplify_and_update_call_from_tree (gimple_stmt_iterator *si_p, tree expr)
 
 /* Replace the call at *GSI with the gimple value VAL.  */
 
-static void
+void
 replace_call_with_value (gimple_stmt_iterator *gsi, tree val)
 {
   gimple *stmt = gsi_stmt (*gsi);
diff --git a/gcc/gimple-fold.h b/gcc/gimple-fold.h
index ad0b00d..2cee385 100644
--- a/gcc/gimple-fold.h
+++ b/gcc/gimple-fold.h
@@ -58,6 +58,7 @@ extern bool gimple_fold_builtin_sprintf (gimple_stmt_iterator *);
 extern bool gimple_fold_builtin_snprintf (gimple_stmt_iterator *);
 extern bool arith_code_with_undefined_signed_overflow (tree_code);
 extern gimple_seq rewrite_to_defined_overflow (gimple *);
+extern void replace_call_with_value (gimple_stmt_iterator *, tree);
 
 /* gimple_build, functionally matching fold_buildN, outputs stmts
    int the provided sequence, matching and simplifying them on-the-fly.
diff --git a/gcc/internal-fn.c b/gcc/internal-fn.c
index 75fe027..eaaff90 100644
--- a/gcc/internal-fn.c
+++ b/gcc/internal-fn.c
@@ -39,6 +39,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "stor-layout.h"
 #include "dojump.h"
 #include "expr.h"
+#include "asan.h"
 #include "ubsan.h"
 #include "recog.h"
 #include "builtins.h"
diff --git a/gcc/match.pd b/gcc/match.pd
index a4cae11..a451b9c 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -669,6 +669,15 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
   (bitop @0 @0)
   (non_lvalue @0)))
 
+/* x & C -> 0 if we know that the corresponding bits are zero.  */
+#if GIMPLE
+(simplify
+  (bit_and SSA_NAME@0 INTEGER_CST@1)
+  (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
+       && (get_nonzero_bits (@0) & @1) == 0)
+  { build_zero_cst (type); }))
+#endif
+
 /* x & C -> x if we know that x & ~C == 0.  */
 #if GIMPLE
 (simplify
diff --git a/gcc/sanitizer.def b/gcc/sanitizer.def
index 8b4acfc..91759a8 100644
--- a/gcc/sanitizer.def
+++ b/gcc/sanitizer.def
@@ -171,6 +171,10 @@ DEF_SANITIZER_BUILTIN(BUILT_IN_ASAN_POISON_STACK_MEMORY,
 DEF_SANITIZER_BUILTIN(BUILT_IN_ASAN_UNPOISON_STACK_MEMORY,
 		      "__asan_unpoison_stack_memory",
 		      BT_FN_VOID_PTR_PTRMODE, ATTR_NOTHROW_LEAF_LIST)
+DEF_SANITIZER_BUILTIN(BUILT_IN_ASAN_ALLOCA_POISON, "__asan_alloca_poison",
+		      BT_FN_VOID_PTR_PTRMODE, ATTR_NOTHROW_LEAF_LIST)
+DEF_SANITIZER_BUILTIN(BUILT_IN_ASAN_ALLOCAS_UNPOISON, "__asan_allocas_unpoison",
+		      BT_FN_VOID_PTR_PTRMODE, ATTR_NOTHROW_LEAF_LIST)
 
 /* Thread Sanitizer */
 DEF_SANITIZER_BUILTIN(BUILT_IN_TSAN_INIT, "__tsan_init", 
diff --git a/gcc/testsuite/c-c++-common/asan/alloca_big_alignment.c b/gcc/testsuite/c-c++-common/asan/alloca_big_alignment.c
new file mode 100644
index 0000000..54f03b8
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/alloca_big_alignment.c
@@ -0,0 +1,22 @@
+/* { dg-do run } */
+/* { dg-shouldfail "asan" } */
+
+#include <assert.h>
+
+volatile int ten = 10;
+
+__attribute__((noinline)) void foo(int index, int len) {
+  volatile char str[len] __attribute__((aligned(128)));
+  assert(!((long) str & 127L));
+  str[index] = '1'; // BOOM
+}
+
+int main() {
+  foo(ten, ten);
+  return 0;
+}
+
+/* { dg-output "WRITE of size 1 at 0x\[0-9a-f\]+ thread T0\[^\n\r]*(\n|\r\n|\r)" } */
+/* { dg-output "    #0 0x\[0-9a-f\]+ +(in _*foo(\[^\n\r]*alloca_big_alignment.c:11|\[^\n\r]*:0)|\[(\]).*(\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*Address 0x\[0-9a-f\]+ is located in stack of thread T0.*(\n|\r\n|\r)" */
+/* { dg-output "\[^\n\r]*in foo.*alloca_big_alignment.c.*(\n|\r\n|\r)" */
diff --git a/gcc/testsuite/c-c++-common/asan/alloca_detect_custom_size.c b/gcc/testsuite/c-c++-common/asan/alloca_detect_custom_size.c
new file mode 100644
index 0000000..609dafe
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/alloca_detect_custom_size.c
@@ -0,0 +1,27 @@
+/* { dg-do run } */
+/* { dg-shouldfail "asan" } */
+
+#include <assert.h>
+
+struct A {
+  char a[3];
+  int b[3];
+};
+
+volatile int ten = 10;
+
+__attribute__((noinline)) void foo(int index, int len) {
+  volatile struct A str[len] __attribute__((aligned(32)));
+  assert(!((long) str & 31L));
+  str[index].a[0] = '1'; // BOOM
+}
+
+int main(int argc, char **argv) {
+  foo(ten, ten);
+  return 0;
+}
+
+/* { dg-output "WRITE of size 1 at 0x\[0-9a-f\]+ thread T0\[^\n\r]*(\n|\r\n|\r)" } */
+/* { dg-output "    #0 0x\[0-9a-f\]+ +(in _*foo(\[^\n\r]*alloca_detect_custom_size.c:16|\[^\n\r]*:0)|\[(\]).*(\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*Address 0x\[0-9a-f\]+ is located in stack of thread T0.*(\n|\r\n|\r)" */
+/* { dg-output "\[^\n\r]*in foo.*alloca_detect_custom_size.c.*(\n|\r\n|\r)" */
diff --git a/gcc/testsuite/c-c++-common/asan/alloca_instruments_all_paddings.c b/gcc/testsuite/c-c++-common/asan/alloca_instruments_all_paddings.c
new file mode 100644
index 0000000..6b08c0b
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/alloca_instruments_all_paddings.c
@@ -0,0 +1,21 @@
+/* { dg-do run } */
+
+#include "sanitizer/asan_interface.h"
+#include <assert.h>
+
+__attribute__((noinline)) void foo(int index, int len) {
+  volatile char str[len] __attribute__((aligned(32)));
+  assert(!((long) str & 31L));
+  char *q = (char *)__asan_region_is_poisoned((char *)str, 64);
+  assert(q && ((q - str) == index));
+}
+
+int main(int argc, char **argv) {
+  for (int i = 1; i < 33; ++i)
+    foo(i, i);
+
+  for (int i = 1; i < 33; ++i)
+    foo(i, i);
+
+  return 0;
+}
diff --git a/gcc/testsuite/c-c++-common/asan/alloca_loop_unpoisoning.c b/gcc/testsuite/c-c++-common/asan/alloca_loop_unpoisoning.c
new file mode 100644
index 0000000..0ddadb9
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/alloca_loop_unpoisoning.c
@@ -0,0 +1,34 @@
+/* { dg-do run } */
+
+/* This testcase checks that allocas and VLAs inside a loop are correctly unpoisoned.  */
+
+#include <assert.h>
+#include <stdint.h>
+#include <stdlib.h>
+#include <stdio.h>
+#include "sanitizer/asan_interface.h"
+
+void *top, *bot;
+volatile int thirty_two = 32;
+
+__attribute__((noinline)) void foo(int len) {
+  char x;
+  top = &x;
+  volatile char array[len];
+  assert(!((uintptr_t) array & 31L));
+  alloca(len);
+  for (int i = 0; i < thirty_two; ++i) {
+    char array[i];
+    bot = array;
+    /* Just to prevent optimization.  */
+    printf("%p\n", bot);
+    assert(!((uintptr_t) bot & 31L));
+  }
+}
+
+int main(int argc, char **argv) {
+  foo(thirty_two);
+  void *q = __asan_region_is_poisoned(bot, (char *)top - (char *)bot);
+  assert(!q);
+  return 0;
+}
diff --git a/gcc/testsuite/c-c++-common/asan/alloca_overflow_partial.c b/gcc/testsuite/c-c++-common/asan/alloca_overflow_partial.c
new file mode 100644
index 0000000..9f4d078
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/alloca_overflow_partial.c
@@ -0,0 +1,22 @@
+/* { dg-do run } */
+/* { dg-shouldfail "asan" } */
+
+#include <assert.h>
+
+volatile const int ten = 10;
+
+__attribute__((noinline)) void foo(int index, int len) {
+  volatile char str[len] __attribute__((aligned(32)));
+  assert(!((long) str & 31L));
+  str[index] = '1'; // BOOM
+}
+
+int main(int argc, char **argv) {
+  foo(ten, ten);
+  return 0;
+}
+
+/* { dg-output "WRITE of size 1 at 0x\[0-9a-f\]+ thread T0\[^\n\r]*(\n|\r\n|\r)" } */
+/* { dg-output "    #0 0x\[0-9a-f\]+ +(in _*foo(\[^\n\r]*alloca_overflow_partial.c:11|\[^\n\r]*:0)|\[(\]).*(\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*Address 0x\[0-9a-f\]+ is located in stack of thread T0.*(\n|\r\n|\r)" */
+/* { dg-output "\[^\n\r]*in foo.*alloca_overflow_partial.c.*(\n|\r\n|\r)" */
diff --git a/gcc/testsuite/c-c++-common/asan/alloca_overflow_right.c b/gcc/testsuite/c-c++-common/asan/alloca_overflow_right.c
new file mode 100644
index 0000000..085fdae
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/alloca_overflow_right.c
@@ -0,0 +1,22 @@
+/* { dg-do run } */
+/* { dg-shouldfail "asan" } */
+
+#include <assert.h>
+
+volatile const int ten = 10;
+
+__attribute__((noinline)) void foo(int index, int len) {
+  volatile char str[len] __attribute__((aligned(32)));
+  assert(!((long) str & 31L));
+  str[index] = '1'; // BOOM
+}
+
+int main(int argc, char **argv) {
+  foo(33, ten);
+  return 0;
+}
+
+/* { dg-output "WRITE of size 1 at 0x\[0-9a-f\]+ thread T0\[^\n\r]*(\n|\r\n|\r)" } */
+/* { dg-output "    #0 0x\[0-9a-f\]+ +(in _*foo(\[^\n\r]*alloca_overflow_right.c:11|\[^\n\r]*:0)|\[(\]).*(\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*Address 0x\[0-9a-f\]+ is located in stack of thread T0.*(\n|\r\n|\r)" */
+/* { dg-output "\[^\n\r]*in foo.*alloca_overflow_right.c.*(\n|\r\n|\r)" */
diff --git a/gcc/testsuite/c-c++-common/asan/alloca_safe_access.c b/gcc/testsuite/c-c++-common/asan/alloca_safe_access.c
new file mode 100644
index 0000000..92da1b2
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/alloca_safe_access.c
@@ -0,0 +1,15 @@
+/* { dg-do run } */
+
+#include <assert.h>
+
+__attribute__((noinline)) void foo(int index, int len) {
+  volatile char str[len] __attribute__((aligned(32)));
+  assert(!((long)str & 31L));
+  str[index] = '1';
+}
+
+int main(int argc, char **argv) {
+  foo(4, 5);
+  foo(39, 40);
+  return 0;
+}
diff --git a/gcc/testsuite/c-c++-common/asan/alloca_underflow_left.c b/gcc/testsuite/c-c++-common/asan/alloca_underflow_left.c
new file mode 100644
index 0000000..fe2abe1
--- /dev/null
+++ b/gcc/testsuite/c-c++-common/asan/alloca_underflow_left.c
@@ -0,0 +1,22 @@
+/* { dg-do run } */
+/* { dg-shouldfail "asan" } */
+
+#include <assert.h>
+
+volatile const int ten = 10;
+
+__attribute__((noinline)) void foo(int index, int len) {
+  volatile char str[len] __attribute__((aligned(32)));
+  assert(!((long) str & 31L));
+  str[index] = '1'; // BOOM
+}
+
+int main(int argc, char **argv) {
+  foo(-1, ten);
+  return 0;
+}
+
+/* { dg-output "WRITE of size 1 at 0x\[0-9a-f\]+ thread T0\[^\n\r]*(\n|\r\n|\r)" } */
+/* { dg-output "    #0 0x\[0-9a-f\]+ +(in _*foo(\[^\n\r]*alloca_underflow_left.c:11|\[^\n\r]*:0)|\[(\]).*(\n|\r\n|\r)" } */
+/* { dg-output "\[^\n\r]*Address 0x\[0-9a-f\]+ is located in stack of thread T0.*(\n|\r\n|\r)" */
+/* { dg-output "\[^\n\r]*in foo.*alloca_underflow_left.c.*(\n|\r\n|\r)" */


* Re: [PATCH v2][ASAN] Implement dynamic allocas/VLAs sanitization.​
  2017-06-26 12:49 ` [PATCH v2][ASAN] Implement dynamic allocas/VLAs sanitization.​ Maxim Ostapenko
@ 2017-06-29 12:35   ` Jakub Jelinek
  2017-06-30 16:37     ` Maxim Ostapenko
  0 siblings, 1 reply; 3+ messages in thread
From: Jakub Jelinek @ 2017-06-29 12:35 UTC (permalink / raw)
  To: Maxim Ostapenko; +Cc: GCC Patches, Richard Biener, Yuri Gribov

Hi!

Sorry for the review delay.

On Mon, Jun 26, 2017 at 03:49:23PM +0300, Maxim Ostapenko wrote:
>	(handle_builtin_stackrestore): Likewise.

The function is called with _ between stack and restore.

> 	* match.pd: Add new pattern.

Unless the patch relies on this, I think it should be posted separately
and reviewed by Richard.

> @@ -245,6 +246,7 @@ along with GCC; see the file COPYING3.  If not see
>  static unsigned HOST_WIDE_INT asan_shadow_offset_value;
>  static bool asan_shadow_offset_computed;
>  static vec<char *> sanitized_sections;
> +static tree last_alloca_addr = NULL_TREE;

You are shadowing this variable in multiple places.  Either rename it to
something different, or rename the results of get_last_alloca_addr.
And the " = NULL_TREE" part is not needed.

>  
>  /* Set of variable declarations that are going to be guarded by
>     use-after-scope sanitizer.  */
> @@ -529,11 +531,183 @@ get_mem_ref_of_assignment (const gassign *assignment,
>    return true;
>  }
>  
> +/* Return address of last allocated dynamic alloca.  */
> +
> +static tree
> +get_last_alloca_addr ()
> +{
> +  if (last_alloca_addr)
> +    return last_alloca_addr;
> +
> +  gimple_seq seq = NULL;
> +  gassign *g;
> +
> +  last_alloca_addr = create_tmp_reg (ptr_type_node, "last_alloca_addr");
> +  g = gimple_build_assign (last_alloca_addr, NOP_EXPR,
> +			   build_int_cst (ptr_type_node, 0));

Instead of build_int_cst (ptr_type_node, 0) you should use
null_pointer_node.  And the NOP_EXPR there is just wrong, either it
should be gimple_build_assign (last_alloca_addr, null_pointer_node);
or gimple_build_assign (last_alloca_addr, INTEGER_CST, null_pointer_node);

> +  gimple_seq_add_stmt_without_update (&seq, g);

Why the seq stuff at all?  You have a single stmt you want to insert on
edge.

> +
> +  edge e = single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun));
> +  gsi_insert_seq_on_edge_immediate (e, seq);

So just use here
  gsi_insert_on_edge_immediate (e, g);
instead.

> +  return last_alloca_addr;
> +}
> +
> +/* Insert __asan_allocas_unpoison (top, bottom) call after
> +   __builtin_stack_restore (new_sp) call.
> +   The pseudocode of this routine should look like this:
> +     __builtin_stack_restore (new_sp);
> +     top = last_alloca_addr;
> +     bot = virtual_dynamic_stack_rtx;
> +     __asan_allocas_unpoison (top, bottom);
> +     last_alloca_addr = new_sp;

The comment doesn't seem to agree with what you actually implement.
There is no virtual_dynamic_stack_rtx during the asan pass, it is there
only during expansion until the virtual regs are instantiated in the next
pass.  Furthermore, you have bot variable, but then use bottom.

> +  tree last_alloca_addr = get_last_alloca_addr ();

Here is the shadowing I talked about.

> +  tree restored_stack = gimple_call_arg (call, 0);
> +  tree fn = builtin_decl_implicit (BUILT_IN_ASAN_ALLOCAS_UNPOISON);
> +  gimple *g = gimple_build_call (fn, 2, last_alloca_addr, restored_stack);

Here you clearly use the first argument of __builtin_stack_restore, which
is that new_sp.

> +  gimple_seq_add_stmt_without_update (&seq, g);

Why the messing up with sequences?  Just insert the stmt immediately in,
and the others as well.

> +  g = gimple_build_assign (last_alloca_addr, NOP_EXPR, restored_stack);

This is again wrong, here you really don't know what restored_stack is,
it could be SSA_NAME, but also something different, so you should use
gimple_build_assign (last_alloca_addr, restored_stack);
and let it figure out the rhs code.

> +  /* Extract lower bits from old_size.  */
> +  wide_int size_nonzero_bits = get_nonzero_bits (old_size);
> +  wide_int rz_mask
> +    = wi::uhwi (redzone_mask, wi::get_precision (size_nonzero_bits));
> +  wide_int old_size_lower_bits = wi::bit_and (size_nonzero_bits, rz_mask);
> +
> +  /* If alloca size is aligned to ASAN_RED_ZONE_SIZE, we don't need partial
> +     redzone.  Otherwise, compute its size here.  */
> +  if (wi::ne_p (old_size_lower_bits, 0))
> +    {
> +      /* misalign = size & (ASAN_RED_ZONE_SIZE - 1)
> +         partial_size = ASAN_RED_ZONE_SIZE - misalign.  */
> +      g = gimple_build_assign (make_ssa_name (size_type_node, NULL),
> +			       BIT_AND_EXPR, old_size, alloca_rz_mask);
> +      gimple_seq_add_stmt_without_update (&seq, g);
> +      tree misalign = gimple_assign_lhs (g);
> +      g = gimple_build_assign (make_ssa_name (size_type_node, NULL), MINUS_EXPR,
> +			       redzone_size, misalign);
> +      gimple_seq_add_stmt_without_update (&seq, g);

Again, why add the stmts into a seq first instead of just adding it
immediately into the IL?
> @@ -4962,6 +4962,20 @@ expand_builtin_alloca (tree exp)
>    return result;
>  }
>  

Missing function comment here.

> +static rtx
> +expand_asan_emit_allocas_unpoison (tree exp)
> +{
> +  tree arg0 = CALL_EXPR_ARG (exp, 0);
> +  rtx top = expand_expr (arg0, NULL_RTX, GET_MODE (virtual_stack_dynamic_rtx),
> +			 EXPAND_NORMAL);
> +  rtx ret = init_one_libfunc ("__asan_allocas_unpoison");
> +  ret = emit_library_call_value (ret, NULL_RTX, LCT_NORMAL, ptr_mode, 2, top,
> +				 TYPE_MODE (pointer_sized_int_node),
> +				 virtual_stack_dynamic_rtx,
> +				 TYPE_MODE (pointer_sized_int_node));

I see you are here pretty much ignoring the old second argument and instead
using a different one.  But then the above mentioned comment should explain
that, how you transform it during the asan pass and how you later change it
during expansion.

> +    case BUILT_IN_ASAN_ALLOCAS_UNPOISON:
> +      target = expand_asan_emit_allocas_unpoison (exp);
> +      if (target)
> +	return target;

Do you need this test, when it always returns non-NULL?
Just return expand_asan_emit_allocas_unpoison (exp);

	Jakub


* Re: [PATCH v2][ASAN] Implement dynamic allocas/VLAs sanitization.​
  2017-06-29 12:35   ` Jakub Jelinek
@ 2017-06-30 16:37     ` Maxim Ostapenko
  0 siblings, 0 replies; 3+ messages in thread
From: Maxim Ostapenko @ 2017-06-30 16:37 UTC (permalink / raw)
  To: Jakub Jelinek; +Cc: GCC Patches, Richard Biener, Yuri Gribov

Hi,

On 29/06/17 15:35, Jakub Jelinek wrote:
> Hi!
>
> Sorry for the review delay.
>
> On Mon, Jun 26, 2017 at 03:49:23PM +0300, Maxim Ostapenko wrote:
>> 	(handle_builtin_stackrestore): Likewise.
> The function is called with _ between stack and restore.
>
>> 	* match.pd: Add new pattern.
> Unless the patch relies on this, I think it should be posted separately
> and reviewed by Richard.

OK, good point, will remove from this patch.

>
>> @@ -245,6 +246,7 @@ along with GCC; see the file COPYING3.  If not see
>>   static unsigned HOST_WIDE_INT asan_shadow_offset_value;
>>   static bool asan_shadow_offset_computed;
>>   static vec<char *> sanitized_sections;
>> +static tree last_alloca_addr = NULL_TREE;
> You are shadowing this variable in multiple places.  Either rename it to
> something different, or rename the results of get_last_alloca_addr.
> And the " = NULL_TREE" part is not needed.

Err, thanks, will fix it.

>
>>   
>>   /* Set of variable declarations that are going to be guarded by
>>      use-after-scope sanitizer.  */
>> @@ -529,11 +531,183 @@ get_mem_ref_of_assignment (const gassign *assignment,
>>     return true;
>>   }
>>   
>> +/* Return address of last allocated dynamic alloca.  */
>> +
>> +static tree
>> +get_last_alloca_addr ()
>> +{
>> +  if (last_alloca_addr)
>> +    return last_alloca_addr;
>> +
>> +  gimple_seq seq = NULL;
>> +  gassign *g;
>> +
>> +  last_alloca_addr = create_tmp_reg (ptr_type_node, "last_alloca_addr");
>> +  g = gimple_build_assign (last_alloca_addr, NOP_EXPR,
>> +			   build_int_cst (ptr_type_node, 0));
> Instead of build_int_cst (ptr_type_node, 0) you should use
> null_pointer_node.  And the NOP_EXPR there is just wrong, either it
> should be gimple_build_assign (last_alloca_addr, null_pointer_node);
> or gimple_build_assign (last_alloca_addr, INTEGER_CST, null_pointer_node);
>
>> +  gimple_seq_add_stmt_without_update (&seq, g);
> Why the seq stuff at all?  You have a single stmt you want to insert on
> edge.

Right, will fix it.

>> +
>> +  edge e = single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (cfun));
>> +  gsi_insert_seq_on_edge_immediate (e, seq);
> So just use here
>    gsi_insert_on_edge_immediate (e, g);
> instead.
>
>> +  return last_alloca_addr;
>> +}
>> +
>> +/* Insert __asan_allocas_unpoison (top, bottom) call after
>> +   __builtin_stack_restore (new_sp) call.
>> +   The pseudocode of this routine should look like this:
>> +     __builtin_stack_restore (new_sp);
>> +     top = last_alloca_addr;
>> +     bot = virtual_dynamic_stack_rtx;
>> +     __asan_allocas_unpoison (top, bottom);
>> +     last_alloca_addr = new_sp;
> The comment doesn't seem to agree with what you actually implement.
> There is no virtual_dynamic_stack_rtx during the asan pass, it is there
> only during expansion until the virtual regs are instantiated in the next
> pass.  Furthermore, you have bot variable, but then use bottom.

Right, 'bottom' should be replaced by 'bot'.
Regarding virtual_stack_dynamic_rtx - as you correctly noted below, the 
second parameter of __asan_allocas_unpoison will be completely rewritten 
in expand_asan_emit_allocas_unpoison with virtual_stack_dynamic_rtx. Here 
I've just passed new_sp as a dummy argument. This looks hacky, but the 
problem is that for several architectures (e.g. PPC) we cannot use 
new_sp as a 'bot' parameter because new_sp != last_alloca_addr on these 
targets. Originally, I tried to do something like this:

   top = last_alloca_addr;
   bot = new_sp + STACK_DYNAMIC_OFFSET;
   __asan_allocas_unpoison(top, bot);
   last_alloca_addr = bot;

but I was confused by the fact that STACK_DYNAMIC_OFFSET becomes 
available only after expansion to RTL. Rewriting 'bot' with 
virtual_stack_dynamic_rtx in RTL looks like the most feasible way 
to overcome this issue for me.
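
To illustrate the two stages (just a sketch, the SSA names are invented):

   /* GIMPLE after the asan pass -- the second argument is only a
      placeholder:  */
   __builtin_stack_restore (new_sp_5);
   __builtin___asan_allocas_unpoison (last_alloca_addr, new_sp_5);
   last_alloca_addr = new_sp_5;

At expansion time expand_asan_emit_allocas_unpoison ignores that
placeholder and emits the libcall with virtual_stack_dynamic_rtx as the
'bot' argument instead.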

>
>> +  tree last_alloca_addr = get_last_alloca_addr ();
> Here is the shadowing I talked about.
>
>> +  tree restored_stack = gimple_call_arg (call, 0);
>> +  tree fn = builtin_decl_implicit (BUILT_IN_ASAN_ALLOCAS_UNPOISON);
>> +  gimple *g = gimple_build_call (fn, 2, last_alloca_addr, restored_stack);
> Here you clearly use the first argument of __builtin_stack_restore, which
> is that new_sp.
>
>> +  gimple_seq_add_stmt_without_update (&seq, g);
> Why the messing up with sequences?  Just insert the stmt immediately in,
> and the others as well.
>
>> +  g = gimple_build_assign (last_alloca_addr, NOP_EXPR, restored_stack);
> This is again wrong, here you really don't know what restored_stack is,
> it could be SSA_NAME, but also something different, so you should use
> gimple_build_assign (last_alloca_addr, restored_stack);
> and let it figure out the rhs code.

Thanks, will fix.

>
>> +  /* Extract lower bits from old_size.  */
>> +  wide_int size_nonzero_bits = get_nonzero_bits (old_size);
>> +  wide_int rz_mask
>> +    = wi::uhwi (redzone_mask, wi::get_precision (size_nonzero_bits));
>> +  wide_int old_size_lower_bits = wi::bit_and (size_nonzero_bits, rz_mask);
>> +
>> +  /* If alloca size is aligned to ASAN_RED_ZONE_SIZE, we don't need partial
>> +     redzone.  Otherwise, compute its size here.  */
>> +  if (wi::ne_p (old_size_lower_bits, 0))
>> +    {
>> +      /* misalign = size & (ASAN_RED_ZONE_SIZE - 1)
>> +         partial_size = ASAN_RED_ZONE_SIZE - misalign.  */
>> +      g = gimple_build_assign (make_ssa_name (size_type_node, NULL),
>> +			       BIT_AND_EXPR, old_size, alloca_rz_mask);
>> +      gimple_seq_add_stmt_without_update (&seq, g);
>> +      tree misalign = gimple_assign_lhs (g);
>> +      g = gimple_build_assign (make_ssa_name (size_type_node, NULL), MINUS_EXPR,
>> +			       redzone_size, misalign);
>> +      gimple_seq_add_stmt_without_update (&seq, g);
> Again, why add the stmts into a seq first instead of just adding it
> immediately into the IL?
>> @@ -4962,6 +4962,20 @@ expand_builtin_alloca (tree exp)
>>     return result;
>>   }
>>   
> Missing function comment here.
>
>> +static rtx
>> +expand_asan_emit_allocas_unpoison (tree exp)
>> +{
>> +  tree arg0 = CALL_EXPR_ARG (exp, 0);
>> +  rtx top = expand_expr (arg0, NULL_RTX, GET_MODE (virtual_stack_dynamic_rtx),
>> +			 EXPAND_NORMAL);
>> +  rtx ret = init_one_libfunc ("__asan_allocas_unpoison");
>> +  ret = emit_library_call_value (ret, NULL_RTX, LCT_NORMAL, ptr_mode, 2, top,
>> +				 TYPE_MODE (pointer_sized_int_node),
>> +				 virtual_stack_dynamic_rtx,
>> +				 TYPE_MODE (pointer_sized_int_node));
> I see you are here pretty much ignoring the old second argument and instead
> using a different one.  But then the above mentioned comment should explain
> that, how you transform it during the asan pass and how you later change it
> during expansion.

Yeah, I'll add a comment about motivation.

>
>> +    case BUILT_IN_ASAN_ALLOCAS_UNPOISON:
>> +      target = expand_asan_emit_allocas_unpoison (exp);
>> +      if (target)
>> +	return target;
> Do you need this test, when it always returns non-NULL?
> Just return expand_asan_emit_allocas_unpoison (exp);
>
> 	Jakub
>
>
>

-Maxim


