* [patch] Disable store merging in asan_expand_mark_ifn
@ 2019-02-11 10:03 Eric Botcazou
2019-02-11 10:13 ` Jakub Jelinek
0 siblings, 1 reply; 9+ messages in thread
From: Eric Botcazou @ 2019-02-11 10:03 UTC (permalink / raw)
To: gcc-patches
[-- Attachment #1: Type: text/plain, Size: 1046 bytes --]
Hi,
asan_expand_mark_ifn does manual store merging but doesn't take into account
the alignment, so this can break on strict-alignment platforms.
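To illustrate, the merging loop at issue is small enough to model directly. Below is a stand-alone sketch (the helper name and framing are ours, not GCC's) of the chunking logic in asan_expand_mark_ifn together with the proposed strict-alignment fallback: it records the store sizes the loop would emit for a given shadow size.

```c
#include <assert.h>

/* Sketch of the store-merging loop in asan_expand_mark_ifn: walk
   shadow_size shadow bytes, preferring 4-byte, then 2-byte, then
   1-byte stores.  With the patch, strict-alignment targets fall back
   to 1-byte stores throughout.  Returns the number of stores emitted
   and records each store's size in SIZES.  */
static unsigned
chunk_sizes (unsigned shadow_size, unsigned sizes[], int strict_alignment)
{
  unsigned n = 0;
  for (unsigned offset = 0; offset < shadow_size; )
    {
      unsigned size = 1;
      if (!strict_alignment)
        {
          if (shadow_size - offset >= 4)
            size = 4;
          else if (shadow_size - offset >= 2)
            size = 2;
        }
      sizes[n++] = size;
      offset += size;
    }
  return n;
}
```

For shadow_size == 7 the unrestricted loop emits a 4-, a 2- and a 1-byte store; with the patch a strict-alignment target emits seven 1-byte stores.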
Tested on SPARC/Solaris 11, where this fixes this regression:
FAIL: gcc.dg/asan/use-after-scope-5.c -O0 output pattern test
FAIL: gcc.dg/asan/use-after-scope-5.c -O1 output pattern test
FAIL: gcc.dg/asan/use-after-scope-5.c -O2 output pattern test
FAIL: gcc.dg/asan/use-after-scope-5.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions output pattern test
FAIL: gcc.dg/asan/use-after-scope-5.c -O3 -g output pattern test
FAIL: gcc.dg/asan/use-after-scope-5.c -Os output pattern test
FAIL: gcc.dg/asan/use-after-scope-5.c -O2 -flto -flto-partition=none output pattern test
FAIL: gcc.dg/asan/use-after-scope-5.c -O2 -flto output pattern test
OK for mainline?
2019-02-11 Eric Botcazou <ebotcazou@adacore.com>
* asan.c (asan_expand_mark_ifn): Always use a size of 1 byte for the
stores on strict-alignment platforms.
--
Eric Botcazou
[-- Attachment #2: p.diff --]
[-- Type: text/x-patch, Size: 682 bytes --]
Index: asan.c
===================================================================
--- asan.c (revision 268508)
+++ asan.c (working copy)
@@ -3226,10 +3226,13 @@ asan_expand_mark_ifn (gimple_stmt_iterat
for (unsigned HOST_WIDE_INT offset = 0; offset < shadow_size;)
{
unsigned size = 1;
- if (shadow_size - offset >= 4)
- size = 4;
- else if (shadow_size - offset >= 2)
- size = 2;
+ if (!STRICT_ALIGNMENT)
+ {
+ if (shadow_size - offset >= 4)
+ size = 4;
+ else if (shadow_size - offset >= 2)
+ size = 2;
+ }
unsigned HOST_WIDE_INT last_chunk_size = 0;
unsigned HOST_WIDE_INT s = (offset + size) * ASAN_SHADOW_GRANULARITY;
* Re: [patch] Disable store merging in asan_expand_mark_ifn
2019-02-11 10:03 [patch] Disable store merging in asan_expand_mark_ifn Eric Botcazou
@ 2019-02-11 10:13 ` Jakub Jelinek
2019-02-11 11:31 ` Eric Botcazou
0 siblings, 1 reply; 9+ messages in thread
From: Jakub Jelinek @ 2019-02-11 10:13 UTC (permalink / raw)
To: Eric Botcazou; +Cc: gcc-patches
On Mon, Feb 11, 2019 at 11:03:41AM +0100, Eric Botcazou wrote:
> asan_expand_mark_ifn does manual store merging but doesn't take into account
> the alignment, so this can break on strict-alignment platforms.
>
> Tested on SPARC/Solaris 11, where this fixes this regression:
>
> FAIL: gcc.dg/asan/use-after-scope-5.c -O0 output pattern test
> FAIL: gcc.dg/asan/use-after-scope-5.c -O1 output pattern test
> FAIL: gcc.dg/asan/use-after-scope-5.c -O2 output pattern test
> FAIL: gcc.dg/asan/use-after-scope-5.c -O3 -fomit-frame-pointer -funroll-loops -fpeel-loops -ftracer -finline-functions output pattern test
> FAIL: gcc.dg/asan/use-after-scope-5.c -O3 -g output pattern test
> FAIL: gcc.dg/asan/use-after-scope-5.c -Os output pattern test
> FAIL: gcc.dg/asan/use-after-scope-5.c -O2 -flto -flto-partition=none output pattern test
> FAIL: gcc.dg/asan/use-after-scope-5.c -O2 -flto output pattern test
>
> OK for mainline?
>
>
> 2019-02-11 Eric Botcazou <ebotcazou@adacore.com>
>
> * asan.c (asan_expand_mark_ifn): Always use a size of 1 byte for the
> stores on strict-alignment platforms.
So, wouldn't it be better to check, for STRICT_ALIGNMENT,
get_pointer_alignment (base_addr) and do this only if that alignment
(shifted right by ASAN_SHADOW_SHIFT) is not sufficient, and e.g. if we
knew that the shadow is at least 2-byte aligned but not 4-byte aligned, use
size = 2 instead of always 1? E.g. compute this before the loop as
max_size, and for !STRICT_ALIGNMENT always use max_size 4?
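A minimal model of this suggestion (the helper name is invented for illustration; the eventually committed patch inlines the checks rather than precomputing a max_size):

```c
#include <assert.h>

#define ASAN_SHADOW_SHIFT 3  /* one shadow byte per 8-byte granule */

/* Derive the widest safe shadow store size from the object's known
   byte alignment.  On !STRICT_ALIGNMENT targets 4-byte stores are
   always fine; on strict-alignment targets the shadow's alignment is
   the object's alignment shifted right by ASAN_SHADOW_SHIFT.  */
static unsigned
max_shadow_store_size (unsigned ptr_align_bytes, int strict_alignment)
{
  if (!strict_alignment)
    return 4;
  unsigned shadow_align = ptr_align_bytes >> ASAN_SHADOW_SHIFT;
  if (shadow_align >= 4)
    return 4;
  if (shadow_align >= 2)
    return 2;
  return 1;
}
```

So a 32-byte-aligned object permits 4-byte shadow stores even on a strict-alignment target, a 16-byte-aligned one permits 2-byte stores, and an 8-byte-aligned one only byte stores.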
> Index: asan.c
> ===================================================================
> --- asan.c (revision 268508)
> +++ asan.c (working copy)
> @@ -3226,10 +3226,13 @@ asan_expand_mark_ifn (gimple_stmt_iterat
> for (unsigned HOST_WIDE_INT offset = 0; offset < shadow_size;)
> {
> unsigned size = 1;
> - if (shadow_size - offset >= 4)
> - size = 4;
> - else if (shadow_size - offset >= 2)
> - size = 2;
> + if (!STRICT_ALIGNMENT)
> + {
> + if (shadow_size - offset >= 4)
> + size = 4;
> + else if (shadow_size - offset >= 2)
> + size = 2;
> + }
>
> unsigned HOST_WIDE_INT last_chunk_size = 0;
> unsigned HOST_WIDE_INT s = (offset + size) * ASAN_SHADOW_GRANULARITY;
Jakub
* Re: [patch] Disable store merging in asan_expand_mark_ifn
2019-02-11 10:13 ` Jakub Jelinek
@ 2019-02-11 11:31 ` Eric Botcazou
2019-02-11 11:57 ` Jakub Jelinek
0 siblings, 1 reply; 9+ messages in thread
From: Eric Botcazou @ 2019-02-11 11:31 UTC (permalink / raw)
To: Jakub Jelinek; +Cc: gcc-patches
> So, wouldn't it be better to check, for STRICT_ALIGNMENT,
> get_pointer_alignment (base_addr) and do this only if that alignment
> (shifted right by ASAN_SHADOW_SHIFT) is not sufficient, and e.g. if we
> knew that the shadow is at least 2-byte aligned but not 4-byte aligned, use
> size = 2 instead of always 1? E.g. compute this before the loop as
> max_size, and for !STRICT_ALIGNMENT always use max_size 4?
In practice this makes a difference only for objects aligned on 128-bit or
above boundaries though. Moreover, don't you need to take into account the
offset as well, which can be modified through -fasan-shadow-offset?
--
Eric Botcazou
* Re: [patch] Disable store merging in asan_expand_mark_ifn
2019-02-11 11:31 ` Eric Botcazou
@ 2019-02-11 11:57 ` Jakub Jelinek
2019-02-11 12:31 ` Eric Botcazou
0 siblings, 1 reply; 9+ messages in thread
From: Jakub Jelinek @ 2019-02-11 11:57 UTC (permalink / raw)
To: Eric Botcazou; +Cc: gcc-patches
On Mon, Feb 11, 2019 at 12:31:43PM +0100, Eric Botcazou wrote:
> > So, wouldn't it be better to check, for STRICT_ALIGNMENT,
> > get_pointer_alignment (base_addr) and do this only if that alignment
> > (shifted right by ASAN_SHADOW_SHIFT) is not sufficient, and e.g. if we
> > knew that the shadow is at least 2-byte aligned but not 4-byte aligned, use
> > size = 2 instead of always 1? E.g. compute this before the loop as
> > max_size, and for !STRICT_ALIGNMENT always use max_size 4?
>
> In practice this makes a difference only for objects aligned on 128-bit or
No. 64-bit aligned offsets too. If you know 64-bit alignment of base_addr,
you can use size 2 stores (though not size 4 stores) on the
!STRICT_ALIGNMENT targets. And that is something still pretty common.
> above boundaries though. Moreover, don't you need to take into account the
> offset as well, which can be modified through -fasan-shadow-offset?
No. If people use a bogus shadow offset, prologues/epilogues will not work
either on strict-alignment targets, and -fasan-shadow-offset is for
-fsanitize=kernel-address only. Only page-aligned offsets are
supported/reasonable.
Jakub
* Re: [patch] Disable store merging in asan_expand_mark_ifn
2019-02-11 11:57 ` Jakub Jelinek
@ 2019-02-11 12:31 ` Eric Botcazou
2019-02-11 12:59 ` Jakub Jelinek
0 siblings, 1 reply; 9+ messages in thread
From: Eric Botcazou @ 2019-02-11 12:31 UTC (permalink / raw)
To: Jakub Jelinek; +Cc: gcc-patches
> No. 64-bit aligned offsets too. If you know 64-bit alignment of base_addr,
> you can use size 2 stores (though not size 4 stores) on the
> !STRICT_ALIGNMENT targets. And that is something still pretty common.
But we're talking about STRICT_ALIGNMENT targets here: an array of 2 doubles
at address 0x10000008 will have a shadow address of 0x20000001 modulo the
offset so you cannot use size 2. Moreover, the store merging pass should be
able to merge the stores so I don't really understand why this matters at all.
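The alignment arithmetic here can be checked mechanically. With ASAN_SHADOW_SHIFT == 3, and leaving out the shadow offset (page-aligned offsets do not change the low bits), the shadow byte address of an object that is 8-byte- but not 16-byte-aligned is odd, so even 2-byte shadow stores would be misaligned. A sketch (the helper name is ours):

```c
#include <assert.h>
#include <stdint.h>

/* Shadow byte address of an application address, shadow offset
   omitted: page-aligned shadow offsets do not affect the low bits,
   hence not the achievable store alignment.  */
static uint64_t
shadow_byte (uint64_t addr)
{
  return addr >> 3;   /* ASAN_SHADOW_SHIFT */
}
```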
--
Eric Botcazou
* Re: [patch] Disable store merging in asan_expand_mark_ifn
2019-02-11 12:31 ` Eric Botcazou
@ 2019-02-11 12:59 ` Jakub Jelinek
2019-02-12 11:43 ` Eric Botcazou
0 siblings, 1 reply; 9+ messages in thread
From: Jakub Jelinek @ 2019-02-11 12:59 UTC (permalink / raw)
To: Eric Botcazou; +Cc: gcc-patches
On Mon, Feb 11, 2019 at 01:31:24PM +0100, Eric Botcazou wrote:
> > No. 64-bit aligned offsets too. If you know 64-bit alignment of base_addr,
> > you can use size 2 stores (though not size 4 stores) on the
> > !STRICT_ALIGNMENT targets. And that is something still pretty common.
>
> But we're talking about STRICT_ALIGNMENT targets here: an array of 2 doubles
> at address 0x10000008 will have a shadow address of 0x20000001 modulo the
Ok, I stand corrected on that: 128-bit indeed, but even that is not really
used.
> offset so you cannot use size 2. Moreover, the store merging pass should be
> able to merge the stores so I don't really understand why this matters at all.
For STRICT_ALIGNMENT targets the store merging pass obviously can't do anything
with those, because unlike asan.c it can't figure out the alignment.
Jakub
* Re: [patch] Disable store merging in asan_expand_mark_ifn
2019-02-11 12:59 ` Jakub Jelinek
@ 2019-02-12 11:43 ` Eric Botcazou
2019-02-12 11:46 ` Jakub Jelinek
0 siblings, 1 reply; 9+ messages in thread
From: Eric Botcazou @ 2019-02-12 11:43 UTC (permalink / raw)
To: Jakub Jelinek; +Cc: gcc-patches
[-- Attachment #1: Type: text/plain, Size: 813 bytes --]
> Ok, I stand corrected on that: 128-bit indeed, but even that is not really
> used.
The irony is that I'm doing this for 32-bit SPARC (we cannot get ASAN to work
in 64-bit mode for the time being) and the maximum alignment on 32-bit SPARC
is 64-bit (even long doubles) so this will be totally unused. ;-)
> For STRICT_ALIGNMENT targets store merging pass obviously can't do anything
> with those, because unlike asan.c it can't figure out the alignment.
OK, revised patch attached. I have manually verified that it yields the
expected result for an array of long doubles on 64-bit SPARC.
2019-02-12 Eric Botcazou <ebotcazou@adacore.com>
* asan.c (asan_expand_mark_ifn): Take into account the alignment of
the object to pick the size of stores on strict-alignment platforms.
--
Eric Botcazou
[-- Attachment #2: p.diff --]
[-- Type: text/x-patch, Size: 1226 bytes --]
Index: asan.c
===================================================================
--- asan.c (revision 268508)
+++ asan.c (working copy)
@@ -3218,7 +3218,10 @@ asan_expand_mark_ifn (gimple_stmt_iterat
/* Generate direct emission if size_in_bytes is small. */
if (size_in_bytes <= ASAN_PARAM_USE_AFTER_SCOPE_DIRECT_EMISSION_THRESHOLD)
{
- unsigned HOST_WIDE_INT shadow_size = shadow_mem_size (size_in_bytes);
+ const unsigned HOST_WIDE_INT shadow_size
+ = shadow_mem_size (size_in_bytes);
+ const unsigned int shadow_align
+ = (get_pointer_alignment (base) / BITS_PER_UNIT) >> ASAN_SHADOW_SHIFT;
tree shadow = build_shadow_mem_access (iter, loc, base_addr,
shadow_ptr_types[0], true);
@@ -3226,9 +3229,11 @@ asan_expand_mark_ifn (gimple_stmt_iterat
for (unsigned HOST_WIDE_INT offset = 0; offset < shadow_size;)
{
unsigned size = 1;
- if (shadow_size - offset >= 4)
+ if (shadow_size - offset >= 4
+ && (!STRICT_ALIGNMENT || shadow_align >= 4))
size = 4;
- else if (shadow_size - offset >= 2)
+ else if (shadow_size - offset >= 2
+ && (!STRICT_ALIGNMENT || shadow_align >= 2))
size = 2;
unsigned HOST_WIDE_INT last_chunk_size = 0;
* Re: [patch] Disable store merging in asan_expand_mark_ifn
2019-02-12 11:43 ` Eric Botcazou
@ 2019-02-12 11:46 ` Jakub Jelinek
2019-02-15 21:40 ` Eric Botcazou
0 siblings, 1 reply; 9+ messages in thread
From: Jakub Jelinek @ 2019-02-12 11:46 UTC (permalink / raw)
To: Eric Botcazou; +Cc: gcc-patches
On Tue, Feb 12, 2019 at 12:41:47PM +0100, Eric Botcazou wrote:
> > Ok, I stand corrected on that: 128-bit indeed, but even that is not really
> > used.
>
> The irony is that I'm doing this for 32-bit SPARC (we cannot get ASAN to work
> in 64-bit mode for the time being) and the maximum alignment on 32-bit SPARC
> is 64-bit (even long doubles) so this will be totally unused. ;-)
>
> > For STRICT_ALIGNMENT targets store merging pass obviously can't do anything
> > with those, because unlike asan.c it can't figure out the alignment.
>
> OK, revised patch attached. I have manually verified that it yields the
> expected result for an array of long doubles on 64-bit SPARC.
>
>
> 2019-02-12 Eric Botcazou <ebotcazou@adacore.com>
>
> * asan.c (asan_expand_mark_ifn): Take into account the alignment of
> the object to pick the size of stores on strict-alignment platforms.
Ok, thanks.
> Index: asan.c
> ===================================================================
> --- asan.c (revision 268508)
> +++ asan.c (working copy)
> @@ -3218,7 +3218,10 @@ asan_expand_mark_ifn (gimple_stmt_iterat
> /* Generate direct emission if size_in_bytes is small. */
> if (size_in_bytes <= ASAN_PARAM_USE_AFTER_SCOPE_DIRECT_EMISSION_THRESHOLD)
> {
> - unsigned HOST_WIDE_INT shadow_size = shadow_mem_size (size_in_bytes);
> + const unsigned HOST_WIDE_INT shadow_size
> + = shadow_mem_size (size_in_bytes);
> + const unsigned int shadow_align
> + = (get_pointer_alignment (base) / BITS_PER_UNIT) >> ASAN_SHADOW_SHIFT;
>
> tree shadow = build_shadow_mem_access (iter, loc, base_addr,
> shadow_ptr_types[0], true);
> @@ -3226,9 +3229,11 @@ asan_expand_mark_ifn (gimple_stmt_iterat
> for (unsigned HOST_WIDE_INT offset = 0; offset < shadow_size;)
> {
> unsigned size = 1;
> - if (shadow_size - offset >= 4)
> + if (shadow_size - offset >= 4
> + && (!STRICT_ALIGNMENT || shadow_align >= 4))
> size = 4;
> - else if (shadow_size - offset >= 2)
> + else if (shadow_size - offset >= 2
> + && (!STRICT_ALIGNMENT || shadow_align >= 2))
> size = 2;
>
> unsigned HOST_WIDE_INT last_chunk_size = 0;
Jakub
* Re: [patch] Disable store merging in asan_expand_mark_ifn
2019-02-12 11:46 ` Jakub Jelinek
@ 2019-02-15 21:40 ` Eric Botcazou
0 siblings, 0 replies; 9+ messages in thread
From: Eric Botcazou @ 2019-02-15 21:40 UTC (permalink / raw)
To: Jakub Jelinek; +Cc: gcc-patches
[-- Attachment #1: Type: text/plain, Size: 1142 bytes --]
> > OK, revised patch attached. I have manually verified that it yields the
> > expected result for an array of long doubles on 64-bit SPARC.
> >
> >
> > 2019-02-12 Eric Botcazou <ebotcazou@adacore.com>
> >
> > * asan.c (asan_expand_mark_ifn): Take into account the alignment of
> > the object to pick the size of stores on strict-alignment platforms.
>
> Ok, thanks.
Glad you insisted in the end, because I have ASAN working on SPARC64/Linux,
but only after fixing another bug on 64-bit strict-alignment platforms:
/* Align base if target is STRICT_ALIGNMENT. */
if (STRICT_ALIGNMENT)
base = expand_binop (Pmode, and_optab, base,
gen_int_mode (-((GET_MODE_ALIGNMENT (SImode)
<< ASAN_SHADOW_SHIFT)
/ BITS_PER_UNIT), Pmode), NULL_RTX,
1, OPTAB_DIRECT);
GET_MODE_ALIGNMENT is unsigned int so this zero-extends to unsigned long...
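The effect of that widening can be demonstrated in isolation. In the sketch below (values hard-coded from the snippet above: GET_MODE_ALIGNMENT (SImode) == 32 bits, ASAN_SHADOW_SHIFT == 3, BITS_PER_UNIT == 8, so the intended mask is -32), negating in unsigned int and then widening to 64 bits zero-extends the mask, so ANDing it with a 64-bit base address clears the upper 32 bits:

```c
#include <assert.h>
#include <stdint.h>

/* Mask as the old code computed it: the negation happens in unsigned
   int, and the value is then zero-extended when widened to 64 bits.  */
static uint64_t
buggy_mask (void)
{
  unsigned int align = (32u << 3) / 8;   /* 32 bytes */
  return (uint64_t) -align;              /* 0x00000000FFFFFFE0 */
}

/* Mask computed in a full-width signed type, as in the fix.  */
static uint64_t
fixed_mask (void)
{
  int64_t align = (32 << 3) / 8;
  return (uint64_t) -align;              /* 0xFFFFFFFFFFFFFFE0 */
}
```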
Tested on 32-bit and 64-bit SPARC/Linux, applied on mainline as obvious.
2019-02-15 Eric Botcazou <ebotcazou@adacore.com>
* asan.c (asan_emit_stack_protection): Use full-sized mask to align
the base address on 64-bit strict-alignment platforms.
--
Eric Botcazou
[-- Attachment #2: p.diff --]
[-- Type: text/x-patch, Size: 1320 bytes --]
Index: asan.c
===================================================================
--- asan.c (revision 268849)
+++ asan.c (working copy)
@@ -1440,13 +1441,15 @@ asan_emit_stack_protection (rtx base, rt
base_align_bias = ((asan_frame_size + alignb - 1)
& ~(alignb - HOST_WIDE_INT_1)) - asan_frame_size;
}
+
/* Align base if target is STRICT_ALIGNMENT. */
if (STRICT_ALIGNMENT)
- base = expand_binop (Pmode, and_optab, base,
- gen_int_mode (-((GET_MODE_ALIGNMENT (SImode)
- << ASAN_SHADOW_SHIFT)
- / BITS_PER_UNIT), Pmode), NULL_RTX,
- 1, OPTAB_DIRECT);
+ {
+ const HOST_WIDE_INT align
+ = (GET_MODE_ALIGNMENT (SImode) / BITS_PER_UNIT) << ASAN_SHADOW_SHIFT;
+ base = expand_binop (Pmode, and_optab, base, gen_int_mode (-align, Pmode),
+ NULL_RTX, 1, OPTAB_DIRECT);
+ }
if (use_after_return_class == -1 && pbase)
emit_move_insn (pbase, base);
@@ -1534,7 +1548,7 @@ asan_emit_stack_protection (rtx base, rt
shadow_mem = gen_rtx_MEM (SImode, shadow_base);
set_mem_alias_set (shadow_mem, asan_shadow_set);
if (STRICT_ALIGNMENT)
- set_mem_align (shadow_mem, (GET_MODE_ALIGNMENT (SImode)));
+ set_mem_align (shadow_mem, GET_MODE_ALIGNMENT (SImode));
prev_offset = base_offset;
asan_redzone_buffer rz_buffer (shadow_mem, prev_offset);