Hi,

I'm sorry for the long delay. Here is an updated patch.

Following Jakub's suggestion from the previous review, I've added
get_nonzero_bits handling to handle_builtin_alloca in order to avoid the
redundant redzone size calculation when we know that the alloca size has
alignment >= ASAN_RED_ZONE_SIZE (a rough sketch of the idea is appended
below). Thus, for the following code:

struct __attribute__((aligned (N))) S { char s[N]; };

void bar (struct S *, struct S *);

void
foo (int x)
{
  struct S a;
  {
    struct S b[x];
    bar (&a, &b[0]);
  }
  {
    struct S b[x + 4];
    bar (&a, &b[0]);
  }
}

void
baz (int x)
{
  struct S a;
  struct S b[x];
  bar (&a, &b[0]);
}

compiled with -O2 -fsanitize=address -DN=64, we now get the expected

  _2 = (sizetype) x_1(D);
  _8 = _2 * 64;
  _14 = _8 + 96;
  _15 = __builtin_alloca_with_align (_14, 512);
  _16 = _15 + 64;
  __builtin___asan_alloca_poison (_16, _8);

instead of the previous

  _1 = (sizetype) x_4(D);
  _2 = _1 * 64;
  _24 = _2 & 31;
  _19 = _2 + 128;
  _27 = _19 - _24;
  _28 = __builtin_alloca_with_align (_27, 512);
  _29 = _28 + 64;
  __builtin___asan_alloca_poison (_29, _2);

Also, I've added a simple pattern folding X & C to 0 when the
corresponding bits of X are known to be zero (second sketch below),
but I'm not sure this pattern has much practical value.

Tested and bootstrapped on x86_64-unknown-linux-gnu. Could you take a
look?

-Maxim
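
For reference, the handle_builtin_alloca change amounts to roughly the
following sketch. The helper name and guards are hypothetical, not the
exact hunk from the patch, and the usual GCC internal headers (tree.h,
tree-ssanames.h, wide-int.h, asan.h) are assumed:

/* Hypothetical helper, for illustration only: return true if the
   dynamic alloca size OLD_SIZE is known to be a multiple of
   ASAN_RED_ZONE_SIZE (32 bytes), so the explicit round-up (the
   "_24 = _2 & 31; ... _27 = _19 - _24;" sequence in the old dump
   above) can be omitted and a constant redzone size used instead.  */

static bool
alloca_size_redzone_aligned_p (tree old_size)
{
  if (TREE_CODE (old_size) != SSA_NAME
      || !INTEGRAL_TYPE_P (TREE_TYPE (old_size)))
    return false;

  /* get_nonzero_bits returns a mask of the bits that may be nonzero;
     if none of the bits below ASAN_RED_ZONE_SIZE can be set, the
     size is already redzone-aligned.  */
  wide_int nonzero = get_nonzero_bits (old_size);
  return wi::bit_and (nonzero, ASAN_RED_ZONE_SIZE - 1) == 0;
}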
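
And the X & C -> 0 fold boils down to a test along these lines (again a
hypothetical helper just to illustrate the condition, not the pattern as
written in the patch):

/* Hypothetical helper, for illustration only: return true if X & C is
   known to fold to zero, i.e. no bit kept by the constant mask C can
   ever be set in X according to get_nonzero_bits.  */

static bool
bit_and_known_zero_p (tree x, tree c)
{
  if (TREE_CODE (x) != SSA_NAME
      || !INTEGRAL_TYPE_P (TREE_TYPE (x))
      || TREE_CODE (c) != INTEGER_CST)
    return false;

  /* If the possibly-nonzero bits of X and the set bits of C are
     disjoint, X & C must be zero.  */
  return wi::bit_and (get_nonzero_bits (x), wi::to_wide (c)) == 0;
}

For example, this condition would fold the "_24 = _2 & 31" from the old
dump above, since _2 is a multiple of 64 and so its low five bits are
known to be zero.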