* Fix regression on PR46590 (slow compile with -O0)
@ 2012-01-13 17:19 Michael Matz
2012-01-16 9:09 ` Richard Guenther
0 siblings, 1 reply; 11+ messages in thread
From: Michael Matz @ 2012-01-13 17:19 UTC (permalink / raw)
To: gcc-patches
Hi,
the stack-var conflict generation code needs 13 (out of 34) seconds, with
-O0 on the second testcase of PR46590. Most of the time is spent in
generating the same conflicts again and again at each basic block (the
time right now is O(nr-of-bbs * N^2) where the number of conflicts is
O(N^2)).
If we simply remember for which partitions we already generated the lower
triangular conflict matrix we don't have to do that again, lowering the
overall time to O(N^2).
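[Editor's sketch, not part of the original mail: the bookkeeping can be modeled with plain uint32_t bitmasks instead of GCC's sparse bitmaps; the helper names below are invented for illustration and this is not the patch itself.]

```c
#include <assert.h>
#include <stdint.h>

enum { NPART = 32 };	/* partitions are bits of a uint32_t */

/* Record a symmetric conflict between partitions I and J.  */
static void
add_conflict (uint32_t m[NPART], int i, int j)
{
  m[i] |= 1u << j;
  m[j] |= 1u << i;
}

/* Old scheme: emit every pair live in each block,
   O(nr-of-bbs * N^2) overall.  */
static void
conflicts_naive (const uint32_t *live, int nbbs, uint32_t m[NPART])
{
  for (int b = 0; b < nbbs; b++)
    for (int i = 0; i < NPART; i++)
      for (int j = i + 1; j < NPART; j++)
	if ((live[b] & (1u << i)) && (live[b] & (1u << j)))
	  add_conflict (m, i, j);
}

/* Patched scheme: OLD remembers the partitions whose mutual conflicts
   were generated already, so each block only pays for its new
   members, O(N^2) overall.  */
static void
conflicts_remembered (const uint32_t *live, int nbbs, uint32_t m[NPART])
{
  uint32_t old = 0;
  for (int b = 0; b < nbbs; b++)
    {
      uint32_t new_bits = live[b] & ~old;
      for (int i = 0; i < NPART; i++)
	if (new_bits & (1u << i))
	  {
	    /* New members against already-seen members...  */
	    for (int j = 0; j < NPART; j++)
	      if (old & (1u << j))
		add_conflict (m, i, j);
	    /* ...and new members among themselves.  */
	    for (int j = i + 1; j < NPART; j++)
	      if (new_bits & (1u << j))
		add_conflict (m, i, j);
	  }
      old |= live[b];
    }
}
```

On a single block with partitions {0,1,2} live, both variants produce the same three conflicts; as the rest of the thread shows, the remembered variant can over-approximate once per-block live sets differ.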
This patch does that. Time for variable expansion now is 0.4 seconds (of
overall 22).
Regstrapping in progress on x86_64-linux, okay for trunk?
Ciao,
Michael.
* cfgexpand.c (add_scope_conflicts_1): New old_conflicts argument,
use it in remembering which conflicts we already created.
(add_scope_conflicts): Adjust call to above, (de)allocate helper
bitmap.
Index: cfgexpand.c
===================================================================
*** cfgexpand.c (revision 183155)
--- cfgexpand.c (working copy)
*************** visit_conflict (gimple stmt ATTRIBUTE_UN
*** 441,451 ****
/* Helper routine for add_scope_conflicts, calculating the active partitions
at the end of BB, leaving the result in WORK. We're called to generate
! conflicts when FOR_CONFLICT is true, otherwise we're just tracking
! liveness. */
static void
! add_scope_conflicts_1 (basic_block bb, bitmap work, bool for_conflict)
{
edge e;
edge_iterator ei;
--- 441,452 ----
/* Helper routine for add_scope_conflicts, calculating the active partitions
at the end of BB, leaving the result in WORK. We're called to generate
! conflicts when OLD_CONFLICTS is non-null, otherwise we're just tracking
! liveness. If we generate conflicts then OLD_CONFLICTS stores the bits
! for which we generated conflicts already. */
static void
! add_scope_conflicts_1 (basic_block bb, bitmap work, bitmap old_conflicts)
{
edge e;
edge_iterator ei;
*************** add_scope_conflicts_1 (basic_block bb, b
*** 482,488 ****
}
else if (!is_gimple_debug (stmt))
{
! if (for_conflict
&& visit == visit_op)
{
/* If this is the first real instruction in this BB we need
--- 483,489 ----
}
else if (!is_gimple_debug (stmt))
{
! if (old_conflicts
&& visit == visit_op)
{
/* If this is the first real instruction in this BB we need
*************** add_scope_conflicts_1 (basic_block bb, b
*** 490,505 ****
Unlike classical liveness for named objects we can't
rely on seeing a def/use of the names we're interested in.
There might merely be indirect loads/stores. We'd not add any
! conflicts for such partitions. */
bitmap_iterator bi;
unsigned i;
! EXECUTE_IF_SET_IN_BITMAP (work, 0, i, bi)
{
unsigned j;
bitmap_iterator bj;
! EXECUTE_IF_SET_IN_BITMAP (work, i + 1, j, bj)
add_stack_var_conflict (i, j);
}
visit = visit_conflict;
}
walk_stmt_load_store_addr_ops (stmt, work, visit, visit, visit);
--- 491,521 ----
Unlike classical liveness for named objects we can't
rely on seeing a def/use of the names we're interested in.
There might merely be indirect loads/stores. We'd not add any
! conflicts for such partitions. We know that we generated
! conflicts between all partitions in old_conflicts already,
! so we need to generate only the new ones, avoiding to
! repeatedly pay the O(N^2) cost for each basic block. */
bitmap_iterator bi;
unsigned i;
!
! /* First the conflicts between new and old_conflicts members. */
! EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, 0, i, bi)
{
unsigned j;
bitmap_iterator bj;
! EXECUTE_IF_SET_IN_BITMAP (old_conflicts, 0, j, bj)
add_stack_var_conflict (i, j);
}
+ /* Then the conflicts between only the new members. */
+ EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, 0, i, bi)
+ {
+ unsigned j;
+ bitmap_iterator bj;
+ EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, 0, j, bj)
+ add_stack_var_conflict (i, j);
+ }
+ /* And remember for the next basic block. */
+ bitmap_ior_into (old_conflicts, work);
visit = visit_conflict;
}
walk_stmt_load_store_addr_ops (stmt, work, visit, visit, visit);
*************** add_scope_conflicts (void)
*** 516,521 ****
--- 532,538 ----
basic_block bb;
bool changed;
bitmap work = BITMAP_ALLOC (NULL);
+ bitmap old_conflicts;
/* We approximate the live range of a stack variable by taking the first
mention of its name as starting point(s), and by the end-of-scope
*************** add_scope_conflicts (void)
*** 537,551 ****
FOR_EACH_BB (bb)
{
bitmap active = (bitmap)bb->aux;
! add_scope_conflicts_1 (bb, work, false);
if (bitmap_ior_into (active, work))
changed = true;
}
}
FOR_EACH_BB (bb)
! add_scope_conflicts_1 (bb, work, true);
BITMAP_FREE (work);
FOR_ALL_BB (bb)
BITMAP_FREE (bb->aux);
--- 554,571 ----
FOR_EACH_BB (bb)
{
bitmap active = (bitmap)bb->aux;
! add_scope_conflicts_1 (bb, work, NULL);
if (bitmap_ior_into (active, work))
changed = true;
}
}
+ old_conflicts = BITMAP_ALLOC (NULL);
+
FOR_EACH_BB (bb)
! add_scope_conflicts_1 (bb, work, old_conflicts);
+ BITMAP_FREE (old_conflicts);
BITMAP_FREE (work);
FOR_ALL_BB (bb)
BITMAP_FREE (bb->aux);
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Fix regression on PR46590 (slow compile with -O0)
2012-01-13 17:19 Fix regression on PR46590 (slow compile with -O0) Michael Matz
@ 2012-01-16 9:09 ` Richard Guenther
2012-01-19 15:13 ` Michael Matz
From: Richard Guenther @ 2012-01-16 9:09 UTC (permalink / raw)
To: Michael Matz; +Cc: gcc-patches
On Fri, Jan 13, 2012 at 6:18 PM, Michael Matz <matz@suse.de> wrote:
> Hi,
>
> the stack-var conflict generation code needs 13 (out of 34) seconds, with
> -O0 on the second testcase of PR46590. Most of the time is spent in
> generating the same conflicts again and again at each basic block (the
> time right now is O(nr-of-bbs * N^2) where the number of conflicts is
> O(N^2)).
>
> If we simply remember for which partitions we already generated the lower
> triangular conflict matrix we don't have to do that again, lowering the
> overall time to O(N^2).
>
> This patch does that. Time for variable expansion now is 0.4 seconds (of
> overall 22).
>
> Regstrapping in progress on x86_64-linux, okay for trunk?
Ok.
Thanks,
Richard.
>
> Ciao,
> Michael.
>
> * cfgexpand.c (add_scope_conflicts_1): New old_conflicts argument,
> use it in remembering which conflicts we already created.
> (add_scope_conflicts): Adjust call to above, (de)allocate helper
> bitmap.
>
> Index: cfgexpand.c
> ===================================================================
> *** cfgexpand.c (revision 183155)
> --- cfgexpand.c (working copy)
> *************** visit_conflict (gimple stmt ATTRIBUTE_UN
> *** 441,451 ****
>
> /* Helper routine for add_scope_conflicts, calculating the active partitions
> at the end of BB, leaving the result in WORK. We're called to generate
> ! conflicts when FOR_CONFLICT is true, otherwise we're just tracking
> ! liveness. */
>
> static void
> ! add_scope_conflicts_1 (basic_block bb, bitmap work, bool for_conflict)
> {
> edge e;
> edge_iterator ei;
> --- 441,452 ----
>
> /* Helper routine for add_scope_conflicts, calculating the active partitions
> at the end of BB, leaving the result in WORK. We're called to generate
> ! conflicts when OLD_CONFLICTS is non-null, otherwise we're just tracking
> ! liveness. If we generate conflicts then OLD_CONFLICTS stores the bits
> ! for which we generated conflicts already. */
>
> static void
> ! add_scope_conflicts_1 (basic_block bb, bitmap work, bitmap old_conflicts)
> {
> edge e;
> edge_iterator ei;
> *************** add_scope_conflicts_1 (basic_block bb, b
> *** 482,488 ****
> }
> else if (!is_gimple_debug (stmt))
> {
> ! if (for_conflict
> && visit == visit_op)
> {
> /* If this is the first real instruction in this BB we need
> --- 483,489 ----
> }
> else if (!is_gimple_debug (stmt))
> {
> ! if (old_conflicts
> && visit == visit_op)
> {
> /* If this is the first real instruction in this BB we need
> *************** add_scope_conflicts_1 (basic_block bb, b
> *** 490,505 ****
> Unlike classical liveness for named objects we can't
> rely on seeing a def/use of the names we're interested in.
> There might merely be indirect loads/stores. We'd not add any
> ! conflicts for such partitions. */
> bitmap_iterator bi;
> unsigned i;
> ! EXECUTE_IF_SET_IN_BITMAP (work, 0, i, bi)
> {
> unsigned j;
> bitmap_iterator bj;
> ! EXECUTE_IF_SET_IN_BITMAP (work, i + 1, j, bj)
> add_stack_var_conflict (i, j);
> }
> visit = visit_conflict;
> }
> walk_stmt_load_store_addr_ops (stmt, work, visit, visit, visit);
> --- 491,521 ----
> Unlike classical liveness for named objects we can't
> rely on seeing a def/use of the names we're interested in.
> There might merely be indirect loads/stores. We'd not add any
> ! conflicts for such partitions. We know that we generated
> ! conflicts between all partitions in old_conflicts already,
> ! so we need to generate only the new ones, avoiding to
> ! repeatedly pay the O(N^2) cost for each basic block. */
> bitmap_iterator bi;
> unsigned i;
> !
> ! /* First the conflicts between new and old_conflicts members. */
> ! EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, 0, i, bi)
> {
> unsigned j;
> bitmap_iterator bj;
> ! EXECUTE_IF_SET_IN_BITMAP (old_conflicts, 0, j, bj)
> add_stack_var_conflict (i, j);
> }
> + /* Then the conflicts between only the new members. */
> + EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, 0, i, bi)
> + {
> + unsigned j;
> + bitmap_iterator bj;
> + EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, 0, j, bj)
> + add_stack_var_conflict (i, j);
> + }
> + /* And remember for the next basic block. */
> + bitmap_ior_into (old_conflicts, work);
> visit = visit_conflict;
> }
> walk_stmt_load_store_addr_ops (stmt, work, visit, visit, visit);
> *************** add_scope_conflicts (void)
> *** 516,521 ****
> --- 532,538 ----
> basic_block bb;
> bool changed;
> bitmap work = BITMAP_ALLOC (NULL);
> + bitmap old_conflicts;
>
> /* We approximate the live range of a stack variable by taking the first
> mention of its name as starting point(s), and by the end-of-scope
> *************** add_scope_conflicts (void)
> *** 537,551 ****
> FOR_EACH_BB (bb)
> {
> bitmap active = (bitmap)bb->aux;
> ! add_scope_conflicts_1 (bb, work, false);
> if (bitmap_ior_into (active, work))
> changed = true;
> }
> }
>
> FOR_EACH_BB (bb)
> ! add_scope_conflicts_1 (bb, work, true);
>
> BITMAP_FREE (work);
> FOR_ALL_BB (bb)
> BITMAP_FREE (bb->aux);
> --- 554,571 ----
> FOR_EACH_BB (bb)
> {
> bitmap active = (bitmap)bb->aux;
> ! add_scope_conflicts_1 (bb, work, NULL);
> if (bitmap_ior_into (active, work))
> changed = true;
> }
> }
>
> + old_conflicts = BITMAP_ALLOC (NULL);
> +
> FOR_EACH_BB (bb)
> ! add_scope_conflicts_1 (bb, work, old_conflicts);
>
> + BITMAP_FREE (old_conflicts);
> BITMAP_FREE (work);
> FOR_ALL_BB (bb)
> BITMAP_FREE (bb->aux);
* Re: Fix regression on PR46590 (slow compile with -O0)
2012-01-16 9:09 ` Richard Guenther
@ 2012-01-19 15:13 ` Michael Matz
2012-01-20 20:57 ` Eric Botcazou
From: Michael Matz @ 2012-01-19 15:13 UTC (permalink / raw)
To: Richard Guenther; +Cc: gcc-patches
Hi,
On Mon, 16 Jan 2012, Richard Guenther wrote:
> > Regstrapping in progress on x86_64-linux, okay for trunk?
>
> Ok.
I've committed (r183305) a slightly changed variant that merges the two
outer bitmap loops again, like so:
+ EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, 0, i, bi)
{
unsigned j;
bitmap_iterator bj;
- EXECUTE_IF_SET_IN_BITMAP (work, i + 1, j, bj)
+ /* First the conflicts between new and old_conflicts. */
+ EXECUTE_IF_SET_IN_BITMAP (old_conflicts, 0, j, bj)
+ add_stack_var_conflict (i, j);
+ /* Then the conflicts between only the new members. */
+ EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, i + 1,
+ j, bj)
add_stack_var_conflict (i, j);
}
Ciao,
Michael.
* Re: Fix regression on PR46590 (slow compile with -O0)
2012-01-19 15:13 ` Michael Matz
@ 2012-01-20 20:57 ` Eric Botcazou
2012-01-21 0:10 ` Michael Matz
From: Eric Botcazou @ 2012-01-20 20:57 UTC (permalink / raw)
To: Michael Matz; +Cc: gcc-patches, Richard Guenther
[-- Attachment #1: Type: text/plain, Size: 1139 bytes --]
> I've committed (r183305) a slightly changed variant that merges the two
> outer bitmap loops again, like so:
>
> + EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, 0, i, bi)
> {
> unsigned j;
> bitmap_iterator bj;
> - EXECUTE_IF_SET_IN_BITMAP (work, i + 1, j, bj)
> + /* First the conflicts between new and old_conflicts. */
> + EXECUTE_IF_SET_IN_BITMAP (old_conflicts, 0, j, bj)
> + add_stack_var_conflict (i, j);
> + /* Then the conflicts between only the new members. */
> + EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, i + 1,
> + j, bj)
> add_stack_var_conflict (i, j);
> }
Is it supposed to change the generated code or...? Because it has introduced
regressions in stack usage. Compile the attached testcase (stack_test.adb)
with -fstack-usage on x86-64 and you'll get 496 bytes in the .su file. If the
patch is reverted, this goes back to the expected 64 bytes.
--
Eric Botcazou
[-- Attachment #2: pack.adb --]
[-- Type: text/x-adasrc, Size: 200 bytes --]
package body Pack is
function Ident_Int (X : Integer) return Integer is
begin
return X;
end Ident_Int;
procedure My_Proc (X : R) is
begin
null;
end My_Proc;
end Pack;
[-- Attachment #3: stack_test.adb --]
[-- Type: text/x-adasrc, Size: 1979 bytes --]
with Pack; use Pack;
procedure Stack_Test is
A : Integer := Ident_Int (123);
begin
case A is
when 0 =>
My_Proc (R'(Ident_Int(0), Ident_Int(1), Ident_Int(2), Ident_Int(3), Ident_Int(4), Ident_Int(5), Ident_Int(6), Ident_Int(7), Ident_Int(8), Ident_Int(9)));
when 1 =>
My_Proc (R'(Ident_Int(0), Ident_Int(1), Ident_Int(2), Ident_Int(3), Ident_Int(4), Ident_Int(5), Ident_Int(6), Ident_Int(7), Ident_Int(8), Ident_Int(9)));
when 2 =>
My_Proc (R'(Ident_Int(0), Ident_Int(1), Ident_Int(2), Ident_Int(3), Ident_Int(4), Ident_Int(5), Ident_Int(6), Ident_Int(7), Ident_Int(8), Ident_Int(9)));
when 3 =>
My_Proc (R'(Ident_Int(0), Ident_Int(1), Ident_Int(2), Ident_Int(3), Ident_Int(4), Ident_Int(5), Ident_Int(6), Ident_Int(7), Ident_Int(8), Ident_Int(9)));
when 4 =>
My_Proc (R'(Ident_Int(0), Ident_Int(1), Ident_Int(2), Ident_Int(3), Ident_Int(4), Ident_Int(5), Ident_Int(6), Ident_Int(7), Ident_Int(8), Ident_Int(9)));
when 5 =>
My_Proc (R'(Ident_Int(0), Ident_Int(1), Ident_Int(2), Ident_Int(3), Ident_Int(4), Ident_Int(5), Ident_Int(6), Ident_Int(7), Ident_Int(8), Ident_Int(9)));
when 6 =>
My_Proc (R'(Ident_Int(0), Ident_Int(1), Ident_Int(2), Ident_Int(3), Ident_Int(4), Ident_Int(5), Ident_Int(6), Ident_Int(7), Ident_Int(8), Ident_Int(9)));
when 7 =>
My_Proc (R'(Ident_Int(0), Ident_Int(1), Ident_Int(2), Ident_Int(3), Ident_Int(4), Ident_Int(5), Ident_Int(6), Ident_Int(7), Ident_Int(8), Ident_Int(9)));
when 8 =>
My_Proc (R'(Ident_Int(0), Ident_Int(1), Ident_Int(2), Ident_Int(3), Ident_Int(4), Ident_Int(5), Ident_Int(6), Ident_Int(7), Ident_Int(8), Ident_Int(9)));
when 9 =>
My_Proc (R'(Ident_Int(0), Ident_Int(1), Ident_Int(2), Ident_Int(3), Ident_Int(4), Ident_Int(5), Ident_Int(6), Ident_Int(7), Ident_Int(8), Ident_Int(9)));
when others =>
null;
end case;
end Stack_Test;
[-- Attachment #4: pack.ads --]
[-- Type: text/x-adasrc, Size: 218 bytes --]
package Pack is
function Ident_Int (X : Integer) return Integer;
type R is
record
C0, C1, C2, C3, C4, C5, C6, C7, C8, C9 : Integer;
end record;
procedure My_Proc (X : R);
end Pack;
* Re: Fix regression on PR46590 (slow compile with -O0)
2012-01-20 20:57 ` Eric Botcazou
@ 2012-01-21 0:10 ` Michael Matz
2012-01-21 10:43 ` Eric Botcazou
From: Michael Matz @ 2012-01-21 0:10 UTC (permalink / raw)
To: Eric Botcazou; +Cc: gcc-patches, Richard Guenther
Hi,
On Fri, 20 Jan 2012, Eric Botcazou wrote:
> Is it supposed to change the generated code or...?
It was supposed to not change the set of conflicts, no. But I made a
thinko, and sometimes too many conflicts are generated. On the testcases
I tried that wasn't the case, so thanks for finding one :) Trivially
fixing the thinko (iterating over (work bit-and old_conflict) in the first
inner loop) would fix the testcase but in general create too few
conflicts, i.e. generate wrong code. I need some time to think about this
again.
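[Editor's sketch, not part of the original mail: both failure modes can be reproduced in a toy bitmask model (invented helper names, not GCC's real bitmap code). For blocks with live sets {0,1} then {2}, pairing new members against all of old_conflicts invents the conflicts (2,0) and (2,1); the "trivial fix" of intersecting with work instead drops the needed conflict (1,2) for live sets {0,1}, {0,2}, {1,2}.]

```c
#include <assert.h>
#include <stdint.h>

enum { NPART = 8 };	/* partitions are bits of a uint32_t */

static void
add_conflict (uint32_t m[NPART], int i, int j)
{
  m[i] |= 1u << j;
  m[j] |= 1u << i;
}

/* Ground truth: exactly the pairs live together in some block.  */
static void
conflicts_exact (const uint32_t *live, int nbbs, uint32_t m[NPART])
{
  for (int b = 0; b < nbbs; b++)
    for (int i = 0; i < NPART; i++)
      for (int j = i + 1; j < NPART; j++)
	if ((live[b] & (1u << i)) && (live[b] & (1u << j)))
	  add_conflict (m, i, j);
}

/* The r183305 scheme.  RESTRICT_TO_WORK selects the "trivial fix":
   pair new members only against old members also live in this block.  */
static void
conflicts_incremental (const uint32_t *live, int nbbs, uint32_t m[NPART],
		       int restrict_to_work)
{
  uint32_t old = 0;
  for (int b = 0; b < nbbs; b++)
    {
      uint32_t new_bits = live[b] & ~old;
      uint32_t old_part = restrict_to_work ? (old & live[b]) : old;
      for (int i = 0; i < NPART; i++)
	if (new_bits & (1u << i))
	  {
	    /* New members against the chosen set of old members.  */
	    for (int j = 0; j < NPART; j++)
	      if (old_part & (1u << j))
		add_conflict (m, i, j);
	    /* New members among themselves.  */
	    for (int j = i + 1; j < NPART; j++)
	      if (new_bits & (1u << j))
		add_conflict (m, i, j);
	  }
      old |= live[b];
    }
}
```

With live sets {0,1}, {2} the unrestricted variant gives partition 2 conflicts with 0 and 1 although it was never live with either (too many conflicts); with live sets {0,1}, {0,2}, {1,2} the restricted variant never emits (1,2) because by the third block neither 1 nor 2 is new (too few conflicts, i.e. wrong code).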
Ciao,
Michael.
* Re: Fix regression on PR46590 (slow compile with -O0)
2012-01-21 0:10 ` Michael Matz
@ 2012-01-21 10:43 ` Eric Botcazou
2012-01-23 17:01 ` Michael Matz
From: Eric Botcazou @ 2012-01-21 10:43 UTC (permalink / raw)
To: Michael Matz; +Cc: gcc-patches, Richard Guenther
> It was supposed to not change the set of conflicts, no. But I made a
> thinko, and sometimes too many conflicts are generated. On the testcases
> I tried that wasn't the case, so thanks for finding one :)
You're welcome. In fact, I should have installed it in the gnat.dg testsuite
long ago but it had been created before -fstack-usage was contributed and then
I forgot. So it would be nice to install it once the regression is fixed, the
incantation being modelled on that of gcc.dg/stack-usage-1.c:
-- { dg-do compile }
-- { dg-options "-fstack-usage" }
[...]
-- { dg-final { scan-stack-usage "xxxx" } }
-- { dg-final { cleanup-stack-usage } }
The expected stack usage is <= 100 bytes for x86 and x86-64.
> Trivially fixing the thinko (iterating over (work bit-and old_conflict) in
> the first inner loop) would fix the testcase but in general create too few
> conflicts, i.e. generate wrong code. I need some time to think about this
> again.
OK, thanks in advance.
--
Eric Botcazou
* Re: Fix regression on PR46590 (slow compile with -O0)
2012-01-21 10:43 ` Eric Botcazou
@ 2012-01-23 17:01 ` Michael Matz
2012-01-24 10:15 ` Richard Guenther
From: Michael Matz @ 2012-01-23 17:01 UTC (permalink / raw)
To: Richard Guenther, Eric Botcazou; +Cc: gcc-patches
Hi,
On Sat, 21 Jan 2012, Eric Botcazou wrote:
> > Trivially fixing the thinko (iterating over (work bit-and
> > old_conflict) in the first inner loop) would fix the testcase but in
> > general create too few conflicts, i.e. generate wrong code. I need
> > some time to think about this again.
>
> OK, thanks in advance.
Sigh. I can't come up with a way to generally speed up the conflict
generation without sometimes adding artificial conflicts. I.e. I'll have
to revert r183305. I can still fix the specific situation of PR46590 by
making sure clobbers are added for all aggregates, not only the (at
gimplification time) address takens.
So, this is currently in regstrapping, x86_64-linux, all langs+Ada. Okay
for trunk (well, the revert is obvious, but the gimplify.c hunk)?
Ciao,
Michael.
PR tree-optimization/46590
* cfgexpand.c: Revert last change (r183305).
* gimplify.c (gimplify_bind_expr): Add clobbers for all non-gimple
regs.
Index: gimplify.c
===================================================================
--- gimplify.c (revision 183303)
+++ gimplify.c (working copy)
@@ -1226,12 +1226,11 @@ gimplify_bind_expr (tree *expr_p, gimple
if (TREE_CODE (t) == VAR_DECL
&& !is_global_var (t)
&& DECL_CONTEXT (t) == current_function_decl
- && !DECL_HARD_REGISTER (t)
- && !TREE_THIS_VOLATILE (t)
&& !DECL_HAS_VALUE_EXPR_P (t)
/* Only care for variables that have to be in memory. Others
will be rewritten into SSA names, hence moved to the top-level. */
- && needs_to_live_in_memory (t))
+ && !is_gimple_reg (t))
+
{
tree clobber = build_constructor (TREE_TYPE (t), NULL);
TREE_THIS_VOLATILE (clobber) = 1;
Index: cfgexpand.c
===================================================================
--- cfgexpand.c (revision 183305)
+++ cfgexpand.c (working copy)
@@ -441,12 +441,11 @@ visit_conflict (gimple stmt ATTRIBUTE_UN
/* Helper routine for add_scope_conflicts, calculating the active partitions
at the end of BB, leaving the result in WORK. We're called to generate
- conflicts when OLD_CONFLICTS is non-null, otherwise we're just tracking
- liveness. If we generate conflicts then OLD_CONFLICTS stores the bits
- for which we generated conflicts already. */
+ conflicts when FOR_CONFLICT is true, otherwise we're just tracking
+ liveness. */
static void
-add_scope_conflicts_1 (basic_block bb, bitmap work, bitmap old_conflicts)
+add_scope_conflicts_1 (basic_block bb, bitmap work, bool for_conflict)
{
edge e;
edge_iterator ei;
@@ -483,7 +482,7 @@ add_scope_conflicts_1 (basic_block bb, b
}
else if (!is_gimple_debug (stmt))
{
- if (old_conflicts
+ if (for_conflict
&& visit == visit_op)
{
/* If this is the first real instruction in this BB we need
@@ -491,27 +490,16 @@ add_scope_conflicts_1 (basic_block bb, b
Unlike classical liveness for named objects we can't
rely on seeing a def/use of the names we're interested in.
There might merely be indirect loads/stores. We'd not add any
- conflicts for such partitions. We know that we generated
- conflicts between all partitions in old_conflicts already,
- so we need to generate only the new ones, avoiding to
- repeatedly pay the O(N^2) cost for each basic block. */
+ conflicts for such partitions. */
bitmap_iterator bi;
unsigned i;
-
- EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, 0, i, bi)
+ EXECUTE_IF_SET_IN_BITMAP (work, 0, i, bi)
{
unsigned j;
bitmap_iterator bj;
- /* First the conflicts between new and old_conflicts. */
- EXECUTE_IF_SET_IN_BITMAP (old_conflicts, 0, j, bj)
- add_stack_var_conflict (i, j);
- /* Then the conflicts between only the new members. */
- EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, i + 1,
- j, bj)
+ EXECUTE_IF_SET_IN_BITMAP (work, i + 1, j, bj)
add_stack_var_conflict (i, j);
}
- /* And remember for the next basic block. */
- bitmap_ior_into (old_conflicts, work);
visit = visit_conflict;
}
walk_stmt_load_store_addr_ops (stmt, work, visit, visit, visit);
@@ -528,7 +516,6 @@ add_scope_conflicts (void)
basic_block bb;
bool changed;
bitmap work = BITMAP_ALLOC (NULL);
- bitmap old_conflicts;
/* We approximate the live range of a stack variable by taking the first
mention of its name as starting point(s), and by the end-of-scope
@@ -550,18 +537,15 @@ add_scope_conflicts (void)
FOR_EACH_BB (bb)
{
bitmap active = (bitmap)bb->aux;
- add_scope_conflicts_1 (bb, work, NULL);
+ add_scope_conflicts_1 (bb, work, false);
if (bitmap_ior_into (active, work))
changed = true;
}
}
- old_conflicts = BITMAP_ALLOC (NULL);
-
FOR_EACH_BB (bb)
- add_scope_conflicts_1 (bb, work, old_conflicts);
+ add_scope_conflicts_1 (bb, work, true);
- BITMAP_FREE (old_conflicts);
BITMAP_FREE (work);
FOR_ALL_BB (bb)
BITMAP_FREE (bb->aux);
* Re: Fix regression on PR46590 (slow compile with -O0)
2012-01-23 17:01 ` Michael Matz
@ 2012-01-24 10:15 ` Richard Guenther
2012-01-26 14:23 ` Michael Matz
From: Richard Guenther @ 2012-01-24 10:15 UTC (permalink / raw)
To: Michael Matz; +Cc: Eric Botcazou, gcc-patches
On Mon, Jan 23, 2012 at 6:01 PM, Michael Matz <matz@suse.de> wrote:
> Hi,
>
> On Sat, 21 Jan 2012, Eric Botcazou wrote:
>
>> > Trivially fixing the thinko (iterating over (work bit-and
>> > old_conflict) in the first inner loop) would fix the testcase but in
>> > general create too few conflicts, i.e. generate wrong code. I need
>> > some time to think about this again.
>>
>> OK, thanks in advance.
>
> Sigh. I can't come up with a way to generally speed up the conflict
> generation without sometimes adding artificial conflicts. I.e. I'll have
> to revert r183305. I can still fix the specific situation of PR46590 by
> making sure clobbers are added for all aggregates, not only the (at
> gimplification time) address takens.
>
> So, this is currently in regstrapping, x86_64-linux, all langs+Ada. Okay
> for trunk (well, the revert is obvious, but the gimplify.c hunk)?
>
>
> Ciao,
> Michael.
>
> PR tree-optimization/46590
> * cfgexpand.c: Revert last change (r183305).
> * gimplify.c (gimplify_bind_expr): Add clobbers for all non-gimple
> regs.
>
> Index: gimplify.c
> ===================================================================
> --- gimplify.c (revision 183303)
> +++ gimplify.c (working copy)
> @@ -1226,12 +1226,11 @@ gimplify_bind_expr (tree *expr_p, gimple
> if (TREE_CODE (t) == VAR_DECL
> && !is_global_var (t)
> && DECL_CONTEXT (t) == current_function_decl
> - && !DECL_HARD_REGISTER (t)
> - && !TREE_THIS_VOLATILE (t)
> && !DECL_HAS_VALUE_EXPR_P (t)
> /* Only care for variables that have to be in memory. Others
> will be rewritten into SSA names, hence moved to the top-level. */
> - && needs_to_live_in_memory (t))
> + && !is_gimple_reg (t))
> +
Ok with the excessive vertical space removed.
Thanks,
Richard.
> {
> tree clobber = build_constructor (TREE_TYPE (t), NULL);
> TREE_THIS_VOLATILE (clobber) = 1;
> Index: cfgexpand.c
> ===================================================================
> --- cfgexpand.c (revision 183305)
> +++ cfgexpand.c (working copy)
> @@ -441,12 +441,11 @@ visit_conflict (gimple stmt ATTRIBUTE_UN
>
> /* Helper routine for add_scope_conflicts, calculating the active partitions
> at the end of BB, leaving the result in WORK. We're called to generate
> - conflicts when OLD_CONFLICTS is non-null, otherwise we're just tracking
> - liveness. If we generate conflicts then OLD_CONFLICTS stores the bits
> - for which we generated conflicts already. */
> + conflicts when FOR_CONFLICT is true, otherwise we're just tracking
> + liveness. */
>
> static void
> -add_scope_conflicts_1 (basic_block bb, bitmap work, bitmap old_conflicts)
> +add_scope_conflicts_1 (basic_block bb, bitmap work, bool for_conflict)
> {
> edge e;
> edge_iterator ei;
> @@ -483,7 +482,7 @@ add_scope_conflicts_1 (basic_block bb, b
> }
> else if (!is_gimple_debug (stmt))
> {
> - if (old_conflicts
> + if (for_conflict
> && visit == visit_op)
> {
> /* If this is the first real instruction in this BB we need
> @@ -491,27 +490,16 @@ add_scope_conflicts_1 (basic_block bb, b
> Unlike classical liveness for named objects we can't
> rely on seeing a def/use of the names we're interested in.
> There might merely be indirect loads/stores. We'd not add any
> - conflicts for such partitions. We know that we generated
> - conflicts between all partitions in old_conflicts already,
> - so we need to generate only the new ones, avoiding to
> - repeatedly pay the O(N^2) cost for each basic block. */
> + conflicts for such partitions. */
> bitmap_iterator bi;
> unsigned i;
> -
> - EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, 0, i, bi)
> + EXECUTE_IF_SET_IN_BITMAP (work, 0, i, bi)
> {
> unsigned j;
> bitmap_iterator bj;
> - /* First the conflicts between new and old_conflicts. */
> - EXECUTE_IF_SET_IN_BITMAP (old_conflicts, 0, j, bj)
> - add_stack_var_conflict (i, j);
> - /* Then the conflicts between only the new members. */
> - EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, i + 1,
> - j, bj)
> + EXECUTE_IF_SET_IN_BITMAP (work, i + 1, j, bj)
> add_stack_var_conflict (i, j);
> }
> - /* And remember for the next basic block. */
> - bitmap_ior_into (old_conflicts, work);
> visit = visit_conflict;
> }
> walk_stmt_load_store_addr_ops (stmt, work, visit, visit, visit);
> @@ -528,7 +516,6 @@ add_scope_conflicts (void)
> basic_block bb;
> bool changed;
> bitmap work = BITMAP_ALLOC (NULL);
> - bitmap old_conflicts;
>
> /* We approximate the live range of a stack variable by taking the first
> mention of its name as starting point(s), and by the end-of-scope
> @@ -550,18 +537,15 @@ add_scope_conflicts (void)
> FOR_EACH_BB (bb)
> {
> bitmap active = (bitmap)bb->aux;
> - add_scope_conflicts_1 (bb, work, NULL);
> + add_scope_conflicts_1 (bb, work, false);
> if (bitmap_ior_into (active, work))
> changed = true;
> }
> }
>
> - old_conflicts = BITMAP_ALLOC (NULL);
> -
> FOR_EACH_BB (bb)
> - add_scope_conflicts_1 (bb, work, old_conflicts);
> + add_scope_conflicts_1 (bb, work, true);
>
> - BITMAP_FREE (old_conflicts);
> BITMAP_FREE (work);
> FOR_ALL_BB (bb)
> BITMAP_FREE (bb->aux);
* Re: Fix regression on PR46590 (slow compile with -O0)
2012-01-24 10:15 ` Richard Guenther
@ 2012-01-26 14:23 ` Michael Matz
2012-01-26 14:59 ` Richard Guenther
2012-01-27 8:37 ` Eric Botcazou
From: Michael Matz @ 2012-01-26 14:23 UTC (permalink / raw)
To: Richard Guenther; +Cc: Eric Botcazou, gcc-patches
[-- Attachment #1: Type: TEXT/PLAIN, Size: 6509 bytes --]
Hi,
On Tue, 24 Jan 2012, Richard Guenther wrote:
> > + && !is_gimple_reg (t))
> > +
>
> Ok with the excessive vertical space removed.
Actually the patch as is was regressing some testcases (pr48794.f90, fixed
with a tree-eh change in another thread) and pack9.adb, which is fixed
with the tree-eh change below. As we emit more clobbers with my gimplify
patch we more often run into the situation that certain EH blocks aren't
really empty. The easy fix is to try making them empty before checking in
cleanup_empty_eh.
Compared to the last version I also removed the unrelated changes in the
predicate, it was a thinko.
This patch as is completed regstrap on x86_64-linux for all,ada,obj-c++.
Still okay?
Ciao,
Michael.
PR tree-optimization/46590
* cfgexpand.c: Revert last change (r183305).
* gimplify.c (gimplify_bind_expr): Add clobbers for all non-gimple
regs.
* tree-eh.c (cleanup_empty_eh): Try to optimize clobbers before
checking for emptiness.
Index: gimplify.c
===================================================================
--- gimplify.c (revision 183524)
+++ gimplify.c (working copy)
@@ -1231,7 +1231,7 @@ gimplify_bind_expr (tree *expr_p, gimple
&& !DECL_HAS_VALUE_EXPR_P (t)
/* Only care for variables that have to be in memory. Others
will be rewritten into SSA names, hence moved to the top-level. */
- && needs_to_live_in_memory (t))
+ && !is_gimple_reg (t))
{
tree clobber = build_constructor (TREE_TYPE (t), NULL);
TREE_THIS_VOLATILE (clobber) = 1;
Index: tree-eh.c
===================================================================
--- tree-eh.c (revision 183559)
+++ tree-eh.c (working copy)
@@ -4056,6 +4056,7 @@ cleanup_empty_eh (eh_landing_pad lp)
edge_iterator ei;
edge e, e_out;
bool has_non_eh_pred;
+ bool ret = false;
int new_lp_nr;
/* There can be zero or one edges out of BB. This is the quickest test. */
@@ -4070,6 +4071,16 @@ cleanup_empty_eh (eh_landing_pad lp)
default:
return false;
}
+
+ resx = last_stmt (bb);
+ if (resx && is_gimple_resx (resx))
+ {
+ if (stmt_can_throw_external (resx))
+ optimize_clobbers (bb);
+ else if (sink_clobbers (bb))
+ ret = true;
+ }
+
gsi = gsi_after_labels (bb);
/* Make sure to skip debug statements. */
@@ -4081,9 +4092,9 @@ cleanup_empty_eh (eh_landing_pad lp)
{
/* For the degenerate case of an infinite loop bail out. */
if (infinite_empty_loop_p (e_out))
- return false;
+ return ret;
- return cleanup_empty_eh_unsplit (bb, e_out, lp);
+ return ret | cleanup_empty_eh_unsplit (bb, e_out, lp);
}
/* The block should consist only of a single RESX statement, modulo a
@@ -4096,7 +4107,7 @@ cleanup_empty_eh (eh_landing_pad lp)
resx = gsi_stmt (gsi);
}
if (!is_gimple_resx (resx))
- return false;
+ return ret;
gcc_assert (gsi_one_before_end_p (gsi));
/* Determine if there are non-EH edges, or resx edges into the handler. */
@@ -4172,7 +4183,7 @@ cleanup_empty_eh (eh_landing_pad lp)
return true;
}
- return false;
+ return ret;
succeed:
if (dump_file && (dump_flags & TDF_DETAILS))
Index: cfgexpand.c
===================================================================
--- cfgexpand.c (revision 183524)
+++ cfgexpand.c (working copy)
@@ -440,12 +440,11 @@ visit_conflict (gimple stmt ATTRIBUTE_UN
/* Helper routine for add_scope_conflicts, calculating the active partitions
at the end of BB, leaving the result in WORK. We're called to generate
- conflicts when OLD_CONFLICTS is non-null, otherwise we're just tracking
- liveness. If we generate conflicts then OLD_CONFLICTS stores the bits
- for which we generated conflicts already. */
+ conflicts when FOR_CONFLICT is true, otherwise we're just tracking
+ liveness. */
static void
-add_scope_conflicts_1 (basic_block bb, bitmap work, bitmap old_conflicts)
+add_scope_conflicts_1 (basic_block bb, bitmap work, bool for_conflict)
{
edge e;
edge_iterator ei;
@@ -482,7 +481,7 @@ add_scope_conflicts_1 (basic_block bb, b
}
else if (!is_gimple_debug (stmt))
{
- if (old_conflicts
+ if (for_conflict
&& visit == visit_op)
{
/* If this is the first real instruction in this BB we need
@@ -490,27 +489,16 @@ add_scope_conflicts_1 (basic_block bb, b
Unlike classical liveness for named objects we can't
rely on seeing a def/use of the names we're interested in.
There might merely be indirect loads/stores. We'd not add any
- conflicts for such partitions. We know that we generated
- conflicts between all partitions in old_conflicts already,
- so we need to generate only the new ones, avoiding to
- repeatedly pay the O(N^2) cost for each basic block. */
+ conflicts for such partitions. */
bitmap_iterator bi;
unsigned i;
-
- EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, 0, i, bi)
+ EXECUTE_IF_SET_IN_BITMAP (work, 0, i, bi)
{
unsigned j;
bitmap_iterator bj;
- /* First the conflicts between new and old_conflicts. */
- EXECUTE_IF_SET_IN_BITMAP (old_conflicts, 0, j, bj)
- add_stack_var_conflict (i, j);
- /* Then the conflicts between only the new members. */
- EXECUTE_IF_AND_COMPL_IN_BITMAP (work, old_conflicts, i + 1,
- j, bj)
+ EXECUTE_IF_SET_IN_BITMAP (work, i + 1, j, bj)
add_stack_var_conflict (i, j);
}
- /* And remember for the next basic block. */
- bitmap_ior_into (old_conflicts, work);
visit = visit_conflict;
}
walk_stmt_load_store_addr_ops (stmt, work, visit, visit, visit);
@@ -527,7 +515,6 @@ add_scope_conflicts (void)
basic_block bb;
bool changed;
bitmap work = BITMAP_ALLOC (NULL);
- bitmap old_conflicts;
/* We approximate the live range of a stack variable by taking the first
mention of its name as starting point(s), and by the end-of-scope
@@ -549,18 +536,15 @@ add_scope_conflicts (void)
FOR_EACH_BB (bb)
{
bitmap active = (bitmap)bb->aux;
- add_scope_conflicts_1 (bb, work, NULL);
+ add_scope_conflicts_1 (bb, work, false);
if (bitmap_ior_into (active, work))
changed = true;
}
}
- old_conflicts = BITMAP_ALLOC (NULL);
-
FOR_EACH_BB (bb)
- add_scope_conflicts_1 (bb, work, old_conflicts);
+ add_scope_conflicts_1 (bb, work, true);
- BITMAP_FREE (old_conflicts);
BITMAP_FREE (work);
FOR_ALL_BB (bb)
BITMAP_FREE (bb->aux);
^ permalink raw reply [flat|nested] 11+ messages in thread
* Re: Fix regression on PR46590 (slow compile with -O0)
2012-01-26 14:23 ` Michael Matz
@ 2012-01-26 14:59 ` Richard Guenther
2012-01-27 8:37 ` Eric Botcazou
1 sibling, 0 replies; 11+ messages in thread
From: Richard Guenther @ 2012-01-26 14:59 UTC (permalink / raw)
To: Michael Matz; +Cc: Eric Botcazou, gcc-patches
On Thu, Jan 26, 2012 at 3:23 PM, Michael Matz <matz@suse.de> wrote:
> Hi,
>
> On Tue, 24 Jan 2012, Richard Guenther wrote:
>
>> > + && !is_gimple_reg (t))
>> > +
>>
>> Ok with the excessive vertical space removed.
>
> Actually the patch as is was regressing some testcases (pr48794.f90, fixed
> with a tree-eh change in another thread) and pack9.adb, which is fixed
> with the tree-eh change below. As we emit more clobbers with my gimplify
> patch, we more often run into the situation that certain EH blocks aren't
> really empty. The easy fix is to try making them empty before checking in
> cleanup_empty_eh.
>
> Compared to the last version I also removed the unrelated changes in the
> predicate; they were a thinko.
>
> This patch as is completed regstrap on x86_64-linux for all,ada,obj-c++.
> Still okay?
Ok.
Thanks,
Richard.
>
> Ciao,
> Michael.
>
> PR tree-optimization/46590
> * cfgexpand.c: Revert last change (r183305).
> * gimplify.c (gimplify_bind_expr): Add clobbers for all non-gimple
> regs.
> * tree-eh.c (cleanup_empty_eh): Try to optimize clobbers before
> checking for emptiness.
>
> [patch snipped -- quoted in full, identical to the copy above]
* Re: Fix regression on PR46590 (slow compile with -O0)
2012-01-26 14:23 ` Michael Matz
2012-01-26 14:59 ` Richard Guenther
@ 2012-01-27 8:37 ` Eric Botcazou
1 sibling, 0 replies; 11+ messages in thread
From: Eric Botcazou @ 2012-01-27 8:37 UTC (permalink / raw)
To: Michael Matz; +Cc: Richard Guenther, gcc-patches
> PR tree-optimization/46590
> * cfgexpand.c: Revert last change (r183305).
> * gimplify.c (gimplify_bind_expr): Add clobbers for all non-gimple
> regs.
> * tree-eh.c (cleanup_empty_eh): Try to optimize clobbers before
> checking for emptiness.
I have installed the Ada stack usage testcase since it passes again (thanks!).
But it is only enabled on x86 and x86-64, so please consider switching your
development platform to these exotic architectures. ;-)
--
Eric Botcazou
Thread overview: 11+ messages
2012-01-13 17:19 Fix regression on PR46590 (slow compile with -O0) Michael Matz
2012-01-16 9:09 ` Richard Guenther
2012-01-19 15:13 ` Michael Matz
2012-01-20 20:57 ` Eric Botcazou
2012-01-21 0:10 ` Michael Matz
2012-01-21 10:43 ` Eric Botcazou
2012-01-23 17:01 ` Michael Matz
2012-01-24 10:15 ` Richard Guenther
2012-01-26 14:23 ` Michael Matz
2012-01-26 14:59 ` Richard Guenther
2012-01-27 8:37 ` Eric Botcazou