* Patch 1/9: Remove dead code
2010-06-18 14:08 Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Bernd Schmidt
@ 2010-06-18 14:08 ` Bernd Schmidt
2010-06-18 14:11 ` Jeff Law
2010-06-18 14:09 ` Patch 2/9: Split up and reorganize some functions Bernd Schmidt
` (9 subsequent siblings)
10 siblings, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-06-18 14:08 UTC (permalink / raw)
To: GCC Patches
[-- Attachment #1: Type: text/plain, Size: 0 bytes --]
[-- Attachment #2: delete-dead-code.diff --]
[-- Type: text/plain, Size: 626 bytes --]
Delete a few variables that are never used - there are static versions in
a different file.
* ira.c (allocno_pool, copy_pool, allocno_live_range_pool): Delete.
Index: gcc/ira.c
===================================================================
--- gcc.orig/ira.c
+++ gcc/ira.c
@@ -331,9 +331,6 @@ int internal_flag_ira_verbose;
/* Dump file of the allocator if it is not NULL. */
FILE *ira_dump_file;
-/* Pools for allocnos, copies, allocno live ranges. */
-alloc_pool allocno_pool, copy_pool, allocno_live_range_pool;
-
/* The number of elements in the following array. */
int ira_spilled_reg_stack_slots_num;
* Patch 2/9: Split up and reorganize some functions
2010-06-18 14:08 Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Bernd Schmidt
2010-06-18 14:08 ` Patch 1/9: Remove dead code Bernd Schmidt
@ 2010-06-18 14:09 ` Bernd Schmidt
2010-06-18 18:26 ` Jeff Law
2010-06-18 14:10 ` Patch 3/9: create some more small helper functions Bernd Schmidt
` (8 subsequent siblings)
10 siblings, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-06-18 14:09 UTC (permalink / raw)
To: GCC Patches
[-- Attachment #1: Type: text/plain, Size: 1 bytes --]
[-- Attachment #2: split-functions.diff --]
[-- Type: text/plain, Size: 27448 bytes --]
This creates a few helper functions by breaking them out of larger ones.
In part, this is to make the code easier to read and should be a stand-alone
improvement, but mostly it is motivated by subsequent patches which will
modify the behaviour of the new functions.
In ira-lives.c, there is a slightly more involved restructuring:
Functions to mark registers live or dead are split up into versions for
pseudos and hard registers. set_allocno_live and clear_allocno_live get
the more descriptive names inc_register_pressure and dec_register_pressure;
all code that isn't related to tracking pressure is moved into other
functions. This allows us to further reduce code duplication by reusing
these functions in mark_hard_reg_live and mark_hard_reg_dead. I've also
fixed up some confusion about when to clear allocno_saved_at_call.
Note that there is a small change in behaviour: in inc/dec_register_pressure,
we no longer recompute nregs as we iterate through the classes. I think it's
more correct this way, and I haven't seen any difference in code generation.
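Condensed from the ira-lives.c hunks below (a sketch, not standalone code;
all identifiers are the ones the patch touches), the difference is:

  /* Before: every superclass CL recomputed the register count itself.  */
  curr_reg_pressure[cl] += ira_reg_class_nregs[cl][ALLOCNO_MODE (a)];

  /* After: the caller computes the count once for the allocno's own cover
     class, and inc_register_pressure applies that same count to each
     superclass.  */
  nregs = ira_reg_class_nregs[cover_class][ALLOCNO_MODE (a)];
  inc_register_pressure (cover_class, nregs);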
* ira-build.c (merge_hard_reg_conflicts): New function.
(create_cap_allocno, copy_info_to_removed_store_destinations,
propagate_some_info_from_allocno, propagate_allocno_info): Use it.
(move_allocno_live_ranges, copy_allocno_live_ranges): New functions.
(remove_unnecessary_allocnos, remove_low_level_allocnos,
copy_info_to_removed_store_destinations): Use them.
* ira-lives.c (make_hard_regno_born): New function, split out of
make_regno_born.
(make_allocno_born): Likewise.
(make_hard_regno_dead): New function, split out of make_regno_dead.
(make_allocno_dead): Likewise.
(inc_register_pressure): New function, split out of set_allocno_live.
(dec_register_pressure): New function, split out of clear_allocno_live.
(mark_pseudo_regno_live): New function, split out of mark_reg_live.
(mark_hard_reg_live): Likewise. Use inc_register_pressure.
(mark_pseudo_regno_dead): New function, split out of mark_reg_dead.
(mark_hard_reg_dead): Likewise. Use dec_register_pressure.
(make_pseudo_conflict): Use mark_pseudo_regno_dead and
mark_pseudo_regno_live.
(process_bb_node_lives): Use mark_pseudo_regno_live,
make_hard_regno_born and make_allocno_dead.
(make_regno_born, make_regno_dead, mark_reg_live, mark_reg_dead,
set_allocno_live, clear_allocno_live): Delete functions.
Index: gcc/ira-build.c
===================================================================
--- gcc.orig/ira-build.c
+++ gcc/ira-build.c
@@ -504,6 +504,25 @@ ira_set_allocno_cover_class (ira_allocno
reg_class_contents[cover_class]);
}
+/* Merge hard register conflicts from allocno FROM into allocno TO. If
+ TOTAL_ONLY is true, we ignore ALLOCNO_CONFLICT_HARD_REGS. */
+static void
+merge_hard_reg_conflicts (ira_allocno_t from, ira_allocno_t to,
+ bool total_only)
+{
+ if (!total_only)
+ IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (to),
+ ALLOCNO_CONFLICT_HARD_REGS (from));
+ IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (to),
+ ALLOCNO_TOTAL_CONFLICT_HARD_REGS (from));
+#ifdef STACK_REGS
+ if (!total_only && ALLOCNO_NO_STACK_REG_P (from))
+ ALLOCNO_NO_STACK_REG_P (to) = true;
+ if (ALLOCNO_TOTAL_NO_STACK_REG_P (from))
+ ALLOCNO_TOTAL_NO_STACK_REG_P (to) = true;
+#endif
+}
+
/* Return TRUE if the conflict vector with NUM elements is more
profitable than conflict bit vector for A. */
bool
@@ -781,15 +800,8 @@ create_cap_allocno (ira_allocno_t a)
ALLOCNO_NREFS (cap) = ALLOCNO_NREFS (a);
ALLOCNO_FREQ (cap) = ALLOCNO_FREQ (a);
ALLOCNO_CALL_FREQ (cap) = ALLOCNO_CALL_FREQ (a);
- IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (cap),
- ALLOCNO_CONFLICT_HARD_REGS (a));
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (cap),
- ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a));
+ merge_hard_reg_conflicts (a, cap, false);
ALLOCNO_CALLS_CROSSED_NUM (cap) = ALLOCNO_CALLS_CROSSED_NUM (a);
-#ifdef STACK_REGS
- ALLOCNO_NO_STACK_REG_P (cap) = ALLOCNO_NO_STACK_REG_P (a);
- ALLOCNO_TOTAL_NO_STACK_REG_P (cap) = ALLOCNO_TOTAL_NO_STACK_REG_P (a);
-#endif
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
{
fprintf (ira_dump_file, " Creating cap ");
@@ -1603,12 +1615,7 @@ propagate_allocno_info (void)
ALLOCNO_NREFS (parent_a) += ALLOCNO_NREFS (a);
ALLOCNO_FREQ (parent_a) += ALLOCNO_FREQ (a);
ALLOCNO_CALL_FREQ (parent_a) += ALLOCNO_CALL_FREQ (a);
-#ifdef STACK_REGS
- if (ALLOCNO_TOTAL_NO_STACK_REG_P (a))
- ALLOCNO_TOTAL_NO_STACK_REG_P (parent_a) = true;
-#endif
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (parent_a),
- ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a));
+ merge_hard_reg_conflicts (a, parent_a, true);
ALLOCNO_CALLS_CROSSED_NUM (parent_a)
+= ALLOCNO_CALLS_CROSSED_NUM (a);
ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (parent_a)
@@ -1657,6 +1664,46 @@ change_allocno_in_range_list (allocno_li
r->allocno = a;
}
+/* Move all live ranges associated with allocno FROM to allocno TO. */
+static void
+move_allocno_live_ranges (ira_allocno_t from, ira_allocno_t to)
+{
+ allocno_live_range_t lr = ALLOCNO_LIVE_RANGES (from);
+
+ if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
+ {
+ fprintf (ira_dump_file,
+ " Moving ranges of a%dr%d to a%dr%d: ",
+ ALLOCNO_NUM (from), ALLOCNO_REGNO (from),
+ ALLOCNO_NUM (to), ALLOCNO_REGNO (to));
+ ira_print_live_range_list (ira_dump_file, lr);
+ }
+ change_allocno_in_range_list (lr, to);
+ ALLOCNO_LIVE_RANGES (to)
+ = ira_merge_allocno_live_ranges (lr, ALLOCNO_LIVE_RANGES (to));
+ ALLOCNO_LIVE_RANGES (from) = NULL;
+}
+
+/* Copy all live ranges associated with allocno FROM to allocno TO. */
+static void
+copy_allocno_live_ranges (ira_allocno_t from, ira_allocno_t to)
+{
+ allocno_live_range_t lr = ALLOCNO_LIVE_RANGES (from);
+
+ if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
+ {
+ fprintf (ira_dump_file,
+ " Copying ranges of a%dr%d to a%dr%d: ",
+ ALLOCNO_NUM (from), ALLOCNO_REGNO (from),
+ ALLOCNO_NUM (to), ALLOCNO_REGNO (to));
+ ira_print_live_range_list (ira_dump_file, lr);
+ }
+ lr = ira_copy_allocno_live_range_list (lr);
+ change_allocno_in_range_list (lr, to);
+ ALLOCNO_LIVE_RANGES (to)
+ = ira_merge_allocno_live_ranges (lr, ALLOCNO_LIVE_RANGES (to));
+}
+
/* Return TRUE if NODE represents a loop with low register
pressure. */
static bool
@@ -1890,26 +1937,15 @@ propagate_some_info_from_allocno (ira_al
{
enum reg_class cover_class;
- IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a),
- ALLOCNO_CONFLICT_HARD_REGS (from_a));
-#ifdef STACK_REGS
- if (ALLOCNO_NO_STACK_REG_P (from_a))
- ALLOCNO_NO_STACK_REG_P (a) = true;
-#endif
+ merge_hard_reg_conflicts (from_a, a, false);
ALLOCNO_NREFS (a) += ALLOCNO_NREFS (from_a);
ALLOCNO_FREQ (a) += ALLOCNO_FREQ (from_a);
ALLOCNO_CALL_FREQ (a) += ALLOCNO_CALL_FREQ (from_a);
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a),
- ALLOCNO_TOTAL_CONFLICT_HARD_REGS (from_a));
ALLOCNO_CALLS_CROSSED_NUM (a) += ALLOCNO_CALLS_CROSSED_NUM (from_a);
ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (a)
+= ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (from_a);
if (! ALLOCNO_BAD_SPILL_P (from_a))
ALLOCNO_BAD_SPILL_P (a) = false;
-#ifdef STACK_REGS
- if (ALLOCNO_TOTAL_NO_STACK_REG_P (from_a))
- ALLOCNO_TOTAL_NO_STACK_REG_P (a) = true;
-#endif
cover_class = ALLOCNO_COVER_CLASS (from_a);
ira_assert (cover_class == ALLOCNO_COVER_CLASS (a));
ira_allocate_and_accumulate_costs (&ALLOCNO_HARD_REG_COSTS (a), cover_class,
@@ -1930,7 +1966,6 @@ remove_unnecessary_allocnos (void)
bool merged_p, rebuild_p;
ira_allocno_t a, prev_a, next_a, parent_a;
ira_loop_tree_node_t a_node, parent;
- allocno_live_range_t r;
merged_p = false;
regno_allocnos = NULL;
@@ -1971,13 +2006,8 @@ remove_unnecessary_allocnos (void)
ira_regno_allocno_map[regno] = next_a;
else
ALLOCNO_NEXT_REGNO_ALLOCNO (prev_a) = next_a;
- r = ALLOCNO_LIVE_RANGES (a);
- change_allocno_in_range_list (r, parent_a);
- ALLOCNO_LIVE_RANGES (parent_a)
- = ira_merge_allocno_live_ranges
- (r, ALLOCNO_LIVE_RANGES (parent_a));
+ move_allocno_live_ranges (a, parent_a);
merged_p = true;
- ALLOCNO_LIVE_RANGES (a) = NULL;
propagate_some_info_from_allocno (parent_a, a);
/* Remove it from the corresponding regno allocno
map to avoid info propagation of subsequent
@@ -2011,7 +2041,6 @@ remove_low_level_allocnos (void)
bool merged_p, propagate_p;
ira_allocno_t a, top_a;
ira_loop_tree_node_t a_node, parent;
- allocno_live_range_t r;
ira_allocno_iterator ai;
merged_p = false;
@@ -2030,12 +2059,8 @@ remove_low_level_allocnos (void)
propagate_p = a_node->parent->regno_allocno_map[regno] == NULL;
/* Remove the allocno and update info of allocno in the upper
region. */
- r = ALLOCNO_LIVE_RANGES (a);
- change_allocno_in_range_list (r, top_a);
- ALLOCNO_LIVE_RANGES (top_a)
- = ira_merge_allocno_live_ranges (r, ALLOCNO_LIVE_RANGES (top_a));
+ move_allocno_live_ranges (a, top_a);
merged_p = true;
- ALLOCNO_LIVE_RANGES (a) = NULL;
if (propagate_p)
propagate_some_info_from_allocno (top_a, a);
}
@@ -2402,7 +2427,6 @@ copy_info_to_removed_store_destinations
ira_allocno_t a;
ira_allocno_t parent_a = NULL;
ira_loop_tree_node_t parent;
- allocno_live_range_t r;
bool merged_p;
merged_p = false;
@@ -2425,26 +2449,8 @@ copy_info_to_removed_store_destinations
break;
if (parent == NULL || parent_a == NULL)
continue;
- if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
- {
- fprintf
- (ira_dump_file,
- " Coping ranges of a%dr%d to a%dr%d: ",
- ALLOCNO_NUM (a), REGNO (ALLOCNO_REG (a)),
- ALLOCNO_NUM (parent_a), REGNO (ALLOCNO_REG (parent_a)));
- ira_print_live_range_list (ira_dump_file,
- ALLOCNO_LIVE_RANGES (a));
- }
- r = ira_copy_allocno_live_range_list (ALLOCNO_LIVE_RANGES (a));
- change_allocno_in_range_list (r, parent_a);
- ALLOCNO_LIVE_RANGES (parent_a)
- = ira_merge_allocno_live_ranges (r, ALLOCNO_LIVE_RANGES (parent_a));
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (parent_a),
- ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a));
-#ifdef STACK_REGS
- if (ALLOCNO_TOTAL_NO_STACK_REG_P (a))
- ALLOCNO_TOTAL_NO_STACK_REG_P (parent_a) = true;
-#endif
+ copy_allocno_live_ranges (a, parent_a);
+ merge_hard_reg_conflicts (a, parent_a, true);
ALLOCNO_CALL_FREQ (parent_a) += ALLOCNO_CALL_FREQ (a);
ALLOCNO_CALLS_CROSSED_NUM (parent_a)
+= ALLOCNO_CALLS_CROSSED_NUM (a);
@@ -2522,28 +2528,9 @@ ira_flattening (int max_regno_before_emi
mem_dest_p = true;
if (REGNO (ALLOCNO_REG (a)) == REGNO (ALLOCNO_REG (parent_a)))
{
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (parent_a),
- ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a));
-#ifdef STACK_REGS
- if (ALLOCNO_TOTAL_NO_STACK_REG_P (a))
- ALLOCNO_TOTAL_NO_STACK_REG_P (parent_a) = true;
-#endif
- if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
- {
- fprintf (ira_dump_file,
- " Moving ranges of a%dr%d to a%dr%d: ",
- ALLOCNO_NUM (a), REGNO (ALLOCNO_REG (a)),
- ALLOCNO_NUM (parent_a),
- REGNO (ALLOCNO_REG (parent_a)));
- ira_print_live_range_list (ira_dump_file,
- ALLOCNO_LIVE_RANGES (a));
- }
- change_allocno_in_range_list (ALLOCNO_LIVE_RANGES (a), parent_a);
- ALLOCNO_LIVE_RANGES (parent_a)
- = ira_merge_allocno_live_ranges
- (ALLOCNO_LIVE_RANGES (a), ALLOCNO_LIVE_RANGES (parent_a));
+ merge_hard_reg_conflicts (a, parent_a, true);
+ move_allocno_live_ranges (a, parent_a);
merged_p = true;
- ALLOCNO_LIVE_RANGES (a) = NULL;
ALLOCNO_MEM_OPTIMIZED_DEST_P (parent_a)
= (ALLOCNO_MEM_OPTIMIZED_DEST_P (parent_a)
|| ALLOCNO_MEM_OPTIMIZED_DEST_P (a));
Index: gcc/ira-lives.c
===================================================================
--- gcc.orig/ira-lives.c
+++ gcc/ira-lives.c
@@ -81,33 +81,44 @@ static int last_call_num;
/* The number of last call at which given allocno was saved. */
static int *allocno_saved_at_call;
-/* The function processing birth of register REGNO. It updates living
- hard regs and conflict hard regs for living allocnos or starts a
- new live range for the allocno corresponding to REGNO if it is
- necessary. */
+/* Record the birth of hard register REGNO, updating hard_regs_live
+ and hard reg conflict information for living allocnos. */
static void
-make_regno_born (int regno)
+make_hard_regno_born (int regno)
{
unsigned int i;
- ira_allocno_t a;
- allocno_live_range_t p;
- if (regno < FIRST_PSEUDO_REGISTER)
+ SET_HARD_REG_BIT (hard_regs_live, regno);
+ EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, i)
{
- SET_HARD_REG_BIT (hard_regs_live, regno);
- EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, i)
- {
- SET_HARD_REG_BIT (ALLOCNO_CONFLICT_HARD_REGS (ira_allocnos[i]),
- regno);
- SET_HARD_REG_BIT (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (ira_allocnos[i]),
- regno);
- }
- return;
+ SET_HARD_REG_BIT (ALLOCNO_CONFLICT_HARD_REGS (ira_allocnos[i]),
+ regno);
+ SET_HARD_REG_BIT (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (ira_allocnos[i]),
+ regno);
}
- a = ira_curr_regno_allocno_map[regno];
- if (a == NULL)
- return;
- if ((p = ALLOCNO_LIVE_RANGES (a)) == NULL
+}
+
+/* Process the death of hard register REGNO. This updates
+ hard_regs_live. */
+static void
+make_hard_regno_dead (int regno)
+{
+ CLEAR_HARD_REG_BIT (hard_regs_live, regno);
+}
+
+/* Record the birth of allocno A, starting a new live range for
+ it if necessary, and updating hard reg conflict information. We also
+ record it in allocnos_live. */
+static void
+make_allocno_born (ira_allocno_t a)
+{
+ allocno_live_range_t p = ALLOCNO_LIVE_RANGES (a);
+
+ sparseset_set_bit (allocnos_live, ALLOCNO_NUM (a));
+ IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a), hard_regs_live);
+ IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a), hard_regs_live);
+
+ if (p == NULL
|| (p->finish != curr_point && p->finish + 1 != curr_point))
ALLOCNO_LIVE_RANGES (a)
= ira_create_allocno_live_range (a, curr_point, -1,
@@ -137,56 +148,39 @@ update_allocno_pressure_excess_length (i
}
}
-/* Process the death of register REGNO. This updates hard_regs_live
- or finishes the current live range for the allocno corresponding to
- REGNO. */
+/* Process the death of allocno A. This finishes the current live
+ range for it. */
static void
-make_regno_dead (int regno)
+make_allocno_dead (ira_allocno_t a)
{
- ira_allocno_t a;
allocno_live_range_t p;
- if (regno < FIRST_PSEUDO_REGISTER)
- {
- CLEAR_HARD_REG_BIT (hard_regs_live, regno);
- return;
- }
- a = ira_curr_regno_allocno_map[regno];
- if (a == NULL)
- return;
p = ALLOCNO_LIVE_RANGES (a);
ira_assert (p != NULL);
p->finish = curr_point;
update_allocno_pressure_excess_length (a);
+ sparseset_clear_bit (allocnos_live, ALLOCNO_NUM (a));
}
/* The current register pressures for each cover class for the current
basic block. */
static int curr_reg_pressure[N_REG_CLASSES];
-/* Mark allocno A as currently living and update current register
- pressure, maximal register pressure for the current BB, start point
- of the register pressure excess, and conflicting hard registers of
- A. */
+/* Record that register pressure for COVER_CLASS increased by N
+ registers. Update the current register pressure, maximal register
+ pressure for the current BB and the start point of the register
+ pressure excess. */
static void
-set_allocno_live (ira_allocno_t a)
+inc_register_pressure (enum reg_class cover_class, int n)
{
int i;
- enum reg_class cover_class, cl;
+ enum reg_class cl;
- /* Invalidate because it is referenced. */
- allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
- if (sparseset_bit_p (allocnos_live, ALLOCNO_NUM (a)))
- return;
- sparseset_set_bit (allocnos_live, ALLOCNO_NUM (a));
- IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a), hard_regs_live);
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a), hard_regs_live);
- cover_class = ALLOCNO_COVER_CLASS (a);
for (i = 0;
(cl = ira_reg_class_super_classes[cover_class][i]) != LIM_REG_CLASSES;
i++)
{
- curr_reg_pressure[cl] += ira_reg_class_nregs[cl][ALLOCNO_MODE (a)];
+ curr_reg_pressure[cl] += n;
if (high_pressure_start_point[cl] < 0
&& (curr_reg_pressure[cl] > ira_available_class_regs[cl]))
high_pressure_start_point[cl] = curr_point;
@@ -195,110 +189,87 @@ set_allocno_live (ira_allocno_t a)
}
}
-/* Mark allocno A as currently not living and update current register
- pressure, start point of the register pressure excess, and register
- pressure excess length for living allocnos. */
+/* Record that register pressure for COVER_CLASS has decreased by
+ NREGS registers; update current register pressure, start point of
+ the register pressure excess, and register pressure excess length
+ for living allocnos. */
+
static void
-clear_allocno_live (ira_allocno_t a)
+dec_register_pressure (enum reg_class cover_class, int nregs)
{
int i;
unsigned int j;
- enum reg_class cover_class, cl;
- bool set_p;
+ enum reg_class cl;
+ bool set_p = false;
- /* Invalidate because it is referenced. */
- allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
- if (sparseset_bit_p (allocnos_live, ALLOCNO_NUM (a)))
+ for (i = 0;
+ (cl = ira_reg_class_super_classes[cover_class][i]) != LIM_REG_CLASSES;
+ i++)
{
- cover_class = ALLOCNO_COVER_CLASS (a);
- set_p = false;
+ curr_reg_pressure[cl] -= nregs;
+ ira_assert (curr_reg_pressure[cl] >= 0);
+ if (high_pressure_start_point[cl] >= 0
+ && curr_reg_pressure[cl] <= ira_available_class_regs[cl])
+ set_p = true;
+ }
+ if (set_p)
+ {
+ EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, j)
+ update_allocno_pressure_excess_length (ira_allocnos[j]);
for (i = 0;
(cl = ira_reg_class_super_classes[cover_class][i])
!= LIM_REG_CLASSES;
i++)
- {
- curr_reg_pressure[cl] -= ira_reg_class_nregs[cl][ALLOCNO_MODE (a)];
- ira_assert (curr_reg_pressure[cl] >= 0);
- if (high_pressure_start_point[cl] >= 0
- && curr_reg_pressure[cl] <= ira_available_class_regs[cl])
- set_p = true;
- }
- if (set_p)
- {
- EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, j)
- update_allocno_pressure_excess_length (ira_allocnos[j]);
- for (i = 0;
- (cl = ira_reg_class_super_classes[cover_class][i])
- != LIM_REG_CLASSES;
- i++)
- if (high_pressure_start_point[cl] >= 0
- && curr_reg_pressure[cl] <= ira_available_class_regs[cl])
- high_pressure_start_point[cl] = -1;
-
- }
+ if (high_pressure_start_point[cl] >= 0
+ && curr_reg_pressure[cl] <= ira_available_class_regs[cl])
+ high_pressure_start_point[cl] = -1;
}
- sparseset_clear_bit (allocnos_live, ALLOCNO_NUM (a));
}
-/* Mark the register REG as live. Store a 1 in hard_regs_live or
- allocnos_live for this register or the corresponding allocno,
- record how many consecutive hardware registers it actually
- needs. */
+/* Mark the pseudo register REGNO as live. Update all information about
+ live ranges and register pressure. */
static void
-mark_reg_live (rtx reg)
+mark_pseudo_regno_live (int regno)
{
- int i, regno;
+ ira_allocno_t a = ira_curr_regno_allocno_map[regno];
+ enum reg_class cl;
+ int nregs;
- gcc_assert (REG_P (reg));
- regno = REGNO (reg);
+ if (a == NULL)
+ return;
- if (regno >= FIRST_PSEUDO_REGISTER)
- {
- ira_allocno_t a = ira_curr_regno_allocno_map[regno];
+ /* Invalidate because it is referenced. */
+ allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
- if (a != NULL)
- {
- if (sparseset_bit_p (allocnos_live, ALLOCNO_NUM (a)))
- {
- /* Invalidate because it is referenced. */
- allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
- return;
- }
- set_allocno_live (a);
- }
- make_regno_born (regno);
- }
- else if (! TEST_HARD_REG_BIT (ira_no_alloc_regs, regno))
+ if (sparseset_bit_p (allocnos_live, ALLOCNO_NUM (a)))
+ return;
+
+ cl = ALLOCNO_COVER_CLASS (a);
+ nregs = ira_reg_class_nregs[cl][ALLOCNO_MODE (a)];
+ inc_register_pressure (cl, nregs);
+ make_allocno_born (a);
+}
+
+/* Mark the hard register REG as live. Store a 1 in hard_regs_live
+ for this register, record how many consecutive hardware registers
+ it actually needs. */
+static void
+mark_hard_reg_live (rtx reg)
+{
+ int regno = REGNO (reg);
+
+ if (! TEST_HARD_REG_BIT (ira_no_alloc_regs, regno))
{
int last = regno + hard_regno_nregs[regno][GET_MODE (reg)];
- enum reg_class cover_class, cl;
while (regno < last)
{
if (! TEST_HARD_REG_BIT (hard_regs_live, regno)
&& ! TEST_HARD_REG_BIT (eliminable_regset, regno))
{
- cover_class = ira_hard_regno_cover_class[regno];
- for (i = 0;
- (cl = ira_reg_class_super_classes[cover_class][i])
- != LIM_REG_CLASSES;
- i++)
- {
- curr_reg_pressure[cl]++;
- if (high_pressure_start_point[cl] < 0
- && (curr_reg_pressure[cl]
- > ira_available_class_regs[cl]))
- high_pressure_start_point[cl] = curr_point;
- }
- make_regno_born (regno);
- for (i = 0;
- (cl = ira_reg_class_super_classes[cover_class][i])
- != LIM_REG_CLASSES;
- i++)
- {
- if (curr_bb_node->reg_pressure[cl] < curr_reg_pressure[cl])
- curr_bb_node->reg_pressure[cl] = curr_reg_pressure[cl];
- }
+ enum reg_class cover_class = ira_hard_regno_cover_class[regno];
+ inc_register_pressure (cover_class, 1);
+ make_hard_regno_born (regno);
}
regno++;
}
@@ -314,74 +285,55 @@ mark_ref_live (df_ref ref)
reg = DF_REF_REG (ref);
if (GET_CODE (reg) == SUBREG)
reg = SUBREG_REG (reg);
- mark_reg_live (reg);
+ if (REGNO (reg) >= FIRST_PSEUDO_REGISTER)
+ mark_pseudo_regno_live (REGNO (reg));
+ else
+ mark_hard_reg_live (reg);
}
-/* Mark the register REG as dead. Store a 0 in hard_regs_live or
- allocnos_live for the register. */
+/* Mark the pseudo register REGNO as dead. Update all information about
+ live ranges and register pressure. */
static void
-mark_reg_dead (rtx reg)
+mark_pseudo_regno_dead (int regno)
{
- int regno;
+ ira_allocno_t a = ira_curr_regno_allocno_map[regno];
+ enum reg_class cl;
+ int nregs;
- gcc_assert (REG_P (reg));
- regno = REGNO (reg);
+ if (a == NULL)
+ return;
- if (regno >= FIRST_PSEUDO_REGISTER)
- {
- ira_allocno_t a = ira_curr_regno_allocno_map[regno];
+ /* Invalidate because it is referenced. */
+ allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
- if (a != NULL)
- {
- if (! sparseset_bit_p (allocnos_live, ALLOCNO_NUM (a)))
- {
- /* Invalidate because it is referenced. */
- allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
- return;
- }
- clear_allocno_live (a);
- }
- make_regno_dead (regno);
- }
- else if (! TEST_HARD_REG_BIT (ira_no_alloc_regs, regno))
+ if (! sparseset_bit_p (allocnos_live, ALLOCNO_NUM (a)))
+ return;
+
+ cl = ALLOCNO_COVER_CLASS (a);
+ nregs = ira_reg_class_nregs[cl][ALLOCNO_MODE (a)];
+ dec_register_pressure (cl, nregs);
+
+ make_allocno_dead (a);
+}
+
+/* Mark the hard register REG as dead. Store a 0 in hard_regs_live
+ for the register. */
+static void
+mark_hard_reg_dead (rtx reg)
+{
+ int regno = REGNO (reg);
+
+ if (! TEST_HARD_REG_BIT (ira_no_alloc_regs, regno))
{
- int i;
- unsigned int j;
int last = regno + hard_regno_nregs[regno][GET_MODE (reg)];
- enum reg_class cover_class, cl;
- bool set_p;
while (regno < last)
{
if (TEST_HARD_REG_BIT (hard_regs_live, regno))
{
- set_p = false;
- cover_class = ira_hard_regno_cover_class[regno];
- for (i = 0;
- (cl = ira_reg_class_super_classes[cover_class][i])
- != LIM_REG_CLASSES;
- i++)
- {
- curr_reg_pressure[cl]--;
- if (high_pressure_start_point[cl] >= 0
- && curr_reg_pressure[cl] <= ira_available_class_regs[cl])
- set_p = true;
- ira_assert (curr_reg_pressure[cl] >= 0);
- }
- if (set_p)
- {
- EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, j)
- update_allocno_pressure_excess_length (ira_allocnos[j]);
- for (i = 0;
- (cl = ira_reg_class_super_classes[cover_class][i])
- != LIM_REG_CLASSES;
- i++)
- if (high_pressure_start_point[cl] >= 0
- && (curr_reg_pressure[cl]
- <= ira_available_class_regs[cl]))
- high_pressure_start_point[cl] = -1;
- }
- make_regno_dead (regno);
+ enum reg_class cover_class = ira_hard_regno_cover_class[regno];
+ dec_register_pressure (cover_class, 1);
+ make_hard_regno_dead (regno);
}
regno++;
}
@@ -402,7 +354,10 @@ mark_ref_dead (df_ref def)
reg = DF_REF_REG (def);
if (GET_CODE (reg) == SUBREG)
reg = SUBREG_REG (reg);
- mark_reg_dead (reg);
+ if (REGNO (reg) >= FIRST_PSEUDO_REGISTER)
+ mark_pseudo_regno_dead (REGNO (reg));
+ else
+ mark_hard_reg_dead (reg);
}
/* Make pseudo REG conflicting with pseudo DREG, if the 1st pseudo
@@ -427,10 +382,10 @@ make_pseudo_conflict (rtx reg, enum reg_
if (advance_p)
curr_point++;
- mark_reg_live (reg);
- mark_reg_live (dreg);
- mark_reg_dead (reg);
- mark_reg_dead (dreg);
+ mark_pseudo_regno_live (REGNO (reg));
+ mark_pseudo_regno_live (REGNO (dreg));
+ mark_pseudo_regno_dead (REGNO (reg));
+ mark_pseudo_regno_dead (REGNO (dreg));
return false;
}
@@ -961,15 +916,7 @@ process_bb_node_lives (ira_loop_tree_nod
}
}
EXECUTE_IF_SET_IN_BITMAP (reg_live_out, FIRST_PSEUDO_REGISTER, j, bi)
- {
- ira_allocno_t a = ira_curr_regno_allocno_map[j];
-
- if (a == NULL)
- continue;
- ira_assert (! sparseset_bit_p (allocnos_live, ALLOCNO_NUM (a)));
- set_allocno_live (a);
- make_regno_born (j);
- }
+ mark_pseudo_regno_live (j);
freq = REG_FREQ_FROM_BB (bb);
if (freq == 0)
@@ -1137,7 +1084,7 @@ process_bb_node_lives (ira_loop_tree_nod
unsigned int regno = EH_RETURN_DATA_REGNO (j);
if (regno == INVALID_REGNUM)
break;
- make_regno_born (regno);
+ make_hard_regno_born (regno);
}
#endif
@@ -1155,7 +1102,7 @@ process_bb_node_lives (ira_loop_tree_nod
ALLOCNO_TOTAL_NO_STACK_REG_P (ira_allocnos[px]) = true;
}
for (px = FIRST_STACK_REG; px <= LAST_STACK_REG; px++)
- make_regno_born (px);
+ make_hard_regno_born (px);
#endif
/* No need to record conflicts for call clobbered regs if we
have nonlocal labels around, as we don't ever try to
@@ -1163,13 +1110,11 @@ process_bb_node_lives (ira_loop_tree_nod
if (!cfun->has_nonlocal_label && bb_has_abnormal_call_pred (bb))
for (px = 0; px < FIRST_PSEUDO_REGISTER; px++)
if (call_used_regs[px])
- make_regno_born (px);
+ make_hard_regno_born (px);
}
EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, i)
- {
- make_regno_dead (ALLOCNO_REGNO (ira_allocnos[i]));
- }
+ make_allocno_dead (ira_allocnos[i]);
curr_point++;
* Re: Patch 2/9: Split up and reorganize some functions
2010-06-18 14:09 ` Patch 2/9: Split up and reorganize some functions Bernd Schmidt
@ 2010-06-18 18:26 ` Jeff Law
2010-06-18 19:07 ` Bernd Schmidt
0 siblings, 1 reply; 42+ messages in thread
From: Jeff Law @ 2010-06-18 18:26 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: GCC Patches
On 06/18/10 08:05, Bernd Schmidt wrote:
> I've also
> fixed up some confusion about when to clear allocno_saved_at_call.
Yea, I had to look at that a few times to convince myself that
everything was OK.
>
> Note that there is a small change in behaviour: in
> inc/dec_register_pressure,
> we no longer recompute nregs as we iterate through the classes. I
> think it's
> more correct this way, and I haven't seen any difference in code
> generation.
There may be oddball architectures where this matters, but I can't think
of one offhand. One could argue that for such an architecture the regs
should be in different classes. If we wanted to be absolutely sure,
we could test for this situation once per compilation unit and ICE --
that might save a developer on one of these targets considerable
debugging time.
I think I'd give port maintainers a quick chance to chime in as to
whether or not their port has registers in a class where the number of
hard regs to represent a given mode varies within the class. Otherwise
it looks fine.
If you wanted to get the bulk of these changes in and hold off on the
behaviour change, then that'd be fine by me.
jeff
* Re: Patch 2/9: Split up and reorganize some functions
2010-06-18 18:26 ` Jeff Law
@ 2010-06-18 19:07 ` Bernd Schmidt
2010-06-21 16:08 ` Jeff Law
0 siblings, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-06-18 19:07 UTC (permalink / raw)
To: Jeff Law; +Cc: GCC Patches
On 06/18/10 17:09, Jeff Law wrote:
> I think I'd give port maintainers a quick chance to chime in as to
> whether or not their port has registers in a class where the number of
> hard regs to represent a given mode varies within the class. Otherwise
> it looks fine.
The reason why I think it's more correct this way is that we're going to
allocate the allocno from within its cover_class. In these functions
we're trying to estimate how many registers are going to be used by an
allocno, and we should use the nregs we compute for that cover class,
even when adjusting the pressure for its superclasses (since none of the
extra registers in the superclasses can be used anyway).
Example: we have a class A of 64 bit registers and a 64 bit allocno with
cover class A; nregs would be 1. The register pressure in A would be
increased by 1, and the pressure in ALL_REGS should also be increased
only by 1, even if ALL_REGS also contains 32 bit registers, since the
existence of those is irrelevant to the allocno we're looking at. The
current code would recompute nregs when cl == ALL_REGS and set it to 2,
which I think is wrong.
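Here's the same arithmetic as a self-contained toy model (not GCC code;
the class layout and DImode register counts are just the assumptions from
the example above):

  #include <stdio.h>

  enum { CLASS_A, CLASS_ALL_REGS, N_CLASSES };

  /* Registers needed to hold one DImode value in each class: 1 in the
     64-bit class A, 2 in ALL_REGS since it also contains 32-bit regs.  */
  static const int nregs_for_dimode[N_CLASSES] = { 1, 2 };

  int
  main (void)
  {
    int old_pressure[N_CLASSES] = { 0, 0 };
    int new_pressure[N_CLASSES] = { 0, 0 };
    int n = nregs_for_dimode[CLASS_A];  /* computed once from the cover class */

    /* Old behaviour: recompute the count for each superclass.  */
    old_pressure[CLASS_A] += nregs_for_dimode[CLASS_A];                /* +1 */
    old_pressure[CLASS_ALL_REGS] += nregs_for_dimode[CLASS_ALL_REGS];  /* +2 */

    /* New behaviour: reuse the cover-class count everywhere.  */
    new_pressure[CLASS_A] += n;                                        /* +1 */
    new_pressure[CLASS_ALL_REGS] += n;                                 /* +1 */

    printf ("old: A=%d ALL_REGS=%d\n",
            old_pressure[CLASS_A], old_pressure[CLASS_ALL_REGS]);
    printf ("new: A=%d ALL_REGS=%d\n",
            new_pressure[CLASS_A], new_pressure[CLASS_ALL_REGS]);
    return 0;
  }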
> If you wanted to get the bulk of these changes in and hold off on the
> behaviour change, then that'd be fine by me.
I'll just wait until we have consensus on that; if we decide it's not
correct I can then figure out how to adjust it (it would make it harder
to share code between the allocno/hard reg cases).
Thanks for the quick reviews.
Bernd
* Re: Patch 2/9: Split up and reorganize some functions
2010-06-18 19:07 ` Bernd Schmidt
@ 2010-06-21 16:08 ` Jeff Law
2010-06-21 16:21 ` Bernd Schmidt
0 siblings, 1 reply; 42+ messages in thread
From: Jeff Law @ 2010-06-21 16:08 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: GCC Patches
On 06/18/10 11:25, Bernd Schmidt wrote:
> On 06/18/10 17:09, Jeff Law wrote:
>
>
>> I think I'd give port maintainers a quick chance to chime in as to
>> whether or not their port has registers in a class where the number of
>> hard regs to represent a given mode varies within the class. Otherwise
>> it looks fine.
>>
> The reason why I think it's more correct this way is that we're going to
> allocate the allocno from within its cover_class. In these functions
> we're trying to estimate how many registers are going to be used by an
> allocno, and we should use the nregs we compute for that cover class,
> even when adjusting the pressure for its superclasses (since none of the
> extra registers in the superclasses can be used anyway).
>
> Example: we have a class A of 64 bit registers and a 64 bit allocno with
> cover class A; nregs would be 1. The register pressure in A would be
> increased by 1, and the pressure in ALL_REGS should also be increased
> only by 1, even if ALL_REGS also contains 32 bit registers, since the
> existence of those is irrelevant to the allocno we're looking at. The
> current code would recompute nregs when cl == ALL_REGS and set it to 2,
> which I think is wrong.
>
I wasn't disagreeing with whether or not this change is more correct --
in fact, I'm in total agreement that it's more correct. I merely had a
concern that a port where the number of hard regs needed to represent a
particular mode varies within the class was going to cause a problem. I
consider the possibility unlikely, particularly since an
under-estimation of the number of available regs merely results in
inefficient code, not incorrect code. I just wanted to give port
maintainers a chance to chime in.
I'd say, give them another 48hrs or so to object, and if none occur, go
forward with the patch.
Jeff
* Re: Patch 2/9: Split up and reorganize some functions
2010-06-21 16:08 ` Jeff Law
@ 2010-06-21 16:21 ` Bernd Schmidt
0 siblings, 0 replies; 42+ messages in thread
From: Bernd Schmidt @ 2010-06-21 16:21 UTC (permalink / raw)
To: Jeff Law; +Cc: GCC Patches
On 06/21/2010 05:54 PM, Jeff Law wrote:
> I wasn't disagreeing with whether or not this change is more correct --
> in fact, I'm in total agreement that it's more correct. I merely had a
> concern that a port where the number of hard regs needed to represent a
> particular mode varies within the class was going to cause a problem.
Yeah, but I don't think that this part of the code is affected at all by
that particular issue. It's an estimate of register pressure, and it
always uses the maximum nregs that could be needed by a class/mode
combination. What am I missing?
The issue of variable nregs inside one class is something I have to deal
with in the final patch (which I expect to post sometime later this
evening, test results look good so far).
> I'd say, give them another 48hrs or so to object, and if none occur, go
> forward with the patch.
Will do. Thanks.
Bernd
* Patch 3/9: create some more small helper functions
2010-06-18 14:08 Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Bernd Schmidt
2010-06-18 14:08 ` Patch 1/9: Remove dead code Bernd Schmidt
2010-06-18 14:09 ` Patch 2/9: Split up and reorganize some functions Bernd Schmidt
@ 2010-06-18 14:10 ` Bernd Schmidt
2010-06-18 22:07 ` Jeff Law
2010-06-18 14:11 ` Patch 4/9: minor formatting fix Bernd Schmidt
` (7 subsequent siblings)
10 siblings, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-06-18 14:10 UTC (permalink / raw)
To: GCC Patches
[-- Attachment #1: Type: text/plain, Size: 1 bytes --]
[-- Attachment #2: parent-allocno.diff --]
[-- Type: text/plain, Size: 7651 bytes --]
This is purely a cleanup patch that creates small helper functions to replace
some ugly if statements with side effects that occur in several places. None of
the subsequent patches depend on this, but I think it's a useful cleanup.
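As a condensed before/after sketch (taken from the process_regs_for_copy
hunk below; not standalone code), the kind of construct being replaced is:

  /* Before: the parent lookup is buried in a condition full of
     assignments.  */
  if (ALLOCNO_CAP (a) != NULL)
    a = ALLOCNO_CAP (a);
  else if ((parent = ALLOCNO_LOOP_TREE_NODE (a)->parent) == NULL
           || (a = parent->regno_allocno_map[ALLOCNO_REGNO (a)]) == NULL)
    break;

  /* After: the walk-up is a named helper and the enclosing loop simply
     runs while (a != NULL).  */
  a = ira_parent_or_cap_allocno (a);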
* ira-int.h (ira_parent_allocno, ira_parent_or_cap_allocno): Declare.
* ira-build.c (ira_parent_allocno, ira_parent_or_cap_allocno): New
functions.
(ira_flattening): Use ira_parent_allocno.
* ira-conflicts.c (process_regs_for_copy, propagate_copies,
build_allocno_conflicts): Use ira_parent_or_cap_allocno.
Index: gcc/ira-build.c
===================================================================
--- gcc.orig/ira-build.c
+++ gcc/ira-build.c
@@ -2416,6 +2416,34 @@ create_caps (void)
IR with one region. */
static ira_allocno_t *regno_top_level_allocno_map;
+/* Find the allocno that corresponds to A at a level one higher up in the
+ loop tree. Returns NULL if A is a cap, or if it has no parent. */
+ira_allocno_t
+ira_parent_allocno (ira_allocno_t a)
+{
+ ira_loop_tree_node_t parent;
+
+ if (ALLOCNO_CAP (a) != NULL)
+ return NULL;
+
+ parent = ALLOCNO_LOOP_TREE_NODE (a)->parent;
+ if (parent == NULL)
+ return NULL;
+
+ return parent->regno_allocno_map[ALLOCNO_REGNO (a)];
+}
+
+/* Find the allocno that corresponds to A at a level one higher up in the
+ loop tree. If ALLOCNO_CAP is set for A, return that. */
+ira_allocno_t
+ira_parent_or_cap_allocno (ira_allocno_t a)
+{
+ if (ALLOCNO_CAP (a) != NULL)
+ return ALLOCNO_CAP (a);
+
+ return ira_parent_allocno (a);
+}
+
/* Process all allocnos originated from pseudo REGNO and copy live
ranges, hard reg conflicts, and allocno stack reg attributes from
low level allocnos to final allocnos which are destinations of
@@ -2478,7 +2506,7 @@ ira_flattening (int max_regno_before_emi
enum reg_class cover_class;
ira_allocno_t a, parent_a, first, second, node_first, node_second;
ira_copy_t cp;
- ira_loop_tree_node_t parent, node;
+ ira_loop_tree_node_t node;
allocno_live_range_t r;
ira_allocno_iterator ai;
ira_copy_iterator ci;
@@ -2513,10 +2541,8 @@ ira_flattening (int max_regno_before_emi
ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
if (ALLOCNO_SOMEWHERE_RENAMED_P (a))
new_pseudos_p = true;
- if (ALLOCNO_CAP (a) != NULL
- || (parent = ALLOCNO_LOOP_TREE_NODE (a)->parent) == NULL
- || ((parent_a = parent->regno_allocno_map[ALLOCNO_REGNO (a)])
- == NULL))
+ parent_a = ira_parent_allocno (a);
+ if (parent_a == NULL)
{
ALLOCNO_COPIES (a) = NULL;
regno_top_level_allocno_map[REGNO (ALLOCNO_REG (a))] = a;
@@ -2564,11 +2590,8 @@ ira_flattening (int max_regno_before_emi
ALLOCNO_COVER_CLASS_COST (parent_a)
-= ALLOCNO_COVER_CLASS_COST (a);
ALLOCNO_MEMORY_COST (parent_a) -= ALLOCNO_MEMORY_COST (a);
- if (ALLOCNO_CAP (parent_a) != NULL
- || (parent
- = ALLOCNO_LOOP_TREE_NODE (parent_a)->parent) == NULL
- || (parent_a = (parent->regno_allocno_map
- [ALLOCNO_REGNO (parent_a)])) == NULL)
+ parent_a = ira_parent_allocno (parent_a);
+ if (parent_a == NULL)
break;
}
ALLOCNO_COPIES (a) = NULL;
Index: gcc/ira-conflicts.c
===================================================================
--- gcc.orig/ira-conflicts.c
+++ gcc/ira-conflicts.c
@@ -346,7 +346,6 @@ process_regs_for_copy (rtx reg1, rtx reg
enum reg_class rclass, cover_class;
enum machine_mode mode;
ira_copy_t cp;
- ira_loop_tree_node_t parent;
gcc_assert (REG_SUBREG_P (reg1) && REG_SUBREG_P (reg2));
only_regs_p = REG_P (reg1) && REG_P (reg2);
@@ -397,7 +396,7 @@ process_regs_for_copy (rtx reg1, rtx reg
cost = ira_get_register_move_cost (mode, cover_class, rclass) * freq;
else
cost = ira_get_register_move_cost (mode, rclass, cover_class) * freq;
- for (;;)
+ do
{
ira_allocate_and_set_costs
(&ALLOCNO_HARD_REG_COSTS (a), cover_class,
@@ -408,12 +407,9 @@ process_regs_for_copy (rtx reg1, rtx reg
ALLOCNO_CONFLICT_HARD_REG_COSTS (a)[index] -= cost;
if (ALLOCNO_HARD_REG_COSTS (a)[index] < ALLOCNO_COVER_CLASS_COST (a))
ALLOCNO_COVER_CLASS_COST (a) = ALLOCNO_HARD_REG_COSTS (a)[index];
- if (ALLOCNO_CAP (a) != NULL)
- a = ALLOCNO_CAP (a);
- else if ((parent = ALLOCNO_LOOP_TREE_NODE (a)->parent) == NULL
- || (a = parent->regno_allocno_map[ALLOCNO_REGNO (a)]) == NULL)
- break;
+ a = ira_parent_or_cap_allocno (a);
}
+ while (a != NULL);
return true;
}
@@ -533,7 +529,6 @@ propagate_copies (void)
ira_copy_t cp;
ira_copy_iterator ci;
ira_allocno_t a1, a2, parent_a1, parent_a2;
- ira_loop_tree_node_t parent;
FOR_EACH_COPY (cp, ci)
{
@@ -542,11 +537,8 @@ propagate_copies (void)
if (ALLOCNO_LOOP_TREE_NODE (a1) == ira_loop_tree_root)
continue;
ira_assert ((ALLOCNO_LOOP_TREE_NODE (a2) != ira_loop_tree_root));
- parent = ALLOCNO_LOOP_TREE_NODE (a1)->parent;
- if ((parent_a1 = ALLOCNO_CAP (a1)) == NULL)
- parent_a1 = parent->regno_allocno_map[ALLOCNO_REGNO (a1)];
- if ((parent_a2 = ALLOCNO_CAP (a2)) == NULL)
- parent_a2 = parent->regno_allocno_map[ALLOCNO_REGNO (a2)];
+ parent_a1 = ira_parent_or_cap_allocno (a1);
+ parent_a2 = ira_parent_or_cap_allocno (a2);
ira_assert (parent_a1 != NULL && parent_a2 != NULL);
if (! CONFLICT_ALLOCNO_P (parent_a1, parent_a2))
ira_add_allocno_copy (parent_a1, parent_a2, cp->freq,
@@ -565,7 +557,6 @@ build_allocno_conflicts (ira_allocno_t a
{
int i, px, parent_num;
int conflict_bit_vec_words_num;
- ira_loop_tree_node_t parent;
ira_allocno_t parent_a, another_a, another_parent_a;
ira_allocno_t *vec;
IRA_INT_TYPE *allocno_conflicts;
@@ -601,13 +592,9 @@ build_allocno_conflicts (ira_allocno_t a
ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a)
= conflict_bit_vec_words_num * sizeof (IRA_INT_TYPE);
}
- parent = ALLOCNO_LOOP_TREE_NODE (a)->parent;
- if ((parent_a = ALLOCNO_CAP (a)) == NULL
- && (parent == NULL
- || (parent_a = parent->regno_allocno_map[ALLOCNO_REGNO (a)])
- == NULL))
+ parent_a = ira_parent_or_cap_allocno (a);
+ if (parent_a == NULL)
return;
- ira_assert (parent != NULL);
ira_assert (ALLOCNO_COVER_CLASS (a) == ALLOCNO_COVER_CLASS (parent_a));
parent_num = ALLOCNO_NUM (parent_a);
FOR_EACH_ALLOCNO_IN_SET (allocno_conflicts,
@@ -616,9 +603,8 @@ build_allocno_conflicts (ira_allocno_t a
another_a = ira_conflict_id_allocno_map[i];
ira_assert (ira_reg_classes_intersect_p
[ALLOCNO_COVER_CLASS (a)][ALLOCNO_COVER_CLASS (another_a)]);
- if ((another_parent_a = ALLOCNO_CAP (another_a)) == NULL
- && (another_parent_a = (parent->regno_allocno_map
- [ALLOCNO_REGNO (another_a)])) == NULL)
+ another_parent_a = ira_parent_or_cap_allocno (another_a);
+ if (another_parent_a == NULL)
continue;
ira_assert (ALLOCNO_NUM (another_parent_a) >= 0);
ira_assert (ALLOCNO_COVER_CLASS (another_a)
Index: gcc/ira-int.h
===================================================================
--- gcc.orig/ira-int.h
+++ gcc/ira-int.h
@@ -838,6 +838,8 @@ extern void ira_debug_allocno_copies (ir
extern void ira_traverse_loop_tree (bool, ira_loop_tree_node_t,
void (*) (ira_loop_tree_node_t),
void (*) (ira_loop_tree_node_t));
+extern ira_allocno_t ira_parent_allocno (ira_allocno_t);
+extern ira_allocno_t ira_parent_or_cap_allocno (ira_allocno_t);
extern ira_allocno_t ira_create_allocno (int, bool, ira_loop_tree_node_t);
extern void ira_set_allocno_cover_class (ira_allocno_t, enum reg_class);
extern bool ira_conflict_vector_profitable_p (ira_allocno_t, int);
* Re: Patch 3/9: create some more small helper functions
2010-06-18 14:10 ` Patch 3/9: create some more small helper functions Bernd Schmidt
@ 2010-06-18 22:07 ` Jeff Law
0 siblings, 0 replies; 42+ messages in thread
From: Jeff Law @ 2010-06-18 22:07 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: GCC Patches
On 06/18/10 08:06, Bernd Schmidt wrote:
> * ira-int.h (ira_parent_allocno, ira_parent_or_cap_allocno): Declare.
> * ira-build.c (ira_parent_allocno, ira_parent_or_cap_allocno): New
> functions.
> (ira_flattening): Use ira_parent_allocno.
> * ira-conflicts.c (process_regs_for_copy, propagate_copies)
> build_allocno_conflicts): Use ira_parent_or_cap_allocno.
I agree the original code was rather confusing. It wasn't trivial to
verify that the code would work the same before/after your changes,
primarily due to the convoluted way some of the original code was written.
Thanks. Please install,
Jeff
* Patch 4/9: minor formatting fix
2010-06-18 14:08 Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Bernd Schmidt
` (2 preceding siblings ...)
2010-06-18 14:10 ` Patch 3/9: create some more small helper functions Bernd Schmidt
@ 2010-06-18 14:11 ` Bernd Schmidt
2010-06-18 14:19 ` Jeff Law
2010-06-18 14:12 ` Patch 5/9: rename allocno_set to minmax_set Bernd Schmidt
` (6 subsequent siblings)
10 siblings, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-06-18 14:11 UTC (permalink / raw)
To: GCC Patches
[-- Attachment #1: Type: text/plain, Size: 1 bytes --]
[-- Attachment #2: formatting.diff --]
[-- Type: text/plain, Size: 679 bytes --]
A minor cleanup to make the code more readable.
* ira-color.c (assign_hard_reg): Improve formatting of multi-line for
statement.
Index: gcc/ira-color.c
===================================================================
--- gcc.orig/ira-color.c
+++ gcc/ira-color.c
@@ -485,9 +485,8 @@ assign_hard_reg (ira_allocno_t allocno,
#ifdef STACK_REGS
no_stack_reg_p = no_stack_reg_p || ALLOCNO_TOTAL_NO_STACK_REG_P (a);
#endif
- for (cost = ALLOCNO_UPDATED_COVER_CLASS_COST (a), i = 0;
- i < class_size;
- i++)
+ cost = ALLOCNO_UPDATED_COVER_CLASS_COST (a);
+ for (i = 0; i < class_size; i++)
if (a_costs != NULL)
{
costs[i] += a_costs[i];
* Patch 5/9: rename allocno_set to minmax_set
2010-06-18 14:08 Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Bernd Schmidt
` (3 preceding siblings ...)
2010-06-18 14:11 ` Patch 4/9: minor formatting fix Bernd Schmidt
@ 2010-06-18 14:12 ` Bernd Schmidt
2010-06-18 14:42 ` Jeff Law
2010-06-18 14:25 ` Patch 6/9: remove "allocno" from live_range_t Bernd Schmidt
` (5 subsequent siblings)
10 siblings, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-06-18 14:12 UTC (permalink / raw)
To: GCC Patches
[-- Attachment #1: Type: text/plain, Size: 1 bytes --]
[-- Attachment #2: minmax-set.diff --]
[-- Type: text/plain, Size: 10215 bytes --]
This renames the bitset implementation in ira-int.h so that it no longer
has ALLOCNO in its name. I've called this type of set a MINMAX_SET, which
seems like a good description to me. Keeping track of min/max values to
reduce the size of the bit vector is what separates this from an sbitmap.
We could move this to sbitmap.h later IMO.
Currently these sets are still all used to keep track of allocnos, but
subsequent patches will change this.
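The indexing idea can be sketched with a toy standalone version (not the
IRA macros; IRA's versions additionally range-check under
ENABLE_IRA_CHECKING and use HOST_WIDE_INT words):

  #include <limits.h>

  /* A min/max set stores bit I (MIN <= I <= MAX) at offset I - MIN, so
     the vector only needs MAX - MIN + 1 bits rather than MAX + 1.  */
  #define WORD_BITS (sizeof (unsigned long) * CHAR_BIT)

  static void
  minmax_set_bit (unsigned long *vec, int i, int min)
  {
    vec[(unsigned) (i - min) / WORD_BITS]
      |= 1UL << ((unsigned) (i - min) % WORD_BITS);
  }

  static int
  minmax_test_bit (const unsigned long *vec, int i, int min)
  {
    return (vec[(unsigned) (i - min) / WORD_BITS]
            >> ((unsigned) (i - min) % WORD_BITS)) & 1;
  }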
* ira-int.h (SET_MINMAX_SET_BIT, CLEAR_MINMAX_SET_BIT,
TEST_MINMAX_SET_BIT, minmax_set_iterator, minmax_set_iter_init,
minmax_set_iter_cond, minmax_set_iter_next,
FOR_EACH_BIT_IN_MINMAX_SET): Renamed from SET_ALLOCNO_SET_BIT,
CLEAR_ALLOCNO_SET_BIT, TEST_ALLOCNO_SET_BIT, ira_allocno_set_iterator,
ira_allocno_set_iter_init, ira_allocno_set_iter_cond,
ira_allocno_set_iter_next and FOR_EACH_ALLOCNO_IN_SET. All
uses changed.
Index: gcc/ira-conflicts.c
===================================================================
--- gcc.orig/ira-conflicts.c
+++ gcc/ira-conflicts.c
@@ -54,10 +54,10 @@ static IRA_INT_TYPE **conflicts;
#define CONFLICT_ALLOCNO_P(A1, A2) \
(ALLOCNO_MIN (A1) <= ALLOCNO_CONFLICT_ID (A2) \
&& ALLOCNO_CONFLICT_ID (A2) <= ALLOCNO_MAX (A1) \
- && TEST_ALLOCNO_SET_BIT (conflicts[ALLOCNO_NUM (A1)], \
- ALLOCNO_CONFLICT_ID (A2), \
- ALLOCNO_MIN (A1), \
- ALLOCNO_MAX (A1)))
+ && TEST_MINMAX_SET_BIT (conflicts[ALLOCNO_NUM (A1)], \
+ ALLOCNO_CONFLICT_ID (A2), \
+ ALLOCNO_MIN (A1), \
+ ALLOCNO_MAX (A1)))
\f
@@ -142,13 +142,13 @@ build_conflict_bit_table (void)
/* Don't set up conflict for the allocno with itself. */
&& num != (int) j)
{
- SET_ALLOCNO_SET_BIT (conflicts[num],
- ALLOCNO_CONFLICT_ID (live_a),
- ALLOCNO_MIN (allocno),
- ALLOCNO_MAX (allocno));
- SET_ALLOCNO_SET_BIT (conflicts[j], id,
- ALLOCNO_MIN (live_a),
- ALLOCNO_MAX (live_a));
+ SET_MINMAX_SET_BIT (conflicts[num],
+ ALLOCNO_CONFLICT_ID (live_a),
+ ALLOCNO_MIN (allocno),
+ ALLOCNO_MAX (allocno));
+ SET_MINMAX_SET_BIT (conflicts[j], id,
+ ALLOCNO_MIN (live_a),
+ ALLOCNO_MAX (live_a));
}
}
}
@@ -569,12 +569,12 @@ build_allocno_conflicts (ira_allocno_t a
ira_allocno_t parent_a, another_a, another_parent_a;
ira_allocno_t *vec;
IRA_INT_TYPE *allocno_conflicts;
- ira_allocno_set_iterator asi;
+ minmax_set_iterator asi;
allocno_conflicts = conflicts[ALLOCNO_NUM (a)];
px = 0;
- FOR_EACH_ALLOCNO_IN_SET (allocno_conflicts,
- ALLOCNO_MIN (a), ALLOCNO_MAX (a), i, asi)
+ FOR_EACH_BIT_IN_MINMAX_SET (allocno_conflicts,
+ ALLOCNO_MIN (a), ALLOCNO_MAX (a), i, asi)
{
another_a = ira_conflict_id_allocno_map[i];
ira_assert (ira_reg_classes_intersect_p
@@ -610,8 +610,8 @@ build_allocno_conflicts (ira_allocno_t a
ira_assert (parent != NULL);
ira_assert (ALLOCNO_COVER_CLASS (a) == ALLOCNO_COVER_CLASS (parent_a));
parent_num = ALLOCNO_NUM (parent_a);
- FOR_EACH_ALLOCNO_IN_SET (allocno_conflicts,
- ALLOCNO_MIN (a), ALLOCNO_MAX (a), i, asi)
+ FOR_EACH_BIT_IN_MINMAX_SET (allocno_conflicts,
+ ALLOCNO_MIN (a), ALLOCNO_MAX (a), i, asi)
{
another_a = ira_conflict_id_allocno_map[i];
ira_assert (ira_reg_classes_intersect_p
@@ -623,10 +623,10 @@ build_allocno_conflicts (ira_allocno_t a
ira_assert (ALLOCNO_NUM (another_parent_a) >= 0);
ira_assert (ALLOCNO_COVER_CLASS (another_a)
== ALLOCNO_COVER_CLASS (another_parent_a));
- SET_ALLOCNO_SET_BIT (conflicts[parent_num],
- ALLOCNO_CONFLICT_ID (another_parent_a),
- ALLOCNO_MIN (parent_a),
- ALLOCNO_MAX (parent_a));
+ SET_MINMAX_SET_BIT (conflicts[parent_num],
+ ALLOCNO_CONFLICT_ID (another_parent_a),
+ ALLOCNO_MIN (parent_a),
+ ALLOCNO_MAX (parent_a));
}
}
Index: gcc/ira-int.h
===================================================================
--- gcc.orig/ira-int.h
+++ gcc/ira-int.h
@@ -567,10 +567,16 @@ extern int ira_move_loops_num, ira_addit
/* Maximal value of element of array ira_reg_class_nregs. */
extern int ira_max_nregs;
+\f
+/* This page contains a bitset implementation called 'min/max sets' used to
+ record conflicts in IRA.
+ They are named min/max sets since we keep track of a minimum and a maximum
+ bit number for each set representing the bounds of valid elements. Otherwise,
+ the implementation resembles sbitmaps in that we store an array of integers
+ whose bits directly represent the members of the set. */
-/* The number of bits in each element of array used to implement a bit
- vector of allocnos and what type that element has. We use the
- largest integer format on the host machine. */
+/* The type used as elements in the array, and the number of bits in
+ this type. */
#define IRA_INT_BITS HOST_BITS_PER_WIDE_INT
#define IRA_INT_TYPE HOST_WIDE_INT
@@ -579,7 +585,7 @@ extern int ira_max_nregs;
MAX. */
#if defined ENABLE_IRA_CHECKING && (GCC_VERSION >= 2007)
-#define SET_ALLOCNO_SET_BIT(R, I, MIN, MAX) __extension__ \
+#define SET_MINMAX_SET_BIT(R, I, MIN, MAX) __extension__ \
(({ int _min = (MIN), _max = (MAX), _i = (I); \
if (_i < _min || _i > _max) \
{ \
@@ -592,7 +598,7 @@ extern int ira_max_nregs;
|= ((IRA_INT_TYPE) 1 << ((unsigned) (_i - _min) % IRA_INT_BITS))); }))
-#define CLEAR_ALLOCNO_SET_BIT(R, I, MIN, MAX) __extension__ \
+#define CLEAR_MINMAX_SET_BIT(R, I, MIN, MAX) __extension__ \
(({ int _min = (MIN), _max = (MAX), _i = (I); \
if (_i < _min || _i > _max) \
{ \
@@ -604,7 +610,7 @@ extern int ira_max_nregs;
((R)[(unsigned) (_i - _min) / IRA_INT_BITS] \
&= ~((IRA_INT_TYPE) 1 << ((unsigned) (_i - _min) % IRA_INT_BITS))); }))
-#define TEST_ALLOCNO_SET_BIT(R, I, MIN, MAX) __extension__ \
+#define TEST_MINMAX_SET_BIT(R, I, MIN, MAX) __extension__ \
(({ int _min = (MIN), _max = (MAX), _i = (I); \
if (_i < _min || _i > _max) \
{ \
@@ -618,25 +624,24 @@ extern int ira_max_nregs;
#else
-#define SET_ALLOCNO_SET_BIT(R, I, MIN, MAX) \
+#define SET_MINMAX_SET_BIT(R, I, MIN, MAX) \
((R)[(unsigned) ((I) - (MIN)) / IRA_INT_BITS] \
|= ((IRA_INT_TYPE) 1 << ((unsigned) ((I) - (MIN)) % IRA_INT_BITS)))
-#define CLEAR_ALLOCNO_SET_BIT(R, I, MIN, MAX) \
+#define CLEAR_MINMAX_SET_BIT(R, I, MIN, MAX) \
((R)[(unsigned) ((I) - (MIN)) / IRA_INT_BITS] \
&= ~((IRA_INT_TYPE) 1 << ((unsigned) ((I) - (MIN)) % IRA_INT_BITS)))
-#define TEST_ALLOCNO_SET_BIT(R, I, MIN, MAX) \
+#define TEST_MINMAX_SET_BIT(R, I, MIN, MAX) \
((R)[(unsigned) ((I) - (MIN)) / IRA_INT_BITS] \
& ((IRA_INT_TYPE) 1 << ((unsigned) ((I) - (MIN)) % IRA_INT_BITS)))
#endif
-/* The iterator for allocno set implemented ed as allocno bit
- vector. */
+/* The iterator for min/max sets. */
typedef struct {
- /* Array containing the allocno bit vector. */
+ /* Array containing the bit vector. */
IRA_INT_TYPE *vec;
/* The number of the current element in the vector. */
@@ -653,13 +658,13 @@ typedef struct {
/* The word of the bit vector currently visited. */
unsigned IRA_INT_TYPE word;
-} ira_allocno_set_iterator;
+} minmax_set_iterator;
-/* Initialize the iterator I for allocnos bit vector VEC containing
- minimal and maximal values MIN and MAX. */
+/* Initialize the iterator I for bit vector VEC containing minimal and
+ maximal values MIN and MAX. */
static inline void
-ira_allocno_set_iter_init (ira_allocno_set_iterator *i,
- IRA_INT_TYPE *vec, int min, int max)
+minmax_set_iter_init (minmax_set_iterator *i, IRA_INT_TYPE *vec, int min,
+ int max)
{
i->vec = vec;
i->word_num = 0;
@@ -669,11 +674,11 @@ ira_allocno_set_iter_init (ira_allocno_s
i->word = i->nel == 0 ? 0 : vec[0];
}
-/* Return TRUE if we have more allocnos to visit, in which case *N is
- set to the allocno number to be visited. Otherwise, return
+/* Return TRUE if we have more elements to visit, in which case *N is
+ set to the number of the element to be visited. Otherwise, return
FALSE. */
static inline bool
-ira_allocno_set_iter_cond (ira_allocno_set_iterator *i, int *n)
+minmax_set_iter_cond (minmax_set_iterator *i, int *n)
{
/* Skip words that are zeros. */
for (; i->word == 0; i->word = i->vec[i->word_num])
@@ -695,23 +700,23 @@ ira_allocno_set_iter_cond (ira_allocno_s
return true;
}
-/* Advance to the next allocno in the set. */
+/* Advance to the next element in the set. */
static inline void
-ira_allocno_set_iter_next (ira_allocno_set_iterator *i)
+minmax_set_iter_next (minmax_set_iterator *i)
{
i->word >>= 1;
i->bit_num++;
}
-/* Loop over all elements of allocno set given by bit vector VEC and
+/* Loop over all elements of a min/max set given by bit vector VEC and
their minimal and maximal values MIN and MAX. In each iteration, N
is set to the number of next allocno. ITER is an instance of
- ira_allocno_set_iterator used to iterate the allocnos in the set. */
-#define FOR_EACH_ALLOCNO_IN_SET(VEC, MIN, MAX, N, ITER) \
- for (ira_allocno_set_iter_init (&(ITER), (VEC), (MIN), (MAX)); \
- ira_allocno_set_iter_cond (&(ITER), &(N)); \
- ira_allocno_set_iter_next (&(ITER)))
-
+ minmax_set_iterator used to iterate over the set. */
+#define FOR_EACH_BIT_IN_MINMAX_SET(VEC, MIN, MAX, N, ITER) \
+ for (minmax_set_iter_init (&(ITER), (VEC), (MIN), (MAX)); \
+ minmax_set_iter_cond (&(ITER), &(N)); \
+ minmax_set_iter_next (&(ITER)))
+\f
/* ira.c: */
/* Map: hard regs X modes -> set of hard registers for storing value
Index: gcc/ira-build.c
===================================================================
--- gcc.orig/ira-build.c
+++ gcc/ira-build.c
@@ -670,7 +670,7 @@ add_to_allocno_conflicts (ira_allocno_t
}
ALLOCNO_MAX (a1) = id;
}
- SET_ALLOCNO_SET_BIT (vec, id, ALLOCNO_MIN (a1), ALLOCNO_MAX (a1));
+ SET_MINMAX_SET_BIT (vec, id, ALLOCNO_MIN (a1), ALLOCNO_MAX (a1));
}
}
* Re: Patch 5/9: rename allocno_set to minmax_set
2010-06-18 14:12 ` Patch 5/9: rename allocno_set to minmax_set Bernd Schmidt
@ 2010-06-18 14:42 ` Jeff Law
0 siblings, 0 replies; 42+ messages in thread
From: Jeff Law @ 2010-06-18 14:42 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: GCC Patches
On 06/18/10 08:08, Bernd Schmidt wrote:
> * ira-int.h (SET_MINMAX_SET_BIT, CLEAR_MINMAX_SET_BIT,
> TEST_MINMAX_SET_BIT, minmax_set_iterator, minmax_set_iter_init,
> minmax_set_iter_cond, minmax_set_iter_next,
> FOR_EACH_BIT_IN_MINMAX_SET): Renamed from SET_ALLOCNO_SET_BIT,
> CLEAR_ALLOCNO_SET_BIT, TEST_ALLOCNO_SET_BIT, ira_allocno_set_iterator,
> ira_allocno_set_iter_init, ira_allocno_set_iter_cond,
> ira_allocno_set_iter_Next and FOR_EACH_ALLOCNO_IN_ALLOCNO_SET. All
> uses changed.
OK.
Jeff
* Patch 6/9: remove "allocno" from live_range_t
2010-06-18 14:08 Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Bernd Schmidt
` (4 preceding siblings ...)
2010-06-18 14:12 ` Patch 5/9: rename allocno_set to minmax_set Bernd Schmidt
@ 2010-06-18 14:25 ` Bernd Schmidt
2010-06-18 18:16 ` Jeff Law
2010-06-18 14:37 ` Patch 7/9: Introduce ira_object_t Bernd Schmidt
` (4 subsequent siblings)
10 siblings, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-06-18 14:25 UTC (permalink / raw)
To: GCC Patches
[-- Attachment #1: Type: text/plain, Size: 0 bytes --]
[-- Attachment #2: live_range_t.diff --]
[-- Type: text/plain, Size: 16520 bytes --]
This is another patch to remove "allocno" from names, this time involving
live ranges. Subsequent patches will change IRA to associate live ranges
with objects other than allocnos; even without that motivation I believe
that the code becomes more readable due to the shorter identifiers. There's
really no need to call a live range an allocno_live_range.
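For orientation, the renamed structure looks roughly like this (a sketch;
the field list is inferred from the hunks below and the exact declarations
live in ira-int.h):

  /* A live range: the interval [START, FINISH] of program points during
     which ALLOCNO is live, chained through NEXT.  */
  struct live_range
  {
    ira_allocno_t allocno;     /* the allocno this range belongs to */
    int start, finish;         /* first and last program point of the range */
    struct live_range *next;   /* next range of the same allocno */
  };
  typedef struct live_range *live_range_t;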
* ira-int.h (struct live_range, live_range_t): Renamed from struct
ira_allocno_live_range and allocno_live_range_t; all uses changed.
* ira-build.c (live_range_pool): Renamed from allocno_live_range_pool.
All uses changed.
Index: gcc/ira-build.c
===================================================================
--- gcc.orig/ira-build.c
+++ gcc/ira-build.c
@@ -383,8 +383,8 @@ rebuild_regno_allocno_maps (void)
\f
-/* Pools for allocnos and allocno live ranges. */
-static alloc_pool allocno_pool, allocno_live_range_pool;
+/* Pools for allocnos and live ranges. */
+static alloc_pool allocno_pool, live_range_pool;
/* Vec containing references to all created allocnos. It is a
container of array allocnos. */
@@ -398,9 +398,9 @@ static VEC(ira_allocno_t,heap) *ira_conf
static void
initiate_allocnos (void)
{
- allocno_live_range_pool
- = create_alloc_pool ("allocno live ranges",
- sizeof (struct ira_allocno_live_range), 100);
+ live_range_pool
+ = create_alloc_pool ("live ranges",
+ sizeof (struct live_range), 100);
allocno_pool
= create_alloc_pool ("allocnos", sizeof (struct ira_allocno), 100);
allocno_vec = VEC_alloc (ira_allocno_t, heap, max_reg_num () * 2);
@@ -812,13 +812,13 @@ create_cap_allocno (ira_allocno_t a)
}
/* Create and return allocno live range with given attributes. */
-allocno_live_range_t
+live_range_t
ira_create_allocno_live_range (ira_allocno_t a, int start, int finish,
- allocno_live_range_t next)
+ live_range_t next)
{
- allocno_live_range_t p;
+ live_range_t p;
- p = (allocno_live_range_t) pool_alloc (allocno_live_range_pool);
+ p = (live_range_t) pool_alloc (live_range_pool);
p->allocno = a;
p->start = start;
p->finish = finish;
@@ -827,22 +827,22 @@ ira_create_allocno_live_range (ira_alloc
}
/* Copy allocno live range R and return the result. */
-static allocno_live_range_t
-copy_allocno_live_range (allocno_live_range_t r)
+static live_range_t
+copy_allocno_live_range (live_range_t r)
{
- allocno_live_range_t p;
+ live_range_t p;
- p = (allocno_live_range_t) pool_alloc (allocno_live_range_pool);
+ p = (live_range_t) pool_alloc (live_range_pool);
*p = *r;
return p;
}
/* Copy allocno live range list given by its head R and return the
result. */
-allocno_live_range_t
-ira_copy_allocno_live_range_list (allocno_live_range_t r)
+live_range_t
+ira_copy_allocno_live_range_list (live_range_t r)
{
- allocno_live_range_t p, first, last;
+ live_range_t p, first, last;
if (r == NULL)
return NULL;
@@ -861,11 +861,10 @@ ira_copy_allocno_live_range_list (allocn
/* Merge ranges R1 and R2 and returns the result. The function
maintains the order of ranges and tries to minimize number of the
result ranges. */
-allocno_live_range_t
-ira_merge_allocno_live_ranges (allocno_live_range_t r1,
- allocno_live_range_t r2)
+live_range_t
+ira_merge_allocno_live_ranges (live_range_t r1, live_range_t r2)
{
- allocno_live_range_t first, last, temp;
+ live_range_t first, last, temp;
if (r1 == NULL)
return r2;
@@ -939,8 +938,7 @@ ira_merge_allocno_live_ranges (allocno_l
/* Return TRUE if live ranges R1 and R2 intersect. */
bool
-ira_allocno_live_ranges_intersect_p (allocno_live_range_t r1,
- allocno_live_range_t r2)
+ira_allocno_live_ranges_intersect_p (live_range_t r1, live_range_t r2)
{
/* Remember the live ranges are always kept ordered. */
while (r1 != NULL && r2 != NULL)
@@ -957,16 +955,16 @@ ira_allocno_live_ranges_intersect_p (all
/* Free allocno live range R. */
void
-ira_finish_allocno_live_range (allocno_live_range_t r)
+ira_finish_allocno_live_range (live_range_t r)
{
- pool_free (allocno_live_range_pool, r);
+ pool_free (live_range_pool, r);
}
/* Free list of allocno live ranges starting with R. */
void
-ira_finish_allocno_live_range_list (allocno_live_range_t r)
+ira_finish_allocno_live_range_list (live_range_t r)
{
- allocno_live_range_t next_r;
+ live_range_t next_r;
for (; r != NULL; r = next_r)
{
@@ -1027,7 +1025,7 @@ finish_allocnos (void)
VEC_free (ira_allocno_t, heap, ira_conflict_id_allocno_map_vec);
VEC_free (ira_allocno_t, heap, allocno_vec);
free_alloc_pool (allocno_pool);
- free_alloc_pool (allocno_live_range_pool);
+ free_alloc_pool (live_range_pool);
}
\f
@@ -1658,7 +1656,7 @@ create_allocnos (void)
/* The function changes allocno in range list given by R onto A. */
static void
-change_allocno_in_range_list (allocno_live_range_t r, ira_allocno_t a)
+change_allocno_in_range_list (live_range_t r, ira_allocno_t a)
{
for (; r != NULL; r = r->next)
r->allocno = a;
@@ -1668,7 +1666,7 @@ change_allocno_in_range_list (allocno_li
static void
move_allocno_live_ranges (ira_allocno_t from, ira_allocno_t to)
{
- allocno_live_range_t lr = ALLOCNO_LIVE_RANGES (from);
+ live_range_t lr = ALLOCNO_LIVE_RANGES (from);
if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
{
@@ -1688,7 +1686,7 @@ move_allocno_live_ranges (ira_allocno_t
static void
copy_allocno_live_ranges (ira_allocno_t from, ira_allocno_t to)
{
- allocno_live_range_t lr = ALLOCNO_LIVE_RANGES (from);
+ live_range_t lr = ALLOCNO_LIVE_RANGES (from);
if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
{
@@ -2148,7 +2146,7 @@ update_bad_spill_attribute (void)
int i;
ira_allocno_t a;
ira_allocno_iterator ai;
- allocno_live_range_t r;
+ live_range_t r;
enum reg_class cover_class;
bitmap_head dead_points[N_REG_CLASSES];
@@ -2199,7 +2197,7 @@ setup_min_max_allocno_live_range_point (
int i;
ira_allocno_t a, parent_a, cap;
ira_allocno_iterator ai;
- allocno_live_range_t r;
+ live_range_t r;
ira_loop_tree_node_t parent;
FOR_EACH_ALLOCNO (a, ai)
@@ -2496,7 +2494,7 @@ ira_flattening (int max_regno_before_emi
ira_allocno_t a, parent_a, first, second, node_first, node_second;
ira_copy_t cp;
ira_loop_tree_node_t node;
- allocno_live_range_t r;
+ live_range_t r;
ira_allocno_iterator ai;
ira_copy_iterator ci;
sparseset allocnos_live;
@@ -2864,7 +2862,7 @@ ira_build (bool loops_p)
{
int n, nr;
ira_allocno_t a;
- allocno_live_range_t r;
+ live_range_t r;
ira_allocno_iterator ai;
n = 0;
Index: gcc/ira-color.c
===================================================================
--- gcc.orig/ira-color.c
+++ gcc/ira-color.c
@@ -2493,7 +2493,7 @@ collect_spilled_coalesced_allocnos (int
/* Array of live ranges of size IRA_ALLOCNOS_NUM. Live range for
given slot contains live ranges of coalesced allocnos assigned to
given slot. */
-static allocno_live_range_t *slot_coalesced_allocnos_live_ranges;
+static live_range_t *slot_coalesced_allocnos_live_ranges;
/* Return TRUE if coalesced allocnos represented by ALLOCNO has live
ranges intersected with live ranges of coalesced allocnos assigned
@@ -2522,7 +2522,7 @@ setup_slot_coalesced_allocno_live_ranges
{
int n;
ira_allocno_t a;
- allocno_live_range_t r;
+ live_range_t r;
n = ALLOCNO_TEMP (allocno);
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
@@ -2551,10 +2551,9 @@ coalesce_spill_slots (ira_allocno_t *spi
bitmap set_jump_crosses = regstat_get_setjmp_crosses ();
slot_coalesced_allocnos_live_ranges
- = (allocno_live_range_t *) ira_allocate (sizeof (allocno_live_range_t)
- * ira_allocnos_num);
+ = (live_range_t *) ira_allocate (sizeof (live_range_t) * ira_allocnos_num);
memset (slot_coalesced_allocnos_live_ranges, 0,
- sizeof (allocno_live_range_t) * ira_allocnos_num);
+ sizeof (live_range_t) * ira_allocnos_num);
last_coalesced_allocno_num = 0;
/* Coalesce non-conflicting spilled allocnos preferring most
frequently used. */
@@ -3244,7 +3243,7 @@ fast_allocation (void)
enum machine_mode mode;
ira_allocno_t a;
ira_allocno_iterator ai;
- allocno_live_range_t r;
+ live_range_t r;
HARD_REG_SET conflict_hard_regs, *used_hard_regs;
sorted_allocnos = (ira_allocno_t *) ira_allocate (sizeof (ira_allocno_t)
Index: gcc/ira-conflicts.c
===================================================================
--- gcc.orig/ira-conflicts.c
+++ gcc/ira-conflicts.c
@@ -71,7 +71,7 @@ build_conflict_bit_table (void)
unsigned int j;
enum reg_class cover_class;
ira_allocno_t allocno, live_a;
- allocno_live_range_t r;
+ live_range_t r;
ira_allocno_iterator ai;
sparseset allocnos_live;
int allocno_set_words;
Index: gcc/ira-emit.c
===================================================================
--- gcc.orig/ira-emit.c
+++ gcc/ira-emit.c
@@ -913,7 +913,7 @@ add_range_and_copies_from_move_list (mov
move_t move;
ira_allocno_t to, from, a;
ira_copy_t cp;
- allocno_live_range_t r;
+ live_range_t r;
bitmap_iterator bi;
HARD_REG_SET hard_regs_live;
Index: gcc/ira-int.h
===================================================================
--- gcc.orig/ira-int.h
+++ gcc/ira-int.h
@@ -59,7 +59,7 @@ extern FILE *ira_dump_file;
/* Typedefs for pointers to allocno live range, allocno, and copy of
allocnos. */
-typedef struct ira_allocno_live_range *allocno_live_range_t;
+typedef struct live_range *live_range_t;
typedef struct ira_allocno *ira_allocno_t;
typedef struct ira_allocno_copy *ira_copy_t;
@@ -196,7 +196,7 @@ extern ira_loop_tree_node_t ira_loop_nod
conflicts for other allocnos (e.g. to assign stack memory slot) we
use the live ranges. If the live ranges of two allocnos are
intersected, the allocnos are in conflict. */
-struct ira_allocno_live_range
+struct live_range
{
/* Allocno whose live range is described by given structure. */
ira_allocno_t allocno;
@@ -204,9 +204,9 @@ struct ira_allocno_live_range
int start, finish;
/* Next structure describing program points where the allocno
lives. */
- allocno_live_range_t next;
+ live_range_t next;
/* Pointer to structures with the same start/finish. */
- allocno_live_range_t start_next, finish_next;
+ live_range_t start_next, finish_next;
};
/* Program points are enumerated by numbers from range
@@ -220,7 +220,7 @@ extern int ira_max_point;
/* Arrays of size IRA_MAX_POINT mapping a program point to the allocno
live ranges with given start/finish point. */
-extern allocno_live_range_t *ira_start_point_ranges, *ira_finish_point_ranges;
+extern live_range_t *ira_start_point_ranges, *ira_finish_point_ranges;
/* A structure representing an allocno (allocation entity). Allocno
represents a pseudo-register in an allocation region. If
@@ -305,7 +305,7 @@ struct ira_allocno
allocno lives. We always maintain the list in such way that *the
ranges in the list are not intersected and ordered by decreasing
their program points*. */
- allocno_live_range_t live_ranges;
+ live_range_t live_ranges;
/* Before building conflicts the two member values are
correspondingly minimal and maximal points of the accumulated
allocno live ranges. After building conflicts the values are
@@ -845,16 +845,13 @@ extern void ira_allocate_allocno_conflic
extern void ira_allocate_allocno_conflicts (ira_allocno_t, int);
extern void ira_add_allocno_conflict (ira_allocno_t, ira_allocno_t);
extern void ira_print_expanded_allocno (ira_allocno_t);
-extern allocno_live_range_t ira_create_allocno_live_range
- (ira_allocno_t, int, int, allocno_live_range_t);
-extern allocno_live_range_t ira_copy_allocno_live_range_list
- (allocno_live_range_t);
-extern allocno_live_range_t ira_merge_allocno_live_ranges
- (allocno_live_range_t, allocno_live_range_t);
-extern bool ira_allocno_live_ranges_intersect_p (allocno_live_range_t,
- allocno_live_range_t);
-extern void ira_finish_allocno_live_range (allocno_live_range_t);
-extern void ira_finish_allocno_live_range_list (allocno_live_range_t);
+extern live_range_t ira_create_allocno_live_range (ira_allocno_t, int, int,
+ live_range_t);
+extern live_range_t ira_copy_allocno_live_range_list (live_range_t);
+extern live_range_t ira_merge_allocno_live_ranges (live_range_t, live_range_t);
+extern bool ira_allocno_live_ranges_intersect_p (live_range_t, live_range_t);
+extern void ira_finish_allocno_live_range (live_range_t);
+extern void ira_finish_allocno_live_range_list (live_range_t);
extern void ira_free_allocno_updated_costs (ira_allocno_t);
extern ira_copy_t ira_create_copy (ira_allocno_t, ira_allocno_t,
int, bool, rtx, ira_loop_tree_node_t);
@@ -881,8 +878,8 @@ extern void ira_tune_allocno_costs_and_c
/* ira-lives.c */
extern void ira_rebuild_start_finish_chains (void);
-extern void ira_print_live_range_list (FILE *, allocno_live_range_t);
-extern void ira_debug_live_range_list (allocno_live_range_t);
+extern void ira_print_live_range_list (FILE *, live_range_t);
+extern void ira_debug_live_range_list (live_range_t);
extern void ira_debug_allocno_live_ranges (ira_allocno_t);
extern void ira_debug_live_ranges (void);
extern void ira_create_allocno_live_ranges (void);
Index: gcc/ira-lives.c
===================================================================
--- gcc.orig/ira-lives.c
+++ gcc/ira-lives.c
@@ -54,7 +54,7 @@ int ira_max_point;
/* Arrays of size IRA_MAX_POINT mapping a program point to the allocno
live ranges with given start/finish point. */
-allocno_live_range_t *ira_start_point_ranges, *ira_finish_point_ranges;
+live_range_t *ira_start_point_ranges, *ira_finish_point_ranges;
/* Number of the current program point. */
static int curr_point;
@@ -112,7 +112,7 @@ make_hard_regno_dead (int regno)
static void
make_allocno_born (ira_allocno_t a)
{
- allocno_live_range_t p = ALLOCNO_LIVE_RANGES (a);
+ live_range_t p = ALLOCNO_LIVE_RANGES (a);
sparseset_set_bit (allocnos_live, ALLOCNO_NUM (a));
IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a), hard_regs_live);
@@ -131,7 +131,7 @@ update_allocno_pressure_excess_length (i
{
int start, i;
enum reg_class cover_class, cl;
- allocno_live_range_t p;
+ live_range_t p;
cover_class = ALLOCNO_COVER_CLASS (a);
for (i = 0;
@@ -153,7 +153,7 @@ update_allocno_pressure_excess_length (i
static void
make_allocno_dead (ira_allocno_t a)
{
- allocno_live_range_t p;
+ live_range_t p;
p = ALLOCNO_LIVE_RANGES (a);
ira_assert (p != NULL);
@@ -1140,18 +1140,18 @@ create_start_finish_chains (void)
{
ira_allocno_t a;
ira_allocno_iterator ai;
- allocno_live_range_t r;
+ live_range_t r;
ira_start_point_ranges
- = (allocno_live_range_t *) ira_allocate (ira_max_point
- * sizeof (allocno_live_range_t));
+ = (live_range_t *) ira_allocate (ira_max_point
+ * sizeof (live_range_t));
memset (ira_start_point_ranges, 0,
- ira_max_point * sizeof (allocno_live_range_t));
+ ira_max_point * sizeof (live_range_t));
ira_finish_point_ranges
- = (allocno_live_range_t *) ira_allocate (ira_max_point
- * sizeof (allocno_live_range_t));
+ = (live_range_t *) ira_allocate (ira_max_point
+ * sizeof (live_range_t));
memset (ira_finish_point_ranges, 0,
- ira_max_point * sizeof (allocno_live_range_t));
+ ira_max_point * sizeof (live_range_t));
FOR_EACH_ALLOCNO (a, ai)
{
for (r = ALLOCNO_LIVE_RANGES (a); r != NULL; r = r->next)
@@ -1185,7 +1185,7 @@ remove_some_program_points_and_update_li
int *map;
ira_allocno_t a;
ira_allocno_iterator ai;
- allocno_live_range_t r;
+ live_range_t r;
bitmap born_or_died;
bitmap_iterator bi;
@@ -1223,7 +1223,7 @@ remove_some_program_points_and_update_li
/* Print live ranges R to file F. */
void
-ira_print_live_range_list (FILE *f, allocno_live_range_t r)
+ira_print_live_range_list (FILE *f, live_range_t r)
{
for (; r != NULL; r = r->next)
fprintf (f, " [%d..%d]", r->start, r->finish);
@@ -1232,7 +1232,7 @@ ira_print_live_range_list (FILE *f, allo
/* Print live ranges R to stderr. */
void
-ira_debug_live_range_list (allocno_live_range_t r)
+ira_debug_live_range_list (live_range_t r)
{
ira_print_live_range_list (stderr, r);
}
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 6/9: remove "allocno" from live_range_t
2010-06-18 14:25 ` Patch 6/9: remove "allocno" from live_range_t Bernd Schmidt
@ 2010-06-18 18:16 ` Jeff Law
2010-06-25 2:22 ` Bernd Schmidt
0 siblings, 1 reply; 42+ messages in thread
From: Jeff Law @ 2010-06-18 18:16 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: GCC Patches
On 06/18/10 08:09, Bernd Schmidt wrote:
> This is another patch to remove "allocno" from names, this time involving
> live ranges. Subsequent patches will change IRA to associate live ranges
> with other objects than allocnos; even without that motivation I believe
> that the code becomes more readable due to the shorter identifiers.
> There's
> really no need to call a live range an allocno_live_range.
>
> * ira-int.h (struct live_range, live_range_t): Renamed from struct
> ira_allocno_live_range and allocno_live_range_t; all uses changed.
> * ira-build.c (live_range_pool): Renamed from allocno_live_range_pool.
> All uses changed.
OK
Jeff
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 6/9: remove "allocno" from live_range_t
2010-06-18 18:16 ` Jeff Law
@ 2010-06-25 2:22 ` Bernd Schmidt
2010-06-25 3:16 ` Jeff Law
0 siblings, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-06-25 2:22 UTC (permalink / raw)
To: Jeff Law; +Cc: GCC Patches
On 06/18/2010 06:40 PM, Jeff Law wrote:
> On 06/18/10 08:09, Bernd Schmidt wrote:
>> This is another patch to remove "allocno" from names, this time involving
>> live ranges. Subsequent patches will change IRA to associate live ranges
>> with other objects than allocnos; even without that motivation I believe
>> that the code becomes more readable due to the shorter identifiers.
>> There's
>> really no need to call a live range an allocno_live_range.
>>
>> * ira-int.h (struct live_range, live_range_t): Renamed from struct
>> ira_allocno_live_range and allocno_live_range_t; all uses changed.
>> * ira-build.c (live_range_pool): Renamed from
>> allocno_live_range_pool.
>> All uses changed.
> OK
Thanks for the OKs. I've committed patches 1-6; I think I'll wait for a
conclusion about whether the final piece should go in before I commit
the conversion to objects (unless there's consensus that it's a good
idea on its own).
Bernd
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 6/9: remove "allocno" from live_range_t
2010-06-25 2:22 ` Bernd Schmidt
@ 2010-06-25 3:16 ` Jeff Law
0 siblings, 0 replies; 42+ messages in thread
From: Jeff Law @ 2010-06-25 3:16 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: GCC Patches
On 06/24/10 17:37, Bernd Schmidt wrote:
> On 06/18/2010 06:40 PM, Jeff Law wrote:
>
>> On 06/18/10 08:09, Bernd Schmidt wrote:
>>
>>> This is another patch to remove "allocno" from names, this time involving
>>> live ranges. Subsequent patches will change IRA to associate live ranges
>>> with other objects than allocnos; even without that motivation I believe
>>> that the code becomes more readable due to the shorter identifiers.
>>> There's
>>> really no need to call a live range an allocno_live_range.
>>>
>>> * ira-int.h (struct live_range, live_range_t): Renamed from struct
>>> ira_allocno_live_range and allocno_live_range_t; all uses changed.
>>> * ira-build.c (live_range_pool): Renamed from
>>> allocno_live_range_pool.
>>> All uses changed.
>>>
>> OK
>>
> Thanks for the OKs. I've committed patches 1-6; I think I'll wait for a
> conclusion about whether the final piece should go in before I commit
> the conversion to objects (unless there's consensus that it's a good
> idea on its own).
>
NP.
I haven't looked at the meat of the final piece (and can't until next
week), but I know that handling double-word allocations better is a
serious issue in IRA, perhaps the most serious issue from a code
generation standpoint. The direction taken so far seems quite
reasonable to me.
jeff
^ permalink raw reply [flat|nested] 42+ messages in thread
* Patch 7/9: Introduce ira_object_t
2010-06-18 14:08 Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Bernd Schmidt
` (5 preceding siblings ...)
2010-06-18 14:25 ` Patch 6/9: remove "allocno" from live_range_t Bernd Schmidt
@ 2010-06-18 14:37 ` Bernd Schmidt
2010-06-18 22:07 ` Jeff Law
2010-06-18 14:48 ` Patch 8/9: track live ranges for objects Bernd Schmidt
` (3 subsequent siblings)
10 siblings, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-06-18 14:37 UTC (permalink / raw)
To: GCC Patches
[-- Attachment #1: Type: text/plain, Size: 1 bytes --]
[-- Attachment #2: objects.diff --]
[-- Type: text/plain, Size: 78411 bytes --]
This introduces a new structure, ira_object_t, which is split off from
ira_allocno_t. Objects are used to track information related to
conflicts. There is at the moment a 1:1 correspondence between objects
and allocnos, but the plan is to introduce 2 objects for DImode allocnos
with a suitable cover class.
No code generation differences were observed.
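As a rough picture of the new layout (a toy model, not the patch itself; the toy_ names are invented for this sketch), conflict-related data moves out of the allocno into a separate record, and the two are tied together by a pair of pointers in the spirit of the ALLOCNO_OBJECT and OBJECT_ALLOCNO accessors added below:

/* Toy model of the split: the allocno keeps allocation data plus a
   pointer to its conflict record; the conflict record keeps what is
   needed to build and test conflicts.  Today the relation is 1:1;
   the plan stated above is to give a DImode allocno two such
   records, one per word.  */
struct toy_object;

struct toy_allocno
{
  int regno;                    /* pseudo-register represented */
  struct toy_object *object;    /* ALLOCNO_OBJECT in the patch */
};

struct toy_object
{
  struct toy_allocno *allocno;  /* OBJECT_ALLOCNO in the patch */
  int id;                       /* position in conflict bit vectors */
  int min, max;                 /* window of ids this record can
                                   conflict with */
};

/* The cheap pre-check OBJECTS_CONFLICT_P performs before consulting
   the conflict bit vector: O2 can only conflict with O1 if O2's id
   falls inside O1's [min, max] window.  */
static int
id_in_window_p (const struct toy_object *o1, const struct toy_object *o2)
{
  return o1->min <= o2->id && o2->id <= o1->max;
}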
* ira-int.h (struct ira_object): New.
(ira_object_t): New typedef. Add DEF_VEC_P and DEF_VEC_ALLOC_P
for it.
(struct ira_allocno): Remove members min, max,
conflict_allocno_array, conflict_id, conflict_allocno_array_size,
conflict_allocnos_num and conflict_vec_p. Add new member object.
(OBJECT_CONFLICT_ARRAY, OBJECT_CONFLICT_VEC_P,
OBJECT_NUM_CONFLICTS, OBJECT_CONFLICT_ARRAY_SIZE,
OBJECT_CONFLICT_HARD_REGS, OBJECT_TOTAL_CONFLICT_HARD_REGS,
OBJECT_MIN, OBJECT_MAX, OBJECT_CONFLICT_ID): Renamed from
ALLOCNO_CONFLICT_ALLOCNO_ARRAY, ALLOCNO_CONFLICT_VEC_P,
ALLOCNO_CONFLICT_ALLOCNOS_NUM, ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE,
ALLOCNO_CONFLICT_HARD_REGS, ALLOCNO_TOTAL_CONFLICT_HARD_REGS,
ALLOCNO_MIN, ALLOCNO_MAX, and ALLOCNO_CONFLICT_ID; now operate on
an ira_object_t rather than ira_allocno_t. All uses changed.
(ira_object_id_map): Renamed from ira_conflict_id_allocno_map; now
contains a vector of ira_object_t; all uses changed.
(ira_objects_num): Declare variable.
(ira_create_allocno_object): Declare function.
(ira_conflict_vector_profitable_p): Adjust prototype.
(ira_allocate_conflict_vec): Renamed from
ira_allocate_allocno_conflict_vec; first arg now ira_object_t.
(ira_allocate_object_conflicts): Renamed from
ira_allocate_allocno_conflicts; first arg now ira_object_t.
(struct ira_object_iterator): New.
(ira_object_iter_init, ira_object_iter_cond, FOR_EACH_OBJECT): New.
(ira_allocno_conflict_iterator): Renamed member allocno_conflict_vec_p
to conflict_vec_p. All uses changed.
(ira_allocno_conflict_iter_init, ira_allocno_conflict_iter_cond):
Changed to take into account that conflicts are now tracked for
objects.
* ira-conflicts.c (OBJECTS_CONFLICT_P): Renamed from
CONFLICT_ALLOCNO_P. Args changed to accept ira_object_t. All
uses changed.
(allocnos_conflict_p): New static function.
(collected_conflict_objects): Renamed from collected_allocno_objects;
now a vector of ira_object_t. All uses changed.
(build_conflict_bit_table): Changed to take into account that
conflicts are now tracked for objects.
(process_regs_for_copy, propagate_copies, build_allocno_conflicts)
(print_allocno_conflicts, ira_build_conflicts): Likewise.
* ira-color.c (assign_hard_reg, setup_allocno_available_regs_num,
setup_allocno_left_conflicts_size, allocno_reload_assign,
fast_allocation): Likewise.
* ira-lives.c (make_hard_regno_born, make_allocno_born,
process_single_reg_class_operands, process_bb_node_lives): Likewise.
* ira-emit.c (modify_move_list, add_range_and_copies_from_move_list):
Likewise.
* ira-build.c (ira_objects_num): New variable.
(ira_object_id_map): Renamed from ira_conflict_id_allocno_map; now
contains a vector of ira_object_t; all uses changed.
(ira_object_id_map_vec): Corresponding change.
(object_pool): New static variable.
(initiate_allocnos): Initialize it.
(finish_allocnos): Free it.
(ira_create_object, ira_create_allocno_object, create_allocno_objects):
New functions.
(ira_create_allocno): Don't set members that were removed.
(ira_set_allocno_cover_class): Don't change conflict hard regs.
(merge_hard_reg_conflicts): Changed to take into account that
conflicts are now tracked for objects.
(ira_conflict_vector_profitable_p, ira_allocate_conflict_vec,
allocate_conflict_bit_vec, ira_allocate_object_conflicts,
compress_conflict_vecs, remove_low_level_allocnos, ira_flattening,
setup_min_max_allocno_live_range_point, allocno_range_compare_func,
setup_min_max_conflict_allocno_ids): Likewise.
(add_to_conflicts): Renamed from add_to_allocno_conflicts, args changed
to ira_object_t; all callers changed.
(ira_add_conflict): Renamed from ira_add_allocno_conflict, args changed
to ira_object_t, all callers changed.
(clear_conflicts): Renamed from clear_allocno_conflicts, arg changed
to ira_object_t, all callers changed.
(conflict_check, curr_conflict_check_tick): Renamed from
allocno_conflict_check and curr_allocno_conflict_check_tick; all uses
changed.
(compress_conflict_vec): Renamed from compress_allocno_conflict_vec,
arg changed to ira_object_t, all callers changed.
(create_cap_allocno): Call ira_create_allocno_object.
(finish_allocno): Free the corresponding object.
(sort_conflict_id_map): Renamed from sort_conflict_id_allocno_map; all
callers changed. Adjusted for dealing with objects.
(ira_build): Call create_allocno_objects after ira_costs. Adjusted for
dealing with objects.
* ira.c (ira_bad_reload_regno_1): Adjusted for dealing with objects.
Index: ira-conflicts.c
===================================================================
--- ira-conflicts.c.orig
+++ ira-conflicts.c
@@ -50,40 +50,38 @@ along with GCC; see the file COPYING3.
corresponding allocnos see function build_allocno_conflicts. */
static IRA_INT_TYPE **conflicts;
-/* Macro to test a conflict of A1 and A2 in `conflicts'. */
-#define CONFLICT_ALLOCNO_P(A1, A2) \
- (ALLOCNO_MIN (A1) <= ALLOCNO_CONFLICT_ID (A2) \
- && ALLOCNO_CONFLICT_ID (A2) <= ALLOCNO_MAX (A1) \
- && TEST_MINMAX_SET_BIT (conflicts[ALLOCNO_NUM (A1)], \
- ALLOCNO_CONFLICT_ID (A2), \
- ALLOCNO_MIN (A1), \
- ALLOCNO_MAX (A1)))
+/* Macro to test a conflict of C1 and C2 in `conflicts'. */
+#define OBJECTS_CONFLICT_P(C1, C2) \
+ (OBJECT_MIN (C1) <= OBJECT_CONFLICT_ID (C2) \
+ && OBJECT_CONFLICT_ID (C2) <= OBJECT_MAX (C1) \
+ && TEST_MINMAX_SET_BIT (conflicts[OBJECT_CONFLICT_ID (C1)], \
+ OBJECT_CONFLICT_ID (C2), \
+ OBJECT_MIN (C1), OBJECT_MAX (C1)))
\f
-
/* Build allocno conflict table by processing allocno live ranges.
Return true if the table was built. The table is not built if it
is too big. */
static bool
build_conflict_bit_table (void)
{
- int i, num, id, allocated_words_num, conflict_bit_vec_words_num;
+ int i;
unsigned int j;
enum reg_class cover_class;
- ira_allocno_t allocno, live_a;
+ int object_set_words, allocated_words_num, conflict_bit_vec_words_num;
live_range_t r;
+ ira_allocno_t allocno;
ira_allocno_iterator ai;
- sparseset allocnos_live;
- int allocno_set_words;
+ sparseset objects_live;
- allocno_set_words = (ira_allocnos_num + IRA_INT_BITS - 1) / IRA_INT_BITS;
allocated_words_num = 0;
FOR_EACH_ALLOCNO (allocno, ai)
{
- if (ALLOCNO_MAX (allocno) < ALLOCNO_MIN (allocno))
+ ira_object_t obj = ALLOCNO_OBJECT (allocno);
+ if (OBJECT_MAX (obj) < OBJECT_MIN (obj))
continue;
conflict_bit_vec_words_num
- = ((ALLOCNO_MAX (allocno) - ALLOCNO_MIN (allocno) + IRA_INT_BITS)
+ = ((OBJECT_MAX (obj) - OBJECT_MIN (obj) + IRA_INT_BITS)
/ IRA_INT_BITS);
allocated_words_num += conflict_bit_vec_words_num;
if ((unsigned long long) allocated_words_num * sizeof (IRA_INT_TYPE)
@@ -97,70 +95,90 @@ build_conflict_bit_table (void)
return false;
}
}
- allocnos_live = sparseset_alloc (ira_allocnos_num);
+
conflicts = (IRA_INT_TYPE **) ira_allocate (sizeof (IRA_INT_TYPE *)
- * ira_allocnos_num);
+ * ira_objects_num);
allocated_words_num = 0;
FOR_EACH_ALLOCNO (allocno, ai)
{
- num = ALLOCNO_NUM (allocno);
- if (ALLOCNO_MAX (allocno) < ALLOCNO_MIN (allocno))
+ ira_object_t obj = ALLOCNO_OBJECT (allocno);
+ int id = OBJECT_CONFLICT_ID (obj);
+ if (OBJECT_MAX (obj) < OBJECT_MIN (obj))
{
- conflicts[num] = NULL;
+ conflicts[id] = NULL;
continue;
}
conflict_bit_vec_words_num
- = ((ALLOCNO_MAX (allocno) - ALLOCNO_MIN (allocno) + IRA_INT_BITS)
+ = ((OBJECT_MAX (obj) - OBJECT_MIN (obj) + IRA_INT_BITS)
/ IRA_INT_BITS);
allocated_words_num += conflict_bit_vec_words_num;
- conflicts[num]
+ conflicts[id]
= (IRA_INT_TYPE *) ira_allocate (sizeof (IRA_INT_TYPE)
* conflict_bit_vec_words_num);
- memset (conflicts[num], 0,
+ memset (conflicts[id], 0,
sizeof (IRA_INT_TYPE) * conflict_bit_vec_words_num);
}
+
+ object_set_words = (ira_objects_num + IRA_INT_BITS - 1) / IRA_INT_BITS;
if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
fprintf
(ira_dump_file,
"+++Allocating %ld bytes for conflict table (uncompressed size %ld)\n",
(long) allocated_words_num * sizeof (IRA_INT_TYPE),
- (long) allocno_set_words * ira_allocnos_num * sizeof (IRA_INT_TYPE));
+ (long) object_set_words * ira_objects_num * sizeof (IRA_INT_TYPE));
+
+ objects_live = sparseset_alloc (ira_objects_num);
for (i = 0; i < ira_max_point; i++)
{
for (r = ira_start_point_ranges[i]; r != NULL; r = r->start_next)
{
- allocno = r->allocno;
- num = ALLOCNO_NUM (allocno);
- id = ALLOCNO_CONFLICT_ID (allocno);
+ ira_allocno_t allocno = r->allocno;
+ ira_object_t obj = ALLOCNO_OBJECT (allocno);
+ int id = OBJECT_CONFLICT_ID (obj);
+
cover_class = ALLOCNO_COVER_CLASS (allocno);
- sparseset_set_bit (allocnos_live, num);
- EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, j)
+ sparseset_set_bit (objects_live, id);
+ EXECUTE_IF_SET_IN_SPARSESET (objects_live, j)
{
- live_a = ira_allocnos[j];
- if (ira_reg_classes_intersect_p
- [cover_class][ALLOCNO_COVER_CLASS (live_a)]
+ ira_object_t live_cr = ira_object_id_map[j];
+ ira_allocno_t live_a = OBJECT_ALLOCNO (live_cr);
+ enum reg_class live_cover_class = ALLOCNO_COVER_CLASS (live_a);
+
+ if (ira_reg_classes_intersect_p[cover_class][live_cover_class]
/* Don't set up conflict for the allocno with itself. */
- && num != (int) j)
+ && id != (int) j)
{
- SET_MINMAX_SET_BIT (conflicts[num],
- ALLOCNO_CONFLICT_ID (live_a),
- ALLOCNO_MIN (allocno),
- ALLOCNO_MAX (allocno));
+ SET_MINMAX_SET_BIT (conflicts[id], j,
+ OBJECT_MIN (obj),
+ OBJECT_MAX (obj));
SET_MINMAX_SET_BIT (conflicts[j], id,
- ALLOCNO_MIN (live_a),
- ALLOCNO_MAX (live_a));
+ OBJECT_MIN (live_cr),
+ OBJECT_MAX (live_cr));
}
}
}
for (r = ira_finish_point_ranges[i]; r != NULL; r = r->finish_next)
- sparseset_clear_bit (allocnos_live, ALLOCNO_NUM (r->allocno));
+ {
+ ira_allocno_t allocno = r->allocno;
+ ira_object_t obj = ALLOCNO_OBJECT (allocno);
+ sparseset_clear_bit (objects_live, OBJECT_CONFLICT_ID (obj));
+ }
}
- sparseset_free (allocnos_live);
+ sparseset_free (objects_live);
return true;
}
-
\f
+/* Return true iff allocnos A1 and A2 cannot be allocated to the same
+ register due to conflicts. */
+
+static bool
+allocnos_conflict_p (ira_allocno_t a1, ira_allocno_t a2)
+{
+ ira_object_t obj1 = ALLOCNO_OBJECT (a1);
+ ira_object_t obj2 = ALLOCNO_OBJECT (a2);
+ return OBJECTS_CONFLICT_P (obj1, obj2);
+}
/* Return TRUE if the operand constraint STR is commutative. */
static bool
@@ -366,19 +384,21 @@ process_regs_for_copy (rtx reg1, rtx reg
allocno_preferenced_hard_regno = REGNO (reg2) + offset2 - offset1;
a = ira_curr_regno_allocno_map[REGNO (reg1)];
}
- else if (!CONFLICT_ALLOCNO_P (ira_curr_regno_allocno_map[REGNO (reg1)],
- ira_curr_regno_allocno_map[REGNO (reg2)])
- && offset1 == offset2)
- {
- cp = ira_add_allocno_copy (ira_curr_regno_allocno_map[REGNO (reg1)],
- ira_curr_regno_allocno_map[REGNO (reg2)],
- freq, constraint_p, insn,
- ira_curr_loop_tree_node);
- bitmap_set_bit (ira_curr_loop_tree_node->local_copies, cp->num);
- return true;
- }
else
- return false;
+ {
+ ira_allocno_t a1 = ira_curr_regno_allocno_map[REGNO (reg1)];
+ ira_allocno_t a2 = ira_curr_regno_allocno_map[REGNO (reg2)];
+ if (!allocnos_conflict_p (a1, a2) && offset1 == offset2)
+ {
+ cp = ira_add_allocno_copy (a1, a2, freq, constraint_p, insn,
+ ira_curr_loop_tree_node);
+ bitmap_set_bit (ira_curr_loop_tree_node->local_copies, cp->num);
+ return true;
+ }
+ else
+ return false;
+ }
+
if (! IN_RANGE (allocno_preferenced_hard_regno, 0, FIRST_PSEUDO_REGISTER - 1))
/* Can not be tied. */
return false;
@@ -451,7 +471,7 @@ add_insn_allocno_copies (rtx insn)
const char *str;
bool commut_p, bound_p[MAX_RECOG_OPERANDS];
int i, j, n, freq;
-
+
freq = REG_FREQ_FROM_BB (BLOCK_FOR_INSN (insn));
if (freq == 0)
freq = 1;
@@ -548,55 +568,58 @@ propagate_copies (void)
parent_a2 = ira_parent_or_cap_allocno (a2);
ira_assert (parent_a1 != NULL && parent_a2 != NULL);
- if (! CONFLICT_ALLOCNO_P (parent_a1, parent_a2))
+ if (! allocnos_conflict_p (parent_a1, parent_a2))
ira_add_allocno_copy (parent_a1, parent_a2, cp->freq,
cp->constraint_p, cp->insn, cp->loop_tree_node);
}
}
/* Array used to collect all conflict allocnos for given allocno. */
-static ira_allocno_t *collected_conflict_allocnos;
+static ira_object_t *collected_conflict_objects;
/* Build conflict vectors or bit conflict vectors (whatever is more
profitable) for allocno A from the conflict table and propagate the
conflicts to upper level allocno. */
static void
build_allocno_conflicts (ira_allocno_t a)
{
int i, px, parent_num;
int conflict_bit_vec_words_num;
- ira_allocno_t parent_a, another_a, another_parent_a;
- ira_allocno_t *vec;
+ ira_allocno_t parent_a, another_parent_a;
+ ira_object_t *vec;
IRA_INT_TYPE *allocno_conflicts;
+ ira_object_t obj, parent_obj;
minmax_set_iterator asi;
- allocno_conflicts = conflicts[ALLOCNO_NUM (a)];
+ obj = ALLOCNO_OBJECT (a);
+ allocno_conflicts = conflicts[OBJECT_CONFLICT_ID (obj)];
px = 0;
FOR_EACH_BIT_IN_MINMAX_SET (allocno_conflicts,
- ALLOCNO_MIN (a), ALLOCNO_MAX (a), i, asi)
+ OBJECT_MIN (obj), OBJECT_MAX (obj), i, asi)
{
- another_a = ira_conflict_id_allocno_map[i];
+ ira_object_t another_obj = ira_object_id_map[i];
+ ira_allocno_t another_a = OBJECT_ALLOCNO (another_obj);
ira_assert (ira_reg_classes_intersect_p
[ALLOCNO_COVER_CLASS (a)][ALLOCNO_COVER_CLASS (another_a)]);
- collected_conflict_allocnos[px++] = another_a;
+ collected_conflict_objects[px++] = another_obj;
}
- if (ira_conflict_vector_profitable_p (a, px))
+ if (ira_conflict_vector_profitable_p (obj, px))
{
- ira_allocate_allocno_conflict_vec (a, px);
- vec = (ira_allocno_t*) ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a);
- memcpy (vec, collected_conflict_allocnos, sizeof (ira_allocno_t) * px);
+ ira_allocate_conflict_vec (obj, px);
+ vec = OBJECT_CONFLICT_VEC (obj);
+ memcpy (vec, collected_conflict_objects, sizeof (ira_object_t) * px);
vec[px] = NULL;
- ALLOCNO_CONFLICT_ALLOCNOS_NUM (a) = px;
+ OBJECT_NUM_CONFLICTS (obj) = px;
}
else
{
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a) = conflicts[ALLOCNO_NUM (a)];
- if (ALLOCNO_MAX (a) < ALLOCNO_MIN (a))
+ OBJECT_CONFLICT_ARRAY (obj) = allocno_conflicts;
+ if (OBJECT_MAX (obj) < OBJECT_MIN (obj))
conflict_bit_vec_words_num = 0;
else
conflict_bit_vec_words_num
- = ((ALLOCNO_MAX (a) - ALLOCNO_MIN (a) + IRA_INT_BITS)
+ = ((OBJECT_MAX (obj) - OBJECT_MIN (obj) + IRA_INT_BITS)
/ IRA_INT_BITS);
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a)
+ OBJECT_CONFLICT_ARRAY_SIZE (obj)
= conflict_bit_vec_words_num * sizeof (IRA_INT_TYPE);
}
parent_a = ira_parent_or_cap_allocno (a);
@@ -604,23 +626,26 @@
if (parent_a == NULL)
return;
ira_assert (ALLOCNO_COVER_CLASS (a) == ALLOCNO_COVER_CLASS (parent_a));
- parent_num = ALLOCNO_NUM (parent_a);
+ parent_obj = ALLOCNO_OBJECT (parent_a);
+ parent_num = OBJECT_CONFLICT_ID (parent_obj);
FOR_EACH_BIT_IN_MINMAX_SET (allocno_conflicts,
- ALLOCNO_MIN (a), ALLOCNO_MAX (a), i, asi)
+ OBJECT_MIN (obj), OBJECT_MAX (obj), i, asi)
{
- another_a = ira_conflict_id_allocno_map[i];
+ ira_object_t another_obj = ira_object_id_map[i];
+ ira_allocno_t another_a = OBJECT_ALLOCNO (another_obj);
+
ira_assert (ira_reg_classes_intersect_p
[ALLOCNO_COVER_CLASS (a)][ALLOCNO_COVER_CLASS (another_a)]);
another_parent_a = ira_parent_or_cap_allocno (another_a);
if (another_parent_a == NULL)
continue;
ira_assert (ALLOCNO_NUM (another_parent_a) >= 0);
ira_assert (ALLOCNO_COVER_CLASS (another_a)
== ALLOCNO_COVER_CLASS (another_parent_a));
SET_MINMAX_SET_BIT (conflicts[parent_num],
- ALLOCNO_CONFLICT_ID (another_parent_a),
- ALLOCNO_MIN (parent_a),
- ALLOCNO_MAX (parent_a));
+ OBJECT_CONFLICT_ID (ALLOCNO_OBJECT (another_parent_a)),
+ OBJECT_MIN (parent_obj),
+ OBJECT_MAX (parent_obj));
}
}
@@ -638,9 +667,9 @@ build_conflicts (void)
int i;
ira_allocno_t a, cap;
- collected_conflict_allocnos
- = (ira_allocno_t *) ira_allocate (sizeof (ira_allocno_t)
- * ira_allocnos_num);
+ collected_conflict_objects
+ = (ira_object_t *) ira_allocate (sizeof (ira_object_t)
+ * ira_objects_num);
for (i = max_reg_num () - 1; i >= FIRST_PSEUDO_REGISTER; i--)
for (a = ira_regno_allocno_map[i];
a != NULL;
@@ -650,7 +679,7 @@ build_conflicts (void)
for (cap = ALLOCNO_CAP (a); cap != NULL; cap = ALLOCNO_CAP (cap))
build_allocno_conflicts (cap);
}
- ira_free (collected_conflict_allocnos);
+ ira_free (collected_conflict_objects);
}
\f
@@ -688,6 +717,7 @@ static void
print_allocno_conflicts (FILE * file, bool reg_p, ira_allocno_t a)
{
HARD_REG_SET conflicting_hard_regs;
+ ira_object_t obj;
ira_allocno_t conflict_a;
ira_allocno_conflict_iterator aci;
basic_block bb;
@@ -703,8 +733,10 @@ print_allocno_conflicts (FILE * file, bo
fprintf (file, "l%d", ALLOCNO_LOOP_TREE_NODE (a)->loop->num);
putc (')', file);
}
+
fputs (" conflicts:", file);
- if (ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a) != NULL)
+ obj = ALLOCNO_OBJECT (a);
+ if (OBJECT_CONFLICT_ARRAY (obj) != NULL)
FOR_EACH_ALLOCNO_CONFLICT (a, conflict_a, aci)
{
if (reg_p)
@@ -720,15 +752,15 @@ print_allocno_conflicts (FILE * file, bo
ALLOCNO_LOOP_TREE_NODE (conflict_a)->loop->num);
}
}
- COPY_HARD_REG_SET (conflicting_hard_regs,
- ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a));
+
+ COPY_HARD_REG_SET (conflicting_hard_regs, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
AND_COMPL_HARD_REG_SET (conflicting_hard_regs, ira_no_alloc_regs);
AND_HARD_REG_SET (conflicting_hard_regs,
reg_class_contents[ALLOCNO_COVER_CLASS (a)]);
print_hard_reg_set (file, "\n;; total conflict hard regs:",
conflicting_hard_regs);
- COPY_HARD_REG_SET (conflicting_hard_regs,
- ALLOCNO_CONFLICT_HARD_REGS (a));
+
+ COPY_HARD_REG_SET (conflicting_hard_regs, OBJECT_CONFLICT_HARD_REGS (obj));
AND_COMPL_HARD_REG_SET (conflicting_hard_regs, ira_no_alloc_regs);
AND_HARD_REG_SET (conflicting_hard_regs,
reg_class_contents[ALLOCNO_COVER_CLASS (a)]);
@@ -773,19 +805,22 @@ ira_build_conflicts (void)
ira_conflicts_p = build_conflict_bit_table ();
if (ira_conflicts_p)
{
+ ira_object_t obj;
+ ira_object_iterator oi;
+
build_conflicts ();
ira_traverse_loop_tree (true, ira_loop_tree_root, NULL, add_copies);
/* We need finished conflict table for the subsequent call. */
if (flag_ira_region == IRA_REGION_ALL
|| flag_ira_region == IRA_REGION_MIXED)
propagate_copies ();
+
/* Now we can free memory for the conflict table (see function
build_allocno_conflicts for details). */
- FOR_EACH_ALLOCNO (a, ai)
+ FOR_EACH_OBJECT (obj, oi)
{
- if (ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a)
- != conflicts[ALLOCNO_NUM (a)])
- ira_free (conflicts[ALLOCNO_NUM (a)]);
+ if (OBJECT_CONFLICT_ARRAY (obj) != conflicts[OBJECT_CONFLICT_ID (obj)])
+ ira_free (conflicts[OBJECT_CONFLICT_ID (obj)]);
}
ira_free (conflicts);
}
@@ -801,6 +836,7 @@ ira_build_conflicts (void)
}
FOR_EACH_ALLOCNO (a, ai)
{
+ ira_object_t obj = ALLOCNO_OBJECT (a);
reg_attrs *attrs;
tree decl;
@@ -813,21 +849,16 @@ ira_build_conflicts (void)
&& VAR_OR_FUNCTION_DECL_P (decl)
&& ! DECL_ARTIFICIAL (decl)))
{
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a),
- call_used_reg_set);
- IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a),
- call_used_reg_set);
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), call_used_reg_set);
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), call_used_reg_set);
}
else if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
{
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a),
- no_caller_save_reg_set);
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a),
- temp_hard_reg_set);
- IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a),
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
no_caller_save_reg_set);
- IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a),
- temp_hard_reg_set);
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), temp_hard_reg_set);
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), no_caller_save_reg_set);
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), temp_hard_reg_set);
}
}
if (optimize && ira_conflicts_p
Index: ira-int.h
===================================================================
--- ira-int.h.orig
+++ ira-int.h
@@ -62,10 +62,13 @@ extern FILE *ira_dump_file;
typedef struct live_range *live_range_t;
typedef struct ira_allocno *ira_allocno_t;
typedef struct ira_allocno_copy *ira_copy_t;
+typedef struct ira_object *ira_object_t;
/* Definition of vector of allocnos and copies. */
DEF_VEC_P(ira_allocno_t);
DEF_VEC_ALLOC_P(ira_allocno_t, heap);
+DEF_VEC_P(ira_object_t);
+DEF_VEC_ALLOC_P(ira_object_t, heap);
DEF_VEC_P(ira_copy_t);
DEF_VEC_ALLOC_P(ira_copy_t, heap);
@@ -222,6 +225,43 @@ extern int ira_max_point;
live ranges with given start/finish point. */
extern live_range_t *ira_start_point_ranges, *ira_finish_point_ranges;
+/* A structure representing conflict information for an allocno
+ (or one of its subwords). */
+struct ira_object
+{
+ /* The allocno associated with this record. */
+ ira_allocno_t allocno;
+ /* Vector of accumulated conflicting conflict records with NULL end
+ marker (if OBJECT_CONFLICT_VEC_P is true) or conflict bit vector
+ otherwise. Only objects belonging to allocnos with the
+ same cover class are in the vector or in the bit vector. */
+ void *conflicts_array;
+ /* Allocated size of the previous array. */
+ unsigned int conflicts_array_size;
+ /* A unique number for every instance of this structure which is used
+ to represent it in conflict bit vectors. */
+ int id;
+ /* Before building conflicts, MIN and MAX are initialized to
+ correspondingly minimal and maximal points of the accumulated
+ allocno live ranges. Afterwards, they hold the minimal and
+ maximal ids of other objects that this one can conflict
+ with. */
+ int min, max;
+ /* Initial and accumulated hard registers conflicting with this
+ conflict record and as a consequence can not be assigned to the
+ allocno. All non-allocatable hard regs and hard regs of cover
+ classes different from given allocno one are included in the
+ sets. */
+ HARD_REG_SET conflict_hard_regs, total_conflict_hard_regs;
+ /* Number of accumulated conflicts in the vector of conflicting
+ conflict records. */
+ int num_accumulated_conflicts;
+ /* TRUE if conflicts are represented by a vector of pointers to
+ ira_object structures. Otherwise, we use a bit vector indexed
+ by conflict ID numbers. */
+ unsigned int conflict_vec_p : 1;
+};
+
/* A structure representing an allocno (allocation entity). Allocno
represents a pseudo-register in an allocation region. If
pseudo-register does not live in a region but it lives in the
@@ -306,30 +346,9 @@ struct ira_allocno
ranges in the list are not intersected and ordered by decreasing
their program points*. */
live_range_t live_ranges;
- /* Before building conflicts the two member values are
- correspondingly minimal and maximal points of the accumulated
- allocno live ranges. After building conflicts the values are
- correspondingly minimal and maximal conflict ids of allocnos with
- which given allocno can conflict. */
- int min, max;
- /* Vector of accumulated conflicting allocnos with NULL end marker
- (if CONFLICT_VEC_P is true) or conflict bit vector otherwise.
- Only allocnos with the same cover class are in the vector or in
- the bit vector. */
- void *conflict_allocno_array;
- /* The unique member value represents given allocno in conflict bit
- vectors. */
- int conflict_id;
- /* Allocated size of the previous array. */
- unsigned int conflict_allocno_array_size;
- /* Initial and accumulated hard registers conflicting with this
- allocno and as a consequences can not be assigned to the allocno.
- All non-allocatable hard regs and hard regs of cover classes
- different from given allocno one are included in the sets. */
- HARD_REG_SET conflict_hard_regs, total_conflict_hard_regs;
- /* Number of accumulated conflicts in the vector of conflicting
- allocnos. */
- int conflict_allocnos_num;
+ /* Pointer to a structure describing conflict information about this
+ allocno. */
+ ira_object_t object;
/* Accumulated frequency of calls which given allocno
intersects. */
int call_freq;
@@ -374,11 +393,6 @@ struct ira_allocno
/* TRUE if the allocno was removed from the splay tree used to
choose allocn for spilling (see ira-color.c::. */
unsigned int splay_removed_p : 1;
- /* TRUE if conflicts for given allocno are represented by vector of
- pointers to the conflicting allocnos. Otherwise, we use a bit
- vector where a bit with given index represents allocno with the
- same number. */
- unsigned int conflict_vec_p : 1;
/* Non NULL if we remove restoring value from given allocno to
MEM_OPTIMIZED_DEST at loop exit (see ira-emit.c) because the
allocno value is not changed inside the loop. */
@@ -429,13 +443,6 @@ struct ira_allocno
#define ALLOCNO_LOOP_TREE_NODE(A) ((A)->loop_tree_node)
#define ALLOCNO_CAP(A) ((A)->cap)
#define ALLOCNO_CAP_MEMBER(A) ((A)->cap_member)
-#define ALLOCNO_CONFLICT_ALLOCNO_ARRAY(A) ((A)->conflict_allocno_array)
-#define ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE(A) \
- ((A)->conflict_allocno_array_size)
-#define ALLOCNO_CONFLICT_ALLOCNOS_NUM(A) \
- ((A)->conflict_allocnos_num)
-#define ALLOCNO_CONFLICT_HARD_REGS(A) ((A)->conflict_hard_regs)
-#define ALLOCNO_TOTAL_CONFLICT_HARD_REGS(A) ((A)->total_conflict_hard_regs)
#define ALLOCNO_NREFS(A) ((A)->nrefs)
#define ALLOCNO_FREQ(A) ((A)->freq)
#define ALLOCNO_HARD_REGNO(A) ((A)->hard_regno)
@@ -455,7 +462,6 @@ struct ira_allocno
#define ALLOCNO_ASSIGNED_P(A) ((A)->assigned_p)
#define ALLOCNO_MAY_BE_SPILLED_P(A) ((A)->may_be_spilled_p)
#define ALLOCNO_SPLAY_REMOVED_P(A) ((A)->splay_removed_p)
-#define ALLOCNO_CONFLICT_VEC_P(A) ((A)->conflict_vec_p)
#define ALLOCNO_MODE(A) ((A)->mode)
#define ALLOCNO_COPIES(A) ((A)->allocno_copies)
#define ALLOCNO_HARD_REG_COSTS(A) ((A)->hard_reg_costs)
@@ -478,9 +484,20 @@ struct ira_allocno
#define ALLOCNO_FIRST_COALESCED_ALLOCNO(A) ((A)->first_coalesced_allocno)
#define ALLOCNO_NEXT_COALESCED_ALLOCNO(A) ((A)->next_coalesced_allocno)
#define ALLOCNO_LIVE_RANGES(A) ((A)->live_ranges)
-#define ALLOCNO_MIN(A) ((A)->min)
-#define ALLOCNO_MAX(A) ((A)->max)
-#define ALLOCNO_CONFLICT_ID(A) ((A)->conflict_id)
+#define ALLOCNO_OBJECT(A) ((A)->object)
+
+#define OBJECT_ALLOCNO(C) ((C)->allocno)
+#define OBJECT_CONFLICT_ARRAY(C) ((C)->conflicts_array)
+#define OBJECT_CONFLICT_VEC(C) ((ira_object_t *)(C)->conflicts_array)
+#define OBJECT_CONFLICT_BITVEC(C) ((IRA_INT_TYPE *)(C)->conflicts_array)
+#define OBJECT_CONFLICT_ARRAY_SIZE(C) ((C)->conflicts_array_size)
+#define OBJECT_CONFLICT_VEC_P(C) ((C)->conflict_vec_p)
+#define OBJECT_NUM_CONFLICTS(C) ((C)->num_accumulated_conflicts)
+#define OBJECT_CONFLICT_HARD_REGS(C) ((C)->conflict_hard_regs)
+#define OBJECT_TOTAL_CONFLICT_HARD_REGS(C) ((C)->total_conflict_hard_regs)
+#define OBJECT_MIN(C) ((C)->min)
+#define OBJECT_MAX(C) ((C)->max)
+#define OBJECT_CONFLICT_ID(C) ((C)->id)
/* Map regno -> allocnos with given regno (see comments for
allocno member `next_regno_allocno'). */
@@ -491,12 +508,14 @@ extern ira_allocno_t *ira_regno_allocno_
have NULL element value. */
extern ira_allocno_t *ira_allocnos;
-/* Sizes of the previous array. */
+/* The size of the previous array. */
extern int ira_allocnos_num;
-/* Map conflict id -> allocno with given conflict id (see comments for
- allocno member `conflict_id'). */
-extern ira_allocno_t *ira_conflict_id_allocno_map;
+/* Map a conflict id to its corresponding ira_object structure. */
+extern ira_object_t *ira_object_id_map;
+
+/* The size of the previous array. */
+extern int ira_objects_num;
/* The following structure represents a copy of two allocnos. The
copies represent move insns or potential move insns usually because
@@ -839,11 +858,11 @@ extern void ira_traverse_loop_tree (bool
void (*) (ira_loop_tree_node_t),
void (*) (ira_loop_tree_node_t));
extern ira_allocno_t ira_create_allocno (int, bool, ira_loop_tree_node_t);
+extern void ira_create_allocno_object (ira_allocno_t);
extern void ira_set_allocno_cover_class (ira_allocno_t, enum reg_class);
-extern bool ira_conflict_vector_profitable_p (ira_allocno_t, int);
-extern void ira_allocate_allocno_conflict_vec (ira_allocno_t, int);
-extern void ira_allocate_allocno_conflicts (ira_allocno_t, int);
-extern void ira_add_allocno_conflict (ira_allocno_t, ira_allocno_t);
+extern bool ira_conflict_vector_profitable_p (ira_object_t, int);
+extern void ira_allocate_conflict_vec (ira_object_t, int);
+extern void ira_allocate_object_conflicts (ira_object_t, int);
extern void ira_print_expanded_allocno (ira_allocno_t);
extern live_range_t ira_create_allocno_live_range (ira_allocno_t, int, int,
live_range_t);
@@ -966,8 +985,43 @@ ira_allocno_iter_cond (ira_allocno_itera
#define FOR_EACH_ALLOCNO(A, ITER) \
for (ira_allocno_iter_init (&(ITER)); \
ira_allocno_iter_cond (&(ITER), &(A));)
+\f
+/* The iterator for all objects. */
+typedef struct {
+ /* The number of the current element in IRA_OBJECT_ID_MAP. */
+ int n;
+} ira_object_iterator;
+/* Initialize the iterator I. */
+static inline void
+ira_object_iter_init (ira_object_iterator *i)
+{
+ i->n = 0;
+}
+
+/* Return TRUE if we have more objects to visit, in which case *OBJ is
+ set to the object to be visited. Otherwise, return FALSE. */
+static inline bool
+ira_object_iter_cond (ira_object_iterator *i, ira_object_t *obj)
+{
+ int n;
+ for (n = i->n; n < ira_objects_num; n++)
+ if (ira_object_id_map[n] != NULL)
+ {
+ *obj = ira_object_id_map[n];
+ i->n = n + 1;
+ return true;
+ }
+ return false;
+}
+
+/* Loop over all objects. In each iteration, OBJ is set to the next
+ object. ITER is an instance of ira_object_iterator used to iterate
+ the objects. */
+#define FOR_EACH_OBJECT(OBJ, ITER) \
+ for (ira_object_iter_init (&(ITER)); \
+ ira_object_iter_cond (&(ITER), &(OBJ));)
\f
/* The iterator for copies. */
@@ -1006,38 +1060,33 @@ ira_copy_iter_cond (ira_copy_iterator *i
#define FOR_EACH_COPY(C, ITER) \
for (ira_copy_iter_init (&(ITER)); \
ira_copy_iter_cond (&(ITER), &(C));)
-
-
\f
-
-/* The iterator for allocno conflicts. */
+/* The iterator for allocno conflicts. */
typedef struct {
-
- /* TRUE if the conflicts are represented by vector of allocnos. */
- bool allocno_conflict_vec_p;
+ /* TRUE if the conflicts are represented by vector of objects. */
+ bool conflict_vec_p;
/* The conflict vector or conflict bit vector. */
void *vec;
/* The number of the current element in the vector (of type
- ira_allocno_t or IRA_INT_TYPE). */
+ ira_object_t or IRA_INT_TYPE). */
unsigned int word_num;
/* The bit vector size. It is defined only if
- ALLOCNO_CONFLICT_VEC_P is FALSE. */
+ OBJECT_CONFLICT_VEC_P is FALSE. */
unsigned int size;
/* The current bit index of bit vector. It is defined only if
- ALLOCNO_CONFLICT_VEC_P is FALSE. */
+ OBJECT_CONFLICT_VEC_P is FALSE. */
unsigned int bit_num;
- /* Allocno conflict id corresponding to the 1st bit of the bit
- vector. It is defined only if ALLOCNO_CONFLICT_VEC_P is
- FALSE. */
+ /* The object id corresponding to the 1st bit of the bit vector. It
+ is defined only if OBJECT_CONFLICT_VEC_P is FALSE. */
int base_conflict_id;
/* The word of bit vector currently visited. It is defined only if
- ALLOCNO_CONFLICT_VEC_P is FALSE. */
+ OBJECT_CONFLICT_VEC_P is FALSE. */
unsigned IRA_INT_TYPE word;
} ira_allocno_conflict_iterator;
@@ -1046,21 +1095,22 @@ static inline void
ira_allocno_conflict_iter_init (ira_allocno_conflict_iterator *i,
ira_allocno_t allocno)
{
- i->allocno_conflict_vec_p = ALLOCNO_CONFLICT_VEC_P (allocno);
- i->vec = ALLOCNO_CONFLICT_ALLOCNO_ARRAY (allocno);
+ ira_object_t obj = ALLOCNO_OBJECT (allocno);
+ i->conflict_vec_p = OBJECT_CONFLICT_VEC_P (obj);
+ i->vec = OBJECT_CONFLICT_ARRAY (obj);
i->word_num = 0;
- if (i->allocno_conflict_vec_p)
+ if (i->conflict_vec_p)
i->size = i->bit_num = i->base_conflict_id = i->word = 0;
else
{
- if (ALLOCNO_MIN (allocno) > ALLOCNO_MAX (allocno))
+ if (OBJECT_MIN (obj) > OBJECT_MAX (obj))
i->size = 0;
else
- i->size = ((ALLOCNO_MAX (allocno) - ALLOCNO_MIN (allocno)
+ i->size = ((OBJECT_MAX (obj) - OBJECT_MIN (obj)
+ IRA_INT_BITS)
/ IRA_INT_BITS) * sizeof (IRA_INT_TYPE);
i->bit_num = 0;
- i->base_conflict_id = ALLOCNO_MIN (allocno);
+ i->base_conflict_id = OBJECT_MIN (obj);
i->word = (i->size == 0 ? 0 : ((IRA_INT_TYPE *) i->vec)[0]);
}
}
@@ -1072,15 +1122,13 @@ static inline bool
ira_allocno_conflict_iter_cond (ira_allocno_conflict_iterator *i,
ira_allocno_t *a)
{
- ira_allocno_t conflict_allocno;
+ ira_object_t obj;
- if (i->allocno_conflict_vec_p)
+ if (i->conflict_vec_p)
{
- conflict_allocno = ((ira_allocno_t *) i->vec)[i->word_num];
- if (conflict_allocno == NULL)
+ obj = ((ira_object_t *) i->vec)[i->word_num];
+ if (obj == NULL)
return false;
- *a = conflict_allocno;
- return true;
}
else
{
@@ -1100,17 +1148,18 @@ ira_allocno_conflict_iter_cond (ira_allo
for (; (i->word & 1) == 0; i->word >>= 1)
i->bit_num++;
- *a = ira_conflict_id_allocno_map[i->bit_num + i->base_conflict_id];
-
- return true;
+ obj = ira_object_id_map[i->bit_num + i->base_conflict_id];
}
+
+ *a = OBJECT_ALLOCNO (obj);
+ return true;
}
/* Advance to the next conflicting allocno. */
static inline void
ira_allocno_conflict_iter_next (ira_allocno_conflict_iterator *i)
{
- if (i->allocno_conflict_vec_p)
+ if (i->conflict_vec_p)
i->word_num++;
else
{
Index: ira-color.c
===================================================================
--- ira-color.c.orig
+++ ira-color.c
@@ -476,9 +476,11 @@ assign_hard_reg (ira_allocno_t allocno,
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+
mem_cost += ALLOCNO_UPDATED_MEMORY_COST (a);
IOR_HARD_REG_SET (conflicting_regs,
- ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a));
+ OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
ira_allocate_and_copy_costs (&ALLOCNO_UPDATED_HARD_REG_COSTS (a),
cover_class, ALLOCNO_HARD_REG_COSTS (a));
a_costs = ALLOCNO_UPDATED_HARD_REG_COSTS (a);
@@ -1375,7 +1377,8 @@ setup_allocno_available_regs_num (ira_al
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
- IOR_HARD_REG_SET (temp_set, ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a));
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ IOR_HARD_REG_SET (temp_set, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
if (a == allocno)
break;
}
@@ -1411,7 +1414,8 @@ setup_allocno_left_conflicts_size (ira_a
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
- IOR_HARD_REG_SET (temp_set, ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a));
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ IOR_HARD_REG_SET (temp_set, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
if (a == allocno)
break;
}
@@ -2790,11 +2794,12 @@ allocno_reload_assign (ira_allocno_t a,
enum reg_class cover_class;
int regno = ALLOCNO_REGNO (a);
HARD_REG_SET saved;
+ ira_object_t obj = ALLOCNO_OBJECT (a);
- COPY_HARD_REG_SET (saved, ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a));
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a), forbidden_regs);
+ COPY_HARD_REG_SET (saved, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), forbidden_regs);
if (! flag_caller_saves && ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a), call_used_reg_set);
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), call_used_reg_set);
ALLOCNO_ASSIGNED_P (a) = false;
cover_class = ALLOCNO_COVER_CLASS (a);
update_curr_costs (a);
@@ -2833,7 +2838,7 @@ allocno_reload_assign (ira_allocno_t a,
}
else if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
fprintf (ira_dump_file, "\n");
- COPY_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a), saved);
+ COPY_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), saved);
return reg_renumber[regno] >= 0;
}
@@ -3261,8 +3266,10 @@ fast_allocation (void)
allocno_priority_compare_func);
for (i = 0; i < num; i++)
{
+ ira_object_t obj;
a = sorted_allocnos[i];
- COPY_HARD_REG_SET (conflict_hard_regs, ALLOCNO_CONFLICT_HARD_REGS (a));
+ obj = ALLOCNO_OBJECT (a);
+ COPY_HARD_REG_SET (conflict_hard_regs, OBJECT_CONFLICT_HARD_REGS (obj));
for (r = ALLOCNO_LIVE_RANGES (a); r != NULL; r = r->next)
for (j = r->start; j <= r->finish; j++)
IOR_HARD_REG_SET (conflict_hard_regs, used_hard_regs[j]);
Index: ira-lives.c
===================================================================
--- ira-lives.c.orig
+++ ira-lives.c
@@ -91,10 +91,10 @@ make_hard_regno_born (int regno)
SET_HARD_REG_BIT (hard_regs_live, regno);
EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, i)
{
- SET_HARD_REG_BIT (ALLOCNO_CONFLICT_HARD_REGS (ira_allocnos[i]),
- regno);
- SET_HARD_REG_BIT (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (ira_allocnos[i]),
- regno);
+ ira_allocno_t allocno = ira_allocnos[i];
+ ira_object_t obj = ALLOCNO_OBJECT (allocno);
+ SET_HARD_REG_BIT (OBJECT_CONFLICT_HARD_REGS (obj), regno);
+ SET_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), regno);
}
}
@@ -113,10 +113,11 @@ static void
make_allocno_born (ira_allocno_t a)
{
live_range_t p = ALLOCNO_LIVE_RANGES (a);
+ ira_object_t obj = ALLOCNO_OBJECT (a);
sparseset_set_bit (allocnos_live, ALLOCNO_NUM (a));
- IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a), hard_regs_live);
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a), hard_regs_live);
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), hard_regs_live);
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), hard_regs_live);
if (p == NULL
|| (p->finish != curr_point && p->finish + 1 != curr_point))
@@ -839,12 +840,14 @@ process_single_reg_class_operands (bool
a = ira_allocnos[px];
if (a != operand_a)
{
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+
/* We could increase costs of A instead of making it
conflicting with the hard register. But it works worse
because it will be spilled in reload in anyway. */
- IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a),
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
reg_class_contents[cl]);
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a),
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
reg_class_contents[cl]);
}
}
@@ -1029,14 +1032,16 @@ process_bb_node_lives (ira_loop_tree_nod
|| find_reg_note (insn, REG_SETJMP,
NULL_RTX) != NULL_RTX)
{
- SET_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a));
- SET_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a));
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ SET_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj));
+ SET_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
}
if (can_throw_internal (insn))
{
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a),
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
call_used_reg_set);
- IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a),
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
call_used_reg_set);
}
}
Index: ira-emit.c
===================================================================
--- ira-emit.c.orig
+++ ira-emit.c
@@ -646,7 +646,7 @@ static move_t
modify_move_list (move_t list)
{
int i, n, nregs, hard_regno;
- ira_allocno_t to, from, new_allocno;
+ ira_allocno_t to, from;
move_t move, new_move, set_move, first, last;
if (list == NULL)
@@ -715,6 +715,9 @@ modify_move_list (move_t list)
&& ALLOCNO_HARD_REGNO
(hard_regno_last_set[hard_regno + i]->to) >= 0)
{
+ ira_allocno_t new_allocno;
+ ira_object_t new_obj;
+
set_move = hard_regno_last_set[hard_regno + i];
/* It does not matter what loop_tree_node (of TO or
FROM) to use for the new allocno because of
@@ -726,16 +729,19 @@ modify_move_list (move_t list)
ALLOCNO_MODE (new_allocno) = ALLOCNO_MODE (set_move->to);
ira_set_allocno_cover_class
(new_allocno, ALLOCNO_COVER_CLASS (set_move->to));
+ ira_create_allocno_object (new_allocno);
ALLOCNO_ASSIGNED_P (new_allocno) = true;
ALLOCNO_HARD_REGNO (new_allocno) = -1;
ALLOCNO_REG (new_allocno)
= create_new_reg (ALLOCNO_REG (set_move->to));
- ALLOCNO_CONFLICT_ID (new_allocno) = ALLOCNO_NUM (new_allocno);
+
+ new_obj = ALLOCNO_OBJECT (new_allocno);
+
/* Make it possibly conflicting with all earlier
created allocnos. Cases where temporary allocnos
created to remove the cycles are quite rare. */
- ALLOCNO_MIN (new_allocno) = 0;
- ALLOCNO_MAX (new_allocno) = ira_allocnos_num - 1;
+ OBJECT_MIN (new_obj) = 0;
+ OBJECT_MAX (new_obj) = ira_objects_num - 1;
new_move = create_move (set_move->to, new_allocno);
set_move->to = new_allocno;
VEC_safe_push (move_t, heap, move_vec, new_move);
@@ -911,7 +917,7 @@ add_range_and_copies_from_move_list (mov
int start, n;
unsigned int regno;
move_t move;
- ira_allocno_t to, from, a;
+ ira_allocno_t a;
ira_copy_t cp;
live_range_t r;
bitmap_iterator bi;
@@ -929,22 +935,23 @@ add_range_and_copies_from_move_list (mov
start = ira_max_point;
for (move = list; move != NULL; move = move->next)
{
- from = move->from;
- to = move->to;
- if (ALLOCNO_CONFLICT_ALLOCNO_ARRAY (to) == NULL)
+ ira_allocno_t from = move->from;
+ ira_allocno_t to = move->to;
+ ira_object_t from_obj = ALLOCNO_OBJECT (from);
+ ira_object_t to_obj = ALLOCNO_OBJECT (to);
+ if (OBJECT_CONFLICT_ARRAY (to_obj) == NULL)
{
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
fprintf (ira_dump_file, " Allocate conflicts for a%dr%d\n",
ALLOCNO_NUM (to), REGNO (ALLOCNO_REG (to)));
- ira_allocate_allocno_conflicts (to, n);
+ ira_allocate_object_conflicts (to_obj, n);
}
bitmap_clear_bit (live_through, ALLOCNO_REGNO (from));
bitmap_clear_bit (live_through, ALLOCNO_REGNO (to));
- IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (from), hard_regs_live);
- IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (to), hard_regs_live);
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (from),
- hard_regs_live);
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (to), hard_regs_live);
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (from_obj), hard_regs_live);
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (to_obj), hard_regs_live);
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (from_obj), hard_regs_live);
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (to_obj), hard_regs_live);
update_costs (from, true, freq);
update_costs (to, false, freq);
cp = ira_add_allocno_copy (from, to, freq, false, move->insn, NULL);
@@ -994,6 +1001,7 @@ add_range_and_copies_from_move_list (mov
}
EXECUTE_IF_SET_IN_BITMAP (live_through, FIRST_PSEUDO_REGISTER, regno, bi)
{
+ ira_allocno_t to;
a = node->regno_allocno_map[regno];
if ((to = ALLOCNO_MEM_OPTIMIZED_DEST (a)) != NULL)
a = to;
Index: ira-build.c
===================================================================
--- ira-build.c.orig
+++ ira-build.c
@@ -71,9 +71,12 @@ ira_allocno_t *ira_allocnos;
/* Sizes of the previous array. */
int ira_allocnos_num;
-/* Map conflict id -> allocno with given conflict id (see comments for
- allocno member `conflict_id'). */
-ira_allocno_t *ira_conflict_id_allocno_map;
+/* Count of conflict record structures we've created, used when creating
+ a new conflict id. */
+int ira_objects_num;
+
+/* Map a conflict id to its conflict record. */
+ira_object_t *ira_object_id_map;
/* Array of references to all copies. The order number of the copy
corresponds to the index in the array. Removed copies have NULL
@@ -380,19 +383,18 @@ rebuild_regno_allocno_maps (void)
loop_tree_node->regno_allocno_map[regno] = a;
}
}
-
\f
-/* Pools for allocnos and live ranges. */
-static alloc_pool allocno_pool, live_range_pool;
+/* Pools for allocnos, allocno live ranges and objects. */
+static alloc_pool allocno_pool, live_range_pool, object_pool;
/* Vec containing references to all created allocnos. It is a
container of array allocnos. */
static VEC(ira_allocno_t,heap) *allocno_vec;
-/* Vec containing references to all created allocnos. It is a
- container of ira_conflict_id_allocno_map. */
-static VEC(ira_allocno_t,heap) *ira_conflict_id_allocno_map_vec;
+/* Vec containing references to all created ira_objects. It is a
+ container of ira_object_id_map. */
+static VEC(ira_object_t,heap) *ira_object_id_map_vec;
/* Initialize data concerning allocnos. */
static void
@@ -403,17 +405,48 @@ initiate_allocnos (void)
sizeof (struct live_range), 100);
allocno_pool
= create_alloc_pool ("allocnos", sizeof (struct ira_allocno), 100);
+ object_pool
+ = create_alloc_pool ("objects", sizeof (struct ira_object), 100);
allocno_vec = VEC_alloc (ira_allocno_t, heap, max_reg_num () * 2);
ira_allocnos = NULL;
ira_allocnos_num = 0;
- ira_conflict_id_allocno_map_vec
- = VEC_alloc (ira_allocno_t, heap, max_reg_num () * 2);
- ira_conflict_id_allocno_map = NULL;
+ ira_objects_num = 0;
+ ira_object_id_map_vec
+ = VEC_alloc (ira_object_t, heap, max_reg_num () * 2);
+ ira_object_id_map = NULL;
ira_regno_allocno_map
= (ira_allocno_t *) ira_allocate (max_reg_num () * sizeof (ira_allocno_t));
memset (ira_regno_allocno_map, 0, max_reg_num () * sizeof (ira_allocno_t));
}
+/* Create and return an object corresponding to a new allocno A. */
+static ira_object_t
+ira_create_object (ira_allocno_t a)
+{
+ enum reg_class cover_class = ALLOCNO_COVER_CLASS (a);
+ ira_object_t obj = (ira_object_t) pool_alloc (object_pool);
+
+ OBJECT_ALLOCNO (obj) = a;
+ OBJECT_CONFLICT_ID (obj) = ira_objects_num;
+ OBJECT_CONFLICT_VEC_P (obj) = false;
+ OBJECT_CONFLICT_ARRAY (obj) = NULL;
+ OBJECT_NUM_CONFLICTS (obj) = 0;
+ COPY_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), ira_no_alloc_regs);
+ COPY_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), ira_no_alloc_regs);
+ IOR_COMPL_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
+ reg_class_contents[cover_class]);
+ IOR_COMPL_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
+ reg_class_contents[cover_class]);
+ OBJECT_MIN (obj) = INT_MAX;
+ OBJECT_MAX (obj) = -1;
+
+ VEC_safe_push (ira_object_t, heap, ira_object_id_map_vec, obj);
+ ira_object_id_map
+ = VEC_address (ira_object_t, ira_object_id_map_vec);
+ ira_objects_num = VEC_length (ira_object_t, ira_object_id_map_vec);
+ return obj;
+}
+
/* Create and return the allocno corresponding to REGNO in
LOOP_TREE_NODE. Add the allocno to the list of allocnos with the
same regno if CAP_P is FALSE. */
@@ -439,10 +472,6 @@ ira_create_allocno (int regno, bool cap_
ALLOCNO_CAP_MEMBER (a) = NULL;
ALLOCNO_NUM (a) = ira_allocnos_num;
bitmap_set_bit (loop_tree_node->all_allocnos, ALLOCNO_NUM (a));
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a) = NULL;
- ALLOCNO_CONFLICT_ALLOCNOS_NUM (a) = 0;
- COPY_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a), ira_no_alloc_regs);
- COPY_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a), ira_no_alloc_regs);
ALLOCNO_NREFS (a) = 0;
ALLOCNO_FREQ (a) = 0;
ALLOCNO_HARD_REGNO (a) = -1;
@@ -462,7 +491,6 @@ ira_create_allocno (int regno, bool cap_
ALLOCNO_ASSIGNED_P (a) = false;
ALLOCNO_MAY_BE_SPILLED_P (a) = false;
ALLOCNO_SPLAY_REMOVED_P (a) = false;
- ALLOCNO_CONFLICT_VEC_P (a) = false;
ALLOCNO_MODE (a) = (regno < 0 ? VOIDmode : PSEUDO_REGNO_MODE (regno));
ALLOCNO_COPIES (a) = NULL;
ALLOCNO_HARD_REG_COSTS (a) = NULL;
@@ -481,15 +509,10 @@ ira_create_allocno (int regno, bool cap_
ALLOCNO_FIRST_COALESCED_ALLOCNO (a) = a;
ALLOCNO_NEXT_COALESCED_ALLOCNO (a) = a;
ALLOCNO_LIVE_RANGES (a) = NULL;
- ALLOCNO_MIN (a) = INT_MAX;
- ALLOCNO_MAX (a) = -1;
- ALLOCNO_CONFLICT_ID (a) = ira_allocnos_num;
+
VEC_safe_push (ira_allocno_t, heap, allocno_vec, a);
ira_allocnos = VEC_address (ira_allocno_t, allocno_vec);
ira_allocnos_num = VEC_length (ira_allocno_t, allocno_vec);
- VEC_safe_push (ira_allocno_t, heap, ira_conflict_id_allocno_map_vec, a);
- ira_conflict_id_allocno_map
- = VEC_address (ira_allocno_t, ira_conflict_id_allocno_map_vec);
return a;
}
@@ -498,10 +521,24 @@ void
ira_set_allocno_cover_class (ira_allocno_t a, enum reg_class cover_class)
{
ALLOCNO_COVER_CLASS (a) = cover_class;
- IOR_COMPL_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a),
- reg_class_contents[cover_class]);
- IOR_COMPL_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a),
- reg_class_contents[cover_class]);
+}
+
+/* Allocate an object for allocno A and set ALLOCNO_OBJECT. */
+void
+ira_create_allocno_object (ira_allocno_t a)
+{
+ ALLOCNO_OBJECT (a) = ira_create_object (a);
+}
+
+/* For each allocno, create the corresponding ALLOCNO_OBJECT structure. */
+static void
+create_allocno_objects (void)
+{
+ ira_allocno_t a;
+ ira_allocno_iterator ai;
+
+ FOR_EACH_ALLOCNO (a, ai)
+ ira_create_allocno_object (a);
}
/* Merge hard register conflicts from allocno FROM into allocno TO. If
@@ -510,11 +547,13 @@ static void
merge_hard_reg_conflicts (ira_allocno_t from, ira_allocno_t to,
bool total_only)
{
+ ira_object_t from_obj = ALLOCNO_OBJECT (from);
+ ira_object_t to_obj = ALLOCNO_OBJECT (to);
if (!total_only)
- IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (to),
- ALLOCNO_CONFLICT_HARD_REGS (from));
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (to),
- ALLOCNO_TOTAL_CONFLICT_HARD_REGS (from));
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (to_obj),
+ OBJECT_CONFLICT_HARD_REGS (from_obj));
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (to_obj),
+ OBJECT_TOTAL_CONFLICT_HARD_REGS (from_obj));
#ifdef STACK_REGS
if (!total_only && ALLOCNO_NO_STACK_REG_P (from))
ALLOCNO_NO_STACK_REG_P (to) = true;
@@ -523,111 +562,110 @@ merge_hard_reg_conflicts (ira_allocno_t
#endif
}
-/* Return TRUE if the conflict vector with NUM elements is more
- profitable than conflict bit vector for A. */
+/* Return TRUE if a conflict vector with NUM elements is more
+ profitable than a conflict bit vector for OBJ. */
bool
-ira_conflict_vector_profitable_p (ira_allocno_t a, int num)
+ira_conflict_vector_profitable_p (ira_object_t obj, int num)
{
int nw;
+ int max = OBJECT_MAX (obj);
+ int min = OBJECT_MIN (obj);
- if (ALLOCNO_MAX (a) < ALLOCNO_MIN (a))
- /* We prefer bit vector in such case because it does not result in
- allocation. */
+ if (max < min)
+ /* We prefer a bit vector in such case because it does not result
+ in allocation. */
return false;
- nw = (ALLOCNO_MAX (a) - ALLOCNO_MIN (a) + IRA_INT_BITS) / IRA_INT_BITS;
- return (2 * sizeof (ira_allocno_t) * (num + 1)
+ nw = (max - min + IRA_INT_BITS) / IRA_INT_BITS;
+ return (2 * sizeof (ira_object_t) * (num + 1)
< 3 * nw * sizeof (IRA_INT_TYPE));
}
-/* Allocates and initialize the conflict vector of A for NUM
- conflicting allocnos. */
+/* Allocates and initialize the conflict vector of OBJ for NUM
+ conflicting objects. */
void
-ira_allocate_allocno_conflict_vec (ira_allocno_t a, int num)
+ira_allocate_conflict_vec (ira_object_t obj, int num)
{
int size;
- ira_allocno_t *vec;
+ ira_object_t *vec;
- ira_assert (ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a) == NULL);
+ ira_assert (OBJECT_CONFLICT_ARRAY (obj) == NULL);
num++; /* for NULL end marker */
- size = sizeof (ira_allocno_t) * num;
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a) = ira_allocate (size);
- vec = (ira_allocno_t *) ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a);
+ size = sizeof (ira_object_t) * num;
+ OBJECT_CONFLICT_ARRAY (obj) = ira_allocate (size);
+ vec = (ira_object_t *) OBJECT_CONFLICT_ARRAY (obj);
vec[0] = NULL;
- ALLOCNO_CONFLICT_ALLOCNOS_NUM (a) = 0;
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a) = size;
- ALLOCNO_CONFLICT_VEC_P (a) = true;
+ OBJECT_NUM_CONFLICTS (obj) = 0;
+ OBJECT_CONFLICT_ARRAY_SIZE (obj) = size;
+ OBJECT_CONFLICT_VEC_P (obj) = true;
}
-/* Allocate and initialize the conflict bit vector of A. */
+/* Allocate and initialize the conflict bit vector of OBJ. */
static void
-allocate_allocno_conflict_bit_vec (ira_allocno_t a)
+allocate_conflict_bit_vec (ira_object_t obj)
{
unsigned int size;
- ira_assert (ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a) == NULL);
- size = ((ALLOCNO_MAX (a) - ALLOCNO_MIN (a) + IRA_INT_BITS)
+ ira_assert (OBJECT_CONFLICT_ARRAY (obj) == NULL);
+ size = ((OBJECT_MAX (obj) - OBJECT_MIN (obj) + IRA_INT_BITS)
/ IRA_INT_BITS * sizeof (IRA_INT_TYPE));
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a) = ira_allocate (size);
- memset (ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a), 0, size);
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a) = size;
- ALLOCNO_CONFLICT_VEC_P (a) = false;
+ OBJECT_CONFLICT_ARRAY (obj) = ira_allocate (size);
+ memset (OBJECT_CONFLICT_ARRAY (obj), 0, size);
+ OBJECT_CONFLICT_ARRAY_SIZE (obj) = size;
+ OBJECT_CONFLICT_VEC_P (obj) = false;
}
/* Allocate and initialize the conflict vector or conflict bit vector
of A for NUM conflicting allocnos whatever is more profitable. */
void
-ira_allocate_allocno_conflicts (ira_allocno_t a, int num)
+ira_allocate_object_conflicts (ira_object_t a, int num)
{
if (ira_conflict_vector_profitable_p (a, num))
- ira_allocate_allocno_conflict_vec (a, num);
+ ira_allocate_conflict_vec (a, num);
else
- allocate_allocno_conflict_bit_vec (a);
+ allocate_conflict_bit_vec (a);
}
-/* Add A2 to the conflicts of A1. */
+/* Add OBJ2 to the conflicts of OBJ1. */
static void
-add_to_allocno_conflicts (ira_allocno_t a1, ira_allocno_t a2)
+add_to_conflicts (ira_object_t obj1, ira_object_t obj2)
{
int num;
unsigned int size;
- if (ALLOCNO_CONFLICT_VEC_P (a1))
+ if (OBJECT_CONFLICT_VEC_P (obj1))
{
- ira_allocno_t *vec;
-
- num = ALLOCNO_CONFLICT_ALLOCNOS_NUM (a1) + 2;
- if (ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a1)
- >= num * sizeof (ira_allocno_t))
- vec = (ira_allocno_t *) ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a1);
- else
+ ira_object_t *vec = OBJECT_CONFLICT_VEC (obj1);
+ int curr_num = OBJECT_NUM_CONFLICTS (obj1);
+ num = curr_num + 2;
+ if (OBJECT_CONFLICT_ARRAY_SIZE (obj1) < num * sizeof (ira_object_t))
{
+ ira_object_t *newvec;
size = (3 * num / 2 + 1) * sizeof (ira_allocno_t);
- vec = (ira_allocno_t *) ira_allocate (size);
- memcpy (vec, ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a1),
- sizeof (ira_allocno_t) * ALLOCNO_CONFLICT_ALLOCNOS_NUM (a1));
- ira_free (ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a1));
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a1) = vec;
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a1) = size;
+ newvec = (ira_object_t *) ira_allocate (size);
+ memcpy (newvec, vec, curr_num * sizeof (ira_object_t));
+ ira_free (vec);
+ vec = newvec;
+ OBJECT_CONFLICT_ARRAY (obj1) = vec;
+ OBJECT_CONFLICT_ARRAY_SIZE (obj1) = size;
}
- vec[num - 2] = a2;
+ vec[num - 2] = obj2;
vec[num - 1] = NULL;
- ALLOCNO_CONFLICT_ALLOCNOS_NUM (a1)++;
+ OBJECT_NUM_CONFLICTS (obj1)++;
}
else
{
int nw, added_head_nw, id;
- IRA_INT_TYPE *vec;
+ IRA_INT_TYPE *vec = OBJECT_CONFLICT_BITVEC (obj1);
- id = ALLOCNO_CONFLICT_ID (a2);
- vec = (IRA_INT_TYPE *) ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a1);
- if (ALLOCNO_MIN (a1) > id)
+ id = OBJECT_CONFLICT_ID (obj2);
+ if (OBJECT_MIN (obj1) > id)
{
/* Expand head of the bit vector. */
- added_head_nw = (ALLOCNO_MIN (a1) - id - 1) / IRA_INT_BITS + 1;
- nw = (ALLOCNO_MAX (a1) - ALLOCNO_MIN (a1)) / IRA_INT_BITS + 1;
+ added_head_nw = (OBJECT_MIN (obj1) - id - 1) / IRA_INT_BITS + 1;
+ nw = (OBJECT_MAX (obj1) - OBJECT_MIN (obj1)) / IRA_INT_BITS + 1;
size = (nw + added_head_nw) * sizeof (IRA_INT_TYPE);
- if (ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a1) >= size)
+ if (OBJECT_CONFLICT_ARRAY_SIZE (obj1) >= size)
{
memmove ((char *) vec + added_head_nw * sizeof (IRA_INT_TYPE),
vec, nw * sizeof (IRA_INT_TYPE));
@@ -639,97 +677,93 @@ add_to_allocno_conflicts (ira_allocno_t
= (3 * (nw + added_head_nw) / 2 + 1) * sizeof (IRA_INT_TYPE);
vec = (IRA_INT_TYPE *) ira_allocate (size);
memcpy ((char *) vec + added_head_nw * sizeof (IRA_INT_TYPE),
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a1),
- nw * sizeof (IRA_INT_TYPE));
+ OBJECT_CONFLICT_ARRAY (obj1), nw * sizeof (IRA_INT_TYPE));
memset (vec, 0, added_head_nw * sizeof (IRA_INT_TYPE));
memset ((char *) vec
+ (nw + added_head_nw) * sizeof (IRA_INT_TYPE),
0, size - (nw + added_head_nw) * sizeof (IRA_INT_TYPE));
- ira_free (ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a1));
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a1) = vec;
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a1) = size;
+ ira_free (OBJECT_CONFLICT_ARRAY (obj1));
+ OBJECT_CONFLICT_ARRAY (obj1) = vec;
+ OBJECT_CONFLICT_ARRAY_SIZE (obj1) = size;
}
- ALLOCNO_MIN (a1) -= added_head_nw * IRA_INT_BITS;
+ OBJECT_MIN (obj1) -= added_head_nw * IRA_INT_BITS;
}
- else if (ALLOCNO_MAX (a1) < id)
+ else if (OBJECT_MAX (obj1) < id)
{
- nw = (id - ALLOCNO_MIN (a1)) / IRA_INT_BITS + 1;
+ nw = (id - OBJECT_MIN (obj1)) / IRA_INT_BITS + 1;
size = nw * sizeof (IRA_INT_TYPE);
- if (ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a1) < size)
+ if (OBJECT_CONFLICT_ARRAY_SIZE (obj1) < size)
{
/* Expand tail of the bit vector. */
size = (3 * nw / 2 + 1) * sizeof (IRA_INT_TYPE);
vec = (IRA_INT_TYPE *) ira_allocate (size);
- memcpy (vec, ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a1),
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a1));
- memset ((char *) vec + ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a1),
- 0, size - ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a1));
- ira_free (ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a1));
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a1) = vec;
- ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a1) = size;
+ memcpy (vec, OBJECT_CONFLICT_ARRAY (obj1), OBJECT_CONFLICT_ARRAY_SIZE (obj1));
+ memset ((char *) vec + OBJECT_CONFLICT_ARRAY_SIZE (obj1),
+ 0, size - OBJECT_CONFLICT_ARRAY_SIZE (obj1));
+ ira_free (OBJECT_CONFLICT_ARRAY (obj1));
+ OBJECT_CONFLICT_ARRAY (obj1) = vec;
+ OBJECT_CONFLICT_ARRAY_SIZE (obj1) = size;
}
- ALLOCNO_MAX (a1) = id;
+ OBJECT_MAX (obj1) = id;
}
- SET_MINMAX_SET_BIT (vec, id, ALLOCNO_MIN (a1), ALLOCNO_MAX (a1));
+ SET_MINMAX_SET_BIT (vec, id, OBJECT_MIN (obj1), OBJECT_MAX (obj1));
}
}
-/* Add A1 to the conflicts of A2 and vise versa. */
-void
-ira_add_allocno_conflict (ira_allocno_t a1, ira_allocno_t a2)
+/* Add OBJ1 to the conflicts of OBJ2 and vice versa. */
+static void
+ira_add_conflict (ira_object_t obj1, ira_object_t obj2)
{
- add_to_allocno_conflicts (a1, a2);
- add_to_allocno_conflicts (a2, a1);
+ add_to_conflicts (obj1, obj2);
+ add_to_conflicts (obj2, obj1);
}
-/* Clear all conflicts of allocno A. */
+/* Clear all conflicts of OBJ. */
static void
-clear_allocno_conflicts (ira_allocno_t a)
+clear_conflicts (ira_object_t obj)
{
- if (ALLOCNO_CONFLICT_VEC_P (a))
+ if (OBJECT_CONFLICT_VEC_P (obj))
{
- ALLOCNO_CONFLICT_ALLOCNOS_NUM (a) = 0;
- ((ira_allocno_t *) ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a))[0] = NULL;
+ OBJECT_NUM_CONFLICTS (obj) = 0;
+ OBJECT_CONFLICT_VEC (obj)[0] = NULL;
}
- else if (ALLOCNO_CONFLICT_ALLOCNO_ARRAY_SIZE (a) != 0)
+ else if (OBJECT_CONFLICT_ARRAY_SIZE (obj) != 0)
{
int nw;
- nw = (ALLOCNO_MAX (a) - ALLOCNO_MIN (a)) / IRA_INT_BITS + 1;
- memset (ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a), 0,
- nw * sizeof (IRA_INT_TYPE));
+ nw = (OBJECT_MAX (obj) - OBJECT_MIN (obj)) / IRA_INT_BITS + 1;
+ memset (OBJECT_CONFLICT_BITVEC (obj), 0, nw * sizeof (IRA_INT_TYPE));
}
}
/* The array used to find duplications in conflict vectors of
allocnos. */
-static int *allocno_conflict_check;
+static int *conflict_check;
/* The value used to mark allocation presence in conflict vector of
the current allocno. */
-static int curr_allocno_conflict_check_tick;
+static int curr_conflict_check_tick;
-/* Remove duplications in conflict vector of A. */
+/* Remove duplications in conflict vector of OBJ. */
static void
-compress_allocno_conflict_vec (ira_allocno_t a)
+compress_conflict_vec (ira_object_t obj)
{
- ira_allocno_t *vec, conflict_a;
+ ira_object_t *vec, conflict_obj;
int i, j;
- ira_assert (ALLOCNO_CONFLICT_VEC_P (a));
- vec = (ira_allocno_t *) ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a);
- curr_allocno_conflict_check_tick++;
- for (i = j = 0; (conflict_a = vec[i]) != NULL; i++)
+ ira_assert (OBJECT_CONFLICT_VEC_P (obj));
+ vec = OBJECT_CONFLICT_VEC (obj);
+ curr_conflict_check_tick++;
+ for (i = j = 0; (conflict_obj = vec[i]) != NULL; i++)
{
- if (allocno_conflict_check[ALLOCNO_NUM (conflict_a)]
- != curr_allocno_conflict_check_tick)
+ int id = OBJECT_CONFLICT_ID (conflict_obj);
+ if (conflict_check[id] != curr_conflict_check_tick)
{
- allocno_conflict_check[ALLOCNO_NUM (conflict_a)]
- = curr_allocno_conflict_check_tick;
- vec[j++] = conflict_a;
+ conflict_check[id] = curr_conflict_check_tick;
+ vec[j++] = conflict_obj;
}
}
- ALLOCNO_CONFLICT_ALLOCNOS_NUM (a) = j;
+ OBJECT_NUM_CONFLICTS (obj) = j;
vec[j] = NULL;
}
@@ -740,14 +774,16 @@ compress_conflict_vecs (void)
ira_allocno_t a;
ira_allocno_iterator ai;
- allocno_conflict_check
- = (int *) ira_allocate (sizeof (int) * ira_allocnos_num);
- memset (allocno_conflict_check, 0, sizeof (int) * ira_allocnos_num);
- curr_allocno_conflict_check_tick = 0;
+ conflict_check = (int *) ira_allocate (sizeof (int) * ira_objects_num);
+ memset (conflict_check, 0, sizeof (int) * ira_objects_num);
+ curr_conflict_check_tick = 0;
FOR_EACH_ALLOCNO (a, ai)
- if (ALLOCNO_CONFLICT_VEC_P (a))
- compress_allocno_conflict_vec (a);
- ira_free (allocno_conflict_check);
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ if (OBJECT_CONFLICT_VEC_P (obj))
+ compress_conflict_vec (obj);
+ }
+ ira_free (conflict_check);
}
/* This recursive function outputs allocno A and if it is a cap the
@@ -786,6 +822,7 @@ create_cap_allocno (ira_allocno_t a)
ALLOCNO_MODE (cap) = ALLOCNO_MODE (a);
cover_class = ALLOCNO_COVER_CLASS (a);
ira_set_allocno_cover_class (cap, cover_class);
+ ira_create_allocno_object (cap);
ALLOCNO_AVAILABLE_REGS_NUM (cap) = ALLOCNO_AVAILABLE_REGS_NUM (a);
ALLOCNO_CAP_MEMBER (cap) = a;
ALLOCNO_CAP (a) = cap;
@@ -994,11 +1031,9 @@ static void
finish_allocno (ira_allocno_t a)
{
enum reg_class cover_class = ALLOCNO_COVER_CLASS (a);
+ ira_object_t obj = ALLOCNO_OBJECT (a);
ira_allocnos[ALLOCNO_NUM (a)] = NULL;
- ira_conflict_id_allocno_map[ALLOCNO_CONFLICT_ID (a)] = NULL;
- if (ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a) != NULL)
- ira_free (ALLOCNO_CONFLICT_ALLOCNO_ARRAY (a));
if (ALLOCNO_HARD_REG_COSTS (a) != NULL)
ira_free_cost_vector (ALLOCNO_HARD_REG_COSTS (a), cover_class);
if (ALLOCNO_CONFLICT_HARD_REG_COSTS (a) != NULL)
@@ -1010,6 +1045,11 @@ finish_allocno (ira_allocno_t a)
cover_class);
ira_finish_allocno_live_range_list (ALLOCNO_LIVE_RANGES (a));
pool_free (allocno_pool, a);
+
+ ira_object_id_map[OBJECT_CONFLICT_ID (obj)] = NULL;
+ if (OBJECT_CONFLICT_ARRAY (obj) != NULL)
+ ira_free (OBJECT_CONFLICT_ARRAY (obj));
+ pool_free (object_pool, obj);
}
/* Free the memory allocated for all allocnos. */
@@ -1022,9 +1062,10 @@ finish_allocnos (void)
FOR_EACH_ALLOCNO (a, ai)
finish_allocno (a);
ira_free (ira_regno_allocno_map);
- VEC_free (ira_allocno_t, heap, ira_conflict_id_allocno_map_vec);
+ VEC_free (ira_object_t, heap, ira_object_id_map_vec);
VEC_free (ira_allocno_t, heap, allocno_vec);
free_alloc_pool (allocno_pool);
+ free_alloc_pool (object_pool);
free_alloc_pool (live_range_pool);
}
@@ -2079,11 +2120,13 @@ remove_low_level_allocnos (void)
regno = ALLOCNO_REGNO (a);
if (ira_loop_tree_root->regno_allocno_map[regno] == a)
{
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+
ira_regno_allocno_map[regno] = a;
ALLOCNO_NEXT_REGNO_ALLOCNO (a) = NULL;
ALLOCNO_CAP_MEMBER (a) = NULL;
- COPY_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a),
- ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a));
+ COPY_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
+ OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
#ifdef STACK_REGS
if (ALLOCNO_TOTAL_NO_STACK_REG_P (a))
ALLOCNO_NO_STACK_REG_P (a) = true;
@@ -2202,20 +2245,24 @@ setup_min_max_allocno_live_range_point (
FOR_EACH_ALLOCNO (a, ai)
{
+ ira_object_t obj = ALLOCNO_OBJECT (a);
r = ALLOCNO_LIVE_RANGES (a);
if (r == NULL)
continue;
- ALLOCNO_MAX (a) = r->finish;
+ OBJECT_MAX (obj) = r->finish;
for (; r->next != NULL; r = r->next)
;
- ALLOCNO_MIN (a) = r->start;
+ OBJECT_MIN (obj) = r->start;
}
for (i = max_reg_num () - 1; i >= FIRST_PSEUDO_REGISTER; i--)
for (a = ira_regno_allocno_map[i];
a != NULL;
a = ALLOCNO_NEXT_REGNO_ALLOCNO (a))
{
- if (ALLOCNO_MAX (a) < 0)
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ ira_object_t parent_obj;
+
+ if (OBJECT_MAX (obj) < 0)
continue;
ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
/* Accumulation of range info. */
@@ -2223,26 +2270,29 @@ setup_min_max_allocno_live_range_point (
{
for (cap = ALLOCNO_CAP (a); cap != NULL; cap = ALLOCNO_CAP (cap))
{
- if (ALLOCNO_MAX (cap) < ALLOCNO_MAX (a))
- ALLOCNO_MAX (cap) = ALLOCNO_MAX (a);
- if (ALLOCNO_MIN (cap) > ALLOCNO_MIN (a))
- ALLOCNO_MIN (cap) = ALLOCNO_MIN (a);
+ ira_object_t cap_obj = ALLOCNO_OBJECT (cap);
+ if (OBJECT_MAX (cap_obj) < OBJECT_MAX (obj))
+ OBJECT_MAX (cap_obj) = OBJECT_MAX (obj);
+ if (OBJECT_MIN (cap_obj) > OBJECT_MIN (obj))
+ OBJECT_MIN (cap_obj) = OBJECT_MIN (obj);
}
continue;
}
if ((parent = ALLOCNO_LOOP_TREE_NODE (a)->parent) == NULL)
continue;
parent_a = parent->regno_allocno_map[i];
- if (ALLOCNO_MAX (parent_a) < ALLOCNO_MAX (a))
- ALLOCNO_MAX (parent_a) = ALLOCNO_MAX (a);
- if (ALLOCNO_MIN (parent_a) > ALLOCNO_MIN (a))
- ALLOCNO_MIN (parent_a) = ALLOCNO_MIN (a);
+ parent_obj = ALLOCNO_OBJECT (parent_a);
+ if (OBJECT_MAX (parent_obj) < OBJECT_MAX (obj))
+ OBJECT_MAX (parent_obj) = OBJECT_MAX (obj);
+ if (OBJECT_MIN (parent_obj) > OBJECT_MIN (obj))
+ OBJECT_MIN (parent_obj) = OBJECT_MIN (obj);
}
#ifdef ENABLE_IRA_CHECKING
FOR_EACH_ALLOCNO (a, ai)
{
- if ((0 <= ALLOCNO_MIN (a) && ALLOCNO_MIN (a) <= ira_max_point)
- && (0 <= ALLOCNO_MAX (a) && ALLOCNO_MAX (a) <= ira_max_point))
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ if ((0 <= OBJECT_MIN (obj) && OBJECT_MIN (obj) <= ira_max_point)
+ && (0 <= OBJECT_MAX (obj) && OBJECT_MAX (obj) <= ira_max_point))
continue;
gcc_unreachable ();
}
@@ -2251,30 +2301,31 @@ setup_min_max_allocno_live_range_point (
/* Sort allocnos according to their live ranges. Allocnos with
smaller cover class are put first unless we use priority coloring.
- Allocnos with the same cove class are ordered according their start
+ Allocnos with the same cover class are ordered according their start
(min). Allocnos with the same start are ordered according their
finish (max). */
static int
allocno_range_compare_func (const void *v1p, const void *v2p)
{
int diff;
- ira_allocno_t a1 = *(const ira_allocno_t *) v1p;
- ira_allocno_t a2 = *(const ira_allocno_t *) v2p;
+ ira_object_t obj1 = *(const ira_object_t *) v1p;
+ ira_object_t obj2 = *(const ira_object_t *) v2p;
+ ira_allocno_t a1 = OBJECT_ALLOCNO (obj1);
+ ira_allocno_t a2 = OBJECT_ALLOCNO (obj2);
if (flag_ira_algorithm != IRA_ALGORITHM_PRIORITY
&& (diff = ALLOCNO_COVER_CLASS (a1) - ALLOCNO_COVER_CLASS (a2)) != 0)
return diff;
- if ((diff = ALLOCNO_MIN (a1) - ALLOCNO_MIN (a2)) != 0)
+ if ((diff = OBJECT_MIN (obj1) - OBJECT_MIN (obj2)) != 0)
return diff;
- if ((diff = ALLOCNO_MAX (a1) - ALLOCNO_MAX (a2)) != 0)
+ if ((diff = OBJECT_MAX (obj1) - OBJECT_MAX (obj2)) != 0)
return diff;
return ALLOCNO_NUM (a1) - ALLOCNO_NUM (a2);
}
-/* Sort ira_conflict_id_allocno_map and set up conflict id of
- allocnos. */
+/* Sort ira_object_id_map and set up conflict id of allocnos. */
static void
-sort_conflict_id_allocno_map (void)
+sort_conflict_id_map (void)
{
int i, num;
ira_allocno_t a;
@@ -2282,14 +2333,17 @@ sort_conflict_id_allocno_map (void)
num = 0;
FOR_EACH_ALLOCNO (a, ai)
- ira_conflict_id_allocno_map[num++] = a;
- qsort (ira_conflict_id_allocno_map, num, sizeof (ira_allocno_t),
+ ira_object_id_map[num++] = ALLOCNO_OBJECT (a);
+ qsort (ira_object_id_map, num, sizeof (ira_object_t),
allocno_range_compare_func);
for (i = 0; i < num; i++)
- if ((a = ira_conflict_id_allocno_map[i]) != NULL)
- ALLOCNO_CONFLICT_ID (a) = i;
- for (i = num; i < ira_allocnos_num; i++)
- ira_conflict_id_allocno_map[i] = NULL;
+ {
+ ira_object_t obj = ira_object_id_map[i];
+ gcc_assert (obj != NULL);
+ OBJECT_CONFLICT_ID (obj) = i;
+ }
+ for (i = num; i < ira_objects_num; i++)
+ ira_object_id_map[i] = NULL;
}
/* Set up minimal and maximal conflict ids of allocnos with which
@@ -2302,14 +2356,17 @@ setup_min_max_conflict_allocno_ids (void
int *live_range_min, *last_lived;
ira_allocno_t a;
- live_range_min = (int *) ira_allocate (sizeof (int) * ira_allocnos_num);
+ live_range_min = (int *) ira_allocate (sizeof (int) * ira_objects_num);
cover_class = -1;
first_not_finished = -1;
- for (i = 0; i < ira_allocnos_num; i++)
+ for (i = 0; i < ira_objects_num; i++)
{
- a = ira_conflict_id_allocno_map[i];
- if (a == NULL)
+ ira_object_t obj = ira_object_id_map[i];
+ if (obj == NULL)
continue;
+
+ a = OBJECT_ALLOCNO (obj);
+
if (cover_class < 0
|| (flag_ira_algorithm != IRA_ALGORITHM_PRIORITY
&& cover_class != (int) ALLOCNO_COVER_CLASS (a)))
@@ -2320,13 +2377,13 @@ setup_min_max_conflict_allocno_ids (void
}
else
{
- start = ALLOCNO_MIN (a);
+ start = OBJECT_MIN (obj);
/* If we skip an allocno, the allocno with smaller ids will
be also skipped because of the secondary sorting the
range finishes (see function
allocno_range_compare_func). */
while (first_not_finished < i
- && start > ALLOCNO_MAX (ira_conflict_id_allocno_map
+ && start > OBJECT_MAX (ira_object_id_map
[first_not_finished]))
first_not_finished++;
min = first_not_finished;
@@ -2335,17 +2392,19 @@ setup_min_max_conflict_allocno_ids (void
/* We could increase min further in this case but it is good
enough. */
min++;
- live_range_min[i] = ALLOCNO_MIN (a);
- ALLOCNO_MIN (a) = min;
+ live_range_min[i] = OBJECT_MIN (obj);
+ OBJECT_MIN (obj) = min;
}
last_lived = (int *) ira_allocate (sizeof (int) * ira_max_point);
cover_class = -1;
filled_area_start = -1;
- for (i = ira_allocnos_num - 1; i >= 0; i--)
+ for (i = ira_objects_num - 1; i >= 0; i--)
{
- a = ira_conflict_id_allocno_map[i];
- if (a == NULL)
+ ira_object_t obj = ira_object_id_map[i];
+ if (obj == NULL)
continue;
+
+ a = OBJECT_ALLOCNO (obj);
if (cover_class < 0
|| (flag_ira_algorithm != IRA_ALGORITHM_PRIORITY
&& cover_class != (int) ALLOCNO_COVER_CLASS (a)))
@@ -2356,13 +2415,13 @@ setup_min_max_conflict_allocno_ids (void
filled_area_start = ira_max_point;
}
min = live_range_min[i];
- finish = ALLOCNO_MAX (a);
+ finish = OBJECT_MAX (obj);
max = last_lived[finish];
if (max < 0)
/* We could decrease max further in this case but it is good
enough. */
- max = ALLOCNO_CONFLICT_ID (a) - 1;
- ALLOCNO_MAX (a) = max;
+ max = OBJECT_CONFLICT_ID (obj) - 1;
+ OBJECT_MAX (obj) = max;
/* In filling, we can go further A range finish to recognize
intersection quickly because if the finish of subsequently
processed allocno (it has smaller conflict id) range is
@@ -2506,13 +2565,14 @@ ira_flattening (int max_regno_before_emi
new_pseudos_p = merged_p = false;
FOR_EACH_ALLOCNO (a, ai)
{
+ ira_object_t obj = ALLOCNO_OBJECT (a);
if (ALLOCNO_CAP_MEMBER (a) != NULL)
/* Caps are not in the regno allocno maps and they are never
will be transformed into allocnos existing after IR
flattening. */
continue;
- COPY_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a),
- ALLOCNO_CONFLICT_HARD_REGS (a));
+ COPY_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
+ OBJECT_CONFLICT_HARD_REGS (obj));
#ifdef STACK_REGS
ALLOCNO_TOTAL_NO_STACK_REG_P (a) = ALLOCNO_NO_STACK_REG_P (a);
#endif
@@ -2600,7 +2660,7 @@ ira_flattening (int max_regno_before_emi
continue;
for (r = ALLOCNO_LIVE_RANGES (a); r != NULL; r = r->next)
ira_assert (r->allocno == a);
- clear_allocno_conflicts (a);
+ clear_conflicts (ALLOCNO_OBJECT (a));
}
allocnos_live = sparseset_alloc (ira_allocnos_num);
for (i = 0; i < ira_max_point; i++)
@@ -2622,7 +2682,11 @@ ira_flattening (int max_regno_before_emi
[cover_class][ALLOCNO_COVER_CLASS (live_a)]
/* Don't set up conflict for the allocno with itself. */
&& num != (int) n)
- ira_add_allocno_conflict (a, live_a);
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ ira_object_t live_obj = ALLOCNO_OBJECT (live_a);
+ ira_add_conflict (obj, live_obj);
+ }
}
}
@@ -2814,6 +2878,7 @@ ira_build (bool loops_p)
form_loop_tree ();
create_allocnos ();
ira_costs ();
+ create_allocno_objects ();
ira_create_allocno_live_ranges ();
remove_unnecessary_regions (false);
ira_compress_allocno_live_ranges ();
@@ -2829,7 +2894,7 @@ ira_build (bool loops_p)
check_allocno_creation ();
#endif
setup_min_max_allocno_live_range_point ();
- sort_conflict_id_allocno_map ();
+ sort_conflict_id_map ();
setup_min_max_conflict_allocno_ids ();
ira_build_conflicts ();
update_conflict_hard_reg_costs ();
@@ -2850,9 +2915,10 @@ ira_build (bool loops_p)
FOR_EACH_ALLOCNO (a, ai)
if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
{
- IOR_HARD_REG_SET (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a),
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
call_used_reg_set);
- IOR_HARD_REG_SET (ALLOCNO_CONFLICT_HARD_REGS (a),
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
call_used_reg_set);
}
}
@@ -2867,7 +2933,10 @@ ira_build (bool loops_p)
n = 0;
FOR_EACH_ALLOCNO (a, ai)
- n += ALLOCNO_CONFLICT_ALLOCNOS_NUM (a);
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ n += OBJECT_NUM_CONFLICTS (obj);
+ }
nr = 0;
FOR_EACH_ALLOCNO (a, ai)
for (r = ALLOCNO_LIVE_RANGES (a); r != NULL; r = r->next)
Index: ira.c
===================================================================
--- ira.c.orig
+++ ira.c
@@ -1378,6 +1378,7 @@ ira_bad_reload_regno_1 (int regno, rtx x
{
int x_regno;
ira_allocno_t a;
+ ira_object_t obj;
enum reg_class pref;
/* We only deal with pseudo regs. */
@@ -1397,7 +1398,8 @@ ira_bad_reload_regno_1 (int regno, rtx x
/* If the pseudo conflicts with REGNO, then we consider REGNO a
poor choice for a reload regno. */
a = ira_regno_allocno_map[x_regno];
- if (TEST_HARD_REG_BIT (ALLOCNO_TOTAL_CONFLICT_HARD_REGS (a), regno))
+ obj = ALLOCNO_OBJECT (a);
+ if (TEST_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), regno))
return true;
return false;
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 7/9: Introduce ira_object_t
2010-06-18 14:37 ` Patch 7/9: Introduce ira_object_t Bernd Schmidt
@ 2010-06-18 22:07 ` Jeff Law
0 siblings, 0 replies; 42+ messages in thread
From: Jeff Law @ 2010-06-18 22:07 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: GCC Patches
On 06/18/10 08:10, Bernd Schmidt wrote:
> This introduces a new structure, ira_object_t, which is split off from
> ira_allocno_t. Objects are used to track information related to
> conflicts. There is at the moment a 1:1 correspondence between objects
> and allocnos, but the plan is to introduce 2 objects for DImode allocnos
> with a suitable cover class.
I presume you're not going to try and deal with cases where the
allocno's mode requires > 2 registers (most obvious example is XFmode in
general registers on x86, but there are others). I suspect that the
gain for handling > 2 hard regs is so small that it's not worth the effort.
This patch is fine.
Jeff
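As a concrete picture of the 1:1 correspondence described in the quoted summary, here is a minimal sketch (illustrative only, not part of the patch; it assumes the ira-int.h accessors the patch introduces, and the function name is made up):

/* Illustrative sketch: every allocno owns exactly one conflict object,
   and the two point at each other through the new accessors.  */
static void
check_allocno_object_links (void)
{
  ira_allocno_t a;
  ira_allocno_iterator ai;

  FOR_EACH_ALLOCNO (a, ai)
    {
      ira_object_t obj = ALLOCNO_OBJECT (a);   /* allocno -> its object */
      ira_assert (OBJECT_ALLOCNO (obj) == a);  /* object -> its allocno */
    }
}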
^ permalink raw reply [flat|nested] 42+ messages in thread
* Patch 8/9: track live ranges for objects
2010-06-18 14:08 Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Bernd Schmidt
` (6 preceding siblings ...)
2010-06-18 14:37 ` Patch 7/9: Introduce ira_object_t Bernd Schmidt
@ 2010-06-18 14:48 ` Bernd Schmidt
2010-06-18 22:41 ` Jeff Law
2010-06-18 15:26 ` Patch 9/9: change FOR_EACH_ALLOCNO_CONFLICT to use objects Bernd Schmidt
` (2 subsequent siblings)
10 siblings, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-06-18 14:48 UTC (permalink / raw)
To: GCC Patches
[-- Attachment #1: Type: text/plain, Size: 0 bytes --]
[-- Attachment #2: object-liveranges.diff --]
[-- Type: text/plain, Size: 28325 bytes --]
This moves the remaining piece of conflict information, live ranges, into the
object structure.
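As a rough sketch of the resulting usage (illustrative only; the helper name is invented, but the accessors and live_range fields are the ones defined by this patch), live ranges are now reached through an allocno's object rather than the allocno itself:

/* Hypothetical helper, for illustration only: count the program points
   covered by an allocno's live ranges, walking them via its object.  */
static int
count_live_points (ira_allocno_t a)
{
  ira_object_t obj = ALLOCNO_OBJECT (a);
  live_range_t r;
  int n = 0;

  for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
    n += r->finish - r->start + 1;
  return n;
}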
* ira-int.h (struct live_range): Rename allocno member to object and change
type to ira_object_t.
(struct ira_object): New member live_ranges.
(struct ira_allocno): Remove member live_ranges.
(ALLOCNO_LIVE_RANGES): Remove.
(OBJECT_LIVE_RANGES): New macro.
(ira_create_live_range, ira_copy_live_range_list,
ira_merge_live_ranges, ira_live_ranges_intersect_p,
ira_finish_live_range, ira_finish_live_range_list): Adjust declarations.
* ira-build.c (ira_create_object): Initialize live ranges here.
(ira_create_allocno): Not here.
(ira_create_live_range): Rename from ira_create_allocno_live_range, arg
changed to ira_object_t, all callers changed.
(copy_live_range): Rename from copy_allocno_live_range, all callers
changed.
(ira_copy_live_range_list): Rename from ira_copy_allocno_live_range_list,
all callers changed.
(ira_merge_live_ranges): Rename from ira_merge_allocno_live_ranges,
all callers changed.
(ira_live_ranges_intersect_p): Rename from
ira_allocno_live_ranges_intersect_p, all callers changed.
(ira_finish_live_range): Rename from ira_finish_allocno_live_range, all
callers changed.
(ira_finish_live_range_list): Rename from
ira_finish_allocno_live_range_list, all callers changed.
(change_object_in_range_list): Rename from change_allocno_in_range_list,
last arg changed to ira_object_t, all callers changed.
(finish_allocno): Changed to expect live ranges in the allocno's object.
(move_allocno_live_ranges, copy_allocno_live_ranges,
update_bad_spill_attribute, setup_min_max_allocno_live_range_point,
ira_flattening, ira_build): Likewise.
* ira-color.c (allocnos_have_intersected_live_ranges_p,
slot_coalesced_allocno_live_ranges_intersect,
setup_slot_coalesced_allocno_live_ranges, fast_allocation): Likewise.
* ira-conflicts.c (build_conflict_bit_table): Likewise.
* ira-emit.c (add_range_and_copies_from_move_list): Likewise.
* ira-lives.c (make_allocno_born, update_allocno_pressure_excess_length,
make_allocno_dead, create_start_finish_chains,
remove_some_program_points_and_update_live_ranges,
ira_debug_live_range_list): Likewise.
Index: gcc/ira-build.c
===================================================================
--- gcc.orig/ira-build.c
+++ gcc/ira-build.c
@@ -439,6 +439,7 @@ ira_create_object (ira_allocno_t a)
reg_class_contents[cover_class]);
OBJECT_MIN (obj) = INT_MAX;
OBJECT_MAX (obj) = -1;
+ OBJECT_LIVE_RANGES (obj) = NULL;
VEC_safe_push (ira_object_t, heap, ira_object_id_map_vec, obj);
ira_object_id_map
@@ -508,7 +509,6 @@ ira_create_allocno (int regno, bool cap_
ALLOCNO_PREV_BUCKET_ALLOCNO (a) = NULL;
ALLOCNO_FIRST_COALESCED_ALLOCNO (a) = a;
ALLOCNO_NEXT_COALESCED_ALLOCNO (a) = a;
- ALLOCNO_LIVE_RANGES (a) = NULL;
VEC_safe_push (ira_allocno_t, heap, allocno_vec, a);
ira_allocnos = VEC_address (ira_allocno_t, allocno_vec);
@@ -850,13 +850,13 @@ create_cap_allocno (ira_allocno_t a)
/* Create and return allocno live range with given attributes. */
live_range_t
-ira_create_allocno_live_range (ira_allocno_t a, int start, int finish,
- live_range_t next)
+ira_create_live_range (ira_object_t obj, int start, int finish,
+ live_range_t next)
{
live_range_t p;
p = (live_range_t) pool_alloc (live_range_pool);
- p->allocno = a;
+ p->object = obj;
p->start = start;
p->finish = finish;
p->next = next;
@@ -865,7 +865,7 @@ ira_create_allocno_live_range (ira_alloc
/* Copy allocno live range R and return the result. */
static live_range_t
-copy_allocno_live_range (live_range_t r)
+copy_live_range (live_range_t r)
{
live_range_t p;
@@ -877,7 +877,7 @@ copy_allocno_live_range (live_range_t r)
/* Copy allocno live range list given by its head R and return the
result. */
live_range_t
-ira_copy_allocno_live_range_list (live_range_t r)
+ira_copy_live_range_list (live_range_t r)
{
live_range_t p, first, last;
@@ -885,7 +885,7 @@ ira_copy_allocno_live_range_list (live_r
return NULL;
for (first = last = NULL; r != NULL; r = r->next)
{
- p = copy_allocno_live_range (r);
+ p = copy_live_range (r);
if (first == NULL)
first = p;
else
@@ -899,7 +899,7 @@ ira_copy_allocno_live_range_list (live_r
maintains the order of ranges and tries to minimize number of the
result ranges. */
live_range_t
-ira_merge_allocno_live_ranges (live_range_t r1, live_range_t r2)
+ira_merge_live_ranges (live_range_t r1, live_range_t r2)
{
live_range_t first, last, temp;
@@ -923,7 +923,7 @@ ira_merge_allocno_live_ranges (live_rang
r1->finish = r2->finish;
temp = r2;
r2 = r2->next;
- ira_finish_allocno_live_range (temp);
+ ira_finish_live_range (temp);
if (r2 == NULL)
{
/* To try to merge with subsequent ranges in r1. */
@@ -975,7 +975,7 @@ ira_merge_allocno_live_ranges (live_rang
/* Return TRUE if live ranges R1 and R2 intersect. */
bool
-ira_allocno_live_ranges_intersect_p (live_range_t r1, live_range_t r2)
+ira_live_ranges_intersect_p (live_range_t r1, live_range_t r2)
{
/* Remember the live ranges are always kept ordered. */
while (r1 != NULL && r2 != NULL)
@@ -992,21 +992,21 @@ ira_allocno_live_ranges_intersect_p (liv
/* Free allocno live range R. */
void
-ira_finish_allocno_live_range (live_range_t r)
+ira_finish_live_range (live_range_t r)
{
pool_free (live_range_pool, r);
}
/* Free list of allocno live ranges starting with R. */
void
-ira_finish_allocno_live_range_list (live_range_t r)
+ira_finish_live_range_list (live_range_t r)
{
live_range_t next_r;
for (; r != NULL; r = next_r)
{
next_r = r->next;
- ira_finish_allocno_live_range (r);
+ ira_finish_live_range (r);
}
}
@@ -1033,6 +1033,12 @@ finish_allocno (ira_allocno_t a)
enum reg_class cover_class = ALLOCNO_COVER_CLASS (a);
ira_object_t obj = ALLOCNO_OBJECT (a);
+ ira_finish_live_range_list (OBJECT_LIVE_RANGES (obj));
+ ira_object_id_map[OBJECT_CONFLICT_ID (obj)] = NULL;
+ if (OBJECT_CONFLICT_ARRAY (obj) != NULL)
+ ira_free (OBJECT_CONFLICT_ARRAY (obj));
+ pool_free (object_pool, obj);
+
ira_allocnos[ALLOCNO_NUM (a)] = NULL;
if (ALLOCNO_HARD_REG_COSTS (a) != NULL)
ira_free_cost_vector (ALLOCNO_HARD_REG_COSTS (a), cover_class);
@@ -1043,13 +1049,7 @@ finish_allocno (ira_allocno_t a)
if (ALLOCNO_UPDATED_CONFLICT_HARD_REG_COSTS (a) != NULL)
ira_free_cost_vector (ALLOCNO_UPDATED_CONFLICT_HARD_REG_COSTS (a),
cover_class);
- ira_finish_allocno_live_range_list (ALLOCNO_LIVE_RANGES (a));
pool_free (allocno_pool, a);
-
- ira_object_id_map[OBJECT_CONFLICT_ID (obj)] = NULL;
- if (OBJECT_CONFLICT_ARRAY (obj) != NULL)
- ira_free (OBJECT_CONFLICT_ARRAY (obj));
- pool_free (object_pool, obj);
}
/* Free the memory allocated for all allocnos. */
@@ -1695,19 +1695,21 @@ create_allocnos (void)
will hardly improve the result. As a result we speed up regional
register allocation. */
-/* The function changes allocno in range list given by R onto A. */
+/* The function changes the object in range list given by R to OBJ. */
static void
-change_allocno_in_range_list (live_range_t r, ira_allocno_t a)
+change_object_in_range_list (live_range_t r, ira_object_t obj)
{
for (; r != NULL; r = r->next)
- r->allocno = a;
+ r->object = obj;
}
/* Move all live ranges associated with allocno A to allocno OTHER_A. */
static void
move_allocno_live_ranges (ira_allocno_t from, ira_allocno_t to)
{
- live_range_t lr = ALLOCNO_LIVE_RANGES (from);
+ ira_object_t from_obj = ALLOCNO_OBJECT (from);
+ ira_object_t to_obj = ALLOCNO_OBJECT (to);
+ live_range_t lr = OBJECT_LIVE_RANGES (from_obj);
if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
{
@@ -1717,17 +1719,19 @@ move_allocno_live_ranges (ira_allocno_t
ALLOCNO_NUM (to), ALLOCNO_REGNO (to));
ira_print_live_range_list (ira_dump_file, lr);
}
- change_allocno_in_range_list (lr, to);
- ALLOCNO_LIVE_RANGES (to)
- = ira_merge_allocno_live_ranges (lr, ALLOCNO_LIVE_RANGES (to));
- ALLOCNO_LIVE_RANGES (from) = NULL;
+ change_object_in_range_list (lr, to_obj);
+ OBJECT_LIVE_RANGES (to_obj)
+ = ira_merge_live_ranges (lr, OBJECT_LIVE_RANGES (to_obj));
+ OBJECT_LIVE_RANGES (from_obj) = NULL;
}
/* Copy all live ranges associated with allocno A to allocno OTHER_A. */
static void
copy_allocno_live_ranges (ira_allocno_t from, ira_allocno_t to)
{
- live_range_t lr = ALLOCNO_LIVE_RANGES (from);
+ ira_object_t from_obj = ALLOCNO_OBJECT (from);
+ ira_object_t to_obj = ALLOCNO_OBJECT (to);
+ live_range_t lr = OBJECT_LIVE_RANGES (from_obj);
if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
{
@@ -1737,10 +1741,10 @@ copy_allocno_live_ranges (ira_allocno_t
ALLOCNO_NUM (to), ALLOCNO_REGNO (to));
ira_print_live_range_list (ira_dump_file, lr);
}
- lr = ira_copy_allocno_live_range_list (lr);
- change_allocno_in_range_list (lr, to);
- ALLOCNO_LIVE_RANGES (to)
- = ira_merge_allocno_live_ranges (lr, ALLOCNO_LIVE_RANGES (to));
+ lr = ira_copy_live_range_list (lr);
+ change_object_in_range_list (lr, to_obj);
+ OBJECT_LIVE_RANGES (to_obj)
+ = ira_merge_live_ranges (lr, OBJECT_LIVE_RANGES (to_obj));
}
/* Return TRUE if NODE represents a loop with low register
@@ -2200,20 +2204,22 @@ update_bad_spill_attribute (void)
}
FOR_EACH_ALLOCNO (a, ai)
{
+ ira_object_t obj = ALLOCNO_OBJECT (a);
cover_class = ALLOCNO_COVER_CLASS (a);
if (cover_class == NO_REGS)
continue;
- for (r = ALLOCNO_LIVE_RANGES (a); r != NULL; r = r->next)
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
bitmap_set_bit (&dead_points[cover_class], r->finish);
}
FOR_EACH_ALLOCNO (a, ai)
{
+ ira_object_t obj = ALLOCNO_OBJECT (a);
cover_class = ALLOCNO_COVER_CLASS (a);
if (cover_class == NO_REGS)
continue;
if (! ALLOCNO_BAD_SPILL_P (a))
continue;
- for (r = ALLOCNO_LIVE_RANGES (a); r != NULL; r = r->next)
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
{
for (i = r->start + 1; i < r->finish; i++)
if (bitmap_bit_p (&dead_points[cover_class], i))
@@ -2246,7 +2252,7 @@ setup_min_max_allocno_live_range_point (
FOR_EACH_ALLOCNO (a, ai)
{
ira_object_t obj = ALLOCNO_OBJECT (a);
- r = ALLOCNO_LIVE_RANGES (a);
+ r = OBJECT_LIVE_RANGES (obj);
if (r == NULL)
continue;
OBJECT_MAX (obj) = r->finish;
@@ -2544,7 +2550,7 @@ copy_info_to_removed_store_destinations
void
ira_flattening (int max_regno_before_emit, int ira_max_point_before_emit)
{
- int i, j, num;
+ int i, j;
bool keep_p;
int hard_regs_num;
bool new_pseudos_p, merged_p, mem_dest_p;
@@ -2556,7 +2562,6 @@ ira_flattening (int max_regno_before_emi
live_range_t r;
ira_allocno_iterator ai;
ira_copy_iterator ci;
- sparseset allocnos_live;
regno_top_level_allocno_map
= (ira_allocno_t *) ira_allocate (max_reg_num () * sizeof (ira_allocno_t));
@@ -2652,48 +2657,48 @@ ira_flattening (int max_regno_before_emi
ira_rebuild_start_finish_chains ();
if (new_pseudos_p)
{
+ sparseset objects_live;
+
/* Rebuild conflicts. */
FOR_EACH_ALLOCNO (a, ai)
{
+ ira_object_t obj = ALLOCNO_OBJECT (a);
if (a != regno_top_level_allocno_map[REGNO (ALLOCNO_REG (a))]
|| ALLOCNO_CAP_MEMBER (a) != NULL)
continue;
- for (r = ALLOCNO_LIVE_RANGES (a); r != NULL; r = r->next)
- ira_assert (r->allocno == a);
- clear_conflicts (ALLOCNO_OBJECT (a));
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
+ ira_assert (r->object == obj);
+ clear_conflicts (obj);
}
- allocnos_live = sparseset_alloc (ira_allocnos_num);
+ objects_live = sparseset_alloc (ira_objects_num);
for (i = 0; i < ira_max_point; i++)
{
for (r = ira_start_point_ranges[i]; r != NULL; r = r->start_next)
{
- a = r->allocno;
+ ira_object_t obj = r->object;
+ a = OBJECT_ALLOCNO (obj);
if (a != regno_top_level_allocno_map[REGNO (ALLOCNO_REG (a))]
|| ALLOCNO_CAP_MEMBER (a) != NULL)
continue;
- num = ALLOCNO_NUM (a);
cover_class = ALLOCNO_COVER_CLASS (a);
- sparseset_set_bit (allocnos_live, num);
- EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, n)
+ sparseset_set_bit (objects_live, OBJECT_CONFLICT_ID (obj));
+ EXECUTE_IF_SET_IN_SPARSESET (objects_live, n)
{
- ira_allocno_t live_a = ira_allocnos[n];
+ ira_object_t live_obj = ira_object_id_map[n];
+ ira_allocno_t live_a = OBJECT_ALLOCNO (live_obj);
+ enum reg_class live_cover = ALLOCNO_COVER_CLASS (live_a);
- if (ira_reg_classes_intersect_p
- [cover_class][ALLOCNO_COVER_CLASS (live_a)]
+ if (ira_reg_classes_intersect_p[cover_class][live_cover]
/* Don't set up conflict for the allocno with itself. */
- && num != (int) n)
- {
- ira_object_t obj = ALLOCNO_OBJECT (a);
- ira_object_t live_obj = ALLOCNO_OBJECT (live_a);
- ira_add_conflict (obj, live_obj);
- }
+ && live_a != a)
+ ira_add_conflict (obj, live_obj);
}
}
for (r = ira_finish_point_ranges[i]; r != NULL; r = r->finish_next)
- sparseset_clear_bit (allocnos_live, ALLOCNO_NUM (r->allocno));
+ sparseset_clear_bit (objects_live, OBJECT_CONFLICT_ID (r->object));
}
- sparseset_free (allocnos_live);
+ sparseset_free (objects_live);
compress_conflict_vecs ();
}
/* Mark some copies for removing and change allocnos in the rest
@@ -2939,7 +2944,8 @@ ira_build (bool loops_p)
}
nr = 0;
FOR_EACH_ALLOCNO (a, ai)
- for (r = ALLOCNO_LIVE_RANGES (a); r != NULL; r = r->next)
+ for (r = OBJECT_LIVE_RANGES (ALLOCNO_OBJECT (a)); r != NULL;
+ r = r->next)
nr++;
fprintf (ira_dump_file, " regions=%d, blocks=%d, points=%d\n",
VEC_length (loop_p, ira_loops.larray), n_basic_blocks,
Index: gcc/ira-color.c
===================================================================
--- gcc.orig/ira-color.c
+++ gcc/ira-color.c
@@ -93,14 +93,16 @@ static VEC(ira_allocno_t,heap) *removed_
static bool
allocnos_have_intersected_live_ranges_p (ira_allocno_t a1, ira_allocno_t a2)
{
+ ira_object_t obj1 = ALLOCNO_OBJECT (a1);
+ ira_object_t obj2 = ALLOCNO_OBJECT (a2);
if (a1 == a2)
return false;
if (ALLOCNO_REG (a1) != NULL && ALLOCNO_REG (a2) != NULL
&& (ORIGINAL_REGNO (ALLOCNO_REG (a1))
== ORIGINAL_REGNO (ALLOCNO_REG (a2))))
return false;
- return ira_allocno_live_ranges_intersect_p (ALLOCNO_LIVE_RANGES (a1),
- ALLOCNO_LIVE_RANGES (a2));
+ return ira_live_ranges_intersect_p (OBJECT_LIVE_RANGES (obj1),
+ OBJECT_LIVE_RANGES (obj2));
}
#ifdef ENABLE_IRA_CHECKING
@@ -2510,8 +2512,9 @@ slot_coalesced_allocno_live_ranges_inter
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
- if (ira_allocno_live_ranges_intersect_p
- (slot_coalesced_allocnos_live_ranges[n], ALLOCNO_LIVE_RANGES (a)))
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ if (ira_live_ranges_intersect_p
+ (slot_coalesced_allocnos_live_ranges[n], OBJECT_LIVE_RANGES (obj)))
return true;
if (a == allocno)
break;
@@ -2532,9 +2535,10 @@ setup_slot_coalesced_allocno_live_ranges
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
- r = ira_copy_allocno_live_range_list (ALLOCNO_LIVE_RANGES (a));
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ r = ira_copy_live_range_list (OBJECT_LIVE_RANGES (obj));
slot_coalesced_allocnos_live_ranges[n]
- = ira_merge_allocno_live_ranges
+ = ira_merge_live_ranges
(slot_coalesced_allocnos_live_ranges[n], r);
if (a == allocno)
break;
@@ -2605,8 +2609,7 @@ coalesce_spill_slots (ira_allocno_t *spi
}
}
for (i = 0; i < ira_allocnos_num; i++)
- ira_finish_allocno_live_range_list
- (slot_coalesced_allocnos_live_ranges[i]);
+ ira_finish_live_range_list (slot_coalesced_allocnos_live_ranges[i]);
ira_free (slot_coalesced_allocnos_live_ranges);
return merged_p;
}
@@ -3270,7 +3273,7 @@ fast_allocation (void)
a = sorted_allocnos[i];
obj = ALLOCNO_OBJECT (a);
COPY_HARD_REG_SET (conflict_hard_regs, OBJECT_CONFLICT_HARD_REGS (obj));
- for (r = ALLOCNO_LIVE_RANGES (a); r != NULL; r = r->next)
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
for (j = r->start; j <= r->finish; j++)
IOR_HARD_REG_SET (conflict_hard_regs, used_hard_regs[j]);
cover_class = ALLOCNO_COVER_CLASS (a);
@@ -3297,7 +3300,7 @@ fast_allocation (void)
(prohibited_class_mode_regs[cover_class][mode], hard_regno)))
continue;
ALLOCNO_HARD_REGNO (a) = hard_regno;
- for (r = ALLOCNO_LIVE_RANGES (a); r != NULL; r = r->next)
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
for (k = r->start; k <= r->finish; k++)
IOR_HARD_REG_SET (used_hard_regs[k],
ira_reg_mode_hard_regset[hard_regno][mode]);
Index: gcc/ira-conflicts.c
===================================================================
--- gcc.orig/ira-conflicts.c
+++ gcc/ira-conflicts.c
@@ -132,8 +132,8 @@ build_conflict_bit_table (void)
{
for (r = ira_start_point_ranges[i]; r != NULL; r = r->start_next)
{
- ira_allocno_t allocno = r->allocno;
- ira_object_t obj = ALLOCNO_OBJECT (allocno);
+ ira_object_t obj = r->object;
+ ira_allocno_t allocno = OBJECT_ALLOCNO (obj);
int id = OBJECT_CONFLICT_ID (obj);
cover_class = ALLOCNO_COVER_CLASS (allocno);
@@ -160,8 +160,7 @@ build_conflict_bit_table (void)
for (r = ira_finish_point_ranges[i]; r != NULL; r = r->finish_next)
{
- ira_allocno_t allocno = r->allocno;
- ira_object_t obj = ALLOCNO_OBJECT (allocno);
+ ira_object_t obj = r->object;
sparseset_clear_bit (objects_live, OBJECT_CONFLICT_ID (obj));
}
}
Index: gcc/ira-emit.c
===================================================================
--- gcc.orig/ira-emit.c
+++ gcc/ira-emit.c
@@ -960,11 +960,11 @@ add_range_and_copies_from_move_list (mov
cp->num, ALLOCNO_NUM (cp->first),
REGNO (ALLOCNO_REG (cp->first)), ALLOCNO_NUM (cp->second),
REGNO (ALLOCNO_REG (cp->second)));
- r = ALLOCNO_LIVE_RANGES (from);
+ r = OBJECT_LIVE_RANGES (from_obj);
if (r == NULL || r->finish >= 0)
{
- ALLOCNO_LIVE_RANGES (from)
- = ira_create_allocno_live_range (from, start, ira_max_point, r);
+ OBJECT_LIVE_RANGES (from_obj)
+ = ira_create_live_range (from_obj, start, ira_max_point, r);
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
fprintf (ira_dump_file,
" Adding range [%d..%d] to allocno a%dr%d\n",
@@ -981,14 +981,15 @@ add_range_and_copies_from_move_list (mov
REGNO (ALLOCNO_REG (from)));
}
ira_max_point++;
- ALLOCNO_LIVE_RANGES (to)
- = ira_create_allocno_live_range (to, ira_max_point, -1,
- ALLOCNO_LIVE_RANGES (to));
+ OBJECT_LIVE_RANGES (to_obj)
+ = ira_create_live_range (to_obj, ira_max_point, -1,
+ OBJECT_LIVE_RANGES (to_obj));
ira_max_point++;
}
for (move = list; move != NULL; move = move->next)
{
- r = ALLOCNO_LIVE_RANGES (move->to);
+ ira_object_t to_obj = ALLOCNO_OBJECT (move->to);
+ r = OBJECT_LIVE_RANGES (to_obj);
if (r->finish < 0)
{
r->finish = ira_max_point - 1;
@@ -1002,12 +1003,15 @@ add_range_and_copies_from_move_list (mov
EXECUTE_IF_SET_IN_BITMAP (live_through, FIRST_PSEUDO_REGISTER, regno, bi)
{
ira_allocno_t to;
+ ira_object_t obj;
a = node->regno_allocno_map[regno];
- if ((to = ALLOCNO_MEM_OPTIMIZED_DEST (a)) != NULL)
+ to = ALLOCNO_MEM_OPTIMIZED_DEST (a);
+ if (to != NULL)
a = to;
- ALLOCNO_LIVE_RANGES (a)
- = ira_create_allocno_live_range (a, start, ira_max_point - 1,
- ALLOCNO_LIVE_RANGES (a));
+ obj = ALLOCNO_OBJECT (a);
+ OBJECT_LIVE_RANGES (obj)
+ = ira_create_live_range (obj, start, ira_max_point - 1,
+ OBJECT_LIVE_RANGES (obj));
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
fprintf
(ira_dump_file,
Index: gcc/ira-int.h
===================================================================
--- gcc.orig/ira-int.h
+++ gcc/ira-int.h
@@ -202,7 +202,7 @@ extern ira_loop_tree_node_t ira_loop_nod
struct live_range
{
/* Allocno whose live range is described by given structure. */
- ira_allocno_t allocno;
+ ira_object_t object;
/* Program point range. */
int start, finish;
/* Next structure describing program points where the allocno
@@ -236,7 +236,12 @@ struct ira_object
otherwise. Only objects belonging to allocnos with the
same cover class are in the vector or in the bit vector. */
void *conflicts_array;
- /* Allocated size of the previous array. */
+ /* Pointer to structures describing at what program point the
+ object lives. We always maintain the list in such way that *the
+ ranges in the list are not intersected and ordered by decreasing
+ their program points*. */
+ live_range_t live_ranges;
+ /* Allocated size of the conflicts array. */
unsigned int conflicts_array_size;
/* A unique number for every instance of this structure which is used
to represent it in conflict bit vectors. */
@@ -341,11 +346,6 @@ struct ira_allocno
list is chained by NEXT_COALESCED_ALLOCNO. */
ira_allocno_t first_coalesced_allocno;
ira_allocno_t next_coalesced_allocno;
- /* Pointer to structures describing at what program point the
- allocno lives. We always maintain the list in such way that *the
- ranges in the list are not intersected and ordered by decreasing
- their program points*. */
- live_range_t live_ranges;
/* Pointer to a structure describing conflict information about this
allocno. */
ira_object_t object;
@@ -483,7 +483,6 @@ struct ira_allocno
#define ALLOCNO_TEMP(A) ((A)->temp)
#define ALLOCNO_FIRST_COALESCED_ALLOCNO(A) ((A)->first_coalesced_allocno)
#define ALLOCNO_NEXT_COALESCED_ALLOCNO(A) ((A)->next_coalesced_allocno)
-#define ALLOCNO_LIVE_RANGES(A) ((A)->live_ranges)
#define ALLOCNO_OBJECT(A) ((A)->object)
#define OBJECT_ALLOCNO(C) ((C)->allocno)
@@ -498,6 +497,7 @@ struct ira_allocno
#define OBJECT_MIN(C) ((C)->min)
#define OBJECT_MAX(C) ((C)->max)
#define OBJECT_CONFLICT_ID(C) ((C)->id)
+#define OBJECT_LIVE_RANGES(C) ((C)->live_ranges)
/* Map regno -> allocnos with given regno (see comments for
allocno member `next_regno_allocno'). */
@@ -864,13 +864,13 @@ extern bool ira_conflict_vector_profitab
extern void ira_allocate_conflict_vec (ira_object_t, int);
extern void ira_allocate_object_conflicts (ira_object_t, int);
extern void ira_print_expanded_allocno (ira_allocno_t);
-extern live_range_t ira_create_allocno_live_range (ira_allocno_t, int, int,
- live_range_t);
-extern live_range_t ira_copy_allocno_live_range_list (live_range_t);
-extern live_range_t ira_merge_allocno_live_ranges (live_range_t, live_range_t);
-extern bool ira_allocno_live_ranges_intersect_p (live_range_t, live_range_t);
-extern void ira_finish_allocno_live_range (live_range_t);
-extern void ira_finish_allocno_live_range_list (live_range_t);
+extern live_range_t ira_create_live_range (ira_object_t, int, int,
+ live_range_t);
+extern live_range_t ira_copy_live_range_list (live_range_t);
+extern live_range_t ira_merge_live_ranges (live_range_t, live_range_t);
+extern bool ira_live_ranges_intersect_p (live_range_t, live_range_t);
+extern void ira_finish_live_range (live_range_t);
+extern void ira_finish_live_range_list (live_range_t);
extern void ira_free_allocno_updated_costs (ira_allocno_t);
extern ira_copy_t ira_create_copy (ira_allocno_t, ira_allocno_t,
int, bool, rtx, ira_loop_tree_node_t);
Index: gcc/ira-lives.c
===================================================================
--- gcc.orig/ira-lives.c
+++ gcc/ira-lives.c
@@ -112,8 +112,8 @@ make_hard_regno_dead (int regno)
static void
make_allocno_born (ira_allocno_t a)
{
- live_range_t p = ALLOCNO_LIVE_RANGES (a);
ira_object_t obj = ALLOCNO_OBJECT (a);
+ live_range_t p = OBJECT_LIVE_RANGES (obj);
sparseset_set_bit (allocnos_live, ALLOCNO_NUM (a));
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), hard_regs_live);
@@ -121,9 +121,8 @@ make_allocno_born (ira_allocno_t a)
if (p == NULL
|| (p->finish != curr_point && p->finish + 1 != curr_point))
- ALLOCNO_LIVE_RANGES (a)
- = ira_create_allocno_live_range (a, curr_point, -1,
- ALLOCNO_LIVE_RANGES (a));
+ OBJECT_LIVE_RANGES (obj)
+ = ira_create_live_range (obj, curr_point, -1, p);
}
/* Update ALLOCNO_EXCESS_PRESSURE_POINTS_NUM for allocno A. */
@@ -139,9 +138,10 @@ update_allocno_pressure_excess_length (i
(cl = ira_reg_class_super_classes[cover_class][i]) != LIM_REG_CLASSES;
i++)
{
+ ira_object_t obj = ALLOCNO_OBJECT (a);
if (high_pressure_start_point[cl] < 0)
continue;
- p = ALLOCNO_LIVE_RANGES (a);
+ p = OBJECT_LIVE_RANGES (obj);
ira_assert (p != NULL);
start = (high_pressure_start_point[cl] > p->start
? high_pressure_start_point[cl] : p->start);
@@ -154,9 +154,9 @@ update_allocno_pressure_excess_length (i
static void
make_allocno_dead (ira_allocno_t a)
{
- live_range_t p;
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ live_range_t p = OBJECT_LIVE_RANGES (obj);
- p = ALLOCNO_LIVE_RANGES (a);
ira_assert (p != NULL);
p->finish = curr_point;
update_allocno_pressure_excess_length (a);
@@ -1159,7 +1159,8 @@ create_start_finish_chains (void)
ira_max_point * sizeof (live_range_t));
FOR_EACH_ALLOCNO (a, ai)
{
- for (r = ALLOCNO_LIVE_RANGES (a); r != NULL; r = r->next)
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
{
r->start_next = ira_start_point_ranges[r->start];
ira_start_point_ranges[r->start] = r;
@@ -1188,22 +1189,21 @@ remove_some_program_points_and_update_li
unsigned i;
int n;
int *map;
- ira_allocno_t a;
- ira_allocno_iterator ai;
+ ira_object_t obj;
+ ira_object_iterator oi;
live_range_t r;
bitmap born_or_died;
bitmap_iterator bi;
born_or_died = ira_allocate_bitmap ();
- FOR_EACH_ALLOCNO (a, ai)
- {
- for (r = ALLOCNO_LIVE_RANGES (a); r != NULL; r = r->next)
- {
- ira_assert (r->start <= r->finish);
- bitmap_set_bit (born_or_died, r->start);
+ FOR_EACH_OBJECT (obj, oi)
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
+ {
+ ira_assert (r->start <= r->finish);
+ bitmap_set_bit (born_or_died, r->start);
bitmap_set_bit (born_or_died, r->finish);
- }
- }
+ }
+
map = (int *) ira_allocate (sizeof (int) * ira_max_point);
n = 0;
EXECUTE_IF_SET_IN_BITMAP(born_or_died, 0, i, bi)
@@ -1215,14 +1215,13 @@ remove_some_program_points_and_update_li
fprintf (ira_dump_file, "Compressing live ranges: from %d to %d - %d%%\n",
ira_max_point, n, 100 * n / ira_max_point);
ira_max_point = n;
- FOR_EACH_ALLOCNO (a, ai)
- {
- for (r = ALLOCNO_LIVE_RANGES (a); r != NULL; r = r->next)
- {
- r->start = map[r->start];
- r->finish = map[r->finish];
- }
- }
+
+ FOR_EACH_OBJECT (obj, oi)
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
+ {
+ r->start = map[r->start];
+ r->finish = map[r->finish];
+ }
ira_free (map);
}
@@ -1246,8 +1245,9 @@ ira_debug_live_range_list (live_range_t
static void
print_allocno_live_ranges (FILE *f, ira_allocno_t a)
{
+ ira_object_t obj = ALLOCNO_OBJECT (a);
fprintf (f, " a%d(r%d):", ALLOCNO_NUM (a), ALLOCNO_REGNO (a));
- ira_print_live_range_list (f, ALLOCNO_LIVE_RANGES (a));
+ ira_print_live_range_list (f, OBJECT_LIVE_RANGES (obj));
}
/* Print live ranges of allocno A to stderr. */
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 8/9: track live ranges for objects
2010-06-18 14:48 ` Patch 8/9: track live ranges for objects Bernd Schmidt
@ 2010-06-18 22:41 ` Jeff Law
0 siblings, 0 replies; 42+ messages in thread
From: Jeff Law @ 2010-06-18 22:41 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: GCC Patches
On 06/18/10 08:11, Bernd Schmidt wrote:
Just a couple nits I noticed in ira-int.h:
> /* Pointer to structures describing at what program point the
> + object lives. We always maintain the list in such way that *the
> + ranges in the list are not intersected and ordered by decreasing
> + their program points*. */
Note the "points*" and "*the". Looks like a nit you copied from the
comment for live_ranges. Please fix at your leisure.
The patch itself is fine,
Thanks,
Jeff
^ permalink raw reply [flat|nested] 42+ messages in thread
* Patch 9/9: change FOR_EACH_ALLOCNO_CONFLICT to use objects
2010-06-18 14:08 Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Bernd Schmidt
` (7 preceding siblings ...)
2010-06-18 14:48 ` Patch 8/9: track live ranges for objects Bernd Schmidt
@ 2010-06-18 15:26 ` Bernd Schmidt
2010-06-22 1:45 ` Jeff Law
2010-06-18 20:02 ` Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Vladimir N. Makarov
2010-06-21 18:01 ` Patch 10/9: track subwords of DImode allocnos Bernd Schmidt
10 siblings, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-06-18 15:26 UTC (permalink / raw)
To: GCC Patches
[-- Attachment #1: Type: text/plain, Size: 1 bytes --]
[-- Attachment #2: object-conflicts.diff --]
[-- Type: text/plain, Size: 17563 bytes --]
Now that conflicts are tracked in objects rather than allocnos, this changes
FOR_EACH_ALLOCNO_CONFLICT to FOR_EACH_OBJECT_CONFLICT.
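For reference, a minimal sketch of the new iteration idiom, mirroring the
converted loops in the diff below (all identifiers are from this patch; the
loop body is elided):

  ira_object_t obj = ALLOCNO_OBJECT (a);
  ira_object_t conflict_obj;
  ira_object_conflict_iterator oci;

  FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
    {
      ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
      /* ... same work the old FOR_EACH_ALLOCNO_CONFLICT body did,
         now expressed in terms of conflict_a / conflict_obj ... */
    }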
* ira-int.h (ira_object_conflict_iterator): Rename from
ira_allocno_conflict_iterator.
(ira_object_conflict_iter_init): Rename from
ira_allocno_conflict_iter_init; second arg changed to an ira_object_t.
* ira.c (check_allocation): Use FOR_EACH_OBJECT_CONFLICT rather than
FOR_EACH_ALLOCNO_CONFLICT.
* ira-color.c (assign_hard_reg, push_allocno_to_stack)
setup_allocno_left_conflicts_size, coalesced_allocno_conflict_p,
ira_reassign_conflict_allocnos, ira_reassign_pseudos): Likewise.
* ira-conflicts.c (print_allocno_conflicts): Likewise.
Index: gcc/ira.c
===================================================================
--- gcc.orig/ira.c
+++ gcc/ira.c
@@ -1748,33 +1748,40 @@ calculate_allocation_cost (void)
static void
check_allocation (void)
{
- ira_allocno_t a, conflict_a;
- int hard_regno, conflict_hard_regno, nregs, conflict_nregs;
- ira_allocno_conflict_iterator aci;
+ ira_allocno_t a;
+ int hard_regno, nregs;
ira_allocno_iterator ai;
FOR_EACH_ALLOCNO (a, ai)
{
+ ira_object_t obj, conflict_obj;
+ ira_object_conflict_iterator oci;
+
if (ALLOCNO_CAP_MEMBER (a) != NULL
|| (hard_regno = ALLOCNO_HARD_REGNO (a)) < 0)
continue;
nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (a)];
- FOR_EACH_ALLOCNO_CONFLICT (a, conflict_a, aci)
- if ((conflict_hard_regno = ALLOCNO_HARD_REGNO (conflict_a)) >= 0)
- {
- conflict_nregs
- = (hard_regno_nregs
- [conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);
- if ((conflict_hard_regno <= hard_regno
- && hard_regno < conflict_hard_regno + conflict_nregs)
- || (hard_regno <= conflict_hard_regno
- && conflict_hard_regno < hard_regno + nregs))
- {
- fprintf (stderr, "bad allocation for %d and %d\n",
- ALLOCNO_REGNO (a), ALLOCNO_REGNO (conflict_a));
- gcc_unreachable ();
- }
- }
+ obj = ALLOCNO_OBJECT (a);
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ {
+ ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
+ int conflict_hard_regno = ALLOCNO_HARD_REGNO (conflict_a);
+ if (conflict_hard_regno >= 0)
+ {
+ int conflict_nregs
+ = (hard_regno_nregs
+ [conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);
+ if ((conflict_hard_regno <= hard_regno
+ && hard_regno < conflict_hard_regno + conflict_nregs)
+ || (hard_regno <= conflict_hard_regno
+ && conflict_hard_regno < hard_regno + nregs))
+ {
+ fprintf (stderr, "bad allocation for %d and %d\n",
+ ALLOCNO_REGNO (a), ALLOCNO_REGNO (conflict_a));
+ gcc_unreachable ();
+ }
+ }
+ }
}
}
#endif
Index: gcc/ira-color.c
===================================================================
--- gcc.orig/ira-color.c
+++ gcc/ira-color.c
@@ -446,8 +446,7 @@ assign_hard_reg (ira_allocno_t allocno,
int *conflict_costs;
enum reg_class cover_class, conflict_cover_class;
enum machine_mode mode;
- ira_allocno_t a, conflict_allocno;
- ira_allocno_conflict_iterator aci;
+ ira_allocno_t a;
static int costs[FIRST_PSEUDO_REGISTER], full_costs[FIRST_PSEUDO_REGISTER];
#ifndef HONOR_REG_ALLOC_ORDER
enum reg_class rclass;
@@ -477,6 +476,8 @@ assign_hard_reg (ira_allocno_t allocno,
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
ira_object_t obj = ALLOCNO_OBJECT (a);
+ ira_object_t conflict_obj;
+ ira_object_conflict_iterator oci;
mem_cost += ALLOCNO_UPDATED_MEMORY_COST (a);
IOR_HARD_REG_SET (conflicting_regs,
@@ -500,60 +501,64 @@ assign_hard_reg (ira_allocno_t allocno,
full_costs[i] += cost;
}
/* Take preferences of conflicting allocnos into account. */
- FOR_EACH_ALLOCNO_CONFLICT (a, conflict_allocno, aci)
- /* Reload can give another class so we need to check all
- allocnos. */
- if (retry_p || bitmap_bit_p (consideration_allocno_bitmap,
- ALLOCNO_NUM (conflict_allocno)))
- {
- conflict_cover_class = ALLOCNO_COVER_CLASS (conflict_allocno);
- ira_assert (ira_reg_classes_intersect_p
- [cover_class][conflict_cover_class]);
- if (allocno_coalesced_p)
- {
- if (bitmap_bit_p (processed_coalesced_allocno_bitmap,
- ALLOCNO_NUM (conflict_allocno)))
- continue;
- bitmap_set_bit (processed_coalesced_allocno_bitmap,
- ALLOCNO_NUM (conflict_allocno));
- }
- if (ALLOCNO_ASSIGNED_P (conflict_allocno))
- {
- if ((hard_regno = ALLOCNO_HARD_REGNO (conflict_allocno)) >= 0
- && ira_class_hard_reg_index[cover_class][hard_regno] >= 0)
- {
- IOR_HARD_REG_SET
- (conflicting_regs,
- ira_reg_mode_hard_regset
- [hard_regno][ALLOCNO_MODE (conflict_allocno)]);
- if (hard_reg_set_subset_p (reg_class_contents[cover_class],
- conflicting_regs))
- goto fail;
- }
- }
- else if (! ALLOCNO_MAY_BE_SPILLED_P (ALLOCNO_FIRST_COALESCED_ALLOCNO
- (conflict_allocno)))
- {
- ira_allocate_and_copy_costs
- (&ALLOCNO_UPDATED_CONFLICT_HARD_REG_COSTS (conflict_allocno),
- conflict_cover_class,
- ALLOCNO_CONFLICT_HARD_REG_COSTS (conflict_allocno));
- conflict_costs
- = ALLOCNO_UPDATED_CONFLICT_HARD_REG_COSTS (conflict_allocno);
- if (conflict_costs != NULL)
- for (j = class_size - 1; j >= 0; j--)
- {
- hard_regno = ira_class_hard_regs[cover_class][j];
- ira_assert (hard_regno >= 0);
- k = (ira_class_hard_reg_index
- [conflict_cover_class][hard_regno]);
- if (k < 0)
- continue;
- full_costs[j] -= conflict_costs[k];
- }
- queue_update_cost (conflict_allocno, COST_HOP_DIVISOR);
- }
- }
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ {
+ ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
+
+ /* Reload can give another class so we need to check all
+ allocnos. */
+ if (retry_p || bitmap_bit_p (consideration_allocno_bitmap,
+ ALLOCNO_NUM (conflict_allocno)))
+ {
+ conflict_cover_class = ALLOCNO_COVER_CLASS (conflict_allocno);
+ ira_assert (ira_reg_classes_intersect_p
+ [cover_class][conflict_cover_class]);
+ if (allocno_coalesced_p)
+ {
+ if (bitmap_bit_p (processed_coalesced_allocno_bitmap,
+ ALLOCNO_NUM (conflict_allocno)))
+ continue;
+ bitmap_set_bit (processed_coalesced_allocno_bitmap,
+ ALLOCNO_NUM (conflict_allocno));
+ }
+ if (ALLOCNO_ASSIGNED_P (conflict_allocno))
+ {
+ if ((hard_regno = ALLOCNO_HARD_REGNO (conflict_allocno)) >= 0
+ && ira_class_hard_reg_index[cover_class][hard_regno] >= 0)
+ {
+ IOR_HARD_REG_SET
+ (conflicting_regs,
+ ira_reg_mode_hard_regset
+ [hard_regno][ALLOCNO_MODE (conflict_allocno)]);
+ if (hard_reg_set_subset_p (reg_class_contents[cover_class],
+ conflicting_regs))
+ goto fail;
+ }
+ }
+ else if (! ALLOCNO_MAY_BE_SPILLED_P (ALLOCNO_FIRST_COALESCED_ALLOCNO
+ (conflict_allocno)))
+ {
+ ira_allocate_and_copy_costs
+ (&ALLOCNO_UPDATED_CONFLICT_HARD_REG_COSTS (conflict_allocno),
+ conflict_cover_class,
+ ALLOCNO_CONFLICT_HARD_REG_COSTS (conflict_allocno));
+ conflict_costs
+ = ALLOCNO_UPDATED_CONFLICT_HARD_REG_COSTS (conflict_allocno);
+ if (conflict_costs != NULL)
+ for (j = class_size - 1; j >= 0; j--)
+ {
+ hard_regno = ira_class_hard_regs[cover_class][j];
+ ira_assert (hard_regno >= 0);
+ k = (ira_class_hard_reg_index
+ [conflict_cover_class][hard_regno]);
+ if (k < 0)
+ continue;
+ full_costs[j] -= conflict_costs[k];
+ }
+ queue_update_cost (conflict_allocno, COST_HOP_DIVISOR);
+ }
+ }
+ }
if (a == allocno)
break;
}
@@ -869,9 +874,8 @@ static void
push_allocno_to_stack (ira_allocno_t allocno)
{
int left_conflicts_size, conflict_size, size;
- ira_allocno_t a, conflict_allocno;
+ ira_allocno_t a;
enum reg_class cover_class;
- ira_allocno_conflict_iterator aci;
ALLOCNO_IN_GRAPH_P (allocno) = false;
VEC_safe_push (ira_allocno_t, heap, allocno_stack_vec, allocno);
@@ -884,8 +888,14 @@ push_allocno_to_stack (ira_allocno_t all
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
- FOR_EACH_ALLOCNO_CONFLICT (a, conflict_allocno, aci)
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ ira_object_t conflict_obj;
+ ira_object_conflict_iterator oci;
+
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
+ ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
+
conflict_allocno = ALLOCNO_FIRST_COALESCED_ALLOCNO (conflict_allocno);
if (bitmap_bit_p (coloring_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno)))
@@ -1402,10 +1412,9 @@ static void
setup_allocno_left_conflicts_size (ira_allocno_t allocno)
{
int i, hard_regs_num, hard_regno, conflict_allocnos_size;
- ira_allocno_t a, conflict_allocno;
+ ira_allocno_t a;
enum reg_class cover_class;
HARD_REG_SET temp_set;
- ira_allocno_conflict_iterator aci;
cover_class = ALLOCNO_COVER_CLASS (allocno);
hard_regs_num = ira_class_hard_regs_num[cover_class];
@@ -1441,8 +1450,14 @@ setup_allocno_left_conflicts_size (ira_a
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
- FOR_EACH_ALLOCNO_CONFLICT (a, conflict_allocno, aci)
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ ira_object_t conflict_obj;
+ ira_object_conflict_iterator oci;
+
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
+ ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
+
conflict_allocno
= ALLOCNO_FIRST_COALESCED_ALLOCNO (conflict_allocno);
if (bitmap_bit_p (consideration_allocno_bitmap,
@@ -1560,8 +1575,7 @@ static bool
coalesced_allocno_conflict_p (ira_allocno_t a1, ira_allocno_t a2,
bool reload_p)
{
- ira_allocno_t a, conflict_allocno;
- ira_allocno_conflict_iterator aci;
+ ira_allocno_t a;
if (allocno_coalesced_p)
{
@@ -1579,6 +1593,7 @@ coalesced_allocno_conflict_p (ira_allocn
{
if (reload_p)
{
+ ira_allocno_t conflict_allocno;
for (conflict_allocno = ALLOCNO_NEXT_COALESCED_ALLOCNO (a1);;
conflict_allocno
= ALLOCNO_NEXT_COALESCED_ALLOCNO (conflict_allocno))
@@ -1592,12 +1607,19 @@ coalesced_allocno_conflict_p (ira_allocn
}
else
{
- FOR_EACH_ALLOCNO_CONFLICT (a, conflict_allocno, aci)
- if (conflict_allocno == a1
- || (allocno_coalesced_p
- && bitmap_bit_p (processed_coalesced_allocno_bitmap,
- ALLOCNO_NUM (conflict_allocno))))
- return true;
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ ira_object_t conflict_obj;
+ ira_object_conflict_iterator oci;
+
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ {
+ ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
+ if (conflict_allocno == a1
+ || (allocno_coalesced_p
+ && bitmap_bit_p (processed_coalesced_allocno_bitmap,
+ ALLOCNO_NUM (conflict_allocno))))
+ return true;
+ }
}
if (a == a2)
break;
@@ -2288,8 +2310,7 @@ void
ira_reassign_conflict_allocnos (int start_regno)
{
int i, allocnos_to_color_num;
- ira_allocno_t a, conflict_a;
- ira_allocno_conflict_iterator aci;
+ ira_allocno_t a;
enum reg_class cover_class;
bitmap allocnos_to_color;
ira_allocno_iterator ai;
@@ -2298,6 +2319,10 @@ ira_reassign_conflict_allocnos (int star
allocnos_to_color_num = 0;
FOR_EACH_ALLOCNO (a, ai)
{
+ ira_object_t obj = ALLOCNO_OBJECT (a);
+ ira_object_t conflict_obj;
+ ira_object_conflict_iterator oci;
+
if (! ALLOCNO_ASSIGNED_P (a)
&& ! bitmap_bit_p (allocnos_to_color, ALLOCNO_NUM (a)))
{
@@ -2315,8 +2340,9 @@ ira_reassign_conflict_allocnos (int star
if (ALLOCNO_REGNO (a) < start_regno
|| (cover_class = ALLOCNO_COVER_CLASS (a)) == NO_REGS)
continue;
- FOR_EACH_ALLOCNO_CONFLICT (a, conflict_a, aci)
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
+ ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
ira_assert (ira_reg_classes_intersect_p
[cover_class][ALLOCNO_COVER_CLASS (conflict_a)]);
if (bitmap_bit_p (allocnos_to_color, ALLOCNO_NUM (conflict_a)))
@@ -2873,9 +2899,8 @@ ira_reassign_pseudos (int *spilled_pseud
{
int i, n, regno;
bool changed_p;
- ira_allocno_t a, conflict_a;
+ ira_allocno_t a;
HARD_REG_SET forbidden_regs;
- ira_allocno_conflict_iterator aci;
bitmap temp = BITMAP_ALLOC (NULL);
/* Add pseudos which conflict with pseudos already in
@@ -2887,21 +2912,27 @@ ira_reassign_pseudos (int *spilled_pseud
for (i = 0, n = num; i < n; i++)
{
+ ira_object_t obj, conflict_obj;
+ ira_object_conflict_iterator oci;
int regno = spilled_pseudo_regs[i];
bitmap_set_bit (temp, regno);
a = ira_regno_allocno_map[regno];
- FOR_EACH_ALLOCNO_CONFLICT (a, conflict_a, aci)
- if (ALLOCNO_HARD_REGNO (conflict_a) < 0
- && ! ALLOCNO_DONT_REASSIGN_P (conflict_a)
- && ! bitmap_bit_p (temp, ALLOCNO_REGNO (conflict_a)))
- {
- spilled_pseudo_regs[num++] = ALLOCNO_REGNO (conflict_a);
- bitmap_set_bit (temp, ALLOCNO_REGNO (conflict_a));
- /* ?!? This seems wrong. */
- bitmap_set_bit (consideration_allocno_bitmap,
- ALLOCNO_NUM (conflict_a));
- }
+ obj = ALLOCNO_OBJECT (a);
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ {
+ ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
+ if (ALLOCNO_HARD_REGNO (conflict_a) < 0
+ && ! ALLOCNO_DONT_REASSIGN_P (conflict_a)
+ && ! bitmap_bit_p (temp, ALLOCNO_REGNO (conflict_a)))
+ {
+ spilled_pseudo_regs[num++] = ALLOCNO_REGNO (conflict_a);
+ bitmap_set_bit (temp, ALLOCNO_REGNO (conflict_a));
+ /* ?!? This seems wrong. */
+ bitmap_set_bit (consideration_allocno_bitmap,
+ ALLOCNO_NUM (conflict_a));
+ }
+ }
}
if (num > 1)
Index: gcc/ira-conflicts.c
===================================================================
--- gcc.orig/ira-conflicts.c
+++ gcc/ira-conflicts.c
@@ -717,9 +717,8 @@ static void
print_allocno_conflicts (FILE * file, bool reg_p, ira_allocno_t a)
{
HARD_REG_SET conflicting_hard_regs;
- ira_object_t obj;
- ira_allocno_t conflict_a;
- ira_allocno_conflict_iterator aci;
+ ira_object_t obj, conflict_obj;
+ ira_object_conflict_iterator oci;
basic_block bb;
if (reg_p)
@@ -737,8 +736,9 @@ print_allocno_conflicts (FILE * file, bo
fputs (" conflicts:", file);
obj = ALLOCNO_OBJECT (a);
if (OBJECT_CONFLICT_ARRAY (obj) != NULL)
- FOR_EACH_ALLOCNO_CONFLICT (a, conflict_a, aci)
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
+ ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
if (reg_p)
fprintf (file, " r%d,", ALLOCNO_REGNO (conflict_a));
else
Index: gcc/ira-int.h
===================================================================
--- gcc.orig/ira-int.h
+++ gcc/ira-int.h
@@ -1088,14 +1088,13 @@ typedef struct {
/* The word of bit vector currently visited. It is defined only if
OBJECT_CONFLICT_VEC_P is FALSE. */
unsigned IRA_INT_TYPE word;
-} ira_allocno_conflict_iterator;
+} ira_object_conflict_iterator;
/* Initialize the iterator I with ALLOCNO conflicts. */
static inline void
-ira_allocno_conflict_iter_init (ira_allocno_conflict_iterator *i,
- ira_allocno_t allocno)
+ira_object_conflict_iter_init (ira_object_conflict_iterator *i,
+ ira_object_t obj)
{
- ira_object_t obj = ALLOCNO_OBJECT (allocno);
i->conflict_vec_p = OBJECT_CONFLICT_VEC_P (obj);
i->vec = OBJECT_CONFLICT_ARRAY (obj);
i->word_num = 0;
@@ -1119,8 +1118,8 @@ ira_allocno_conflict_iter_init (ira_allo
case *A is set to the allocno to be visited. Otherwise, return
FALSE. */
static inline bool
-ira_allocno_conflict_iter_cond (ira_allocno_conflict_iterator *i,
- ira_allocno_t *a)
+ira_object_conflict_iter_cond (ira_object_conflict_iterator *i,
+ ira_object_t *pobj)
{
ira_object_t obj;
@@ -1151,13 +1150,13 @@ ira_allocno_conflict_iter_cond (ira_allo
obj = ira_object_id_map[i->bit_num + i->base_conflict_id];
}
- *a = OBJECT_ALLOCNO (obj);
+ *pobj = obj;
return true;
}
/* Advance to the next conflicting allocno. */
static inline void
-ira_allocno_conflict_iter_next (ira_allocno_conflict_iterator *i)
+ira_object_conflict_iter_next (ira_object_conflict_iterator *i)
{
if (i->conflict_vec_p)
i->word_num++;
@@ -1168,14 +1167,13 @@ ira_allocno_conflict_iter_next (ira_allo
}
}
-/* Loop over all allocnos conflicting with ALLOCNO. In each
- iteration, A is set to the next conflicting allocno. ITER is an
- instance of ira_allocno_conflict_iterator used to iterate the
- conflicts. */
-#define FOR_EACH_ALLOCNO_CONFLICT(ALLOCNO, A, ITER) \
- for (ira_allocno_conflict_iter_init (&(ITER), (ALLOCNO)); \
- ira_allocno_conflict_iter_cond (&(ITER), &(A)); \
- ira_allocno_conflict_iter_next (&(ITER)))
+/* Loop over all objects conflicting with OBJ. In each iteration,
+ CONF is set to the next conflicting object. ITER is an instance
+ of ira_object_conflict_iterator used to iterate the conflicts. */
+#define FOR_EACH_OBJECT_CONFLICT(OBJ, CONF, ITER) \
+ for (ira_object_conflict_iter_init (&(ITER), (OBJ)); \
+ ira_object_conflict_iter_cond (&(ITER), &(CONF)); \
+ ira_object_conflict_iter_next (&(ITER)))
\f
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 9/9: change FOR_EACH_ALLOCNO_CONFLICT to use objects
2010-06-18 15:26 ` Patch 9/9: change FOR_EACH_ALLOCNO_CONFLICT to use objects Bernd Schmidt
@ 2010-06-22 1:45 ` Jeff Law
0 siblings, 0 replies; 42+ messages in thread
From: Jeff Law @ 2010-06-22 1:45 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: GCC Patches
On 06/18/10 08:12, Bernd Schmidt wrote:
> Now that conflicts are tracked in objects rather than allocnos, this
> changes
> FOR_EACH_ALLOCNO_CONFLICT to FOR_EACH_OBJECT_CONFLICT.
>
> * ira-int.h (ira_object_conflict_iterator): Rename from
> ira_allocno_conflict_iterator.
> (ira_object_conflict_iter_init): Rename from
> ira_allocno_conflict_iter_init, second arg changed to
> * ira.c (check_allocation): Use FOR_EACH_OBJECT_CONFLICT rather than
> FOR_EACH_ALLOCNO_CONFLICT.
> * ira-color.c (assign_hard_reg, push_allocno_to_stack)
> setup_allocno_left_conflicts_size, coalesced_allocno_conflict_p,
> ira_reassign_conflict_allocnos, ira_reassign_pseudos): Likewise.
> * ira-conflicts.c (print_allocno_conflicts): Likewise.
>
OK
Thanks,
Jeff
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode
2010-06-18 14:08 Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Bernd Schmidt
` (8 preceding siblings ...)
2010-06-18 15:26 ` Patch 9/9: change FOR_EACH_ALLOCNO_CONFLICT to use objects Bernd Schmidt
@ 2010-06-18 20:02 ` Vladimir N. Makarov
2010-06-18 20:11 ` Jeff Law
2010-06-21 18:01 ` Patch 10/9: track subwords of DImode allocnos Bernd Schmidt
10 siblings, 1 reply; 42+ messages in thread
From: Vladimir N. Makarov @ 2010-06-18 20:02 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: GCC Patches, Jeff Law
On 06/18/2010 09:59 AM, Bernd Schmidt wrote:
> I will post a series of patches as a followups to this message; the goal
> of it is to eventually enable us to accurately track lifetimes of
> subwords of DImode values in IRA. For an example, see PR42502. That
> testcase has one remaining problem: we generate unnecessary moves, since
> IRA thinks a hard reg conflicts with a DImode value when in fact it only
> conflicts with one subword.
>
> The problem goes back to
> http://gcc.gnu.org/ml/gcc-patches/2008-04/msg01990.html
> where Kenneth removed REG_NO_CONFLICT blocks, acknowledging that this
> would cause codegen regressions, but without providing any kind of
> replacement for the functionality other than a promise that Vlad would
> fix it in IRA. IMO that should never have been approved, but we'd
> probably have lost the functionality with the conversion to IRA anyway.
>
> The idea behind these patches is to create a new ira_object structure
> which tracks live ranges and conflicts. In a first step, there is one
> such object per allocno; the final patch will add two of them for
> suitable multiword allocnos.
>
> This patch queue is unfinished for now: the final piece, which adds
> ALLOCNO_NUM_OBJECTS and the possiblity of having more than one object
> per allocno, seems to be working, but I haven't tested it very much yet
> and I don't think I'll get it sufficiently cleaned up before the
> weekend. So I'm posting the patches that are already done for initial
> review now.
>
> I think the initial few cleanup patches should go in in any case so I'm
> asking for approval for them. The final three patches in this
> submission perform the conversion to use ira_objects and probably don't
> qualify on their own, but neither do they have any significant negative
> impact - only a few additional ALLOCNO_OBJECT/OBJECT_ALLOCNO
> conversions. It would be good to know if there are any objections in
> principle against the approach.
>
>
Thanks for addressing the problem, Bernd.
Personally I have no objections in principle to the approach. I am only
not sure that it will help benchmarks a lot (after Ian introduced the
subreg lowering pass, although it is a bit conservative), and I am a bit
afraid that it makes IRA even more complicated. But in any case we
should try this patch; maybe it will work very well and will not be as
complicated as Ken's was.
> None of these preliminary patches have been observed to change code
> generation in any way. With the whole set applied I've bootstrapped and
> regression tested on i686-linux.
>
>
Bernd, I'll be on vacation next week; I can start reviewing the
remaining patches when I am back if Jeff has not approved them by then.
As for the patches already approved by Jeff, they are OK with me too.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode
2010-06-18 20:02 ` Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Vladimir N. Makarov
@ 2010-06-18 20:11 ` Jeff Law
0 siblings, 0 replies; 42+ messages in thread
From: Jeff Law @ 2010-06-18 20:11 UTC (permalink / raw)
To: Vladimir N. Makarov; +Cc: Bernd Schmidt, GCC Patches
On 06/18/10 11:59, Vladimir N. Makarov wrote:
> On 06/18/2010 09:59 AM, Bernd Schmidt wrote:
>> I will post a series of patches as a followups to this message; the goal
>> of it is to eventually enable us to accurately track lifetimes of
>> subwords of DImode values in IRA. For an example, see PR42502. That
>> testcase has one remaining problem: we generate unnecessary moves, since
>> IRA thinks a hard reg conflicts with a DImode value when in fact it only
>> conflicts with one subword.
>>
>> The problem goes back to
>> http://gcc.gnu.org/ml/gcc-patches/2008-04/msg01990.html
>> where Kenneth removed REG_NO_CONFLICT blocks, acknowledging that this
>> would cause codegen regressions, but without providing any kind of
>> replacement for the functionality other than a promise that Vlad would
>> fix it in IRA. IMO that should never have been approved, but we'd
>> probably have lost the functionality with the conversion to IRA anyway.
>>
>> The idea behind these patches is to create a new ira_object structure
>> which tracks live ranges and conflicts. In a first step, there is one
>> such object per allocno; the final patch will add two of them for
>> suitable multiword allocnos.
>>
>> This patch queue is unfinished for now: the final piece, which adds
>> ALLOCNO_NUM_OBJECTS and the possiblity of having more than one object
>> per allocno, seems to be working, but I haven't tested it very much yet
>> and I don't think I'll get it sufficiently cleaned up before the
>> weekend. So I'm posting the patches that are already done for initial
>> review now.
>>
>> I think the initial few cleanup patches should go in in any case so I'm
>> asking for approval for them. The final three patches in this
>> submission perform the conversion to use ira_objects and probably don't
>> qualify on their own, but neither do they have any significant negative
>> impact - only a few additional ALLOCNO_OBJECT/OBJECT_ALLOCNO
>> conversions. It would be good to know if there are any objections in
>> principle against the approach.
>>
> Thanks for addressing the problem, Bernd.
>
> Personally I have no objections in principle to the approach. I am only
> not sure that it will help benchmarks a lot (after Ian introduced the
> subreg lowering pass, although it is a bit conservative), and I am a
> bit afraid that it makes IRA even more complicated. But in any case
> we should try this patch; maybe it will work very well and will not be
> as complicated as Ken's was.
I should note that I was interested in these patches as I consistently
see deficiencies in multi-reg pseudo handling in IRA. I'm hoping
Bernd's work will allow me to drop a hack I've been using to improve
handling of multi-reg pseudos.
Jeff
^ permalink raw reply [flat|nested] 42+ messages in thread
* Patch 10/9: track subwords of DImode allocnos
2010-06-18 14:08 Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Bernd Schmidt
` (9 preceding siblings ...)
2010-06-18 20:02 ` Patch 0/9: IRA cleanups and preparations for tracking subwords of DImode Vladimir N. Makarov
@ 2010-06-21 18:01 ` Bernd Schmidt
2010-07-06 23:49 ` Ping: " Bernd Schmidt
2010-07-13 20:43 ` Jeff Law
10 siblings, 2 replies; 42+ messages in thread
From: Bernd Schmidt @ 2010-06-21 18:01 UTC (permalink / raw)
To: GCC Patches
[-- Attachment #1: Type: text/plain, Size: 2808 bytes --]
So here's the scary part. This adds ALLOCNO_NUM_OBJECTS and the
possibility that it may be larger than 1. Currently, it only tries to
do anything for two-word (i.e. DImode) allocnos; it should be possible
(and even relatively easy) to extend, but I'm not sure it's worthwhile.
Whether even this version is worthwhile is for others to decide.
I should explain what I've done with the conflict handling. Given two
DImode allocnos A and B with halves Ah, Al, Bh and Bl, we can encounter
four different conflicts: AhxBl, AhxBh, AlxBh and AlxBl. Of these, only
three are meaningful: AhxBh and AlxBl can be treated equivalently in
every place I found. This reduces the number of ways two such allocnos
can conflict to 3, and I've implemented this (as "conflict
canonicalization") by recording an AlxBl conflict instead of a AhxBh
conflict if one is found. This is meaningful for functions like
setup_allocno_left_conflicts_size: each of these three conflicts reduces
the number of registers available for allocation by 1.
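As a hypothetical sketch of that rule (record_object_conflict is a
stand-in name, not a function from this patch; ALLOCNO_OBJECT and
ALLOCNO_NUM_OBJECTS are the new accessors added here):

  /* Record a conflict between word WA of A and word WB of B, folding
     the high/high case into low/low so that only three distinct kinds
     of conflict remain per pair of two-word allocnos.  */
  static void
  record_subword_conflict (ira_allocno_t a, int wa, ira_allocno_t b, int wb)
  {
    if (wa == 1 && wb == 1
        && ALLOCNO_NUM_OBJECTS (a) == 2 && ALLOCNO_NUM_OBJECTS (b) == 2)
      wa = wb = 0;   /* AhxBh is recorded as AlxBl.  */
    record_object_conflict (ALLOCNO_OBJECT (a, wa), ALLOCNO_OBJECT (b, wb));
  }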
There are some places in IRA that use conflict tests to determine
whether two allocnos can be given the same hard register; in these cases
it is sufficient to test the low-order objects for conflicts (given the
canonicalization described above). Any other type of conflict would not
prevent the allocnos from being given the same hard register (assuming
that both will be assigned two hard regs).
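In code, assuming the canonicalization above, such a test reduces to
checking subword 0 only (objects_conflict_p is a stand-in for the
underlying conflict query):

  /* A1 and A2 may share a hard register only if their low-order
     (subword 0) objects do not conflict.  */
  bool can_share_p
    = ! objects_conflict_p (ALLOCNO_OBJECT (a1, 0), ALLOCNO_OBJECT (a2, 0));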
There is one place in the code where this canonicalization has an ugly
effect: in setup_min_max_conflict_allocno_ids, we have to extend the
min/max value for object 0 of each multi-word allocno, since we may
later record conflicts for them that are due to AhxBh and not apparent
at this point in the code.
Another possibly slightly ugly case is the handling of
ALLOCNO_EXCESS_PRESSURE_POINTS_NUM; it seemed easiest just to count
these points for each object separately, and then divide by
ALLOCNO_NUM_OBJECTS later on.
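A rough sketch of that adjustment (the exact expression used in
setup_allocno_priorities / calculate_spill_cost is not reproduced here):

  /* Pressure-excess points were accumulated once per object, so scale
     back to a per-allocno figure.  */
  int excess = (ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (a)
                / ALLOCNO_NUM_OBJECTS (a));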
The test for conflicts in assign_hard_reg is quite complicated due to
the possibility Jeff mentioned: the value of hard_regno_nregs may differ
for some element regs of a cover class. I believe this is handled
correctly, but it really is quite tricky.
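Purely as an illustration of the idea (this is not the patch's exact code;
conflicting_regs[] as an array indexed by word, and ignoring the
WORDS_BIG_ENDIAN adjustment, are assumptions on my part):

  /* A conflicting object that is subword NUM of a two-register allocno
     only forbids the single hard register backing that subword; a
     full-width conflict still forbids them all.  */
  if (ALLOCNO_NUM_OBJECTS (conflict_a) == 2 && conflict_nregs == 2)
    SET_HARD_REG_BIT (conflicting_regs[word],
                      conflict_hard_regno + OBJECT_SUBWORD (conflict_obj));
  else
    IOR_HARD_REG_SET (conflicting_regs[word],
                      ira_reg_mode_hard_regset
                      [conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);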
Even after more than a week of digging through IRA, I can't claim to
understand all of it. I've made sure that all the places I touched
looked sane afterwards, but - for example - I don't really know yet what
ira_emit is trying to do. There may be bad interactions.
Still, successfully bootstrapped and regression tested on i686-linux.
Last week I used earlier versions with an ARM compiler and seemed to
get small code size improvements on Crafty; it also fixes the remaining
issue with PR42502. I'm also thinking of extending it further to do
DCE of subreg stores, which should help PR42575.
Bernd
[-- Attachment #2: finalpiece.diff --]
[-- Type: text/plain, Size: 117876 bytes --]
* ira-build.c (ira_create_object): New arg SUBWORD; all callers changed.
Initialize OBJECT_SUBWORD.
(ira_create_allocno): Clear ALLOCNO_NUM_OBJECTS.
(ira_create_allocno_objects): Renamed from ira_create_allocno_object;
all callers changed.
(merge_hard_reg_conflicts): Iterate over allocno subobjects.
(finish_allocno): Likewise.
(move_allocno_live_ranges, copy_allocno_live_ranges): Likewise.
(remove_low_level_allocnos): Likewise.
(update_bad_spill_attribute): Likewise.
(setup_min_max_allocno_live_range_point): Likewise.
(sort_conflict_id_map): Likewise.
(ira_flattening): Likewise. Use ior_hard_reg_conflicts.
(ior_hard_reg_conflicts): New function.
(ira_allocate_object_conflicts): Renamed first argument to OBJ.
(compress_conflict_vecs): Iterate over objects, not allocnos.
(ira_add_live_range_to_object): New function.
(object_range_compare_func): Renamed from allocno_range_compare_func.
All callers changed.
(setup_min_max_conflict_allocno_ids): For allocnos with multiple
subobjects, widen the min/max range of the lowest-order object to
potentially include all other such low-order objects.
* ira.c (ira_bad_reload_regno_1): Iterate over allocno subobjects.
(check_allocation): Likewise. Use more fine-grained tests for register
conflicts.
* ira-color.c (allocnos_have_intersected_live_ranges_p): Iterate over
allocno subobjects.
(assign_hard_reg): Keep multiple sets of conflicts. Make finer-grained
choices about which bits to set in each set. Don't use
ira_hard_reg_not_in_set_p, perform a more elaborate test for conflicts
using the multiple sets we computed.
(push_allocno_to_stack): Iterate over allocno subobjects.
(all_conflicting_hard_regs_coalesced): New static function.
(setup_allocno_available_regs_num): Use it.
(setup_allocno_left_conflicts_size): Likewise. Iterate over allocno
subobjects.
(coalesced_allocno_conflict_p): Test subobject 0 in each allocno.
(setup_allocno_priorities): Divide ALLOCNO_EXCESS_PRESSURE_POINTS_NUM
by ALLOCNO_NUM_OBJECTS.
(calculate_spill_cost): Likewise.
(color_pass): Express if statement in a more normal way.
(ira_reassign_conflict_allocnos): Iterate over allocno subobjects.
(slot_coalesced_allocno_live_ranges_intersect_p): Likewise.
(setup_slot_coalesced_allocno_live_ranges): Likewise.
(allocno_reload_assign): Likewise.
(ira_reassign_pseudos): Likewise.
(fast_allocation): Likewise.
* ira-conflicts.c (build_conflict_bit_table): Likewise.
(print_allocno_conflicts): Likewise.
(ira_build_conflicts): Likewise.
(allocnos_conflict_for_copy_p): Renamed from allocnos_conflict_p. All
callers changed. Test subword 0 of each allocno for conflicts.
(build_object_conflicts): Renamed from build_allocno_conflicts. All
callers changed. Iterate over allocno subobjects.
* ira-emit.c (modify_move_list): Iterate over allocno subobjects.
* ira-int.h (struct ira_allocno): New member num_objects. Rename object
to objects and change it into an array.
(ALLOCNO_OBJECT): Add new argument N.
(ALLOCNO_NUM_OBJECTS, OBJECT_SUBWORD): New macros.
(ira_create_allocno_objects): Renamed from ira_create_allocno_object.
(ior_hard_reg_conflicts): Declare.
(ira_add_live_range_to_object): Declare.
(ira_allocno_object_iterator): New.
(ira_allocno_object_iter_init, ira_allocno_object_iter_cond): New.
(FOR_EACH_ALLOCNO_OBJECT): New macro.
* ira-lives.c (objects_live): Renamed from allocnos_live; all uses changed.
(allocnos_processed): New sparseset.
(make_object_born): Renamed from make_allocno_born; take an ira_object_t
argument. All callers changed.
(make_object_dead): Renamed from make_allocno_dead; take an ira_object_t
argument. All callers changed.
(update_allocno_pressure_excess_length): Take an ira_object_t argument.
All callers changed.
(mark_pseudo_regno_live): Iterate over allocno subobjects.
(mark_pseudo_regno_dead): Likewise.
(mark_pseudo_regno_subword_live, mark_pseudo_regno_subword_dead): New
functions.
(mark_ref_live): Detect subword accesses and call
mark_pseudo_regno_subword_live as appropriate.
(mark_ref_dead): Likewise for mark_pseudo_regno_subword_dead.
(process_bb_node_lives): Deal with object-related updates first; set
and test bits in allocnos_processed to avoid computing allocno
statistics more than once.
(create_start_finish_chains): Iterate over objects, not allocnos.
(print_object_live_ranges): New function.
(print_allocno_live_ranges): Use it.
(ira_create_allocno_live_ranges): Allocate and free allocnos_processed
and objects_live.
Index: gcc/ira-build.c
===================================================================
--- gcc.orig/ira-build.c
+++ gcc/ira-build.c
@@ -421,12 +421,13 @@ initiate_allocnos (void)
/* Create and return an object corresponding to a new allocno A. */
static ira_object_t
-ira_create_object (ira_allocno_t a)
+ira_create_object (ira_allocno_t a, int subword)
{
enum reg_class cover_class = ALLOCNO_COVER_CLASS (a);
ira_object_t obj = (ira_object_t) pool_alloc (object_pool);
OBJECT_ALLOCNO (obj) = a;
+ OBJECT_SUBWORD (obj) = subword;
OBJECT_CONFLICT_ID (obj) = ira_objects_num;
OBJECT_CONFLICT_VEC_P (obj) = false;
OBJECT_CONFLICT_ARRAY (obj) = NULL;
@@ -445,6 +446,7 @@ ira_create_object (ira_allocno_t a)
ira_object_id_map
= VEC_address (ira_object_t, ira_object_id_map_vec);
ira_objects_num = VEC_length (ira_object_t, ira_object_id_map_vec);
+
return obj;
}
@@ -509,10 +511,12 @@ ira_create_allocno (int regno, bool cap_
ALLOCNO_PREV_BUCKET_ALLOCNO (a) = NULL;
ALLOCNO_FIRST_COALESCED_ALLOCNO (a) = a;
ALLOCNO_NEXT_COALESCED_ALLOCNO (a) = a;
+ ALLOCNO_NUM_OBJECTS (a) = 0;
VEC_safe_push (ira_allocno_t, heap, allocno_vec, a);
ira_allocnos = VEC_address (ira_allocno_t, allocno_vec);
ira_allocnos_num = VEC_length (ira_allocno_t, allocno_vec);
+
return a;
}
@@ -523,14 +527,27 @@ ira_set_allocno_cover_class (ira_allocno
ALLOCNO_COVER_CLASS (a) = cover_class;
}
-/* Allocate an object for allocno A and set ALLOCNO_OBJECT. */
+/* Determine the number of objects we should associate with allocno A
+ and allocate them. */
void
-ira_create_allocno_object (ira_allocno_t a)
+ira_create_allocno_objects (ira_allocno_t a)
{
- ALLOCNO_OBJECT (a) = ira_create_object (a);
+ enum machine_mode mode = ALLOCNO_MODE (a);
+ enum reg_class cover_class = ALLOCNO_COVER_CLASS (a);
+ int n = ira_reg_class_nregs[cover_class][mode];
+ int i;
+
+ if (GET_MODE_SIZE (mode) != 2 * UNITS_PER_WORD || n != 2)
+ n = 1;
+
+ ALLOCNO_NUM_OBJECTS (a) = n;
+ for (i = 0; i < n; i++)
+ ALLOCNO_OBJECT (a, i) = ira_create_object (a, i);
}
-/* For each allocno, create the corresponding ALLOCNO_OBJECT structure. */
+/* For each allocno, set ALLOCNO_NUM_OBJECTS and create the
+ ALLOCNO_OBJECT structures. This must be called after the cover
+ classes are known. */
static void
create_allocno_objects (void)
{
@@ -538,22 +555,28 @@ create_allocno_objects (void)
ira_allocno_iterator ai;
FOR_EACH_ALLOCNO (a, ai)
- ira_create_allocno_object (a);
+ ira_create_allocno_objects (a);
}
-/* Merge hard register conflicts from allocno FROM into allocno TO. If
- TOTAL_ONLY is true, we ignore ALLOCNO_CONFLICT_HARD_REGS. */
+/* Merge hard register conflict information for all objects associated with
+ allocno TO into the corresponding objects associated with FROM.
+ If TOTAL_ONLY is true, we only merge OBJECT_TOTAL_CONFLICT_HARD_REGS. */
static void
merge_hard_reg_conflicts (ira_allocno_t from, ira_allocno_t to,
bool total_only)
{
- ira_object_t from_obj = ALLOCNO_OBJECT (from);
- ira_object_t to_obj = ALLOCNO_OBJECT (to);
- if (!total_only)
- IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (to_obj),
- OBJECT_CONFLICT_HARD_REGS (from_obj));
- IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (to_obj),
- OBJECT_TOTAL_CONFLICT_HARD_REGS (from_obj));
+ int i;
+ gcc_assert (ALLOCNO_NUM_OBJECTS (to) == ALLOCNO_NUM_OBJECTS (from));
+ for (i = 0; i < ALLOCNO_NUM_OBJECTS (to); i++)
+ {
+ ira_object_t from_obj = ALLOCNO_OBJECT (from, i);
+ ira_object_t to_obj = ALLOCNO_OBJECT (to, i);
+ if (!total_only)
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (to_obj),
+ OBJECT_CONFLICT_HARD_REGS (from_obj));
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (to_obj),
+ OBJECT_TOTAL_CONFLICT_HARD_REGS (from_obj));
+ }
#ifdef STACK_REGS
if (!total_only && ALLOCNO_NO_STACK_REG_P (from))
ALLOCNO_NO_STACK_REG_P (to) = true;
@@ -562,6 +585,20 @@ merge_hard_reg_conflicts (ira_allocno_t
#endif
}
+/* Update hard register conflict information for all objects associated with
+ A to include the regs in SET. */
+void
+ior_hard_reg_conflicts (ira_allocno_t a, HARD_REG_SET *set)
+{
+ ira_allocno_object_iterator i;
+ ira_object_t obj;
+ FOR_EACH_ALLOCNO_OBJECT (a, obj, i)
+ {
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), *set);
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), *set);
+ }
+}
+
/* Return TRUE if a conflict vector with NUM elements is more
profitable than a conflict bit vector for OBJ. */
bool
@@ -616,14 +653,14 @@ allocate_conflict_bit_vec (ira_object_t
}
/* Allocate and initialize the conflict vector or conflict bit vector
- of A for NUM conflicting allocnos whatever is more profitable. */
+ of OBJ for NUM conflicting allocnos whatever is more profitable. */
void
-ira_allocate_object_conflicts (ira_object_t a, int num)
+ira_allocate_object_conflicts (ira_object_t obj, int num)
{
- if (ira_conflict_vector_profitable_p (a, num))
- ira_allocate_conflict_vec (a, num);
+ if (ira_conflict_vector_profitable_p (obj, num))
+ ira_allocate_conflict_vec (obj, num);
else
- allocate_conflict_bit_vec (a);
+ allocate_conflict_bit_vec (obj);
}
/* Add OBJ2 to the conflicts of OBJ1. */
@@ -771,15 +808,14 @@ compress_conflict_vec (ira_object_t obj)
static void
compress_conflict_vecs (void)
{
- ira_allocno_t a;
- ira_allocno_iterator ai;
+ ira_object_t obj;
+ ira_object_iterator oi;
conflict_check = (int *) ira_allocate (sizeof (int) * ira_objects_num);
memset (conflict_check, 0, sizeof (int) * ira_objects_num);
curr_conflict_check_tick = 0;
- FOR_EACH_ALLOCNO (a, ai)
+ FOR_EACH_OBJECT (obj, oi)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
if (OBJECT_CONFLICT_VEC_P (obj))
compress_conflict_vec (obj);
}
@@ -822,7 +858,7 @@ create_cap_allocno (ira_allocno_t a)
ALLOCNO_MODE (cap) = ALLOCNO_MODE (a);
cover_class = ALLOCNO_COVER_CLASS (a);
ira_set_allocno_cover_class (cap, cover_class);
- ira_create_allocno_object (cap);
+ ira_create_allocno_objects (cap);
ALLOCNO_AVAILABLE_REGS_NUM (cap) = ALLOCNO_AVAILABLE_REGS_NUM (a);
ALLOCNO_CAP_MEMBER (cap) = a;
ALLOCNO_CAP (a) = cap;
@@ -837,7 +873,9 @@ create_cap_allocno (ira_allocno_t a)
ALLOCNO_NREFS (cap) = ALLOCNO_NREFS (a);
ALLOCNO_FREQ (cap) = ALLOCNO_FREQ (a);
ALLOCNO_CALL_FREQ (cap) = ALLOCNO_CALL_FREQ (a);
+
merge_hard_reg_conflicts (a, cap, false);
+
ALLOCNO_CALLS_CROSSED_NUM (cap) = ALLOCNO_CALLS_CROSSED_NUM (a);
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
{
@@ -848,7 +886,7 @@ create_cap_allocno (ira_allocno_t a)
return cap;
}
-/* Create and return allocno live range with given attributes. */
+/* Create and return a live range for OBJECT with given attributes. */
live_range_t
ira_create_live_range (ira_object_t obj, int start, int finish,
live_range_t next)
@@ -863,6 +901,17 @@ ira_create_live_range (ira_object_t obj,
return p;
}
+/* Create a new live range for OBJECT and queue it at the head of its
+ live range list. */
+void
+ira_add_live_range_to_object (ira_object_t object, int start, int finish)
+{
+ live_range_t p;
+ p = ira_create_live_range (object, start, finish,
+ OBJECT_LIVE_RANGES (object));
+ OBJECT_LIVE_RANGES (object) = p;
+}
+
/* Copy allocno live range R and return the result. */
static live_range_t
copy_live_range (live_range_t r)
@@ -1031,13 +1080,17 @@ static void
finish_allocno (ira_allocno_t a)
{
enum reg_class cover_class = ALLOCNO_COVER_CLASS (a);
- ira_object_t obj = ALLOCNO_OBJECT (a);
+ ira_object_t obj;
+ ira_allocno_object_iterator oi;
- ira_finish_live_range_list (OBJECT_LIVE_RANGES (obj));
- ira_object_id_map[OBJECT_CONFLICT_ID (obj)] = NULL;
- if (OBJECT_CONFLICT_ARRAY (obj) != NULL)
- ira_free (OBJECT_CONFLICT_ARRAY (obj));
- pool_free (object_pool, obj);
+ FOR_EACH_ALLOCNO_OBJECT (a, obj, oi)
+ {
+ ira_finish_live_range_list (OBJECT_LIVE_RANGES (obj));
+ ira_object_id_map[OBJECT_CONFLICT_ID (obj)] = NULL;
+ if (OBJECT_CONFLICT_ARRAY (obj) != NULL)
+ ira_free (OBJECT_CONFLICT_ARRAY (obj));
+ pool_free (object_pool, obj);
+ }
ira_allocnos[ALLOCNO_NUM (a)] = NULL;
if (ALLOCNO_HARD_REG_COSTS (a) != NULL)
@@ -1703,48 +1756,62 @@ change_object_in_range_list (live_range_
r->object = obj;
}
-/* Move all live ranges associated with allocno A to allocno OTHER_A. */
+/* Move all live ranges associated with allocno FROM to allocno TO. */
static void
move_allocno_live_ranges (ira_allocno_t from, ira_allocno_t to)
{
- ira_object_t from_obj = ALLOCNO_OBJECT (from);
- ira_object_t to_obj = ALLOCNO_OBJECT (to);
- live_range_t lr = OBJECT_LIVE_RANGES (from_obj);
+ int i;
+ int n = ALLOCNO_NUM_OBJECTS (from);
+
+ gcc_assert (n == ALLOCNO_NUM_OBJECTS (to));
- if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
+ for (i = 0; i < n; i++)
{
- fprintf (ira_dump_file,
- " Moving ranges of a%dr%d to a%dr%d: ",
- ALLOCNO_NUM (from), ALLOCNO_REGNO (from),
- ALLOCNO_NUM (to), ALLOCNO_REGNO (to));
- ira_print_live_range_list (ira_dump_file, lr);
- }
- change_object_in_range_list (lr, to_obj);
- OBJECT_LIVE_RANGES (to_obj)
- = ira_merge_live_ranges (lr, OBJECT_LIVE_RANGES (to_obj));
- OBJECT_LIVE_RANGES (from_obj) = NULL;
+ ira_object_t from_obj = ALLOCNO_OBJECT (from, i);
+ ira_object_t to_obj = ALLOCNO_OBJECT (to, i);
+ live_range_t lr = OBJECT_LIVE_RANGES (from_obj);
+
+ if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
+ {
+ fprintf (ira_dump_file,
+ " Moving ranges of a%dr%d to a%dr%d: ",
+ ALLOCNO_NUM (from), ALLOCNO_REGNO (from),
+ ALLOCNO_NUM (to), ALLOCNO_REGNO (to));
+ ira_print_live_range_list (ira_dump_file, lr);
+ }
+ change_object_in_range_list (lr, to_obj);
+ OBJECT_LIVE_RANGES (to_obj)
+ = ira_merge_live_ranges (lr, OBJECT_LIVE_RANGES (to_obj));
+ OBJECT_LIVE_RANGES (from_obj) = NULL;
+ }
}
-/* Copy all live ranges associated with allocno A to allocno OTHER_A. */
static void
copy_allocno_live_ranges (ira_allocno_t from, ira_allocno_t to)
{
- ira_object_t from_obj = ALLOCNO_OBJECT (from);
- ira_object_t to_obj = ALLOCNO_OBJECT (to);
- live_range_t lr = OBJECT_LIVE_RANGES (from_obj);
+ int i;
+ int n = ALLOCNO_NUM_OBJECTS (from);
- if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
+ gcc_assert (n == ALLOCNO_NUM_OBJECTS (to));
+
+ for (i = 0; i < n; i++)
{
- fprintf (ira_dump_file,
- " Copying ranges of a%dr%d to a%dr%d: ",
- ALLOCNO_NUM (from), ALLOCNO_REGNO (from),
- ALLOCNO_NUM (to), ALLOCNO_REGNO (to));
- ira_print_live_range_list (ira_dump_file, lr);
- }
- lr = ira_copy_live_range_list (lr);
- change_object_in_range_list (lr, to_obj);
- OBJECT_LIVE_RANGES (to_obj)
- = ira_merge_live_ranges (lr, OBJECT_LIVE_RANGES (to_obj));
+ ira_object_t from_obj = ALLOCNO_OBJECT (from, i);
+ ira_object_t to_obj = ALLOCNO_OBJECT (to, i);
+ live_range_t lr = OBJECT_LIVE_RANGES (from_obj);
+
+ if (internal_flag_ira_verbose > 4 && ira_dump_file != NULL)
+ {
+ fprintf (ira_dump_file, " Copying ranges of a%dr%d to a%dr%d: ",
+ ALLOCNO_NUM (from), ALLOCNO_REGNO (from),
+ ALLOCNO_NUM (to), ALLOCNO_REGNO (to));
+ ira_print_live_range_list (ira_dump_file, lr);
+ }
+ lr = ira_copy_live_range_list (lr);
+ change_object_in_range_list (lr, to_obj);
+ OBJECT_LIVE_RANGES (to_obj)
+ = ira_merge_live_ranges (lr, OBJECT_LIVE_RANGES (to_obj));
+ }
}
/* Return TRUE if NODE represents a loop with low register
@@ -2124,13 +2191,15 @@ remove_low_level_allocnos (void)
regno = ALLOCNO_REGNO (a);
if (ira_loop_tree_root->regno_allocno_map[regno] == a)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
+ ira_object_t obj;
+ ira_allocno_object_iterator oi;
ira_regno_allocno_map[regno] = a;
ALLOCNO_NEXT_REGNO_ALLOCNO (a) = NULL;
ALLOCNO_CAP_MEMBER (a) = NULL;
- COPY_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
- OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
+ FOR_EACH_ALLOCNO_OBJECT (a, obj, oi)
+ COPY_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
+ OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
#ifdef STACK_REGS
if (ALLOCNO_TOTAL_NO_STACK_REG_P (a))
ALLOCNO_NO_STACK_REG_P (a) = true;
@@ -2193,6 +2262,8 @@ update_bad_spill_attribute (void)
int i;
ira_allocno_t a;
ira_allocno_iterator ai;
+ ira_allocno_object_iterator aoi;
+ ira_object_t obj;
live_range_t r;
enum reg_class cover_class;
bitmap_head dead_points[N_REG_CLASSES];
@@ -2204,31 +2275,36 @@ update_bad_spill_attribute (void)
}
FOR_EACH_ALLOCNO (a, ai)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
cover_class = ALLOCNO_COVER_CLASS (a);
if (cover_class == NO_REGS)
continue;
- for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
- bitmap_set_bit (&dead_points[cover_class], r->finish);
+ FOR_EACH_ALLOCNO_OBJECT (a, obj, aoi)
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
+ bitmap_set_bit (&dead_points[cover_class], r->finish);
}
FOR_EACH_ALLOCNO (a, ai)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
cover_class = ALLOCNO_COVER_CLASS (a);
if (cover_class == NO_REGS)
continue;
if (! ALLOCNO_BAD_SPILL_P (a))
continue;
- for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
+ FOR_EACH_ALLOCNO_OBJECT (a, obj, aoi)
{
- for (i = r->start + 1; i < r->finish; i++)
- if (bitmap_bit_p (&dead_points[cover_class], i))
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
+ {
+ for (i = r->start + 1; i < r->finish; i++)
+ if (bitmap_bit_p (&dead_points[cover_class], i))
+ break;
+ if (i < r->finish)
+ break;
+ }
+ if (r != NULL)
+ {
+ ALLOCNO_BAD_SPILL_P (a) = false;
break;
- if (i < r->finish)
- break;
+ }
}
- if (r != NULL)
- ALLOCNO_BAD_SPILL_P (a) = false;
}
for (i = 0; i < ira_reg_class_cover_size; i++)
{
@@ -2246,57 +2322,69 @@ setup_min_max_allocno_live_range_point (
int i;
ira_allocno_t a, parent_a, cap;
ira_allocno_iterator ai;
+#ifdef ENABLE_IRA_CHECKING
+ ira_object_iterator oi;
+ ira_object_t obj;
+#endif
live_range_t r;
ira_loop_tree_node_t parent;
FOR_EACH_ALLOCNO (a, ai)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- r = OBJECT_LIVE_RANGES (obj);
- if (r == NULL)
- continue;
- OBJECT_MAX (obj) = r->finish;
- for (; r->next != NULL; r = r->next)
- ;
- OBJECT_MIN (obj) = r->start;
+ int n = ALLOCNO_NUM_OBJECTS (a);
+ for (i = 0; i < n; i++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ r = OBJECT_LIVE_RANGES (obj);
+ if (r == NULL)
+ continue;
+ OBJECT_MAX (obj) = r->finish;
+ for (; r->next != NULL; r = r->next)
+ ;
+ OBJECT_MIN (obj) = r->start;
+ }
}
for (i = max_reg_num () - 1; i >= FIRST_PSEUDO_REGISTER; i--)
for (a = ira_regno_allocno_map[i];
a != NULL;
a = ALLOCNO_NEXT_REGNO_ALLOCNO (a))
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- ira_object_t parent_obj;
-
- if (OBJECT_MAX (obj) < 0)
- continue;
- ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
- /* Accumulation of range info. */
- if (ALLOCNO_CAP (a) != NULL)
+ int j;
+ int n = ALLOCNO_NUM_OBJECTS (a);
+ for (j = 0; j < n; j++)
{
- for (cap = ALLOCNO_CAP (a); cap != NULL; cap = ALLOCNO_CAP (cap))
+ ira_object_t obj = ALLOCNO_OBJECT (a, j);
+ ira_object_t parent_obj;
+
+ if (OBJECT_MAX (obj) < 0)
+ continue;
+ ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
+ /* Accumulation of range info. */
+ if (ALLOCNO_CAP (a) != NULL)
{
- ira_object_t cap_obj = ALLOCNO_OBJECT (cap);
- if (OBJECT_MAX (cap_obj) < OBJECT_MAX (obj))
- OBJECT_MAX (cap_obj) = OBJECT_MAX (obj);
- if (OBJECT_MIN (cap_obj) > OBJECT_MIN (obj))
- OBJECT_MIN (cap_obj) = OBJECT_MIN (obj);
+ for (cap = ALLOCNO_CAP (a); cap != NULL; cap = ALLOCNO_CAP (cap))
+ {
+ ira_object_t cap_obj = ALLOCNO_OBJECT (cap, j);
+ if (OBJECT_MAX (cap_obj) < OBJECT_MAX (obj))
+ OBJECT_MAX (cap_obj) = OBJECT_MAX (obj);
+ if (OBJECT_MIN (cap_obj) > OBJECT_MIN (obj))
+ OBJECT_MIN (cap_obj) = OBJECT_MIN (obj);
+ }
+ continue;
}
- continue;
+ if ((parent = ALLOCNO_LOOP_TREE_NODE (a)->parent) == NULL)
+ continue;
+ parent_a = parent->regno_allocno_map[i];
+ parent_obj = ALLOCNO_OBJECT (parent_a, j);
+ if (OBJECT_MAX (parent_obj) < OBJECT_MAX (obj))
+ OBJECT_MAX (parent_obj) = OBJECT_MAX (obj);
+ if (OBJECT_MIN (parent_obj) > OBJECT_MIN (obj))
+ OBJECT_MIN (parent_obj) = OBJECT_MIN (obj);
}
- if ((parent = ALLOCNO_LOOP_TREE_NODE (a)->parent) == NULL)
- continue;
- parent_a = parent->regno_allocno_map[i];
- parent_obj = ALLOCNO_OBJECT (parent_a);
- if (OBJECT_MAX (parent_obj) < OBJECT_MAX (obj))
- OBJECT_MAX (parent_obj) = OBJECT_MAX (obj);
- if (OBJECT_MIN (parent_obj) > OBJECT_MIN (obj))
- OBJECT_MIN (parent_obj) = OBJECT_MIN (obj);
}
#ifdef ENABLE_IRA_CHECKING
- FOR_EACH_ALLOCNO (a, ai)
+ FOR_EACH_OBJECT (obj, oi)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
if ((0 <= OBJECT_MIN (obj) && OBJECT_MIN (obj) <= ira_max_point)
&& (0 <= OBJECT_MAX (obj) && OBJECT_MAX (obj) <= ira_max_point))
continue;
@@ -2311,7 +2399,7 @@ setup_min_max_allocno_live_range_point (
(min). Allocnos with the same start are ordered according their
finish (max). */
static int
-allocno_range_compare_func (const void *v1p, const void *v2p)
+object_range_compare_func (const void *v1p, const void *v2p)
{
int diff;
ira_object_t obj1 = *(const ira_object_t *) v1p;
@@ -2339,9 +2427,15 @@ sort_conflict_id_map (void)
num = 0;
FOR_EACH_ALLOCNO (a, ai)
- ira_object_id_map[num++] = ALLOCNO_OBJECT (a);
+ {
+ ira_allocno_object_iterator oi;
+ ira_object_t obj;
+
+ FOR_EACH_ALLOCNO_OBJECT (a, obj, oi)
+ ira_object_id_map[num++] = obj;
+ }
qsort (ira_object_id_map, num, sizeof (ira_object_t),
- allocno_range_compare_func);
+ object_range_compare_func);
for (i = 0; i < num; i++)
{
ira_object_t obj = ira_object_id_map[i];
@@ -2360,7 +2454,9 @@ setup_min_max_conflict_allocno_ids (void
int cover_class;
int i, j, min, max, start, finish, first_not_finished, filled_area_start;
int *live_range_min, *last_lived;
+ int word0_min, word0_max;
ira_allocno_t a;
+ ira_allocno_iterator ai;
live_range_min = (int *) ira_allocate (sizeof (int) * ira_objects_num);
cover_class = -1;
@@ -2387,10 +2483,10 @@ setup_min_max_conflict_allocno_ids (void
/* If we skip an allocno, the allocno with smaller ids will
be also skipped because of the secondary sorting the
range finishes (see function
- allocno_range_compare_func). */
+ object_range_compare_func). */
while (first_not_finished < i
&& start > OBJECT_MAX (ira_object_id_map
- [first_not_finished]))
+ [first_not_finished]))
first_not_finished++;
min = first_not_finished;
}
@@ -2441,6 +2537,38 @@ setup_min_max_conflict_allocno_ids (void
}
ira_free (last_lived);
ira_free (live_range_min);
+
+ /* For allocnos with more than one object, we may later record extra conflicts in
+ subobject 0 that we cannot really know about here.
+ For now, simply widen the min/max range of these subobjects. */
+
+ word0_min = INT_MAX;
+ word0_max = INT_MIN;
+
+ FOR_EACH_ALLOCNO (a, ai)
+ {
+ int n = ALLOCNO_NUM_OBJECTS (a);
+ ira_object_t obj0;
+ if (n < 2)
+ continue;
+ obj0 = ALLOCNO_OBJECT (a, 0);
+ if (OBJECT_CONFLICT_ID (obj0) < word0_min)
+ word0_min = OBJECT_CONFLICT_ID (obj0);
+ if (OBJECT_CONFLICT_ID (obj0) > word0_max)
+ word0_max = OBJECT_CONFLICT_ID (obj0);
+ }
+ FOR_EACH_ALLOCNO (a, ai)
+ {
+ int n = ALLOCNO_NUM_OBJECTS (a);
+ ira_object_t obj0;
+ if (n < 2)
+ continue;
+ obj0 = ALLOCNO_OBJECT (a, 0);
+ if (OBJECT_MIN (obj0) > word0_min)
+ OBJECT_MIN (obj0) = word0_min;
+ if (OBJECT_MAX (obj0) < word0_max)
+ OBJECT_MAX (obj0) = word0_max;
+ }
}
\f
@@ -2528,6 +2656,7 @@ copy_info_to_removed_store_destinations
if (a != regno_top_level_allocno_map[REGNO (ALLOCNO_REG (a))])
/* This allocno will be removed. */
continue;
+
/* Caps will be removed. */
ira_assert (ALLOCNO_CAP_MEMBER (a) == NULL);
for (parent = ALLOCNO_LOOP_TREE_NODE (a)->parent;
@@ -2540,8 +2669,10 @@ copy_info_to_removed_store_destinations
break;
if (parent == NULL || parent_a == NULL)
continue;
+
copy_allocno_live_ranges (a, parent_a);
merge_hard_reg_conflicts (a, parent_a, true);
+
ALLOCNO_CALL_FREQ (parent_a) += ALLOCNO_CALL_FREQ (a);
ALLOCNO_CALLS_CROSSED_NUM (parent_a)
+= ALLOCNO_CALLS_CROSSED_NUM (a);
@@ -2581,14 +2712,16 @@ ira_flattening (int max_regno_before_emi
new_pseudos_p = merged_p = false;
FOR_EACH_ALLOCNO (a, ai)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
+ ira_allocno_object_iterator oi;
+ ira_object_t obj;
if (ALLOCNO_CAP_MEMBER (a) != NULL)
/* Caps are not in the regno allocno maps and they are never
will be transformed into allocnos existing after IR
flattening. */
continue;
- COPY_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
- OBJECT_CONFLICT_HARD_REGS (obj));
+ FOR_EACH_ALLOCNO_OBJECT (a, obj, oi)
+ COPY_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
+ OBJECT_CONFLICT_HARD_REGS (obj));
#ifdef STACK_REGS
ALLOCNO_TOTAL_NO_STACK_REG_P (a) = ALLOCNO_NO_STACK_REG_P (a);
#endif
@@ -2673,13 +2806,17 @@ ira_flattening (int max_regno_before_emi
/* Rebuild conflicts. */
FOR_EACH_ALLOCNO (a, ai)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
+ ira_allocno_object_iterator oi;
+ ira_object_t obj;
if (a != regno_top_level_allocno_map[REGNO (ALLOCNO_REG (a))]
|| ALLOCNO_CAP_MEMBER (a) != NULL)
continue;
- for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
- ira_assert (r->object == obj);
- clear_conflicts (obj);
+ FOR_EACH_ALLOCNO_OBJECT (a, obj, oi)
+ {
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
+ ira_assert (r->object == obj);
+ clear_conflicts (obj);
+ }
}
objects_live = sparseset_alloc (ira_objects_num);
for (i = 0; i < ira_max_point; i++)
@@ -2691,6 +2828,7 @@ ira_flattening (int max_regno_before_emi
if (a != regno_top_level_allocno_map[REGNO (ALLOCNO_REG (a))]
|| ALLOCNO_CAP_MEMBER (a) != NULL)
continue;
+
cover_class = ALLOCNO_COVER_CLASS (a);
sparseset_set_bit (objects_live, OBJECT_CONFLICT_ID (obj));
EXECUTE_IF_SET_IN_SPARSESET (objects_live, n)
@@ -2698,7 +2836,6 @@ ira_flattening (int max_regno_before_emi
ira_object_t live_obj = ira_object_id_map[n];
ira_allocno_t live_a = OBJECT_ALLOCNO (live_obj);
enum reg_class live_cover = ALLOCNO_COVER_CLASS (live_a);
-
if (ira_reg_classes_intersect_p[cover_class][live_cover]
/* Don't set up conflict for the allocno with itself. */
&& live_a != a)
@@ -2930,40 +3067,39 @@ ira_build (bool loops_p)
allocno crossing calls. */
FOR_EACH_ALLOCNO (a, ai)
if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
- {
- ira_object_t obj = ALLOCNO_OBJECT (a);
- IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
- call_used_reg_set);
- IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
- call_used_reg_set);
- }
+ ior_hard_reg_conflicts (a, &call_used_reg_set);
}
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
print_copies (ira_dump_file);
if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
{
- int n, nr;
+ int n, nr, nr_big;
ira_allocno_t a;
live_range_t r;
ira_allocno_iterator ai;
n = 0;
+ nr = 0;
+ nr_big = 0;
FOR_EACH_ALLOCNO (a, ai)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- n += OBJECT_NUM_CONFLICTS (obj);
+ int j, nobj = ALLOCNO_NUM_OBJECTS (a);
+ if (nobj > 1)
+ nr_big++;
+ for (j = 0; j < nobj; j++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, j);
+ n += OBJECT_NUM_CONFLICTS (obj);
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
+ nr++;
+ }
}
- nr = 0;
- FOR_EACH_ALLOCNO (a, ai)
- for (r = OBJECT_LIVE_RANGES (ALLOCNO_OBJECT (a)); r != NULL;
- r = r->next)
- nr++;
fprintf (ira_dump_file, " regions=%d, blocks=%d, points=%d\n",
VEC_length (loop_p, ira_loops.larray), n_basic_blocks,
ira_max_point);
fprintf (ira_dump_file,
- " allocnos=%d, copies=%d, conflicts=%d, ranges=%d\n",
- ira_allocnos_num, ira_copies_num, n, nr);
+ " allocnos=%d (big %d), copies=%d, conflicts=%d, ranges=%d\n",
+ ira_allocnos_num, nr_big, ira_copies_num, n, nr);
}
return loops_p;
}
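
As an aside on the setup_min_max_conflict_allocno_ids change above: the two extra
passes first compute the span of conflict ids covered by the word-0 subobjects of
all multi-word allocnos, then widen each such subobject's [min, max] window to that
span, so that conflicts later redirected into word 0 still land inside the bit-vector
bounds.  A minimal standalone sketch of the same logic, using made-up stand-in types
(struct obj, struct alloc and widen_word0_ranges are illustrative names, not
identifiers from IRA):

#include <limits.h>
#include <stdio.h>

/* Illustrative stand-ins for ira_object_t / ira_allocno_t.  */
struct obj { int conflict_id, min, max; };
struct alloc { int num_objects; struct obj words[2]; };

/* Widen the [min, max] conflict-id window of every word-0 subobject of a
   multi-word allocno so it covers the word-0 subobjects of all such allocnos,
   mirroring the two passes added above.  */
static void
widen_word0_ranges (struct alloc *allocnos, int n)
{
  int word0_min = INT_MAX, word0_max = INT_MIN, i;

  for (i = 0; i < n; i++)
    if (allocnos[i].num_objects > 1)
      {
        struct obj *o = &allocnos[i].words[0];
        if (o->conflict_id < word0_min)
          word0_min = o->conflict_id;
        if (o->conflict_id > word0_max)
          word0_max = o->conflict_id;
      }
  for (i = 0; i < n; i++)
    if (allocnos[i].num_objects > 1)
      {
        struct obj *o = &allocnos[i].words[0];
        if (o->min > word0_min)
          o->min = word0_min;
        if (o->max < word0_max)
          o->max = word0_max;
      }
}

int
main (void)
{
  struct alloc a[2] = {
    { 2, { { 4, 4, 6 }, { 5, 5, 7 } } },
    { 2, { { 9, 8, 9 }, { 10, 9, 10 } } },
  };
  widen_word0_ranges (a, 2);
  printf ("first word-0 range is now [%d, %d]\n",
          a[0].words[0].min, a[0].words[0].max);
  return 0;
}

The widening is conservative: it only ever grows a window, so it can never lose an
existing conflict id.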
Index: gcc/ira.c
===================================================================
--- gcc.orig/ira.c
+++ gcc/ira.c
@@ -1376,9 +1376,8 @@ setup_prohibited_mode_move_regs (void)
static bool
ira_bad_reload_regno_1 (int regno, rtx x)
{
- int x_regno;
+ int x_regno, n, i;
ira_allocno_t a;
- ira_object_t obj;
enum reg_class pref;
/* We only deal with pseudo regs. */
@@ -1398,10 +1397,13 @@ ira_bad_reload_regno_1 (int regno, rtx x
/* If the pseudo conflicts with REGNO, then we consider REGNO a
poor choice for a reload regno. */
a = ira_regno_allocno_map[x_regno];
- obj = ALLOCNO_OBJECT (a);
- if (TEST_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), regno))
- return true;
-
+ n = ALLOCNO_NUM_OBJECTS (a);
+ for (i = 0; i < n; i++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ if (TEST_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), regno))
+ return true;
+ }
return false;
}
@@ -1749,32 +1751,60 @@ static void
check_allocation (void)
{
ira_allocno_t a;
- int hard_regno, nregs;
+ int hard_regno, nregs, conflict_nregs;
ira_allocno_iterator ai;
FOR_EACH_ALLOCNO (a, ai)
{
- ira_object_t obj, conflict_obj;
- ira_object_conflict_iterator oci;
+ int n = ALLOCNO_NUM_OBJECTS (a);
+ int i;
if (ALLOCNO_CAP_MEMBER (a) != NULL
|| (hard_regno = ALLOCNO_HARD_REGNO (a)) < 0)
continue;
nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (a)];
- obj = ALLOCNO_OBJECT (a);
- FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ if (n > 1)
+ {
+ gcc_assert (n == nregs);
+ nregs = 1;
+ }
+ for (i = 0; i < n; i++)
{
- ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
- int conflict_hard_regno = ALLOCNO_HARD_REGNO (conflict_a);
- if (conflict_hard_regno >= 0)
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ ira_object_t conflict_obj;
+ ira_object_conflict_iterator oci;
+ int this_regno = hard_regno;
+ if (n > 1)
{
- int conflict_nregs
- = (hard_regno_nregs
- [conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);
- if ((conflict_hard_regno <= hard_regno
- && hard_regno < conflict_hard_regno + conflict_nregs)
- || (hard_regno <= conflict_hard_regno
- && conflict_hard_regno < hard_regno + nregs))
+ if (WORDS_BIG_ENDIAN)
+ this_regno += n - i - 1;
+ else
+ this_regno += i;
+ }
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ {
+ ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
+ int conflict_hard_regno = ALLOCNO_HARD_REGNO (conflict_a);
+ if (conflict_hard_regno < 0)
+ continue;
+ if (ALLOCNO_NUM_OBJECTS (conflict_a) > 1)
+ {
+ if (WORDS_BIG_ENDIAN)
+ conflict_hard_regno += (ALLOCNO_NUM_OBJECTS (conflict_a)
+ - OBJECT_SUBWORD (conflict_obj) - 1);
+ else
+ conflict_hard_regno += OBJECT_SUBWORD (conflict_obj);
+ conflict_nregs = 1;
+ }
+ else
+ conflict_nregs
+ = (hard_regno_nregs
+ [conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);
+
+ if ((conflict_hard_regno <= this_regno
+ && this_regno < conflict_hard_regno + conflict_nregs)
+ || (this_regno <= conflict_hard_regno
+ && conflict_hard_regno < this_regno + nregs))
{
fprintf (stderr, "bad allocation for %d and %d\n",
ALLOCNO_REGNO (a), ALLOCNO_REGNO (conflict_a));
Index: gcc/ira-color.c
===================================================================
--- gcc.orig/ira-color.c
+++ gcc/ira-color.c
@@ -93,16 +93,29 @@ static VEC(ira_allocno_t,heap) *removed_
static bool
allocnos_have_intersected_live_ranges_p (ira_allocno_t a1, ira_allocno_t a2)
{
- ira_object_t obj1 = ALLOCNO_OBJECT (a1);
- ira_object_t obj2 = ALLOCNO_OBJECT (a2);
+ int i, j;
+ int n1 = ALLOCNO_NUM_OBJECTS (a1);
+ int n2 = ALLOCNO_NUM_OBJECTS (a2);
+
if (a1 == a2)
return false;
if (ALLOCNO_REG (a1) != NULL && ALLOCNO_REG (a2) != NULL
&& (ORIGINAL_REGNO (ALLOCNO_REG (a1))
== ORIGINAL_REGNO (ALLOCNO_REG (a2))))
return false;
- return ira_live_ranges_intersect_p (OBJECT_LIVE_RANGES (obj1),
- OBJECT_LIVE_RANGES (obj2));
+
+ for (i = 0; i < n1; i++)
+ {
+ ira_object_t c1 = ALLOCNO_OBJECT (a1, i);
+ for (j = 0; j < n2; j++)
+ {
+ ira_object_t c2 = ALLOCNO_OBJECT (a2, j);
+ if (ira_live_ranges_intersect_p (OBJECT_LIVE_RANGES (c1),
+ OBJECT_LIVE_RANGES (c2)))
+ return true;
+ }
+ }
+ return false;
}
#ifdef ENABLE_IRA_CHECKING
@@ -441,12 +454,11 @@ print_coalesced_allocno (ira_allocno_t a
static bool
assign_hard_reg (ira_allocno_t allocno, bool retry_p)
{
- HARD_REG_SET conflicting_regs;
- int i, j, k, hard_regno, best_hard_regno, class_size;
- int cost, mem_cost, min_cost, full_cost, min_full_cost;
+ HARD_REG_SET conflicting_regs[2];
+ int i, j, hard_regno, nregs, best_hard_regno, class_size;
+ int cost, mem_cost, min_cost, full_cost, min_full_cost, nwords;
int *a_costs;
- int *conflict_costs;
- enum reg_class cover_class, conflict_cover_class;
+ enum reg_class cover_class;
enum machine_mode mode;
ira_allocno_t a;
static int costs[FIRST_PSEUDO_REGISTER], full_costs[FIRST_PSEUDO_REGISTER];
@@ -458,11 +470,13 @@ assign_hard_reg (ira_allocno_t allocno,
bool no_stack_reg_p;
#endif
+ nwords = ALLOCNO_NUM_OBJECTS (allocno);
ira_assert (! ALLOCNO_ASSIGNED_P (allocno));
cover_class = ALLOCNO_COVER_CLASS (allocno);
class_size = ira_class_hard_regs_num[cover_class];
mode = ALLOCNO_MODE (allocno);
- CLEAR_HARD_REG_SET (conflicting_regs);
+ for (i = 0; i < nwords; i++)
+ CLEAR_HARD_REG_SET (conflicting_regs[i]);
best_hard_regno = -1;
memset (full_costs, 0, sizeof (int) * class_size);
mem_cost = 0;
@@ -477,13 +491,9 @@ assign_hard_reg (ira_allocno_t allocno,
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- ira_object_t conflict_obj;
- ira_object_conflict_iterator oci;
-
+ int word;
mem_cost += ALLOCNO_UPDATED_MEMORY_COST (a);
- IOR_HARD_REG_SET (conflicting_regs,
- OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
+
ira_allocate_and_copy_costs (&ALLOCNO_UPDATED_HARD_REG_COSTS (a),
cover_class, ALLOCNO_HARD_REG_COSTS (a));
a_costs = ALLOCNO_UPDATED_HARD_REG_COSTS (a);
@@ -502,44 +512,68 @@ assign_hard_reg (ira_allocno_t allocno,
costs[i] += cost;
full_costs[i] += cost;
}
- /* Take preferences of conflicting allocnos into account. */
- FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ for (word = 0; word < nwords; word++)
{
- ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
+ ira_object_t conflict_obj;
+ ira_object_t obj = ALLOCNO_OBJECT (allocno, word);
+ ira_object_conflict_iterator oci;
- /* Reload can give another class so we need to check all
- allocnos. */
- if (retry_p || bitmap_bit_p (consideration_allocno_bitmap,
- ALLOCNO_NUM (conflict_allocno)))
+ IOR_HARD_REG_SET (conflicting_regs[word],
+ OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
+ /* Take preferences of conflicting allocnos into account. */
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
+ ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
+ enum reg_class conflict_cover_class;
+ /* Reload can give another class so we need to check all
+ allocnos. */
+ if (!retry_p && !bitmap_bit_p (consideration_allocno_bitmap,
+ ALLOCNO_NUM (conflict_allocno)))
+ continue;
conflict_cover_class = ALLOCNO_COVER_CLASS (conflict_allocno);
ira_assert (ira_reg_classes_intersect_p
[cover_class][conflict_cover_class]);
- if (allocno_coalesced_p)
- {
- if (bitmap_bit_p (processed_coalesced_allocno_bitmap,
- ALLOCNO_NUM (conflict_allocno)))
- continue;
- bitmap_set_bit (processed_coalesced_allocno_bitmap,
- ALLOCNO_NUM (conflict_allocno));
- }
if (ALLOCNO_ASSIGNED_P (conflict_allocno))
{
- if ((hard_regno = ALLOCNO_HARD_REGNO (conflict_allocno)) >= 0
+ hard_regno = ALLOCNO_HARD_REGNO (conflict_allocno);
+ if (hard_regno >= 0
&& ira_class_hard_reg_index[cover_class][hard_regno] >= 0)
{
- IOR_HARD_REG_SET
- (conflicting_regs,
- ira_reg_mode_hard_regset
- [hard_regno][ALLOCNO_MODE (conflict_allocno)]);
+ enum machine_mode mode = ALLOCNO_MODE (conflict_allocno);
+ int conflict_nregs = hard_regno_nregs[hard_regno][mode];
+ int n_objects = ALLOCNO_NUM_OBJECTS (conflict_allocno);
+ if (conflict_nregs == n_objects && conflict_nregs > 1)
+ {
+ int num = OBJECT_SUBWORD (conflict_obj);
+ if (WORDS_BIG_ENDIAN)
+ SET_HARD_REG_BIT (conflicting_regs[word],
+ hard_regno + n_objects - num - 1);
+ else
+ SET_HARD_REG_BIT (conflicting_regs[word],
+ hard_regno + num);
+ }
+ else
+ IOR_HARD_REG_SET (conflicting_regs[word],
+ ira_reg_mode_hard_regset[hard_regno][mode]);
if (hard_reg_set_subset_p (reg_class_contents[cover_class],
- conflicting_regs))
+ conflicting_regs[word]))
goto fail;
}
}
else if (! ALLOCNO_MAY_BE_SPILLED_P (ALLOCNO_FIRST_COALESCED_ALLOCNO
(conflict_allocno)))
{
+ int k, *conflict_costs;
+
+ if (allocno_coalesced_p)
+ {
+ if (bitmap_bit_p (processed_coalesced_allocno_bitmap,
+ ALLOCNO_NUM (conflict_allocno)))
+ continue;
+ bitmap_set_bit (processed_coalesced_allocno_bitmap,
+ ALLOCNO_NUM (conflict_allocno));
+ }
+
ira_allocate_and_copy_costs
(&ALLOCNO_UPDATED_CONFLICT_HARD_REG_COSTS (conflict_allocno),
conflict_cover_class,
@@ -580,6 +614,7 @@ assign_hard_reg (ira_allocno_t allocno,
}
update_conflict_hard_regno_costs (full_costs, cover_class, false);
min_cost = min_full_cost = INT_MAX;
+
/* We don't care about giving callee saved registers to allocnos no
living through calls because call clobbered registers are
allocated first (it is usual practice to put them first in
@@ -587,14 +622,34 @@ assign_hard_reg (ira_allocno_t allocno,
for (i = 0; i < class_size; i++)
{
hard_regno = ira_class_hard_regs[cover_class][i];
+ nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (allocno)];
#ifdef STACK_REGS
if (no_stack_reg_p
&& FIRST_STACK_REG <= hard_regno && hard_regno <= LAST_STACK_REG)
continue;
#endif
- if (! ira_hard_reg_not_in_set_p (hard_regno, mode, conflicting_regs)
- || TEST_HARD_REG_BIT (prohibited_class_mode_regs[cover_class][mode],
- hard_regno))
+ if (TEST_HARD_REG_BIT (prohibited_class_mode_regs[cover_class][mode],
+ hard_regno))
+ continue;
+ for (j = 0; j < nregs; j++)
+ {
+ int k;
+ int set_to_test_start = 0, set_to_test_end = nwords;
+ if (nregs == nwords)
+ {
+ if (WORDS_BIG_ENDIAN)
+ set_to_test_start = nwords - j - 1;
+ else
+ set_to_test_start = j;
+ set_to_test_end = set_to_test_start + 1;
+ }
+ for (k = set_to_test_start; k < set_to_test_end; k++)
+ if (TEST_HARD_REG_BIT (conflicting_regs[k], hard_regno + j))
+ break;
+ if (k != set_to_test_end)
+ break;
+ }
+ if (j != nregs)
continue;
cost = costs[i];
full_cost = full_costs[i];
@@ -875,7 +930,7 @@ static splay_tree uncolorable_allocnos_s
static void
push_allocno_to_stack (ira_allocno_t allocno)
{
- int left_conflicts_size, conflict_size, size;
+ int size;
ira_allocno_t a;
enum reg_class cover_class;
@@ -885,77 +940,90 @@ push_allocno_to_stack (ira_allocno_t all
if (cover_class == NO_REGS)
return;
size = ira_reg_class_nregs[cover_class][ALLOCNO_MODE (allocno)];
+ if (ALLOCNO_NUM_OBJECTS (allocno) > 1)
+ {
+ /* We will deal with the subwords individually. */
+ gcc_assert (size == ALLOCNO_NUM_OBJECTS (allocno));
+ size = 1;
+ }
if (allocno_coalesced_p)
bitmap_clear (processed_coalesced_allocno_bitmap);
+
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- ira_object_t conflict_obj;
- ira_object_conflict_iterator oci;
-
- FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ int i, n = ALLOCNO_NUM_OBJECTS (a);
+ for (i = 0; i < n; i++)
{
- ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ int conflict_size;
+ ira_object_t conflict_obj;
+ ira_object_conflict_iterator oci;
- conflict_allocno = ALLOCNO_FIRST_COALESCED_ALLOCNO (conflict_allocno);
- if (bitmap_bit_p (coloring_allocno_bitmap,
- ALLOCNO_NUM (conflict_allocno)))
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
+ ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
+ int left_conflicts_size;
+
+ conflict_allocno = ALLOCNO_FIRST_COALESCED_ALLOCNO (conflict_allocno);
+ if (!bitmap_bit_p (coloring_allocno_bitmap,
+ ALLOCNO_NUM (conflict_allocno)))
+ continue;
+
ira_assert (cover_class
== ALLOCNO_COVER_CLASS (conflict_allocno));
if (allocno_coalesced_p)
{
+ conflict_obj = ALLOCNO_OBJECT (conflict_allocno,
+ OBJECT_SUBWORD (conflict_obj));
if (bitmap_bit_p (processed_coalesced_allocno_bitmap,
- ALLOCNO_NUM (conflict_allocno)))
+ OBJECT_CONFLICT_ID (conflict_obj)))
continue;
bitmap_set_bit (processed_coalesced_allocno_bitmap,
- ALLOCNO_NUM (conflict_allocno));
+ OBJECT_CONFLICT_ID (conflict_obj));
}
- if (ALLOCNO_IN_GRAPH_P (conflict_allocno)
- && ! ALLOCNO_ASSIGNED_P (conflict_allocno))
+
+ if (!ALLOCNO_IN_GRAPH_P (conflict_allocno)
+ || ALLOCNO_ASSIGNED_P (conflict_allocno))
+ continue;
+
+ left_conflicts_size = ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno);
+ conflict_size
+ = (ira_reg_class_nregs
+ [cover_class][ALLOCNO_MODE (conflict_allocno)]);
+ ira_assert (left_conflicts_size >= size);
+ if (left_conflicts_size + conflict_size
+ <= ALLOCNO_AVAILABLE_REGS_NUM (conflict_allocno))
+ {
+ ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno) -= size;
+ continue;
+ }
+ left_conflicts_size -= size;
+ if (uncolorable_allocnos_splay_tree[cover_class] != NULL
+ && !ALLOCNO_SPLAY_REMOVED_P (conflict_allocno)
+ && USE_SPLAY_P (cover_class))
{
- left_conflicts_size
- = ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno);
- conflict_size
- = (ira_reg_class_nregs
- [cover_class][ALLOCNO_MODE (conflict_allocno)]);
ira_assert
- (ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno) >= size);
- if (left_conflicts_size + conflict_size
- <= ALLOCNO_AVAILABLE_REGS_NUM (conflict_allocno))
- {
- ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno) -= size;
- continue;
- }
- left_conflicts_size
- = ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno) - size;
- if (uncolorable_allocnos_splay_tree[cover_class] != NULL
- && !ALLOCNO_SPLAY_REMOVED_P (conflict_allocno)
- && USE_SPLAY_P (cover_class))
- {
- ira_assert
- (splay_tree_lookup
- (uncolorable_allocnos_splay_tree[cover_class],
- (splay_tree_key) conflict_allocno) != NULL);
- splay_tree_remove
- (uncolorable_allocnos_splay_tree[cover_class],
- (splay_tree_key) conflict_allocno);
- ALLOCNO_SPLAY_REMOVED_P (conflict_allocno) = true;
- VEC_safe_push (ira_allocno_t, heap,
- removed_splay_allocno_vec,
- conflict_allocno);
- }
- ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno)
- = left_conflicts_size;
- if (left_conflicts_size + conflict_size
- <= ALLOCNO_AVAILABLE_REGS_NUM (conflict_allocno))
- {
- delete_allocno_from_bucket
- (conflict_allocno, &uncolorable_allocno_bucket);
- add_allocno_to_ordered_bucket
- (conflict_allocno, &colorable_allocno_bucket);
- }
+ (splay_tree_lookup
+ (uncolorable_allocnos_splay_tree[cover_class],
+ (splay_tree_key) conflict_allocno) != NULL);
+ splay_tree_remove
+ (uncolorable_allocnos_splay_tree[cover_class],
+ (splay_tree_key) conflict_allocno);
+ ALLOCNO_SPLAY_REMOVED_P (conflict_allocno) = true;
+ VEC_safe_push (ira_allocno_t, heap,
+ removed_splay_allocno_vec,
+ conflict_allocno);
+ }
+ ALLOCNO_LEFT_CONFLICTS_SIZE (conflict_allocno)
+ = left_conflicts_size;
+ if (left_conflicts_size + conflict_size
+ <= ALLOCNO_AVAILABLE_REGS_NUM (conflict_allocno))
+ {
+ delete_allocno_from_bucket
+ (conflict_allocno, &uncolorable_allocno_bucket);
+ add_allocno_to_ordered_bucket
+ (conflict_allocno, &colorable_allocno_bucket);
}
}
}
@@ -1369,6 +1437,28 @@ pop_allocnos_from_stack (void)
}
}
+/* Loop over all coalesced allocnos of ALLOCNO and their subobjects, collecting
+ total hard register conflicts in PSET (which the caller must initialize). */
+static void
+all_conflicting_hard_regs_coalesced (ira_allocno_t allocno, HARD_REG_SET *pset)
+{
+ ira_allocno_t a;
+
+ for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
+ a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
+ {
+ int i;
+ int n = ALLOCNO_NUM_OBJECTS (a);
+ for (i = 0; i < n; i++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ IOR_HARD_REG_SET (*pset, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
+ }
+ if (a == allocno)
+ break;
+ }
+}
+
/* Set up number of available hard registers for ALLOCNO. */
static void
setup_allocno_available_regs_num (ira_allocno_t allocno)
@@ -1376,7 +1466,6 @@ setup_allocno_available_regs_num (ira_al
int i, n, hard_regs_num, hard_regno;
enum machine_mode mode;
enum reg_class cover_class;
- ira_allocno_t a;
HARD_REG_SET temp_set;
cover_class = ALLOCNO_COVER_CLASS (allocno);
@@ -1386,14 +1475,8 @@ setup_allocno_available_regs_num (ira_al
CLEAR_HARD_REG_SET (temp_set);
ira_assert (ALLOCNO_FIRST_COALESCED_ALLOCNO (allocno) == allocno);
hard_regs_num = ira_class_hard_regs_num[cover_class];
- for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
- a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
- {
- ira_object_t obj = ALLOCNO_OBJECT (a);
- IOR_HARD_REG_SET (temp_set, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
- if (a == allocno)
- break;
- }
+ all_conflicting_hard_regs_coalesced (allocno, &temp_set);
+
mode = ALLOCNO_MODE (allocno);
for (n = 0, i = hard_regs_num - 1; i >= 0; i--)
{
@@ -1422,16 +1505,11 @@ setup_allocno_left_conflicts_size (ira_a
hard_regs_num = ira_class_hard_regs_num[cover_class];
CLEAR_HARD_REG_SET (temp_set);
ira_assert (ALLOCNO_FIRST_COALESCED_ALLOCNO (allocno) == allocno);
- for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
- a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
- {
- ira_object_t obj = ALLOCNO_OBJECT (a);
- IOR_HARD_REG_SET (temp_set, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
- if (a == allocno)
- break;
- }
+ all_conflicting_hard_regs_coalesced (allocno, &temp_set);
+
AND_HARD_REG_SET (temp_set, reg_class_contents[cover_class]);
AND_COMPL_HARD_REG_SET (temp_set, ira_no_alloc_regs);
+
conflict_allocnos_size = 0;
if (! hard_reg_set_empty_p (temp_set))
for (i = 0; i < (int) hard_regs_num; i++)
@@ -1452,19 +1530,23 @@ setup_allocno_left_conflicts_size (ira_a
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- ira_object_t conflict_obj;
- ira_object_conflict_iterator oci;
-
- FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ int n = ALLOCNO_NUM_OBJECTS (a);
+ for (i = 0; i < n; i++)
{
- ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ ira_object_t conflict_obj;
+ ira_object_conflict_iterator oci;
- conflict_allocno
- = ALLOCNO_FIRST_COALESCED_ALLOCNO (conflict_allocno);
- if (bitmap_bit_p (consideration_allocno_bitmap,
- ALLOCNO_NUM (conflict_allocno)))
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
{
+ ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
+
+ conflict_allocno
+ = ALLOCNO_FIRST_COALESCED_ALLOCNO (conflict_allocno);
+ if (!bitmap_bit_p (consideration_allocno_bitmap,
+ ALLOCNO_NUM (conflict_allocno)))
+ continue;
+
ira_assert (cover_class
== ALLOCNO_COVER_CLASS (conflict_allocno));
if (allocno_coalesced_p)
@@ -1475,6 +1557,7 @@ setup_allocno_left_conflicts_size (ira_a
bitmap_set_bit (processed_coalesced_allocno_bitmap,
ALLOCNO_NUM (conflict_allocno));
}
+
if (! ALLOCNO_ASSIGNED_P (conflict_allocno))
conflict_allocnos_size
+= (ira_reg_class_nregs
@@ -1484,7 +1567,7 @@ setup_allocno_left_conflicts_size (ira_a
{
int last = (hard_regno
+ hard_regno_nregs
- [hard_regno][ALLOCNO_MODE (conflict_allocno)]);
+ [hard_regno][ALLOCNO_MODE (conflict_allocno)]);
while (hard_regno < last)
{
@@ -1567,9 +1650,9 @@ merge_allocnos (ira_allocno_t a1, ira_al
ALLOCNO_NEXT_COALESCED_ALLOCNO (last) = next;
}
-/* Return TRUE if there are conflicting allocnos from two sets of
- coalesced allocnos given correspondingly by allocnos A1 and A2. If
- RELOAD_P is TRUE, we use live ranges to find conflicts because
+/* Given two sets of coalesced sets of allocnos, A1 and A2, this
+ function determines if any conflicts exist between the two sets.
+ If RELOAD_P is TRUE, we use live ranges to find conflicts because
conflicts are represented only for allocnos of the same cover class
and during the reload pass we coalesce allocnos for sharing stack
memory slots. */
@@ -1577,15 +1660,20 @@ static bool
coalesced_allocno_conflict_p (ira_allocno_t a1, ira_allocno_t a2,
bool reload_p)
{
- ira_allocno_t a;
+ ira_allocno_t a, conflict_allocno;
+
+ /* When testing for conflicts, it is sufficient to examine only the
+ subobjects of order 0, due to the canonicalization of conflicts
+ we do in record_object_conflict. */
+ bitmap_clear (processed_coalesced_allocno_bitmap);
if (allocno_coalesced_p)
{
- bitmap_clear (processed_coalesced_allocno_bitmap);
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a1);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
- bitmap_set_bit (processed_coalesced_allocno_bitmap, ALLOCNO_NUM (a));
+ bitmap_set_bit (processed_coalesced_allocno_bitmap,
+ OBJECT_CONFLICT_ID (ALLOCNO_OBJECT (a, 0)));
if (a == a1)
break;
}
@@ -1595,7 +1683,6 @@ coalesced_allocno_conflict_p (ira_allocn
{
if (reload_p)
{
- ira_allocno_t conflict_allocno;
for (conflict_allocno = ALLOCNO_NEXT_COALESCED_ALLOCNO (a1);;
conflict_allocno
= ALLOCNO_NEXT_COALESCED_ALLOCNO (conflict_allocno))
@@ -1609,20 +1696,17 @@ coalesced_allocno_conflict_p (ira_allocn
}
else
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
+ ira_object_t a_obj = ALLOCNO_OBJECT (a, 0);
ira_object_t conflict_obj;
ira_object_conflict_iterator oci;
-
- FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
- {
- ira_allocno_t conflict_allocno = OBJECT_ALLOCNO (conflict_obj);
- if (conflict_allocno == a1
- || (allocno_coalesced_p
- && bitmap_bit_p (processed_coalesced_allocno_bitmap,
- ALLOCNO_NUM (conflict_allocno))))
- return true;
- }
+ FOR_EACH_OBJECT_CONFLICT (a_obj, conflict_obj, oci)
+ if (conflict_obj == ALLOCNO_OBJECT (a1, 0)
+ || (allocno_coalesced_p
+ && bitmap_bit_p (processed_coalesced_allocno_bitmap,
+ OBJECT_CONFLICT_ID (conflict_obj))))
+ return true;
}
+
if (a == a2)
break;
}
@@ -1759,6 +1843,8 @@ setup_allocno_priorities (ira_allocno_t
{
a = consideration_allocnos[i];
length = ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (a);
+ if (ALLOCNO_NUM_OBJECTS (a) > 1)
+ length /= ALLOCNO_NUM_OBJECTS (a);
if (length <= 0)
length = 1;
allocno_priorities[ALLOCNO_NUM (a)]
@@ -1968,9 +2054,8 @@ color_pass (ira_loop_tree_node_t loop_tr
EXECUTE_IF_SET_IN_BITMAP (consideration_allocno_bitmap, 0, j, bi)
{
a = ira_allocnos[j];
- if (! ALLOCNO_ASSIGNED_P (a))
- continue;
- bitmap_clear_bit (coloring_allocno_bitmap, ALLOCNO_NUM (a));
+ if (ALLOCNO_ASSIGNED_P (a))
+ bitmap_clear_bit (coloring_allocno_bitmap, ALLOCNO_NUM (a));
}
/* Color all mentioned allocnos including transparent ones. */
color_allocnos ();
@@ -2321,9 +2406,7 @@ ira_reassign_conflict_allocnos (int star
allocnos_to_color_num = 0;
FOR_EACH_ALLOCNO (a, ai)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- ira_object_t conflict_obj;
- ira_object_conflict_iterator oci;
+ int n = ALLOCNO_NUM_OBJECTS (a);
if (! ALLOCNO_ASSIGNED_P (a)
&& ! bitmap_bit_p (allocnos_to_color, ALLOCNO_NUM (a)))
@@ -2342,15 +2425,21 @@ ira_reassign_conflict_allocnos (int star
if (ALLOCNO_REGNO (a) < start_regno
|| (cover_class = ALLOCNO_COVER_CLASS (a)) == NO_REGS)
continue;
- FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ for (i = 0; i < n; i++)
{
- ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
- ira_assert (ira_reg_classes_intersect_p
- [cover_class][ALLOCNO_COVER_CLASS (conflict_a)]);
- if (bitmap_bit_p (allocnos_to_color, ALLOCNO_NUM (conflict_a)))
- continue;
- bitmap_set_bit (allocnos_to_color, ALLOCNO_NUM (conflict_a));
- sorted_allocnos[allocnos_to_color_num++] = conflict_a;
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ ira_object_t conflict_obj;
+ ira_object_conflict_iterator oci;
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ {
+ ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
+ ira_assert (ira_reg_classes_intersect_p
+ [cover_class][ALLOCNO_COVER_CLASS (conflict_a)]);
+ if (bitmap_bit_p (allocnos_to_color, ALLOCNO_NUM (conflict_a)))
+ continue;
+ bitmap_set_bit (allocnos_to_color, ALLOCNO_NUM (conflict_a));
+ sorted_allocnos[allocnos_to_color_num++] = conflict_a;
+ }
}
}
ira_free_bitmap (allocnos_to_color);
@@ -2538,10 +2627,15 @@ slot_coalesced_allocno_live_ranges_inter
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- if (ira_live_ranges_intersect_p
- (slot_coalesced_allocnos_live_ranges[n], OBJECT_LIVE_RANGES (obj)))
- return true;
+ int i;
+ int nr = ALLOCNO_NUM_OBJECTS (a);
+ for (i = 0; i < nr; i++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ if (ira_live_ranges_intersect_p (slot_coalesced_allocnos_live_ranges[n],
+ OBJECT_LIVE_RANGES (obj)))
+ return true;
+ }
if (a == allocno)
break;
}
@@ -2553,7 +2647,7 @@ slot_coalesced_allocno_live_ranges_inter
static void
setup_slot_coalesced_allocno_live_ranges (ira_allocno_t allocno)
{
- int n;
+ int i, n;
ira_allocno_t a;
live_range_t r;
@@ -2561,11 +2655,15 @@ setup_slot_coalesced_allocno_live_ranges
for (a = ALLOCNO_NEXT_COALESCED_ALLOCNO (allocno);;
a = ALLOCNO_NEXT_COALESCED_ALLOCNO (a))
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- r = ira_copy_live_range_list (OBJECT_LIVE_RANGES (obj));
- slot_coalesced_allocnos_live_ranges[n]
- = ira_merge_live_ranges
- (slot_coalesced_allocnos_live_ranges[n], r);
+ int nr = ALLOCNO_NUM_OBJECTS (a);
+ for (i = 0; i < nr; i++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ r = ira_copy_live_range_list (OBJECT_LIVE_RANGES (obj));
+ slot_coalesced_allocnos_live_ranges[n]
+ = ira_merge_live_ranges
+ (slot_coalesced_allocnos_live_ranges[n], r);
+ }
if (a == allocno)
break;
}
@@ -2822,13 +2920,19 @@ allocno_reload_assign (ira_allocno_t a,
int hard_regno;
enum reg_class cover_class;
int regno = ALLOCNO_REGNO (a);
- HARD_REG_SET saved;
- ira_object_t obj = ALLOCNO_OBJECT (a);
+ HARD_REG_SET saved[2];
+ int i, n;
- COPY_HARD_REG_SET (saved, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
- IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), forbidden_regs);
- if (! flag_caller_saves && ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
- IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), call_used_reg_set);
+ n = ALLOCNO_NUM_OBJECTS (a);
+ for (i = 0; i < n; i++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ COPY_HARD_REG_SET (saved[i], OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), forbidden_regs);
+ if (! flag_caller_saves && ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
+ call_used_reg_set);
+ }
ALLOCNO_ASSIGNED_P (a) = false;
cover_class = ALLOCNO_COVER_CLASS (a);
update_curr_costs (a);
@@ -2867,7 +2971,11 @@ allocno_reload_assign (ira_allocno_t a,
}
else if (internal_flag_ira_verbose > 3 && ira_dump_file != NULL)
fprintf (ira_dump_file, "\n");
- COPY_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), saved);
+ for (i = 0; i < n; i++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ COPY_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), saved[i]);
+ }
return reg_renumber[regno] >= 0;
}
@@ -2915,25 +3023,31 @@ ira_reassign_pseudos (int *spilled_pseud
for (i = 0, n = num; i < n; i++)
{
- ira_object_t obj, conflict_obj;
- ira_object_conflict_iterator oci;
+ int nr, j;
int regno = spilled_pseudo_regs[i];
bitmap_set_bit (temp, regno);
a = ira_regno_allocno_map[regno];
- obj = ALLOCNO_OBJECT (a);
- FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ nr = ALLOCNO_NUM_OBJECTS (a);
+ for (j = 0; j < nr; j++)
{
- ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
- if (ALLOCNO_HARD_REGNO (conflict_a) < 0
- && ! ALLOCNO_DONT_REASSIGN_P (conflict_a)
- && ! bitmap_bit_p (temp, ALLOCNO_REGNO (conflict_a)))
- {
- spilled_pseudo_regs[num++] = ALLOCNO_REGNO (conflict_a);
- bitmap_set_bit (temp, ALLOCNO_REGNO (conflict_a));
- /* ?!? This seems wrong. */
- bitmap_set_bit (consideration_allocno_bitmap,
- ALLOCNO_NUM (conflict_a));
+ ira_object_t conflict_obj;
+ ira_object_t obj = ALLOCNO_OBJECT (a, j);
+ ira_object_conflict_iterator oci;
+
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ {
+ ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
+ if (ALLOCNO_HARD_REGNO (conflict_a) < 0
+ && ! ALLOCNO_DONT_REASSIGN_P (conflict_a)
+ && ! bitmap_bit_p (temp, ALLOCNO_REGNO (conflict_a)))
+ {
+ spilled_pseudo_regs[num++] = ALLOCNO_REGNO (conflict_a);
+ bitmap_set_bit (temp, ALLOCNO_REGNO (conflict_a));
+ /* ?!? This seems wrong. */
+ bitmap_set_bit (consideration_allocno_bitmap,
+ ALLOCNO_NUM (conflict_a));
+ }
}
}
}
@@ -3146,7 +3260,7 @@ calculate_spill_cost (int *regnos, rtx i
hard_regno = reg_renumber[regno];
ira_assert (hard_regno >= 0);
a = ira_regno_allocno_map[regno];
- length += ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (a);
+ length += ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (a) / ALLOCNO_NUM_OBJECTS (a);
cost += ALLOCNO_MEMORY_COST (a) - ALLOCNO_COVER_CLASS_COST (a);
nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (a)];
for (j = 0; j < nregs; j++)
@@ -3300,13 +3414,20 @@ fast_allocation (void)
allocno_priority_compare_func);
for (i = 0; i < num; i++)
{
- ira_object_t obj;
+ int nr, l;
+
a = sorted_allocnos[i];
- obj = ALLOCNO_OBJECT (a);
- COPY_HARD_REG_SET (conflict_hard_regs, OBJECT_CONFLICT_HARD_REGS (obj));
- for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
- for (j = r->start; j <= r->finish; j++)
- IOR_HARD_REG_SET (conflict_hard_regs, used_hard_regs[j]);
+ nr = ALLOCNO_NUM_OBJECTS (a);
+ CLEAR_HARD_REG_SET (conflict_hard_regs);
+ for (l = 0; l < nr; l++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, l);
+ IOR_HARD_REG_SET (conflict_hard_regs,
+ OBJECT_CONFLICT_HARD_REGS (obj));
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
+ for (j = r->start; j <= r->finish; j++)
+ IOR_HARD_REG_SET (conflict_hard_regs, used_hard_regs[j]);
+ }
cover_class = ALLOCNO_COVER_CLASS (a);
ALLOCNO_ASSIGNED_P (a) = true;
ALLOCNO_HARD_REGNO (a) = -1;
@@ -3331,10 +3452,14 @@ fast_allocation (void)
(prohibited_class_mode_regs[cover_class][mode], hard_regno)))
continue;
ALLOCNO_HARD_REGNO (a) = hard_regno;
- for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
- for (k = r->start; k <= r->finish; k++)
- IOR_HARD_REG_SET (used_hard_regs[k],
- ira_reg_mode_hard_regset[hard_regno][mode]);
+ for (l = 0; l < nr; l++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, l);
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
+ for (k = r->start; k <= r->finish; k++)
+ IOR_HARD_REG_SET (used_hard_regs[k],
+ ira_reg_mode_hard_regset[hard_regno][mode]);
+ }
break;
}
}
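
The subword arithmetic used in check_allocation and assign_hard_reg above reduces to
one mapping: subword i of a multi-word allocno assigned to HARD_REGNO occupies hard
register hard_regno + i, or hard_regno + nobjs - i - 1 when WORDS_BIG_ENDIAN.  A
small sketch of that mapping (subword_hard_regno and the plain int flag standing in
for the target macro are illustrative, not names from the patch):

#include <assert.h>
#include <stdio.h>

/* Illustrative stand-in for the target's WORDS_BIG_ENDIAN macro.  */
static const int words_big_endian = 0;

/* Return the hard register occupied by subword SUBWORD of an allocno whose
   first hard register is HARD_REGNO and which has NOBJS single-word
   subobjects, mirroring the index arithmetic in the hunks above.  */
static int
subword_hard_regno (int hard_regno, int nobjs, int subword)
{
  assert (subword >= 0 && subword < nobjs);
  if (words_big_endian)
    return hard_regno + nobjs - subword - 1;
  return hard_regno + subword;
}

int
main (void)
{
  /* A two-word pseudo assigned to hard regs 4..5: subword 1 lives in
     register 5 when words are numbered little-endian.  */
  printf ("subword 1 -> r%d\n", subword_hard_regno (4, 2, 1));
  return 0;
}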
Index: gcc/ira-conflicts.c
===================================================================
--- gcc.orig/ira-conflicts.c
+++ gcc/ira-conflicts.c
@@ -47,11 +47,11 @@ along with GCC; see the file COPYING3.
allocno's conflict (can't go in the same hardware register).
Some arrays will be used as conflict bit vector of the
- corresponding allocnos see function build_allocno_conflicts. */
+ corresponding allocnos see function build_object_conflicts. */
static IRA_INT_TYPE **conflicts;
/* Macro to test a conflict of C1 and C2 in `conflicts'. */
-#define OBJECTS_CONFLICT_P(C1, C2) \
+#define OBJECTS_CONFLICT_P(C1, C2) \
(OBJECT_MIN (C1) <= OBJECT_CONFLICT_ID (C2) \
&& OBJECT_CONFLICT_ID (C2) <= OBJECT_MAX (C1) \
&& TEST_MINMAX_SET_BIT (conflicts[OBJECT_CONFLICT_ID (C1)], \
@@ -59,6 +59,36 @@ static IRA_INT_TYPE **conflicts;
OBJECT_MIN (C1), OBJECT_MAX (C1)))
\f
+/* Record a conflict between objects OBJ1 and OBJ2. If necessary,
+ canonicalize the conflict by recording it for lower-order subobjects
+ of the corresponding allocnos. */
+static void
+record_object_conflict (ira_object_t obj1, ira_object_t obj2)
+{
+ ira_allocno_t a1 = OBJECT_ALLOCNO (obj1);
+ ira_allocno_t a2 = OBJECT_ALLOCNO (obj2);
+ int w1 = OBJECT_SUBWORD (obj1);
+ int w2 = OBJECT_SUBWORD (obj2);
+ int id1, id2;
+
+ /* Canonicalize the conflict. If two identically-numbered words
+ conflict, always record this as a conflict between words 0. That
+ is the only information we need, and it is easier to test for if
+ it is collected in each allocno's lowest-order object. */
+ if (w1 == w2 && w1 > 0)
+ {
+ obj1 = ALLOCNO_OBJECT (a1, 0);
+ obj2 = ALLOCNO_OBJECT (a2, 0);
+ }
+ id1 = OBJECT_CONFLICT_ID (obj1);
+ id2 = OBJECT_CONFLICT_ID (obj2);
+
+ SET_MINMAX_SET_BIT (conflicts[id1], id2, OBJECT_MIN (obj1),
+ OBJECT_MAX (obj1));
+ SET_MINMAX_SET_BIT (conflicts[id2], id1, OBJECT_MIN (obj2),
+ OBJECT_MAX (obj2));
+}
+
/* Build allocno conflict table by processing allocno live ranges.
Return true if the table was built. The table is not built if it
is too big. */
@@ -73,51 +103,53 @@ build_conflict_bit_table (void)
ira_allocno_t allocno;
ira_allocno_iterator ai;
sparseset objects_live;
+ ira_object_t obj;
+ ira_allocno_object_iterator aoi;
allocated_words_num = 0;
FOR_EACH_ALLOCNO (allocno, ai)
- {
- ira_object_t obj = ALLOCNO_OBJECT (allocno);
- if (OBJECT_MAX (obj) < OBJECT_MIN (obj))
+ FOR_EACH_ALLOCNO_OBJECT (allocno, obj, aoi)
+ {
+ if (OBJECT_MAX (obj) < OBJECT_MIN (obj))
continue;
- conflict_bit_vec_words_num
- = ((OBJECT_MAX (obj) - OBJECT_MIN (obj) + IRA_INT_BITS)
- / IRA_INT_BITS);
- allocated_words_num += conflict_bit_vec_words_num;
- if ((unsigned long long) allocated_words_num * sizeof (IRA_INT_TYPE)
- > (unsigned long long) IRA_MAX_CONFLICT_TABLE_SIZE * 1024 * 1024)
- {
- if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
- fprintf
- (ira_dump_file,
- "+++Conflict table will be too big(>%dMB) -- don't use it\n",
- IRA_MAX_CONFLICT_TABLE_SIZE);
- return false;
- }
- }
+ conflict_bit_vec_words_num
+ = ((OBJECT_MAX (obj) - OBJECT_MIN (obj) + IRA_INT_BITS)
+ / IRA_INT_BITS);
+ allocated_words_num += conflict_bit_vec_words_num;
+ if ((unsigned long long) allocated_words_num * sizeof (IRA_INT_TYPE)
+ > (unsigned long long) IRA_MAX_CONFLICT_TABLE_SIZE * 1024 * 1024)
+ {
+ if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
+ fprintf
+ (ira_dump_file,
+ "+++Conflict table will be too big(>%dMB) -- don't use it\n",
+ IRA_MAX_CONFLICT_TABLE_SIZE);
+ return false;
+ }
+ }
conflicts = (IRA_INT_TYPE **) ira_allocate (sizeof (IRA_INT_TYPE *)
* ira_objects_num);
allocated_words_num = 0;
FOR_EACH_ALLOCNO (allocno, ai)
- {
- ira_object_t obj = ALLOCNO_OBJECT (allocno);
- int id = OBJECT_CONFLICT_ID (obj);
- if (OBJECT_MAX (obj) < OBJECT_MIN (obj))
- {
- conflicts[id] = NULL;
- continue;
- }
- conflict_bit_vec_words_num
- = ((OBJECT_MAX (obj) - OBJECT_MIN (obj) + IRA_INT_BITS)
- / IRA_INT_BITS);
- allocated_words_num += conflict_bit_vec_words_num;
- conflicts[id]
- = (IRA_INT_TYPE *) ira_allocate (sizeof (IRA_INT_TYPE)
- * conflict_bit_vec_words_num);
- memset (conflicts[id], 0,
- sizeof (IRA_INT_TYPE) * conflict_bit_vec_words_num);
- }
+ FOR_EACH_ALLOCNO_OBJECT (allocno, obj, aoi)
+ {
+ int id = OBJECT_CONFLICT_ID (obj);
+ if (OBJECT_MAX (obj) < OBJECT_MIN (obj))
+ {
+ conflicts[id] = NULL;
+ continue;
+ }
+ conflict_bit_vec_words_num
+ = ((OBJECT_MAX (obj) - OBJECT_MIN (obj) + IRA_INT_BITS)
+ / IRA_INT_BITS);
+ allocated_words_num += conflict_bit_vec_words_num;
+ conflicts[id]
+ = (IRA_INT_TYPE *) ira_allocate (sizeof (IRA_INT_TYPE)
+ * conflict_bit_vec_words_num);
+ memset (conflicts[id], 0,
+ sizeof (IRA_INT_TYPE) * conflict_bit_vec_words_num);
+ }
object_set_words = (ira_objects_num + IRA_INT_BITS - 1) / IRA_INT_BITS;
if (internal_flag_ira_verbose > 0 && ira_dump_file != NULL)
@@ -136,33 +168,27 @@ build_conflict_bit_table (void)
ira_allocno_t allocno = OBJECT_ALLOCNO (obj);
int id = OBJECT_CONFLICT_ID (obj);
+ gcc_assert (id < ira_objects_num);
+
cover_class = ALLOCNO_COVER_CLASS (allocno);
sparseset_set_bit (objects_live, id);
EXECUTE_IF_SET_IN_SPARSESET (objects_live, j)
{
- ira_object_t live_cr = ira_object_id_map[j];
- ira_allocno_t live_a = OBJECT_ALLOCNO (live_cr);
+ ira_object_t live_obj = ira_object_id_map[j];
+ ira_allocno_t live_a = OBJECT_ALLOCNO (live_obj);
enum reg_class live_cover_class = ALLOCNO_COVER_CLASS (live_a);
if (ira_reg_classes_intersect_p[cover_class][live_cover_class]
/* Don't set up conflict for the allocno with itself. */
- && id != (int) j)
+ && live_a != allocno)
{
- SET_MINMAX_SET_BIT (conflicts[id], j,
- OBJECT_MIN (obj),
- OBJECT_MAX (obj));
- SET_MINMAX_SET_BIT (conflicts[j], id,
- OBJECT_MIN (live_cr),
- OBJECT_MAX (live_cr));
+ record_object_conflict (obj, live_obj);
}
}
}
for (r = ira_finish_point_ranges[i]; r != NULL; r = r->finish_next)
- {
- ira_object_t obj = r->object;
- sparseset_clear_bit (objects_live, OBJECT_CONFLICT_ID (obj));
- }
+ sparseset_clear_bit (objects_live, OBJECT_CONFLICT_ID (r->object));
}
sparseset_free (objects_live);
return true;
@@ -172,10 +198,13 @@ build_conflict_bit_table (void)
register due to conflicts. */
static bool
-allocnos_conflict_p (ira_allocno_t a1, ira_allocno_t a2)
+allocnos_conflict_for_copy_p (ira_allocno_t a1, ira_allocno_t a2)
{
- ira_object_t obj1 = ALLOCNO_OBJECT (a1);
- ira_object_t obj2 = ALLOCNO_OBJECT (a2);
+ /* Due to the fact that we canonicalize conflicts (see
+ record_object_conflict), we only need to test for conflicts of
+ the lowest order words. */
+ ira_object_t obj1 = ALLOCNO_OBJECT (a1, 0);
+ ira_object_t obj2 = ALLOCNO_OBJECT (a2, 0);
return OBJECTS_CONFLICT_P (obj1, obj2);
}
@@ -386,7 +415,7 @@ process_regs_for_copy (rtx reg1, rtx reg
{
ira_allocno_t a1 = ira_curr_regno_allocno_map[REGNO (reg1)];
ira_allocno_t a2 = ira_curr_regno_allocno_map[REGNO (reg2)];
- if (!allocnos_conflict_p (a1, a2) && offset1 == offset2)
+ if (!allocnos_conflict_for_copy_p (a1, a2) && offset1 == offset2)
{
cp = ira_add_allocno_copy (a1, a2, freq, constraint_p, insn,
ira_curr_loop_tree_node);
@@ -559,7 +588,7 @@ propagate_copies (void)
parent_a1 = ira_parent_or_cap_allocno (a1);
parent_a2 = ira_parent_or_cap_allocno (a2);
ira_assert (parent_a1 != NULL && parent_a2 != NULL);
- if (! allocnos_conflict_p (parent_a1, parent_a2))
+ if (! allocnos_conflict_for_copy_p (parent_a1, parent_a2))
ira_add_allocno_copy (parent_a1, parent_a2, cp->freq,
cp->constraint_p, cp->insn, cp->loop_tree_node);
}
@@ -569,23 +598,20 @@ propagate_copies (void)
static ira_object_t *collected_conflict_objects;
/* Build conflict vectors or bit conflict vectors (whatever is more
- profitable) for allocno A from the conflict table and propagate the
- conflicts to upper level allocno. */
+ profitable) for object OBJ from the conflict table. */
static void
-build_allocno_conflicts (ira_allocno_t a)
+build_object_conflicts (ira_object_t obj)
{
int i, px, parent_num;
- int conflict_bit_vec_words_num;
ira_allocno_t parent_a, another_parent_a;
- ira_object_t *vec;
- IRA_INT_TYPE *allocno_conflicts;
- ira_object_t obj, parent_obj;
+ ira_object_t parent_obj;
+ ira_allocno_t a = OBJECT_ALLOCNO (obj);
+ IRA_INT_TYPE *object_conflicts;
minmax_set_iterator asi;
- obj = ALLOCNO_OBJECT (a);
- allocno_conflicts = conflicts[OBJECT_CONFLICT_ID (obj)];
+ object_conflicts = conflicts[OBJECT_CONFLICT_ID (obj)];
px = 0;
- FOR_EACH_BIT_IN_MINMAX_SET (allocno_conflicts,
+ FOR_EACH_BIT_IN_MINMAX_SET (object_conflicts,
OBJECT_MIN (obj), OBJECT_MAX (obj), i, asi)
{
ira_object_t another_obj = ira_object_id_map[i];
@@ -596,6 +622,7 @@ build_allocno_conflicts (ira_allocno_t a
}
if (ira_conflict_vector_profitable_p (obj, px))
{
+ ira_object_t *vec;
ira_allocate_conflict_vec (obj, px);
vec = OBJECT_CONFLICT_VEC (obj);
memcpy (vec, collected_conflict_objects, sizeof (ira_object_t) * px);
@@ -604,7 +631,8 @@ build_allocno_conflicts (ira_allocno_t a
}
else
{
- OBJECT_CONFLICT_ARRAY (obj) = allocno_conflicts;
+ int conflict_bit_vec_words_num;
+ OBJECT_CONFLICT_ARRAY (obj) = object_conflicts;
if (OBJECT_MAX (obj) < OBJECT_MIN (obj))
conflict_bit_vec_words_num = 0;
else
@@ -614,28 +642,35 @@ build_allocno_conflicts (ira_allocno_t a
OBJECT_CONFLICT_ARRAY_SIZE (obj)
= conflict_bit_vec_words_num * sizeof (IRA_INT_TYPE);
}
+
parent_a = ira_parent_or_cap_allocno (a);
if (parent_a == NULL)
return;
ira_assert (ALLOCNO_COVER_CLASS (a) == ALLOCNO_COVER_CLASS (parent_a));
- parent_obj = ALLOCNO_OBJECT (parent_a);
+ ira_assert (ALLOCNO_NUM_OBJECTS (a) == ALLOCNO_NUM_OBJECTS (parent_a));
+ parent_obj = ALLOCNO_OBJECT (parent_a, OBJECT_SUBWORD (obj));
parent_num = OBJECT_CONFLICT_ID (parent_obj);
- FOR_EACH_BIT_IN_MINMAX_SET (allocno_conflicts,
+ FOR_EACH_BIT_IN_MINMAX_SET (object_conflicts,
OBJECT_MIN (obj), OBJECT_MAX (obj), i, asi)
{
ira_object_t another_obj = ira_object_id_map[i];
ira_allocno_t another_a = OBJECT_ALLOCNO (another_obj);
+ int another_word = OBJECT_SUBWORD (another_obj);
ira_assert (ira_reg_classes_intersect_p
[ALLOCNO_COVER_CLASS (a)][ALLOCNO_COVER_CLASS (another_a)]);
+
another_parent_a = ira_parent_or_cap_allocno (another_a);
if (another_parent_a == NULL)
continue;
ira_assert (ALLOCNO_NUM (another_parent_a) >= 0);
ira_assert (ALLOCNO_COVER_CLASS (another_a)
== ALLOCNO_COVER_CLASS (another_parent_a));
+ ira_assert (ALLOCNO_NUM_OBJECTS (another_a)
+ == ALLOCNO_NUM_OBJECTS (another_parent_a));
SET_MINMAX_SET_BIT (conflicts[parent_num],
- OBJECT_CONFLICT_ID (ALLOCNO_OBJECT (another_parent_a)),
+ OBJECT_CONFLICT_ID (ALLOCNO_OBJECT (another_parent_a,
+ another_word)),
OBJECT_MIN (parent_obj),
OBJECT_MAX (parent_obj));
}
@@ -657,9 +692,18 @@ build_conflicts (void)
a != NULL;
a = ALLOCNO_NEXT_REGNO_ALLOCNO (a))
{
- build_allocno_conflicts (a);
- for (cap = ALLOCNO_CAP (a); cap != NULL; cap = ALLOCNO_CAP (cap))
- build_allocno_conflicts (cap);
+ int j, nregs = ALLOCNO_NUM_OBJECTS (a);
+ for (j = 0; j < nregs; j++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, j);
+ build_object_conflicts (obj);
+ for (cap = ALLOCNO_CAP (a); cap != NULL; cap = ALLOCNO_CAP (cap))
+ {
+ ira_object_t cap_obj = ALLOCNO_OBJECT (cap, j);
+ gcc_assert (ALLOCNO_NUM_OBJECTS (cap) == ALLOCNO_NUM_OBJECTS (a));
+ build_object_conflicts (cap_obj);
+ }
+ }
}
ira_free (collected_conflict_objects);
}
@@ -699,9 +743,8 @@ static void
print_allocno_conflicts (FILE * file, bool reg_p, ira_allocno_t a)
{
HARD_REG_SET conflicting_hard_regs;
- ira_object_t obj, conflict_obj;
- ira_object_conflict_iterator oci;
basic_block bb;
+ int n, i;
if (reg_p)
fprintf (file, ";; r%d", ALLOCNO_REGNO (a));
@@ -716,39 +759,52 @@ print_allocno_conflicts (FILE * file, bo
}
fputs (" conflicts:", file);
- obj = ALLOCNO_OBJECT (a);
- if (OBJECT_CONFLICT_ARRAY (obj) != NULL)
- FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
- {
- ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
- if (reg_p)
- fprintf (file, " r%d,", ALLOCNO_REGNO (conflict_a));
- else
- {
- fprintf (file, " a%d(r%d,", ALLOCNO_NUM (conflict_a),
- ALLOCNO_REGNO (conflict_a));
- if ((bb = ALLOCNO_LOOP_TREE_NODE (conflict_a)->bb) != NULL)
- fprintf (file, "b%d)", bb->index);
- else
- fprintf (file, "l%d)",
- ALLOCNO_LOOP_TREE_NODE (conflict_a)->loop->num);
- }
- }
+ n = ALLOCNO_NUM_OBJECTS (a);
+ for (i = 0; i < n; i++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ ira_object_t conflict_obj;
+ ira_object_conflict_iterator oci;
+
+ if (OBJECT_CONFLICT_ARRAY (obj) == NULL)
+ continue;
+ if (n > 1)
+ fprintf (file, "\n;; subobject %d:", i);
+ FOR_EACH_OBJECT_CONFLICT (obj, conflict_obj, oci)
+ {
+ ira_allocno_t conflict_a = OBJECT_ALLOCNO (conflict_obj);
+ if (reg_p)
+ fprintf (file, " r%d,", ALLOCNO_REGNO (conflict_a));
+ else
+ {
+ fprintf (file, " a%d(r%d", ALLOCNO_NUM (conflict_a),
+ ALLOCNO_REGNO (conflict_a));
+ if (ALLOCNO_NUM_OBJECTS (conflict_a) > 1)
+ fprintf (file, ",w%d", OBJECT_SUBWORD (conflict_obj));
+ if ((bb = ALLOCNO_LOOP_TREE_NODE (conflict_a)->bb) != NULL)
+ fprintf (file, ",b%d", bb->index);
+ else
+ fprintf (file, ",l%d",
+ ALLOCNO_LOOP_TREE_NODE (conflict_a)->loop->num);
+ putc (')', file);
+ }
+ }
+ COPY_HARD_REG_SET (conflicting_hard_regs, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
+ AND_COMPL_HARD_REG_SET (conflicting_hard_regs, ira_no_alloc_regs);
+ AND_HARD_REG_SET (conflicting_hard_regs,
+ reg_class_contents[ALLOCNO_COVER_CLASS (a)]);
+ print_hard_reg_set (file, "\n;; total conflict hard regs:",
+ conflicting_hard_regs);
+
+ COPY_HARD_REG_SET (conflicting_hard_regs, OBJECT_CONFLICT_HARD_REGS (obj));
+ AND_COMPL_HARD_REG_SET (conflicting_hard_regs, ira_no_alloc_regs);
+ AND_HARD_REG_SET (conflicting_hard_regs,
+ reg_class_contents[ALLOCNO_COVER_CLASS (a)]);
+ print_hard_reg_set (file, ";; conflict hard regs:",
+ conflicting_hard_regs);
+ putc ('\n', file);
+ }
- COPY_HARD_REG_SET (conflicting_hard_regs, OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
- AND_COMPL_HARD_REG_SET (conflicting_hard_regs, ira_no_alloc_regs);
- AND_HARD_REG_SET (conflicting_hard_regs,
- reg_class_contents[ALLOCNO_COVER_CLASS (a)]);
- print_hard_reg_set (file, "\n;; total conflict hard regs:",
- conflicting_hard_regs);
-
- COPY_HARD_REG_SET (conflicting_hard_regs, OBJECT_CONFLICT_HARD_REGS (obj));
- AND_COMPL_HARD_REG_SET (conflicting_hard_regs, ira_no_alloc_regs);
- AND_HARD_REG_SET (conflicting_hard_regs,
- reg_class_contents[ALLOCNO_COVER_CLASS (a)]);
- print_hard_reg_set (file, ";; conflict hard regs:",
- conflicting_hard_regs);
- putc ('\n', file);
}
/* Print information about allocno or only regno (if REG_P) conflicts
@@ -798,7 +854,7 @@ ira_build_conflicts (void)
propagate_copies ();
/* Now we can free memory for the conflict table (see function
- build_allocno_conflicts for details). */
+ build_object_conflicts for details). */
FOR_EACH_OBJECT (obj, oi)
{
if (OBJECT_CONFLICT_ARRAY (obj) != conflicts[OBJECT_CONFLICT_ID (obj)])
@@ -818,29 +874,38 @@ ira_build_conflicts (void)
}
FOR_EACH_ALLOCNO (a, ai)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- reg_attrs *attrs;
- tree decl;
-
- if ((! flag_caller_saves && ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
- /* For debugging purposes don't put user defined variables in
- callee-clobbered registers. */
- || (optimize == 0
- && (attrs = REG_ATTRS (regno_reg_rtx [ALLOCNO_REGNO (a)])) != NULL
- && (decl = attrs->decl) != NULL
- && VAR_OR_FUNCTION_DECL_P (decl)
- && ! DECL_ARTIFICIAL (decl)))
+ int i, n = ALLOCNO_NUM_OBJECTS (a);
+ for (i = 0; i < n; i++)
{
- IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), call_used_reg_set);
- IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), call_used_reg_set);
- }
- else if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
- {
- IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
- no_caller_save_reg_set);
- IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), temp_hard_reg_set);
- IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), no_caller_save_reg_set);
- IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), temp_hard_reg_set);
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ reg_attrs *attrs = REG_ATTRS (regno_reg_rtx [ALLOCNO_REGNO (a)]);
+ tree decl;
+
+ if ((! flag_caller_saves && ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
+ /* For debugging purposes don't put user defined variables in
+ callee-clobbered registers. */
+ || (optimize == 0
+ && attrs != NULL
+ && (decl = attrs->decl) != NULL
+ && VAR_OR_FUNCTION_DECL_P (decl)
+ && ! DECL_ARTIFICIAL (decl)))
+ {
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
+ call_used_reg_set);
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
+ call_used_reg_set);
+ }
+ else if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
+ {
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
+ no_caller_save_reg_set);
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
+ temp_hard_reg_set);
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
+ no_caller_save_reg_set);
+ IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
+ temp_hard_reg_set);
+ }
}
}
if (optimize && ira_conflicts_p
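
To restate the canonicalization performed by record_object_conflict above: when two
identically numbered subwords other than word 0 conflict, the conflict is recorded
between the two word-0 objects instead, which is why allocnos_conflict_for_copy_p
only needs to inspect the lowest-order objects.  A minimal standalone sketch of the
rule (struct xobject, struct xallocno and canonicalize_conflict are illustrative
stand-ins, not the real IRA types):

#include <stdio.h>

/* Simplified stand-ins for ira_object_t / ira_allocno_t; the real IRA
   structures carry much more state.  */
struct xobject { struct xallocno *owner; int subword; };
struct xallocno { struct xobject word[2]; };

/* Redirect a conflict between two identically numbered subwords (other than
   word 0) to the two word-0 objects, so that conflict queries only have to
   look at each allocno's lowest-order object.  */
static void
canonicalize_conflict (struct xobject **obj1, struct xobject **obj2)
{
  if ((*obj1)->subword == (*obj2)->subword && (*obj1)->subword > 0)
    {
      *obj1 = &(*obj1)->owner->word[0];
      *obj2 = &(*obj2)->owner->word[0];
    }
}

int
main (void)
{
  struct xallocno a1, a2;
  struct xobject *o1, *o2;
  int i;

  for (i = 0; i < 2; i++)
    {
      a1.word[i].owner = &a1; a1.word[i].subword = i;
      a2.word[i].owner = &a2; a2.word[i].subword = i;
    }
  o1 = &a1.word[1];
  o2 = &a2.word[1];
  canonicalize_conflict (&o1, &o2);
  printf ("conflict recorded on subwords %d and %d\n",
          o1->subword, o2->subword);
  return 0;
}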
Index: gcc/ira-emit.c
===================================================================
--- gcc.orig/ira-emit.c
+++ gcc/ira-emit.c
@@ -715,8 +715,8 @@ modify_move_list (move_t list)
&& ALLOCNO_HARD_REGNO
(hard_regno_last_set[hard_regno + i]->to) >= 0)
{
+ int n, j;
ira_allocno_t new_allocno;
- ira_object_t new_obj;
set_move = hard_regno_last_set[hard_regno + i];
/* It does not matter what loop_tree_node (of TO or
@@ -729,19 +729,25 @@ modify_move_list (move_t list)
ALLOCNO_MODE (new_allocno) = ALLOCNO_MODE (set_move->to);
ira_set_allocno_cover_class
(new_allocno, ALLOCNO_COVER_CLASS (set_move->to));
- ira_create_allocno_object (new_allocno);
+ ira_create_allocno_objects (new_allocno);
ALLOCNO_ASSIGNED_P (new_allocno) = true;
ALLOCNO_HARD_REGNO (new_allocno) = -1;
ALLOCNO_REG (new_allocno)
= create_new_reg (ALLOCNO_REG (set_move->to));
- new_obj = ALLOCNO_OBJECT (new_allocno);
-
/* Make it possibly conflicting with all earlier
created allocnos. Cases where temporary allocnos
created to remove the cycles are quite rare. */
- OBJECT_MIN (new_obj) = 0;
- OBJECT_MAX (new_obj) = ira_objects_num - 1;
+ n = ALLOCNO_NUM_OBJECTS (new_allocno);
+ gcc_assert (n == ALLOCNO_NUM_OBJECTS (set_move->to));
+ for (j = 0; j < n; j++)
+ {
+ ira_object_t new_obj = ALLOCNO_OBJECT (new_allocno, j);
+
+ OBJECT_MIN (new_obj) = 0;
+ OBJECT_MAX (new_obj) = ira_objects_num - 1;
+ }
+
new_move = create_move (set_move->to, new_allocno);
set_move->to = new_allocno;
VEC_safe_push (move_t, heap, move_vec, new_move);
@@ -937,21 +943,26 @@ add_range_and_copies_from_move_list (mov
{
ira_allocno_t from = move->from;
ira_allocno_t to = move->to;
- ira_object_t from_obj = ALLOCNO_OBJECT (from);
- ira_object_t to_obj = ALLOCNO_OBJECT (to);
- if (OBJECT_CONFLICT_ARRAY (to_obj) == NULL)
- {
- if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
- fprintf (ira_dump_file, " Allocate conflicts for a%dr%d\n",
- ALLOCNO_NUM (to), REGNO (ALLOCNO_REG (to)));
- ira_allocate_object_conflicts (to_obj, n);
- }
+ int nr, i;
+
bitmap_clear_bit (live_through, ALLOCNO_REGNO (from));
bitmap_clear_bit (live_through, ALLOCNO_REGNO (to));
- IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (from_obj), hard_regs_live);
- IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (to_obj), hard_regs_live);
- IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (from_obj), hard_regs_live);
- IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (to_obj), hard_regs_live);
+
+ nr = ALLOCNO_NUM_OBJECTS (to);
+ for (i = 0; i < nr; i++)
+ {
+ ira_object_t to_obj = ALLOCNO_OBJECT (to, i);
+ if (OBJECT_CONFLICT_ARRAY (to_obj) == NULL)
+ {
+ if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
+ fprintf (ira_dump_file, " Allocate conflicts for a%dr%d\n",
+ ALLOCNO_NUM (to), REGNO (ALLOCNO_REG (to)));
+ ira_allocate_object_conflicts (to_obj, n);
+ }
+ }
+ ior_hard_reg_conflicts (from, &hard_regs_live);
+ ior_hard_reg_conflicts (to, &hard_regs_live);
+
update_costs (from, true, freq);
update_costs (to, false, freq);
cp = ira_add_allocno_copy (from, to, freq, false, move->insn, NULL);
@@ -960,58 +971,73 @@ add_range_and_copies_from_move_list (mov
cp->num, ALLOCNO_NUM (cp->first),
REGNO (ALLOCNO_REG (cp->first)), ALLOCNO_NUM (cp->second),
REGNO (ALLOCNO_REG (cp->second)));
- r = OBJECT_LIVE_RANGES (from_obj);
- if (r == NULL || r->finish >= 0)
+
+ nr = ALLOCNO_NUM_OBJECTS (from);
+ for (i = 0; i < nr; i++)
{
- OBJECT_LIVE_RANGES (from_obj)
- = ira_create_live_range (from_obj, start, ira_max_point, r);
- if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
- fprintf (ira_dump_file,
- " Adding range [%d..%d] to allocno a%dr%d\n",
- start, ira_max_point, ALLOCNO_NUM (from),
- REGNO (ALLOCNO_REG (from)));
+ ira_object_t from_obj = ALLOCNO_OBJECT (from, i);
+ r = OBJECT_LIVE_RANGES (from_obj);
+ if (r == NULL || r->finish >= 0)
+ {
+ ira_add_live_range_to_object (from_obj, start, ira_max_point);
+ if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
+ fprintf (ira_dump_file,
+ " Adding range [%d..%d] to allocno a%dr%d\n",
+ start, ira_max_point, ALLOCNO_NUM (from),
+ REGNO (ALLOCNO_REG (from)));
+ }
+ else
+ {
+ r->finish = ira_max_point;
+ if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
+ fprintf (ira_dump_file,
+ " Adding range [%d..%d] to allocno a%dr%d\n",
+ r->start, ira_max_point, ALLOCNO_NUM (from),
+ REGNO (ALLOCNO_REG (from)));
+ }
}
- else
+ ira_max_point++;
+ nr = ALLOCNO_NUM_OBJECTS (to);
+ for (i = 0; i < nr; i++)
{
- r->finish = ira_max_point;
- if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
- fprintf (ira_dump_file,
- " Adding range [%d..%d] to allocno a%dr%d\n",
- r->start, ira_max_point, ALLOCNO_NUM (from),
- REGNO (ALLOCNO_REG (from)));
+ ira_object_t to_obj = ALLOCNO_OBJECT (to, i);
+ ira_add_live_range_to_object (to_obj, ira_max_point, -1);
}
ira_max_point++;
- OBJECT_LIVE_RANGES (to_obj)
- = ira_create_live_range (to_obj, ira_max_point, -1,
- OBJECT_LIVE_RANGES (to_obj));
- ira_max_point++;
}
for (move = list; move != NULL; move = move->next)
{
- ira_object_t to_obj = ALLOCNO_OBJECT (move->to);
- r = OBJECT_LIVE_RANGES (to_obj);
- if (r->finish < 0)
- {
- r->finish = ira_max_point - 1;
- if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
- fprintf (ira_dump_file,
- " Adding range [%d..%d] to allocno a%dr%d\n",
- r->start, r->finish, ALLOCNO_NUM (move->to),
- REGNO (ALLOCNO_REG (move->to)));
+ int nr, i;
+ nr = ALLOCNO_NUM_OBJECTS (move->to);
+ for (i = 0; i < nr; i++)
+ {
+ ira_object_t to_obj = ALLOCNO_OBJECT (move->to, i);
+ r = OBJECT_LIVE_RANGES (to_obj);
+ if (r->finish < 0)
+ {
+ r->finish = ira_max_point - 1;
+ if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
+ fprintf (ira_dump_file,
+ " Adding range [%d..%d] to allocno a%dr%d\n",
+ r->start, r->finish, ALLOCNO_NUM (move->to),
+ REGNO (ALLOCNO_REG (move->to)));
+ }
}
}
EXECUTE_IF_SET_IN_BITMAP (live_through, FIRST_PSEUDO_REGISTER, regno, bi)
{
ira_allocno_t to;
- ira_object_t obj;
+ int nr, i;
+
a = node->regno_allocno_map[regno];
- to = ALLOCNO_MEM_OPTIMIZED_DEST (a);
- if (to != NULL)
+ if ((to = ALLOCNO_MEM_OPTIMIZED_DEST (a)) != NULL)
a = to;
- obj = ALLOCNO_OBJECT (a);
- OBJECT_LIVE_RANGES (obj)
- = ira_create_live_range (obj, start, ira_max_point - 1,
- OBJECT_LIVE_RANGES (obj));
+ nr = ALLOCNO_NUM_OBJECTS (a);
+ for (i = 0; i < nr; i++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ ira_add_live_range_to_object (obj, start, ira_max_point - 1);
+ }
if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
fprintf
(ira_dump_file,
Index: gcc/ira-int.h
===================================================================
--- gcc.orig/ira-int.h
+++ gcc/ira-int.h
@@ -192,7 +192,6 @@ extern ira_loop_tree_node_t ira_loop_nod
#define IRA_LOOP_NODE(loop) IRA_LOOP_NODE_BY_INDEX ((loop)->num)
\f
-
/* The structure describes program points where a given allocno lives.
To save memory we store allocno conflicts only for the same cover
class allocnos which is enough to assign hard registers. To find
@@ -201,7 +200,7 @@ extern ira_loop_tree_node_t ira_loop_nod
intersected, the allocnos are in conflict. */
struct live_range
{
- /* Allocno whose live range is described by given structure. */
+ /* Object whose live range is described by given structure. */
ira_object_t object;
/* Program point range. */
int start, finish;
@@ -233,7 +232,7 @@ struct ira_object
ira_allocno_t allocno;
/* Vector of accumulated conflicting conflict_redords with NULL end
marker (if OBJECT_CONFLICT_VEC_P is true) or conflict bit vector
- otherwise. Only objects belonging to allocnos with the
+ otherwise. Only ira_objects belonging to allocnos with the
same cover class are in the vector or in the bit vector. */
void *conflicts_array;
/* Pointer to structures describing at what program point the
@@ -241,25 +240,27 @@ struct ira_object
ranges in the list are not intersected and ordered by decreasing
their program points*. */
live_range_t live_ranges;
+ /* The subword within ALLOCNO which is represented by this object.
+ Zero means the lowest-order subword (or the entire allocno in case
+ it is not being tracked in subwords). */
+ int subword;
/* Allocated size of the conflicts array. */
unsigned int conflicts_array_size;
- /* A unique number for every instance of this structure which is used
+ /* A unique number for every instance of this structure, which is used
to represent it in conflict bit vectors. */
int id;
/* Before building conflicts, MIN and MAX are initialized to
correspondingly minimal and maximal points of the accumulated
- allocno live ranges. Afterwards, they hold the minimal and
- maximal ids of other objects that this one can conflict
- with. */
+ live ranges. Afterwards, they hold the minimal and maximal ids
+ of other ira_objects that this one can conflict with. */
int min, max;
/* Initial and accumulated hard registers conflicting with this
- conflict record and as a consequences can not be assigned to the
- allocno. All non-allocatable hard regs and hard regs of cover
- classes different from given allocno one are included in the
- sets. */
+ object and as a consequence cannot be assigned to the allocno.
+ All non-allocatable hard regs and hard regs of cover classes
+ different from given allocno one are included in the sets. */
HARD_REG_SET conflict_hard_regs, total_conflict_hard_regs;
/* Number of accumulated conflicts in the vector of conflicting
- conflict records. */
+ objects. */
int num_accumulated_conflicts;
/* TRUE if conflicts are represented by a vector of pointers to
ira_object structures. Otherwise, we use a bit vector indexed
@@ -346,9 +347,13 @@ struct ira_allocno
list is chained by NEXT_COALESCED_ALLOCNO. */
ira_allocno_t first_coalesced_allocno;
ira_allocno_t next_coalesced_allocno;
- /* Pointer to a structure describing conflict information about this
- allocno. */
- ira_object_t object;
+ /* The number of objects tracked in the following array. */
+ int num_objects;
+ /* An array of structures describing conflict information and live
+ ranges for each object associated with the allocno. There may be
+ more than one such object in cases where the allocno represents a
+ multi-word register. */
+ ira_object_t objects[2];
/* Accumulated frequency of calls which given allocno
intersects. */
int call_freq;
@@ -483,9 +488,11 @@ struct ira_allocno
#define ALLOCNO_TEMP(A) ((A)->temp)
#define ALLOCNO_FIRST_COALESCED_ALLOCNO(A) ((A)->first_coalesced_allocno)
#define ALLOCNO_NEXT_COALESCED_ALLOCNO(A) ((A)->next_coalesced_allocno)
-#define ALLOCNO_OBJECT(A) ((A)->object)
+#define ALLOCNO_OBJECT(A,N) ((A)->objects[N])
+#define ALLOCNO_NUM_OBJECTS(A) ((A)->num_objects)
#define OBJECT_ALLOCNO(C) ((C)->allocno)
+#define OBJECT_SUBWORD(C) ((C)->subword)
#define OBJECT_CONFLICT_ARRAY(C) ((C)->conflicts_array)
#define OBJECT_CONFLICT_VEC(C) ((ira_object_t *)(C)->conflicts_array)
#define OBJECT_CONFLICT_BITVEC(C) ((IRA_INT_TYPE *)(C)->conflicts_array)
@@ -497,7 +504,7 @@ struct ira_allocno
#define OBJECT_MIN(C) ((C)->min)
#define OBJECT_MAX(C) ((C)->max)
#define OBJECT_CONFLICT_ID(C) ((C)->id)
-#define OBJECT_LIVE_RANGES(C) ((C)->live_ranges)
+#define OBJECT_LIVE_RANGES(A) ((A)->live_ranges)
/* Map regno -> allocnos with given regno (see comments for
allocno member `next_regno_allocno'). */
@@ -596,6 +603,7 @@ extern int ira_max_nregs;
/* The type used as elements in the array, and the number of bits in
this type. */
+
#define IRA_INT_BITS HOST_BITS_PER_WIDE_INT
#define IRA_INT_TYPE HOST_WIDE_INT
@@ -693,7 +701,7 @@ minmax_set_iter_init (minmax_set_iterato
i->word = i->nel == 0 ? 0 : vec[0];
}
-/* Return TRUE if we have more elements to visit, in which case *N is
+/* Return TRUE if we have more allocnos to visit, in which case *N is
set to the number of the element to be visited. Otherwise, return
FALSE. */
static inline bool
@@ -735,7 +743,7 @@ minmax_set_iter_next (minmax_set_iterato
for (minmax_set_iter_init (&(ITER), (VEC), (MIN), (MAX)); \
minmax_set_iter_cond (&(ITER), &(N)); \
minmax_set_iter_next (&(ITER)))
-\f
+
/* ira.c: */
/* Map: hard regs X modes -> set of hard registers for storing value
@@ -865,12 +873,14 @@ extern void ira_traverse_loop_tree (bool
extern ira_allocno_t ira_parent_allocno (ira_allocno_t);
extern ira_allocno_t ira_parent_or_cap_allocno (ira_allocno_t);
extern ira_allocno_t ira_create_allocno (int, bool, ira_loop_tree_node_t);
-extern void ira_create_allocno_object (ira_allocno_t);
+extern void ira_create_allocno_objects (ira_allocno_t);
extern void ira_set_allocno_cover_class (ira_allocno_t, enum reg_class);
extern bool ira_conflict_vector_profitable_p (ira_object_t, int);
extern void ira_allocate_conflict_vec (ira_object_t, int);
extern void ira_allocate_object_conflicts (ira_object_t, int);
+extern void ior_hard_reg_conflicts (ira_allocno_t, HARD_REG_SET *);
extern void ira_print_expanded_allocno (ira_allocno_t);
+extern void ira_add_live_range_to_object (ira_object_t, int, int);
extern live_range_t ira_create_live_range (ira_object_t, int, int,
live_range_t);
extern live_range_t ira_copy_live_range_list (live_range_t);
@@ -995,7 +1005,7 @@ ira_allocno_iter_cond (ira_allocno_itera
\f
/* The iterator for all objects. */
typedef struct {
- /* The number of the current element in IRA_OBJECT_ID_MAP. */
+ /* The number of the current element in ira_object_id_map. */
int n;
} ira_object_iterator;
@@ -1023,13 +1033,44 @@ ira_object_iter_cond (ira_object_iterato
return false;
}
-/* Loop over all objects. In each iteration, A is set to the next
- conflict. ITER is an instance of ira_object_iterator used to iterate
+/* Loop over all objects. In each iteration, OBJ is set to the next
+ object. ITER is an instance of ira_object_iterator used to iterate
the objects. */
#define FOR_EACH_OBJECT(OBJ, ITER) \
for (ira_object_iter_init (&(ITER)); \
ira_object_iter_cond (&(ITER), &(OBJ));)
\f
+/* The iterator for objects associated with an allocno. */
+typedef struct {
+ /* The number of the element in the allocno's object array. */
+ int n;
+} ira_allocno_object_iterator;
+
+/* Initialize the iterator I. */
+static inline void
+ira_allocno_object_iter_init (ira_allocno_object_iterator *i)
+{
+ i->n = 0;
+}
+
+/* Return TRUE if we have more objects to visit in allocno A, in which
+ case *O is set to the object to be visited. Otherwise, return
+ FALSE. */
+static inline bool
+ira_allocno_object_iter_cond (ira_allocno_object_iterator *i, ira_allocno_t a,
+ ira_object_t *o)
+{
+ *o = ALLOCNO_OBJECT (a, i->n);
+ return i->n++ < ALLOCNO_NUM_OBJECTS (a);
+}
+
+/* Loop over all objects associated with allocno A. In each
+ iteration, O is set to the next object. ITER is an instance of
+ ira_allocno_object_iterator used to iterate the objects. */
+#define FOR_EACH_ALLOCNO_OBJECT(A, O, ITER) \
+ for (ira_allocno_object_iter_init (&(ITER)); \
+ ira_allocno_object_iter_cond (&(ITER), (A), &(O));)
+\f
/* The iterator for copies. */
typedef struct {
@@ -1068,9 +1109,10 @@ ira_copy_iter_cond (ira_copy_iterator *i
for (ira_copy_iter_init (&(ITER)); \
ira_copy_iter_cond (&(ITER), &(C));)
\f
-/* The iterator for allocno conflicts. */
+/* The iterator for object conflicts. */
typedef struct {
- /* TRUE if the conflicts are represented by vector of objects. */
+
+ /* TRUE if the conflicts are represented by vector of allocnos. */
bool conflict_vec_p;
/* The conflict vector or conflict bit vector. */
Index: gcc/ira-lives.c
===================================================================
--- gcc.orig/ira-lives.c
+++ gcc/ira-lives.c
@@ -67,8 +67,12 @@ static int curr_point;
classes. */
static int high_pressure_start_point[N_REG_CLASSES];
-/* Allocnos live at current point in the scan. */
-static sparseset allocnos_live;
+/* Objects live at current point in the scan. */
+static sparseset objects_live;
+
+/* A temporary bitmap used in functions that wish to avoid visiting an allocno
+ multiple times. */
+static sparseset allocnos_processed;
/* Set of hard regs (except eliminable ones) currently live. */
static HARD_REG_SET hard_regs_live;
@@ -81,18 +85,17 @@ static int last_call_num;
/* The number of last call at which given allocno was saved. */
static int *allocno_saved_at_call;
-/* Record the birth of hard register REGNO, updating hard_regs_live
- and hard reg conflict information for living allocno. */
+/* Record the birth of hard register REGNO, updating hard_regs_live and
+ hard reg conflict information for living allocnos. */
static void
make_hard_regno_born (int regno)
{
unsigned int i;
SET_HARD_REG_BIT (hard_regs_live, regno);
- EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, i)
+ EXECUTE_IF_SET_IN_SPARSESET (objects_live, i)
{
- ira_allocno_t allocno = ira_allocnos[i];
- ira_object_t obj = ALLOCNO_OBJECT (allocno);
+ ira_object_t obj = ira_object_id_map[i];
SET_HARD_REG_BIT (OBJECT_CONFLICT_HARD_REGS (obj), regno);
SET_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), regno);
}
@@ -106,29 +109,29 @@ make_hard_regno_dead (int regno)
CLEAR_HARD_REG_BIT (hard_regs_live, regno);
}
-/* Record the birth of allocno A, starting a new live range for
- it if necessary, and updating hard reg conflict information. We also
- record it in allocnos_live. */
+/* Record the birth of object OBJ. Set a bit for it in objects_live,
+ start a new live range for it if necessary and update hard register
+ conflicts. */
static void
-make_allocno_born (ira_allocno_t a)
+make_object_born (ira_object_t obj)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- live_range_t p = OBJECT_LIVE_RANGES (obj);
+ live_range_t lr = OBJECT_LIVE_RANGES (obj);
- sparseset_set_bit (allocnos_live, ALLOCNO_NUM (a));
+ sparseset_set_bit (objects_live, OBJECT_CONFLICT_ID (obj));
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj), hard_regs_live);
IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj), hard_regs_live);
- if (p == NULL
- || (p->finish != curr_point && p->finish + 1 != curr_point))
- OBJECT_LIVE_RANGES (obj)
- = ira_create_live_range (obj, curr_point, -1, p);
+ if (lr == NULL
+ || (lr->finish != curr_point && lr->finish + 1 != curr_point))
+ ira_add_live_range_to_object (obj, curr_point, -1);
}
-/* Update ALLOCNO_EXCESS_PRESSURE_POINTS_NUM for allocno A. */
+/* Update ALLOCNO_EXCESS_PRESSURE_POINTS_NUM for the allocno
+ associated with object OBJ. */
static void
-update_allocno_pressure_excess_length (ira_allocno_t a)
+update_allocno_pressure_excess_length (ira_object_t obj)
{
+ ira_allocno_t a = OBJECT_ALLOCNO (obj);
int start, i;
enum reg_class cover_class, cl;
live_range_t p;
@@ -138,7 +141,6 @@ update_allocno_pressure_excess_length (i
(cl = ira_reg_class_super_classes[cover_class][i]) != LIM_REG_CLASSES;
i++)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
if (high_pressure_start_point[cl] < 0)
continue;
p = OBJECT_LIVE_RANGES (obj);
@@ -149,18 +151,18 @@ update_allocno_pressure_excess_length (i
}
}
-/* Process the death of allocno A. This finishes the current live
- range for it. */
+/* Process the death of object OBJ, which is associated with allocno
+ A. This finishes the current live range for it. */
static void
-make_allocno_dead (ira_allocno_t a)
+make_object_dead (ira_object_t obj)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- live_range_t p = OBJECT_LIVE_RANGES (obj);
+ live_range_t lr;
- ira_assert (p != NULL);
- p->finish = curr_point;
- update_allocno_pressure_excess_length (a);
- sparseset_clear_bit (allocnos_live, ALLOCNO_NUM (a));
+ sparseset_clear_bit (objects_live, OBJECT_CONFLICT_ID (obj));
+ lr = OBJECT_LIVE_RANGES (obj);
+ ira_assert (lr != NULL);
+ lr->finish = curr_point;
+ update_allocno_pressure_excess_length (obj);
}
/* The current register pressures for each cover class for the current
@@ -215,8 +217,8 @@ dec_register_pressure (enum reg_class co
}
if (set_p)
{
- EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, j)
- update_allocno_pressure_excess_length (ira_allocnos[j]);
+ EXECUTE_IF_SET_IN_SPARSESET (objects_live, j)
+ update_allocno_pressure_excess_length (ira_object_id_map[j]);
for (i = 0;
(cl = ira_reg_class_super_classes[cover_class][i])
!= LIM_REG_CLASSES;
@@ -233,8 +235,8 @@ static void
mark_pseudo_regno_live (int regno)
{
ira_allocno_t a = ira_curr_regno_allocno_map[regno];
+ int i, n, nregs;
enum reg_class cl;
- int nregs;
if (a == NULL)
return;
@@ -242,18 +244,66 @@ mark_pseudo_regno_live (int regno)
/* Invalidate because it is referenced. */
allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
- if (sparseset_bit_p (allocnos_live, ALLOCNO_NUM (a)))
+ n = ALLOCNO_NUM_OBJECTS (a);
+ cl = ALLOCNO_COVER_CLASS (a);
+ nregs = ira_reg_class_nregs[cl][ALLOCNO_MODE (a)];
+ if (n > 1)
+ {
+ /* We track every subobject separately. */
+ gcc_assert (nregs == n);
+ nregs = 1;
+ }
+
+ for (i = 0; i < n; i++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ if (sparseset_bit_p (objects_live, OBJECT_CONFLICT_ID (obj)))
+ continue;
+
+ inc_register_pressure (cl, nregs);
+ make_object_born (obj);
+ }
+}
+
+/* Like mark_pseudo_regno_live, but try to only mark one subword of
+ the pseudo as live. SUBWORD indicates which; a value of 0
+ indicates the low part. */
+static void
+mark_pseudo_regno_subword_live (int regno, int subword)
+{
+ ira_allocno_t a = ira_curr_regno_allocno_map[regno];
+ int n, nregs;
+ enum reg_class cl;
+ ira_object_t obj;
+
+ if (a == NULL)
return;
+ /* Invalidate because it is referenced. */
+ allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
+
+ n = ALLOCNO_NUM_OBJECTS (a);
+ if (n == 1)
+ {
+ mark_pseudo_regno_live (regno);
+ return;
+ }
+
cl = ALLOCNO_COVER_CLASS (a);
nregs = ira_reg_class_nregs[cl][ALLOCNO_MODE (a)];
+ gcc_assert (nregs == n);
+ obj = ALLOCNO_OBJECT (a, subword);
+
+ if (sparseset_bit_p (objects_live, OBJECT_CONFLICT_ID (obj)))
+ return;
+
inc_register_pressure (cl, nregs);
- make_allocno_born (a);
+ make_object_born (obj);
}
-/* Mark the hard register REG as live. Store a 1 in hard_regs_live
- for this register, record how many consecutive hardware registers
- it actually needs. */
+/* Mark the register REG as live. Store a 1 in hard_regs_live for
+ this register, record how many consecutive hardware registers it
+ actually needs. */
static void
mark_hard_reg_live (rtx reg)
{
@@ -281,13 +331,22 @@ mark_hard_reg_live (rtx reg)
static void
mark_ref_live (df_ref ref)
{
- rtx reg;
+ rtx reg = DF_REF_REG (ref);
+ rtx orig_reg = reg;
- reg = DF_REF_REG (ref);
if (GET_CODE (reg) == SUBREG)
reg = SUBREG_REG (reg);
+
if (REGNO (reg) >= FIRST_PSEUDO_REGISTER)
- mark_pseudo_regno_live (REGNO (reg));
+ {
+ if (df_read_modify_subreg_p (orig_reg))
+ {
+ mark_pseudo_regno_subword_live (REGNO (reg),
+ subreg_lowpart_p (orig_reg) ? 0 : 1);
+ }
+ else
+ mark_pseudo_regno_live (REGNO (reg));
+ }
else
mark_hard_reg_live (reg);
}
@@ -298,8 +357,8 @@ static void
mark_pseudo_regno_dead (int regno)
{
ira_allocno_t a = ira_curr_regno_allocno_map[regno];
+ int n, i, nregs;
enum reg_class cl;
- int nregs;
if (a == NULL)
return;
@@ -307,18 +366,61 @@ mark_pseudo_regno_dead (int regno)
/* Invalidate because it is referenced. */
allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
- if (! sparseset_bit_p (allocnos_live, ALLOCNO_NUM (a)))
+ n = ALLOCNO_NUM_OBJECTS (a);
+ cl = ALLOCNO_COVER_CLASS (a);
+ nregs = ira_reg_class_nregs[cl][ALLOCNO_MODE (a)];
+ if (n > 1)
+ {
+ /* We track every subobject separately. */
+ gcc_assert (nregs == n);
+ nregs = 1;
+ }
+ for (i = 0; i < n; i++)
+ {
+ ira_object_t obj = ALLOCNO_OBJECT (a, i);
+ if (!sparseset_bit_p (objects_live, OBJECT_CONFLICT_ID (obj)))
+ continue;
+
+ dec_register_pressure (cl, nregs);
+ make_object_dead (obj);
+ }
+}
+
+/* Like mark_pseudo_regno_dead, but called when we know that only part of the
+ register dies. SUBWORD indicates which; a value of 0 indicates the low part. */
+static void
+mark_pseudo_regno_subword_dead (int regno, int subword)
+{
+ ira_allocno_t a = ira_curr_regno_allocno_map[regno];
+ int n, nregs;
+ enum reg_class cl;
+ ira_object_t obj;
+
+ if (a == NULL)
+ return;
+
+ /* Invalidate because it is referenced. */
+ allocno_saved_at_call[ALLOCNO_NUM (a)] = 0;
+
+ n = ALLOCNO_NUM_OBJECTS (a);
+ if (n == 1)
+ /* The allocno as a whole doesn't die in this case. */
return;
cl = ALLOCNO_COVER_CLASS (a);
nregs = ira_reg_class_nregs[cl][ALLOCNO_MODE (a)];
- dec_register_pressure (cl, nregs);
+ gcc_assert (nregs == n);
+
+ obj = ALLOCNO_OBJECT (a, subword);
+ if (!sparseset_bit_p (objects_live, OBJECT_CONFLICT_ID (obj)))
+ return;
- make_allocno_dead (a);
+ dec_register_pressure (cl, 1);
+ make_object_dead (obj);
}
-/* Mark the hard register REG as dead. Store a 0 in hard_regs_live
- for the register. */
+/* Mark the hard register REG as dead. Store a 0 in hard_regs_live for the
+ register. */
static void
mark_hard_reg_dead (rtx reg)
{
@@ -346,17 +448,31 @@ mark_hard_reg_dead (rtx reg)
static void
mark_ref_dead (df_ref def)
{
- rtx reg;
+ rtx reg = DF_REF_REG (def);
+ rtx orig_reg = reg;
- if (DF_REF_FLAGS_IS_SET (def, DF_REF_PARTIAL)
- || DF_REF_FLAGS_IS_SET (def, DF_REF_CONDITIONAL))
+ if (DF_REF_FLAGS_IS_SET (def, DF_REF_CONDITIONAL))
return;
- reg = DF_REF_REG (def);
if (GET_CODE (reg) == SUBREG)
reg = SUBREG_REG (reg);
+
+ if (DF_REF_FLAGS_IS_SET (def, DF_REF_PARTIAL)
+ && (GET_CODE (orig_reg) != SUBREG
+ || REGNO (reg) < FIRST_PSEUDO_REGISTER
+ || !df_read_modify_subreg_p (orig_reg)))
+ return;
+
if (REGNO (reg) >= FIRST_PSEUDO_REGISTER)
- mark_pseudo_regno_dead (REGNO (reg));
+ {
+ if (df_read_modify_subreg_p (orig_reg))
+ {
+ mark_pseudo_regno_subword_dead (REGNO (reg),
+ subreg_lowpart_p (orig_reg) ? 0 : 1);
+ }
+ else
+ mark_pseudo_regno_dead (REGNO (reg));
+ }
else
mark_hard_reg_dead (reg);
}
@@ -467,7 +583,7 @@ check_and_make_def_conflict (int alt, in
/* If there's any alternative that allows USE to match DEF, do not
record a conflict. If that causes us to create an invalid
- instruction due to the earlyclobber, reload must fix it up. */
+ instruction due to the earlyclobber, reload must fix it up. */
for (alt1 = 0; alt1 < recog_data.n_alternatives; alt1++)
if (recog_op_alt[use][alt1].matches == def
|| (use < recog_data.n_operands - 1
@@ -835,13 +951,12 @@ process_single_reg_class_operands (bool
}
}
- EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, px)
+ EXECUTE_IF_SET_IN_SPARSESET (objects_live, px)
{
- a = ira_allocnos[px];
+ ira_object_t obj = ira_object_id_map[px];
+ a = OBJECT_ALLOCNO (obj);
if (a != operand_a)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
-
/* We could increase costs of A instead of making it
conflicting with the hard register. But it works worse
because it will be spilled in reload in anyway. */
@@ -896,7 +1011,7 @@ process_bb_node_lives (ira_loop_tree_nod
}
curr_bb_node = loop_tree_node;
reg_live_out = DF_LR_OUT (bb);
- sparseset_clear (allocnos_live);
+ sparseset_clear (objects_live);
REG_SET_TO_HARD_REG_SET (hard_regs_live, reg_live_out);
AND_COMPL_HARD_REG_SET (hard_regs_live, eliminable_regset);
AND_COMPL_HARD_REG_SET (hard_regs_live, ira_no_alloc_regs);
@@ -1010,21 +1125,14 @@ process_bb_node_lives (ira_loop_tree_nod
if (call_p)
{
last_call_num++;
+ sparseset_clear (allocnos_processed);
/* The current set of live allocnos are live across the call. */
- EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, i)
+ EXECUTE_IF_SET_IN_SPARSESET (objects_live, i)
{
- ira_allocno_t a = ira_allocnos[i];
+ ira_object_t obj = ira_object_id_map[i];
+ ira_allocno_t a = OBJECT_ALLOCNO (obj);
+ int num = ALLOCNO_NUM (a);
- if (allocno_saved_at_call[i] != last_call_num)
- /* Here we are mimicking caller-save.c behaviour
- which does not save hard register at a call if
- it was saved on previous call in the same basic
- block and the hard register was not mentioned
- between the two calls. */
- ALLOCNO_CALL_FREQ (a) += freq;
- /* Mark it as saved at the next call. */
- allocno_saved_at_call[i] = last_call_num + 1;
- ALLOCNO_CALLS_CROSSED_NUM (a)++;
/* Don't allocate allocnos that cross setjmps or any
call, if this function receives a nonlocal
goto. */
@@ -1032,18 +1140,31 @@ process_bb_node_lives (ira_loop_tree_nod
|| find_reg_note (insn, REG_SETJMP,
NULL_RTX) != NULL_RTX)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
SET_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj));
SET_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj));
}
if (can_throw_internal (insn))
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
- call_used_reg_set);
IOR_HARD_REG_SET (OBJECT_CONFLICT_HARD_REGS (obj),
call_used_reg_set);
+ IOR_HARD_REG_SET (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
+ call_used_reg_set);
}
+
+ if (sparseset_bit_p (allocnos_processed, num))
+ continue;
+ sparseset_set_bit (allocnos_processed, num);
+
+ if (allocno_saved_at_call[num] != last_call_num)
+ /* Here we are mimicking caller-save.c behaviour
+ which does not save hard register at a call if
+ it was saved on previous call in the same basic
+ block and the hard register was not mentioned
+ between the two calls. */
+ ALLOCNO_CALL_FREQ (a) += freq;
+ /* Mark it as saved at the next call. */
+ allocno_saved_at_call[num] = last_call_num + 1;
+ ALLOCNO_CALLS_CROSSED_NUM (a)++;
}
}
@@ -1101,10 +1222,11 @@ process_bb_node_lives (ira_loop_tree_nod
if (bb_has_abnormal_pred (bb))
{
#ifdef STACK_REGS
- EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, px)
+ EXECUTE_IF_SET_IN_SPARSESET (objects_live, px)
{
- ALLOCNO_NO_STACK_REG_P (ira_allocnos[px]) = true;
- ALLOCNO_TOTAL_NO_STACK_REG_P (ira_allocnos[px]) = true;
+ ira_allocno_t a = OBJECT_ALLOCNO (ira_object_id_map[px]);
+ ALLOCNO_NO_STACK_REG_P (a) = true;
+ ALLOCNO_TOTAL_NO_STACK_REG_P (a) = true;
}
for (px = FIRST_STACK_REG; px <= LAST_STACK_REG; px++)
make_hard_regno_born (px);
@@ -1118,8 +1240,8 @@ process_bb_node_lives (ira_loop_tree_nod
make_hard_regno_born (px);
}
- EXECUTE_IF_SET_IN_SPARSESET (allocnos_live, i)
- make_allocno_dead (ira_allocnos[i]);
+ EXECUTE_IF_SET_IN_SPARSESET (objects_live, i)
+ make_object_dead (ira_object_id_map[i]);
curr_point++;
@@ -1143,31 +1265,24 @@ process_bb_node_lives (ira_loop_tree_nod
static void
create_start_finish_chains (void)
{
- ira_allocno_t a;
- ira_allocno_iterator ai;
+ ira_object_t obj;
+ ira_object_iterator oi;
live_range_t r;
ira_start_point_ranges
- = (live_range_t *) ira_allocate (ira_max_point
- * sizeof (live_range_t));
- memset (ira_start_point_ranges, 0,
- ira_max_point * sizeof (live_range_t));
+ = (live_range_t *) ira_allocate (ira_max_point * sizeof (live_range_t));
+ memset (ira_start_point_ranges, 0, ira_max_point * sizeof (live_range_t));
ira_finish_point_ranges
- = (live_range_t *) ira_allocate (ira_max_point
- * sizeof (live_range_t));
- memset (ira_finish_point_ranges, 0,
- ira_max_point * sizeof (live_range_t));
- FOR_EACH_ALLOCNO (a, ai)
- {
- ira_object_t obj = ALLOCNO_OBJECT (a);
- for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
- {
- r->start_next = ira_start_point_ranges[r->start];
- ira_start_point_ranges[r->start] = r;
- r->finish_next = ira_finish_point_ranges[r->finish];
+ = (live_range_t *) ira_allocate (ira_max_point * sizeof (live_range_t));
+ memset (ira_finish_point_ranges, 0, ira_max_point * sizeof (live_range_t));
+ FOR_EACH_OBJECT (obj, oi)
+ for (r = OBJECT_LIVE_RANGES (obj); r != NULL; r = r->next)
+ {
+ r->start_next = ira_start_point_ranges[r->start];
+ ira_start_point_ranges[r->start] = r;
+ r->finish_next = ira_finish_point_ranges[r->finish];
ira_finish_point_ranges[r->finish] = r;
- }
- }
+ }
}
/* Rebuild IRA_START_POINT_RANGES and IRA_FINISH_POINT_RANGES after
@@ -1201,7 +1316,7 @@ remove_some_program_points_and_update_li
{
ira_assert (r->start <= r->finish);
bitmap_set_bit (born_or_died, r->start);
- bitmap_set_bit (born_or_died, r->finish);
+ bitmap_set_bit (born_or_died, r->finish);
}
map = (int *) ira_allocate (sizeof (int) * ira_max_point);
@@ -1222,6 +1337,7 @@ remove_some_program_points_and_update_li
r->start = map[r->start];
r->finish = map[r->finish];
}
+
ira_free (map);
}
@@ -1241,13 +1357,27 @@ ira_debug_live_range_list (live_range_t
ira_print_live_range_list (stderr, r);
}
+/* Print live ranges of object OBJ to file F. */
+static void
+print_object_live_ranges (FILE *f, ira_object_t obj)
+{
+ ira_print_live_range_list (f, OBJECT_LIVE_RANGES (obj));
+}
+
/* Print live ranges of allocno A to file F. */
static void
print_allocno_live_ranges (FILE *f, ira_allocno_t a)
{
- ira_object_t obj = ALLOCNO_OBJECT (a);
- fprintf (f, " a%d(r%d):", ALLOCNO_NUM (a), ALLOCNO_REGNO (a));
- ira_print_live_range_list (f, OBJECT_LIVE_RANGES (obj));
+ int n = ALLOCNO_NUM_OBJECTS (a);
+ int i;
+ for (i = 0; i < n; i++)
+ {
+ fprintf (f, " a%d(r%d", ALLOCNO_NUM (a), ALLOCNO_REGNO (a));
+ if (n > 1)
+ fprintf (f, " [%d]", i);
+ fprintf (f, "):");
+ print_object_live_ranges (f, ALLOCNO_OBJECT (a, i));
+ }
}
/* Print live ranges of allocno A to stderr. */
@@ -1276,12 +1406,13 @@ ira_debug_live_ranges (void)
}
/* The main entry function creates live ranges, set up
- CONFLICT_HARD_REGS and TOTAL_CONFLICT_HARD_REGS for allocnos, and
+ CONFLICT_HARD_REGS and TOTAL_CONFLICT_HARD_REGS for objects, and
calculate register pressure info. */
void
ira_create_allocno_live_ranges (void)
{
- allocnos_live = sparseset_alloc (ira_allocnos_num);
+ objects_live = sparseset_alloc (ira_objects_num);
+ allocnos_processed = sparseset_alloc (ira_allocnos_num);
curr_point = 0;
last_call_num = 0;
allocno_saved_at_call
@@ -1295,7 +1426,8 @@ ira_create_allocno_live_ranges (void)
print_live_ranges (ira_dump_file);
/* Clean up. */
ira_free (allocno_saved_at_call);
- sparseset_free (allocnos_live);
+ sparseset_free (objects_live);
+ sparseset_free (allocnos_processed);
}
/* Compress allocno live ranges. */
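For orientation, here is a minimal sketch of how the per-allocno object accessors and the FOR_EACH_ALLOCNO_OBJECT iterator added in the ira-int.h hunk above are intended to be used. The function below is hypothetical and not part of the patch:
/* Hypothetical helper: walk every object of allocno A and print its
   subword and live ranges, using the new accessors.  */
static void
dump_allocno_objects (FILE *f, ira_allocno_t a)
{
  ira_object_t obj;
  ira_allocno_object_iterator oi;
  /* Each allocno now carries one ira_object per tracked subword.  */
  fprintf (f, "a%d(r%d): %d object(s)\n", ALLOCNO_NUM (a),
           ALLOCNO_REGNO (a), ALLOCNO_NUM_OBJECTS (a));
  FOR_EACH_ALLOCNO_OBJECT (a, obj, oi)
    {
      /* Subword 0 is the low-order part, or the whole register when
         the allocno has only one object.  */
      fprintf (f, "  subword %d:", OBJECT_SUBWORD (obj));
      ira_print_live_range_list (f, OBJECT_LIVE_RANGES (obj));
    }
}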
^ permalink raw reply [flat|nested] 42+ messages in thread
* Ping: Patch 10/9: track subwords of DImode allocnos
2010-06-21 18:01 ` Patch 10/9: track subwords of DImode allocnos Bernd Schmidt
@ 2010-07-06 23:49 ` Bernd Schmidt
2010-07-13 20:43 ` Jeff Law
1 sibling, 0 replies; 42+ messages in thread
From: Bernd Schmidt @ 2010-07-06 23:49 UTC (permalink / raw)
To: GCC Patches
Friendly ping for the IRA DImode changes. I realize that this can take
some time to review.
http://gcc.gnu.org/ml/gcc-patches/2010-06/msg02056.html
Bernd
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 10/9: track subwords of DImode allocnos
2010-06-21 18:01 ` Patch 10/9: track subwords of DImode allocnos Bernd Schmidt
2010-07-06 23:49 ` Ping: " Bernd Schmidt
@ 2010-07-13 20:43 ` Jeff Law
2010-07-13 21:10 ` Bernd Schmidt
1 sibling, 1 reply; 42+ messages in thread
From: Jeff Law @ 2010-07-13 20:43 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: GCC Patches
On 06/21/10 11:04, Bernd Schmidt wrote:
> So here's the scary part. This adds ALLOCNO_NUM_OBJECTS and the
> possibility that it may be larger than 1. Currently, it only tries to
> do anything for two-word (i.e. DImode) allocnos; it should be possible
> (and even relatively easy) to extend, but I'm not sure it's worthwhile.
> Whether even this version is worthwhile is for others to decide.
>
> I should explain what I've done with the conflict handling. Given two
> DImode allocnos A and B with halves Ah, Al, Bh and Bl, we can encounter
> four different conflicts: AhxBl, AhxBh, AlxBh and AlxBl. Of these, only
> three are meaningful: AhxBh and AlxBl can be treated equivalently in
> every place I found. This reduces the number of ways two such allocnos
> can conflict to 3, and I've implemented this (as "conflict
> canonicalization") by recording an AlxBl conflict instead of a AhxBh
> conflict if one is found. This is meaningful for functions like
> setup_allocno_left_conflicts_size: each of these three conflicts reduces
> the number of registers available for allocation by 1.
>
> There are some places in IRA that use conflict tests to determine
> whether two allocnos can be given the same hard register; in these cases
> it is sufficient to test the low-order objects for conflicts (given the
> canonicalization described above). Any other type of conflict would not
> prevent the allocnos from being given the same hard register (assuming
> that both will be assigned two hard regs).
>
> There is one place in the code where this canonicalization has an ugly
> effect: in setup_min_max_conflict_allocno_ids, we have to extend the
> min/max value for object 0 of each multi-word allocno, since we may
> later record conflicts for them that are due to AhxBh and not apparent
> at this point in the code.
>
> Another possibly slightly ugly case is the handling of
> ALLOCNO_EXCESS_PRESSURE_POINTS_NUM; it seemed easiest just to count
> these points for each object separately, and then divide by
> ALLOCNO_NUM_OBJECTS later on.
>
> The test for conflicts in assign_hard_reg is quite complicated due to
> the possibility Jeff mentioned: the value of hard_regno_nregs may differ
> for some element regs of a cover class. I believe this is handled
> correctly, but it really is quite tricky.
>
> Even after more than a week of digging through IRA, I can't claim to
> understand all of it. I've made sure that all the places I touched
> looked sane afterwards, but - for example - I don't really know yet what
> ira_emit is trying to do. There may be bad interactions.
>
> Still, successfully bootstrapped and regression tested on i686-linux.
> Last week I've used earlier versions with an ARM compiler and seemed to
> get small code size improvements on Crafty; it also fixes the remaining
> issue with PR42502. I'm also thinking of extending it further to do a
> DCE of subreg stores which should help PR42575.
>
>
> Bernd
>
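To make the conflict canonicalization in the quoted description concrete, a minimal sketch follows; record_object_conflict is a hypothetical stand-in for whatever routine actually stores the conflict, and the real IRA code differs in detail:
/* Sketch of the canonicalization described above.  For two two-object
   allocnos A and B, a "high x high" (AhxBh) conflict is stored as the
   equivalent "low x low" (AlxBl) conflict, so only three kinds of
   conflict remain: AlxBl, AlxBh and AhxBl.  */
static void
record_canonical_conflict (ira_object_t obj1, ira_object_t obj2)
{
  ira_allocno_t a1 = OBJECT_ALLOCNO (obj1);
  ira_allocno_t a2 = OBJECT_ALLOCNO (obj2);
  if (ALLOCNO_NUM_OBJECTS (a1) == 2 && ALLOCNO_NUM_OBJECTS (a2) == 2
      && OBJECT_SUBWORD (obj1) == 1 && OBJECT_SUBWORD (obj2) == 1)
    {
      /* Both objects are high-order subwords: record the conflict on
         the low-order subwords instead.  */
      obj1 = ALLOCNO_OBJECT (a1, 0);
      obj2 = ALLOCNO_OBJECT (a2, 0);
    }
  record_object_conflict (obj1, obj2);  /* hypothetical helper */
}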
Overall this was relatively straightforward. You touched on most of the
non-obvious stuff above. Answers to most of my questions became clear
as I wrote out the questions. Here's all that's left:
In assign_hard_reg, you moved this hunk:
+ if (allocno_coalesced_p)
+ {
+ if (bitmap_bit_p (processed_coalesced_allocno_bitmap,
+ ALLOCNO_NUM (conflict_allocno)))
+ continue;
+ bitmap_set_bit (processed_coalesced_allocno_bitmap,
+ ALLOCNO_NUM (conflict_allocno));
+ }
Into the ! ALLOCNO_MAY_BE_SPILLED_P if-clause rather than leaving it to
execute unconditionally for each conflict allocno. I don't see the
reasoning behind this change.
I'm happy to let you and Vlad work out the exact timing for when the
bits get committed.
Jeff
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 10/9: track subwords of DImode allocnos
2010-07-13 20:43 ` Jeff Law
@ 2010-07-13 21:10 ` Bernd Schmidt
2010-07-13 22:01 ` Vladimir Makarov
2010-07-14 19:06 ` Jeff Law
0 siblings, 2 replies; 42+ messages in thread
From: Bernd Schmidt @ 2010-07-13 21:10 UTC (permalink / raw)
To: Jeff Law; +Cc: GCC Patches, Vladimir N. Makarov
On 07/13/2010 10:43 PM, Jeff Law wrote:
> Overall this was relatively straightforward. You touched on most of the
> non-obvious stuff above. Answers to most of my questions became clear
> as I wrote out the questions. Here's all that's left:
>
>
> In assign_hard_reg, you moved this hunk:
>
> + if (allocno_coalesced_p)
> + {
> + if (bitmap_bit_p (processed_coalesced_allocno_bitmap,
> + ALLOCNO_NUM (conflict_allocno)))
> + continue;
> + bitmap_set_bit (processed_coalesced_allocno_bitmap,
> + ALLOCNO_NUM (conflict_allocno));
> + }
>
> Into the ! ALLOCNO_MAY_BE_SPILLED_P if-clause rather than leaving it to
> execute unconditionally for each conflict allocno. I don't see the
> reasoning behind this change.
We've found a conflicting object, and looked up the corresponding
allocno. There are two cases here, either the conflicting allocno has a
hard register already, or it doesn't. In the first case, we need to
track the conflicts by object, which means we can't ignore the conflict
if we've seen the allocno previously - we might have seen a different
subword. In the second case, we're just doing some costs bookkeeping,
and here it's OK to skip the allocno if we've seen it before.
Does that make sense?
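In other words, the loop body now has roughly this shape (a sketch only; exclude_conflicting_hard_regs and update_conflict_costs are hypothetical names for the elided details):
/* Inside the loop over conflicting objects in assign_hard_reg.  */
if (ALLOCNO_HARD_REGNO (conflict_allocno) >= 0)
  {
    /* Case 1: the conflicting allocno already has hard registers.
       Conflicts must be handled per object, because different
       subwords can exclude different registers, so we must not skip
       an allocno we have already seen.  */
    exclude_conflicting_hard_regs (conflict_obj);   /* hypothetical */
  }
else if (! ALLOCNO_MAY_BE_SPILLED_P (conflict_allocno))
  {
    /* Case 2: only per-allocno cost bookkeeping is done, so it is
       safe to skip an allocno whose other subword was seen before;
       this is where the bitmap check now lives.  */
    if (allocno_coalesced_p)
      {
        if (bitmap_bit_p (processed_coalesced_allocno_bitmap,
                          ALLOCNO_NUM (conflict_allocno)))
          continue;
        bitmap_set_bit (processed_coalesced_allocno_bitmap,
                        ALLOCNO_NUM (conflict_allocno));
      }
    update_conflict_costs (conflict_allocno);       /* hypothetical */
  }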
> I'm happy to let you and Vlad work out the exact timing for when the
> bits get committed.
Vlad, is it better for you if I check in the preliminary bits (6-9) now,
or should I wait until you've had a chance to look at things?
Bernd
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 10/9: track subwords of DImode allocnos
2010-07-13 21:10 ` Bernd Schmidt
@ 2010-07-13 22:01 ` Vladimir Makarov
2010-07-14 2:00 ` Bernd Schmidt
2010-07-20 14:28 ` Bernd Schmidt
2010-07-14 19:06 ` Jeff Law
1 sibling, 2 replies; 42+ messages in thread
From: Vladimir Makarov @ 2010-07-13 22:01 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: Jeff Law, GCC Patches
Bernd Schmidt wrote:
> On 07/13/2010 10:43 PM, Jeff Law wrote:
>
>
>> Overall this was relatively straightforward. You touched on most of the
>> non-obvious stuff above. Answers to most of my questions became clear
>> as I wrote out the questions.
>>
>>
>> In assign_hard_reg, you moved this hunk:
>>
>> + if (allocno_coalesced_p)
>> + {
>> + if (bitmap_bit_p (processed_coalesced_allocno_bitmap,
>> + ALLOCNO_NUM (conflict_allocno)))
>> + continue;
>> + bitmap_set_bit (processed_coalesced_allocno_bitmap,
>> + ALLOCNO_NUM (conflict_allocno));
>> + }
>>
>> Into the ! ALLOCNO_MAY_BE_SPILLED_P if-clause rather than leaving it to
>> execute unconditionally for each conflict allocno. I don't see the
>> reasoning behind this change.
>>
>
> We've found a conflicting object, and looked up the corresponding
> allocno. There are two cases here, either the conflicting allocno has a
> hard register already, or it doesn't. In the first case, we need to
> track the conflicts by object, which means we can't ignore the conflict
> if we've seen the allocno previously - we might have seen a different
> subword. In the second case, we're just doing some costs bookkeeping,
> and here it's OK to skip the allocno if we've seen it before.
>
> Does that make sense?
>
>
>> I'm happy to let you and Vlad work out the exact timing for when the
>> bits get committed.
>>
>
> Vlad, is it better for you if I check in the preliminary bits (6-9) now,
> or should I wait until you've had a chance to look at things?
>
>
It is ok for me if you commit it now. The earlier you commit, the
earlier I start conflict resolution on the branch and look at your patches.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 10/9: track subwords of DImode allocnos
2010-07-13 22:01 ` Vladimir Makarov
@ 2010-07-14 2:00 ` Bernd Schmidt
2010-07-22 18:00 ` Nathan Froyd
2010-07-20 14:28 ` Bernd Schmidt
1 sibling, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-07-14 2:00 UTC (permalink / raw)
To: Vladimir Makarov; +Cc: Jeff Law, GCC Patches
On 07/14/2010 12:02 AM, Vladimir Makarov wrote:
>> Vlad, is it better for you if I check in the preliminary bits (6-9) now,
>> or should I wait until you've had a chance to look at things?
>>
>>
> It is ok for me if you commit it now. The earlier you commit, the
> earlier I start conflict resolution on the branch and look at your patches.
Patches 7-9 now committed after another bootstrap & regression test on
i686-linux. Still no observed changes in code generation.
It probably would make sense to remove -fira-coalesce at this point if
you want to merge that part now. This should simplify the final piece
of my patchkit; I can adapt and resubmit it. Or we could stick to the
original plan.
Bernd
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 10/9: track subwords of DImode allocnos
2010-07-14 2:00 ` Bernd Schmidt
@ 2010-07-22 18:00 ` Nathan Froyd
2010-07-22 18:25 ` Bernd Schmidt
0 siblings, 1 reply; 42+ messages in thread
From: Nathan Froyd @ 2010-07-22 18:00 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: Vladimir Makarov, Jeff Law, GCC Patches
On Wed, Jul 14, 2010 at 04:00:23AM +0200, Bernd Schmidt wrote:
> On 07/14/2010 12:02 AM, Vladimir Makarov wrote:
> >> Vlad, is it better for you if I check in the preliminary bits (6-9) now,
> >> or should I wait until you've had a chance to look at things?
> >>
> >>
> > It is ok for me if you commit it now. The earlier you commit, the
> > earlier I start conflict resolution on the branch and look at your patches.
>
> Patches 7-9 now committed after another bootstrap & regression test on
> i686-linux. Still no observed changes in code generation.
At least this patch:
http://gcc.gnu.org/ml/gcc-patches/2010-06/msg02056.html
causes ICEs with powerpc-eabispe when compiling -msoft-float's libgcc:
../../.././gcc/dp-bit.c: In function '_fpadd_parts':
../../.././gcc/dp-bit.c:731:1: internal compiler error: in check_allocation, at ira.c:1629
Please submit a full bug report,
with preprocessed source if appropriate.
See <http://gcc.gnu.org/bugs.html> for instructions.
I am way out of my depth debugging this code; I tried loading everything
into gdb to provide a little more useful information, but of course the
compilation succeeds when cc1 is being run under gdb. :(
-Nathan
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 10/9: track subwords of DImode allocnos
2010-07-22 18:00 ` Nathan Froyd
@ 2010-07-22 18:25 ` Bernd Schmidt
2010-07-22 18:50 ` Nathan Froyd
0 siblings, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-07-22 18:25 UTC (permalink / raw)
To: Nathan Froyd; +Cc: Vladimir Makarov, Jeff Law, GCC Patches
[-- Attachment #1: Type: text/plain, Size: 790 bytes --]
On 07/22/2010 08:00 PM, Nathan Froyd wrote:
> http://gcc.gnu.org/ml/gcc-patches/2010-06/msg02056.html
>
> causes ICEs with powerpc-eabispe when compiling -msoft-float's libgcc:
>
> ../../.././gcc/dp-bit.c: In function '_fpadd_parts':
> ../../.././gcc/dp-bit.c:731:1: internal compiler error: in check_allocation, at ira.c:1629
> Please submit a full bug report,
> with preprocessed source if appropriate.
> See <http://gcc.gnu.org/bugs.html> for instructions.
>
> I am way out of my depth debugging this code; I tried loading everything
> into gdb to provide a little more useful information, but of course the
> compilation succeeds when cc1 is being run under gdb. :(
At least this seems at first glance only to be a bug in the
verification. Can you test the attached patch?
Bernd
[-- Attachment #2: for-nathan.diff --]
[-- Type: text/plain, Size: 1577 bytes --]
Index: ira.c
===================================================================
--- ira.c (revision 162421)
+++ ira.c (working copy)
@@ -1624,11 +1624,14 @@ check_allocation (void)
|| (hard_regno = ALLOCNO_HARD_REGNO (a)) < 0)
continue;
nregs = hard_regno_nregs[hard_regno][ALLOCNO_MODE (a)];
- if (n > 1)
- {
- gcc_assert (n == nregs);
- nregs = 1;
- }
+ if (nregs == 1)
+ /* We allocated a single hard register. */
+ n = 1;
+ else if (n > 1)
+ /* We allocated multiple hard registers, and we will test
+ conflicts in a granularity of single hard regs. */
+ nregs = 1;
+
for (i = 0; i < n; i++)
{
ira_object_t obj = ALLOCNO_OBJECT (a, i);
@@ -1648,7 +1651,13 @@ check_allocation (void)
int conflict_hard_regno = ALLOCNO_HARD_REGNO (conflict_a);
if (conflict_hard_regno < 0)
continue;
- if (ALLOCNO_NUM_OBJECTS (conflict_a) > 1)
+
+ conflict_nregs
+ = (hard_regno_nregs
+ [conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);
+
+ if (ALLOCNO_NUM_OBJECTS (conflict_a) > 1
+ && conflict_nregs == ALLOCNO_NUM_OBJECTS (conflict_a))
{
if (WORDS_BIG_ENDIAN)
conflict_hard_regno += (ALLOCNO_NUM_OBJECTS (conflict_a)
@@ -1657,10 +1666,6 @@ check_allocation (void)
conflict_hard_regno += OBJECT_SUBWORD (conflict_obj);
conflict_nregs = 1;
}
- else
- conflict_nregs
- = (hard_regno_nregs
- [conflict_hard_regno][ALLOCNO_MODE (conflict_a)]);
if ((conflict_hard_regno <= this_regno
&& this_regno < conflict_hard_regno + conflict_nregs)
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 10/9: track subwords of DImode allocnos
2010-07-22 18:25 ` Bernd Schmidt
@ 2010-07-22 18:50 ` Nathan Froyd
2010-07-22 22:35 ` Bernd Schmidt
0 siblings, 1 reply; 42+ messages in thread
From: Nathan Froyd @ 2010-07-22 18:50 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: Vladimir Makarov, Jeff Law, GCC Patches
On Thu, Jul 22, 2010 at 08:24:41PM +0200, Bernd Schmidt wrote:
> On 07/22/2010 08:00 PM, Nathan Froyd wrote:
> > http://gcc.gnu.org/ml/gcc-patches/2010-06/msg02056.html
> >
> > causes ICEs with powerpc-eabispe when compiling -msoft-float's libgcc:
> >
> > ../../.././gcc/dp-bit.c: In function '_fpadd_parts':
> > ../../.././gcc/dp-bit.c:731:1: internal compiler error: in check_allocation, at ira.c:1629
> > Please submit a full bug report,
> > with preprocessed source if appropriate.
> > See <http://gcc.gnu.org/bugs.html> for instructions.
> >
> > I am way out of my depth debugging this code; I tried loading everything
> > into gdb to provide a little more useful information, but of course the
> > compilation succeeds when cc1 is being run under gdb. :(
>
> At least this seems at first glance only to be a bug in the
> verification. Can you test the attached patch?
Yeah, that seems to work much better. Thanks!
-Nathan
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 10/9: track subwords of DImode allocnos
2010-07-22 18:50 ` Nathan Froyd
@ 2010-07-22 22:35 ` Bernd Schmidt
2010-07-25 1:23 ` H.J. Lu
0 siblings, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-07-22 22:35 UTC (permalink / raw)
To: Nathan Froyd; +Cc: Vladimir Makarov, Jeff Law, GCC Patches
>> At least this seems at first glance only to be a bug in the
>> verification. Can you test the attached patch?
>
> Yeah, that seems to work much better. Thanks!
Nathan tested a bit more, and I bootstrapped & tested on
{i686,x86_64}-linux. Committed.
Bernd
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 10/9: track subwords of DImode allocnos
2010-07-22 22:35 ` Bernd Schmidt
@ 2010-07-25 1:23 ` H.J. Lu
2011-01-27 8:39 ` H.J. Lu
0 siblings, 1 reply; 42+ messages in thread
From: H.J. Lu @ 2010-07-25 1:23 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: Nathan Froyd, Vladimir Makarov, Jeff Law, GCC Patches
On Thu, Jul 22, 2010 at 3:33 PM, Bernd Schmidt <bernds@codesourcery.com> wrote:
>>> At least this seems at first glance only to be a bug in the
>>> verification. Can you test the attached patch?
>>
>> Yeah, that seems to work much better. Thanks!
>
> Nathan tested a bit more, and I bootstrapped & tested on
> {i686,x86_64}-linux. Committed.
>
This caused:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45061
--
H.J.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 10/9: track subwords of DImode allocnos
2010-07-25 1:23 ` H.J. Lu
@ 2011-01-27 8:39 ` H.J. Lu
0 siblings, 0 replies; 42+ messages in thread
From: H.J. Lu @ 2011-01-27 8:39 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: Nathan Froyd, Vladimir Makarov, Jeff Law, GCC Patches
On Sat, Jul 24, 2010 at 6:22 PM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Thu, Jul 22, 2010 at 3:33 PM, Bernd Schmidt <bernds@codesourcery.com> wrote:
>>>> At least this seems at first glance only to be a bug in the
>>>> verification. Can you test the attached patch?
>>>
>>> Yeah, that seems to work much better. Thanks!
>>
>> Nathan tested a bit more, and I bootstrapped & tested on
>> {i686,x86_64}-linux. Committed.
>>
>
> This caused:
>
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45061
>
This also caused:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=47477
--
H.J.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 10/9: track subwords of DImode allocnos
2010-07-13 22:01 ` Vladimir Makarov
2010-07-14 2:00 ` Bernd Schmidt
@ 2010-07-20 14:28 ` Bernd Schmidt
2010-07-20 14:44 ` Vladimir Makarov
1 sibling, 1 reply; 42+ messages in thread
From: Bernd Schmidt @ 2010-07-20 14:28 UTC (permalink / raw)
To: Vladimir Makarov; +Cc: Jeff Law, GCC Patches
On 07/14/2010 12:02 AM, Vladimir Makarov wrote:
> Bernd Schmidt wrote:
>> Vlad, is it better for you if I check in the preliminary bits (6-9) now,
>> or should I wait until you've had a chance to look at things?
>>
>>
> It is ok for me if you commit it now. The earlier you commit, the
> earlier I start conflict resolution on the branch and look at your patches.
Did you mean all of it, or just the preliminary bits? I've been holding
off on the final piece to give you time to investigate whether it will
cause problems with the new code.
Bernd
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 10/9: track subwords of DImode allocnos
2010-07-20 14:28 ` Bernd Schmidt
@ 2010-07-20 14:44 ` Vladimir Makarov
2010-07-22 15:49 ` Bernd Schmidt
0 siblings, 1 reply; 42+ messages in thread
From: Vladimir Makarov @ 2010-07-20 14:44 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: Jeff Law, GCC Patches
Bernd Schmidt wrote:
> On 07/14/2010 12:02 AM, Vladimir Makarov wrote:
>
>> Bernd Schmidt wrote:
>>
>
>
>>> Vlad, is it better for you if I check in the preliminary bits (6-9) now,
>>> or should I wait until you've had a chance to look at things?
>>>
>>>
>>>
>> It is ok for me if you commit it now. The earlier you commit, the
>> earlier I start conflict resolution on the branch and look at your patches.
>>
>
> Did you mean all of it, or just the preliminary bits? I've been holding
> off on the final piece to give you time to investigate whether it will
> cause problems with the new code.
>
>
I meant all of it. I'll resolve the conflicts on my branch and submit
my patches again.
^ permalink raw reply [flat|nested] 42+ messages in thread
* Re: Patch 10/9: track subwords of DImode allocnos
2010-07-13 21:10 ` Bernd Schmidt
2010-07-13 22:01 ` Vladimir Makarov
@ 2010-07-14 19:06 ` Jeff Law
1 sibling, 0 replies; 42+ messages in thread
From: Jeff Law @ 2010-07-14 19:06 UTC (permalink / raw)
To: Bernd Schmidt; +Cc: GCC Patches, Vladimir N. Makarov
On 07/13/10 15:09, Bernd Schmidt wrote:
> On 07/13/2010 10:43 PM, Jeff Law wrote:
>
>
>> Overall this was relatively straightforward. You touched on most of the
>> non-obvious stuff above. Answers to most of my questions became clear
>> as I wrote out the questions.
>>
>>
>> In assign_hard_reg, you moved this hunk:
>>
>> + if (allocno_coalesced_p)
>> + {
>> + if (bitmap_bit_p (processed_coalesced_allocno_bitmap,
>> + ALLOCNO_NUM (conflict_allocno)))
>> + continue;
>> + bitmap_set_bit (processed_coalesced_allocno_bitmap,
>> + ALLOCNO_NUM (conflict_allocno));
>> + }
>>
>> Into the ! ALLOCNO_MAY_BE_SPILLED_P if-clause rather than leaving it to
>> execute unconditionally for each conflict allocno. I don't see the
>> reasoning behind this change.
>>
> We've found a conflicting object, and looked up the corresponding
> allocno. There are two cases here, either the conflicting allocno has a
> hard register already, or it doesn't. In the first case, we need to
> track the conflicts by object, which means we can't ignore the conflict
> if we've seen the allocno previously - we might have seen a different
> subword. In the second case, we're just doing some costs bookkeeping,
> and here it's OK to skip the allocno if we've seen it before.
>
> Does that make sense?
>
Yes. Makes perfect sense now. I might have just been burned out when I
stumbled across that somewhat odd hunk.
jeff
^ permalink raw reply [flat|nested] 42+ messages in thread