* [PATCH]: Improvements to sched-deps.c interface
@ 2007-04-13 7:00 Maxim Kuvyrkov
2007-06-07 8:27 ` [PING][PATCH]: " Maxim Kuvyrkov
0 siblings, 1 reply; 15+ messages in thread
From: Maxim Kuvyrkov @ 2007-04-13 7:00 UTC (permalink / raw)
To: Vladimir Makarov; +Cc: James E Wilson, Ayal Zaks, gcc-patches
[-- Attachment #1: Type: text/plain, Size: 2143 bytes --]
Hi!
This patch concludes stages <1>-<3> in the scheduler improvements plan
posted at http://gcc.gnu.org/ml/gcc/2007-03/msg00105.html :
1. Define clear interfaces to sched-deps.c
This part of the patch also provides several compile-time improvements,
such as faster dependency search, due to:
1.1 using dependency caches not only during DDG building but also during
scheduling itself;
1.2 walking the shorter of the two available lists: the consumer's
backward list or the producer's forward list.
2. Use alloc_pools for dependency creation / deletion.
The current implementation uses obstacks that are freed only at the end
of scheduling. The patch adds functionality to free dependency
information after each region and to reuse dependency nodes, thus
lowering memory consumption to that of the largest region.
3. Generate forward dependencies simultaneously with backward ones.
A small cleanup that makes 1.2 possible.
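The shorter-list lookup of 1.2 can be sketched as follows. This is a minimal, self-contained model, not the actual sched-deps.c code; all structure and function names here are illustrative stand-ins (the real patch stores the length in the new 'n_links' field of struct _deps_list):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of improvement 1.2: each dependency list caches its
   length in `n_links', so a lookup can walk whichever of the two lists
   is shorter -- the producer's forward list or the consumer's backward
   list.  All names are illustrative, not the real sched-int.h ones.  */

struct dep_link
{
  int id;                       /* stands in for the insn at the other end */
  struct dep_link *next;
};

struct deps_list
{
  struct dep_link *first;
  int n_links;                  /* kept up to date on add / remove */
};

/* Linear search of LIST for a link whose id is TARGET.  */
static struct dep_link *
find_in_list (const struct deps_list *list, int target)
{
  struct dep_link *l;

  for (l = list->first; l != NULL; l = l->next)
    if (l->id == target)
      return l;
  return NULL;
}

/* Look for a dependency between a producer and a consumer by walking
   the shorter of the producer's forward list (keyed by consumer id)
   and the consumer's backward list (keyed by producer id).  Both lists
   mirror the same dependency set, so either gives the same answer.  */
static struct dep_link *
find_dep (const struct deps_list *forw, int con_id,
          const struct deps_list *back, int pro_id)
{
  if (forw->n_links <= back->n_links)
    return find_in_list (forw, con_id);
  return find_in_list (back, pro_id);
}
```

Since both lists are kept consistent, picking the shorter one only changes the constant factor of the search, not its result.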
Testing:
The patch was bootstrapped and regtested on {x86_64, ia64}-linux-gnu.
I timed the bootstrap on x86_64 and the patched compiler showed exactly
the same results as the original one.
The patch does not change scheduler behavior: after the ready lists are
sorted, the patched compiler produces exactly the same output as the
original one. To verify this, I built SPECCPU2000 on ia64-linux-gnu with
'-fsched-verbose=2' and filtered the dump with 'grep ready_sort'.
As a result of this testing a bug in priorities update mechanism was
discovered (http://gcc.gnu.org/ml/gcc-patches/2007-04/msg00520.html).
Anyone repeating this testing on ia64 should also add the
-msched-count-spec-in-critical-path flag to the command line. This is
necessary because the current implementation creates speculative
dependencies even when it won't be able to discard them (e.g.
data-speculative dependencies during the first scheduler pass).
Speculative dependencies are created instead of hard ones, and by
default priority computation ignores them. The patch fixes this issue,
so the flag is needed to make the two environments equivalent.
:ADDPATCH scheduler:
Thanks,
Maxim
[-- Attachment #2: fdl-pool.clg --]
[-- Type: text/plain, Size: 8013 bytes --]
2007-04-12 Maxim Kuvyrkov <mkuvyrkov@ispras.ru>
* sched-int.h (struct _dep): Rename field 'kind' to 'type'.
(DEP_KIND): Rename to DEP_TYPE. Update all uses.
(dep_def): New typedef.
(init_dep_1, debug_dep): Declare functions.
(DEP_LINK_KIND): Rename to DEP_LINK_TYPE.
(debug_dep_links): Remove.
(struct _deps_list): New field 'n_links'.
(DEPS_LIST_N_LINKS): New macro.
(FOR_EACH_DEP_LINK): Remove.
(create_deps_list, free_deps_list, delete_deps_list): Remove
declaration.
(deps_list_empty_p, debug_deps_list, add_back_dep_to_deps_list): Ditto.
(find_link_by_pro_in_deps_list, find_link_by_con_in_deps_list): Ditto.
(copy_deps_list_change_con, move_dep_link): Ditto.
(struct haifa_insn_data): Split field 'back_deps' into 'hard_back_deps'
and 'spec_back_deps'. New field 'resolved_forw_deps'. Remove field
'dep_count'.
(INSN_BACK_DEPS): Remove.
(INSN_HARD_BACK_DEPS, INSN_SPEC_BACK_DEPS, INSN_RESOLVED_FORW_DEPS):
New macros.
(INSN_DEP_COUNT): Remove.
(enum DEPS_ADJUST_RESULT): Add new constant DEP_NODEP. Fix comments.
(spec_info, haifa_recovery_block_was_added_during_scheduling_p):
Declare global variables.
(deps_pools_are_empty_p, sched_free_deps): Declare functions.
(add_forw_dep, compute_forward_dependences): Remove declarations.
(add_or_update_back_dep, add_or_update_back_forw_dep): Ditto.
(add_back_forw_dep, delete_back_forw_dep): Ditto.
(debug_ds, sched_insn_is_legitimate_for_speculation_p): Declare
functions.
(SD_LIST_NONE, SD_LIST_HARD_BACK, SD_LIST_SPEC_BACK, SD_LIST_FORW): New
constants.
(SD_LIST_RES_BACK, SD_LIST_RES_FORW, SD_LIST_BACK): Ditto.
(sd_list_types_def): New typedef.
(sd_next_list): Declare function.
(struct _sd_iterator): New type.
(sd_iterator_def): New typedef.
(sd_iterator_start, sd_iterator_cond, sd_iterator_next): New inline
functions.
(FOR_EACH_DEP): New loop wrapper macro.
(sd_lists_size, sd_lists_empty_p, sd_init_insn, sd_finish_insn):
Declare functions.
(sd_find_dep_between, sd_add_dep, sd_add_or_update_dep): Ditto.
(sd_resolve_dep, sd_copy_back_deps, sd_delete_dep, sd_debug_lists):
Ditto.
* sched-deps.c (init_dep_1): Make global.
(DUMP_DEP_PRO, DUMP_DEP_CON, DUMP_DEP_STATUS, DUMP_DEP_ALL): New
constants.
(dump_dep): New static function.
(dump_dep_flags): New static variable.
(debug_dep): New function.
(add_to_deps_list, remove_from_deps_list): Update 'n_links' field of
the list.
(move_dep_link): Use remove_from_deps_list (), instead of
detach_dep_link ().
(dep_links_consistent_p, dump_dep_links, debug_dep_links): Remove.
(dep_link_is_detached_p): New static function.
(deps_obstack, dl_obstack, dn_obstack): Remove. Use dn_pool, dl_pool
instead.
(dn_pool, dl_pool): New alloc_pools.
(dn_pool_diff, dl_pool_diff): New static variables.
(create_dep_node, delete_dep_node): New static functions.
(create_deps_list): Use alloc_pool dl_pool.
(deps_list_empty_p): Use 'n_links' field.
(deps_pools_are_empty_p): New static function.
(alloc_deps_list, delete_deps_list): Remove.
(dump_deps_list, debug_deps_list, add_back_dep_to_deps_list): Remove.
(find_link_by_pro_in_deps_list, find_link_by_con_in_deps_list): Ditto.
(copy_deps_list_change_con): Remove. Use sd_copy_back_deps () instead.
(forward_dependency_cache): Remove.
(maybe_add_or_update_back_dep_1, add_or_update_back_dep_1): Remove
'back' from the names. Change signature to use dep_t instead of
equivalent quad.
(add_back_dep): Ditto. Make global.
(check_dep_status): Rename to check_dep ().
(sd_next_list, sd_lists_size, sd_lists_empty_p, sd_init_insn):
New functions.
(sd_finish_insn): Ditto.
(sd_find_dep_between_no_cache): New static function.
(sd_find_dep_between): New function.
(ask_dependency_caches, set_dependency_caches): New static functions.
(update_dependency_caches, change_spec_dep_to_hard, update_dep): Ditto.
(add_or_update_dep_1): Separate pieces of functionality into
ask_dependency_caches (), update_dependency_caches (),
change_spec_dep_to_hard (), update_dep ().
(get_back_and_forw_lists): New static function.
(sd_add_dep): Separate setting of dependency caches into
set_dependency_caches ().
(sd_add_or_update_dep, sd_resolve_dep, sd_copy_back_deps):
New functions.
(sd_delete_dep): Ditto.
(sd_dump_lists): New static function.
(sd_debug_lists): New debugging function.
(delete_all_dependences, fixup_sched_groups): Update to use
sd_* infrastructure.
(sched_analyze_2): Create data-speculative dependency only if
data-speculation is enabled.
(sched_analyze_insn): If insn cannot be speculative, make all its
dependencies non-speculative.
(sched_analyze): Use sd_init_insn ().
(add_forw_dep, compute_forward_dependencies): Remove.
(delete_dep_nodes_in_back_deps): New static function.
(sched_free_deps): New function.
(init_dependency_caches): Init alloc_pools.
(extend_dependency_caches): Update after the removal of
forward_dependency_cache.
(adjust_add_sorted_back_dep, adjust_back_add_forw_dep): Remove.
(delete_forw_dep, add_or_update_back_dep, add_or_update_back_forw_dep):
Ditto.
(add_back_forw_dep, delete_back_forw_dep): Ditto.
(add_dependence): Use init_dep ().
(get_dep_weak_1): New static function.
(get_dep_weak): Move logic to get_dep_weak_1 ().
(dump_ds): New static function moved from haifa-sched.c:
debug_spec_status ().
(debug_ds): New debug function.
(check_dep_status): Rename to check_dep (). Update to check whole
dependencies.
* haifa-sched.c (spec_info): Make global.
(added_recovery_block_p): Rename to
'haifa_recovery_block_was_added_during_current_schedule_block_p'.
(haifa_recovery_block_was_added_during_scheduling_p): New variable.
(dep_cost, priority, rank_for_schedule, schedule_insn): Update
to use new interfaces.
(ok_for_early_queue_removal): Ditto.
(schedule_block): Initialize logical uids of insns emitted by the
target.
(sched_init): Initialize new variable.
(fix_inter_tick, try_ready, fix_tick_ready): Update to use new
interfaces.
(extend_global): Initialize insn data.
(init_h_i_d): Remove code that is now handled in sd_init_insn ().
(process_insn_forw_deps_be_in_spec): Change signature. Update to use
new interfaces.
(add_to_speculative_block): Update to use new interfaces.
(create_recovery_block): Set new variable.
(create_check_block_twin, fix_recovery_deps): Update to use new
interfaces.
(sched_insn_is_legitimate_for_speculation_p): New function.
(speculate_insn): Move checking logic to
sched_insn_is_legitimate_for_speculation_p ().
(clear_priorities, add_jump_dependencies): Update to use new
interfaces.
(debug_spec_status): Rename to dump_ds () and move to sched-deps.c.
* sched-rgn.c (set_spec_fed, find_conditional_protection): Update
to use new interfaces.
(is_conditionally_protected, is_pfree, is_prisky): Ditto.
(new_ready): Try to use control speculation only if it is available.
(add_branch_dependences): Update to use new interfaces.
(compute_block_backward_dependences): Rename to
compute_block_dependences (). Call
targetm.sched.dependencies_evaluation_hook ().
(free_block_dependencies): New static function.
(debug_dependencies): Update to use new interfaces.
(schedule_region): Remove separate computation of forward dependencies.
Move call of targetm.sched.dependencies_evaluation_hook () to
compute_block_dependences (). Free dependencies at the end of
scheduling the region.
* sched-ebb.c (earliest_block_with_similiar_load): Update to use
new interfaces.
(add_deps_for_risky_insns): Ditto.
(schedule_ebb): Remove separate computation of forward dependencies.
Free dependencies at the end of scheduling the ebb.
* ddg.c (create_ddg_dependence, build_intra_loop_deps): Update to use
new interfaces. Remove separate computation of forward dependencies.
Free sched-deps dependencies.
* config/ia64/ia64.c (ia64_dependencies_evaluation_hook): Update
to use new interfaces.
(ia64_dfa_new_cycle, ia64_gen_check): Ditto.
* config/rs6000/rs6000.c (rs6000_is_costly_dependence): Update to use
new interfaces.
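The new iteration interface listed above (sd_iterator_start / sd_iterator_cond / sd_iterator_next and the FOR_EACH_DEP wrapper) can be sketched as an iterator that walks several dependency lists in sequence, so client code can ask for e.g. "all backward deps" without knowing that hard and speculative deps now live in separate lists. This is a hypothetical, self-contained model; the structures and signatures are simplified stand-ins for the real sched-int.h definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch of the sd_iterator idea: one iterator object
   that traverses a NULL-terminated sequence of lists as if they were
   a single list.  Not the real GCC types.  */

struct dep_link { int id; struct dep_link *next; };
struct deps_list { struct dep_link *first; };

struct sd_iterator
{
  struct deps_list **lists;     /* NULL-terminated array of lists */
  struct dep_link *link;        /* current position */
};

/* If the current list is exhausted, move on to the first non-empty
   remaining list.  */
static void
sd_iterator_advance (struct sd_iterator *it)
{
  while (it->link == NULL && *it->lists != NULL)
    it->link = (*it->lists++)->first;
}

static struct sd_iterator
sd_iterator_start (struct deps_list **lists)
{
  struct sd_iterator it = { lists, NULL };

  sd_iterator_advance (&it);
  return it;
}

/* Return nonzero and set *ID while there are deps left.  */
static int
sd_iterator_cond (struct sd_iterator *it, int *id)
{
  if (it->link == NULL)
    return 0;
  *id = it->link->id;
  return 1;
}

static void
sd_iterator_next (struct sd_iterator *it)
{
  it->link = it->link->next;
  sd_iterator_advance (it);
}
```

A FOR_EACH_DEP-style wrapper is then just a for loop over start / cond / next, and empty lists in the middle of the sequence are skipped transparently.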
[-- Attachment #3: fdl-pool.patch --]
[-- Type: text/x-patch, Size: 130427 bytes --]
--- sched-ebb.c (/mirror/endeed/calc-priorities/gcc) (revision 1077)
+++ sched-ebb.c (/mirror/endeed/fdl-pool/gcc) (revision 1077)
@@ -311,24 +311,26 @@ static struct sched_info ebb_sched_info
static basic_block
earliest_block_with_similiar_load (basic_block last_block, rtx load_insn)
{
- dep_link_t back_link;
+ sd_iterator_def back_sd_it;
+ dep_t back_dep;
basic_block bb, earliest_block = NULL;
- FOR_EACH_DEP_LINK (back_link, INSN_BACK_DEPS (load_insn))
+ FOR_EACH_DEP (load_insn, SD_LIST_BACK, back_sd_it, back_dep)
{
- rtx insn1 = DEP_LINK_PRO (back_link);
+ rtx insn1 = DEP_PRO (back_dep);
- if (DEP_LINK_KIND (back_link) == REG_DEP_TRUE)
+ if (DEP_TYPE (back_dep) == REG_DEP_TRUE)
+ /* Found a DEF-USE dependence (insn1, load_insn). */
{
- /* Found a DEF-USE dependence (insn1, load_insn). */
- dep_link_t fore_link;
+ sd_iterator_def fore_sd_it;
+ dep_t fore_dep;
- FOR_EACH_DEP_LINK (fore_link, INSN_FORW_DEPS (insn1))
+ FOR_EACH_DEP (insn1, SD_LIST_FORW, fore_sd_it, fore_dep)
{
- rtx insn2 = DEP_LINK_CON (fore_link);
+ rtx insn2 = DEP_CON (fore_dep);
basic_block insn2_block = BLOCK_FOR_INSN (insn2);
- if (DEP_LINK_KIND (fore_link) == REG_DEP_TRUE)
+ if (DEP_TYPE (fore_dep) == REG_DEP_TRUE)
{
if (earliest_block != NULL
&& earliest_block->index < insn2_block->index)
@@ -403,23 +405,32 @@ add_deps_for_risky_insns (rtx head, rtx
rank. */
if (! sched_insns_conditions_mutex_p (insn, prev))
{
- if (!(current_sched_info->flags & DO_SPECULATION))
+ dep_def _dep, *dep = &_dep;
+
+ init_dep (dep, prev, insn, REG_DEP_ANTI);
+
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
{
enum DEPS_ADJUST_RESULT res;
-
- res = add_or_update_back_dep (insn, prev,
- REG_DEP_ANTI, DEP_ANTI);
-
- if (res == DEP_CREATED)
- add_forw_dep (DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)));
- else
- gcc_assert (res != DEP_CHANGED);
+
+ res = sd_add_or_update_dep (dep, false);
+
+ /* We can't change an existing dependency with
+ DEP_ANTI. */
+ gcc_assert (res != DEP_CHANGED);
}
else
- add_or_update_back_forw_dep (insn, prev, REG_DEP_ANTI,
- set_dep_weak (DEP_ANTI,
- BEGIN_CONTROL,
- MAX_DEP_WEAK));
+ {
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && (spec_info->mask & BEGIN_CONTROL))
+ DEP_STATUS (dep) = set_dep_weak (DEP_ANTI, BEGIN_CONTROL,
+ MAX_DEP_WEAK);
+
+ sd_add_or_update_dep (dep, false);
+
+ /* Dep_status could have been changed.
+ No assertion here. */
+ }
}
break;
@@ -458,14 +469,11 @@ schedule_ebb (rtx head, rtx tail)
{
init_deps_global ();
- /* Compute backward dependencies. */
+ /* Compute dependencies. */
init_deps (&tmp_deps);
sched_analyze (&tmp_deps, head, tail);
free_deps (&tmp_deps);
- /* Compute forward dependencies. */
- compute_forward_dependences (head, tail);
-
add_deps_for_risky_insns (head, tail);
if (targetm.sched.dependencies_evaluation_hook)
@@ -518,8 +526,12 @@ schedule_ebb (rtx head, rtx tail)
/* Sanity check: verify that all region insns were scheduled. */
gcc_assert (sched_n_insns == n_insns);
- head = current_sched_info->head;
- tail = current_sched_info->tail;
+
+ /* Free dependencies. */
+ sched_free_deps (current_sched_info->head, current_sched_info->tail, true);
+
+ gcc_assert (haifa_recovery_block_was_added_during_scheduling_p
+ || deps_pools_are_empty_p ());
if (EDGE_COUNT (last_bb->preds) == 0)
/* LAST_BB is unreachable. */
--- ddg.c (/mirror/endeed/calc-priorities/gcc) (revision 1077)
+++ ddg.c (/mirror/endeed/fdl-pool/gcc) (revision 1077)
@@ -166,9 +166,9 @@ create_ddg_dependence (ddg_ptr g, ddg_no
gcc_assert (link);
/* Note: REG_DEP_ANTI applies to MEM ANTI_DEP as well!! */
- if (DEP_KIND (link) == REG_DEP_ANTI)
+ if (DEP_TYPE (link) == REG_DEP_ANTI)
t = ANTI_DEP;
- else if (DEP_KIND (link) == REG_DEP_OUTPUT)
+ else if (DEP_TYPE (link) == REG_DEP_OUTPUT)
t = OUTPUT_DEP;
latency = dep_cost (link);
@@ -384,7 +384,6 @@ build_intra_loop_deps (ddg_ptr g)
/* Hold the dependency analysis state during dependency calculations. */
struct deps tmp_deps;
rtx head, tail;
- dep_link_t link;
/* Build the dependence information, using the sched_analyze function. */
init_deps_global ();
@@ -399,19 +398,19 @@ build_intra_loop_deps (ddg_ptr g)
for (i = 0; i < g->num_nodes; i++)
{
ddg_node_ptr dest_node = &g->nodes[i];
+ sd_iterator_def sd_it;
+ dep_t dep;
if (! INSN_P (dest_node->insn))
continue;
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (dest_node->insn))
+ FOR_EACH_DEP (dest_node->insn, SD_LIST_BACK, sd_it, dep)
{
- dep_t dep = DEP_LINK_DEP (link);
ddg_node_ptr src_node = get_node_of_insn (g, DEP_PRO (dep));
if (!src_node)
continue;
- add_forw_dep (link);
create_ddg_dependence (g, src_node, dest_node, dep);
}
@@ -436,6 +435,9 @@ build_intra_loop_deps (ddg_ptr g)
/* Free the INSN_LISTs. */
finish_deps_global ();
free_deps (&tmp_deps);
+
+ /* Free dependencies. */
+ sched_free_deps (head, tail, false);
}
--- haifa-sched.c (/mirror/endeed/calc-priorities/gcc) (revision 1077)
+++ haifa-sched.c (/mirror/endeed/fdl-pool/gcc) (revision 1077)
@@ -207,11 +207,16 @@ static rtx note_list;
static struct spec_info_def spec_info_var;
/* Description of the speculative part of the scheduling.
If NULL - no speculation. */
-static spec_info_t spec_info;
+spec_info_t spec_info;
/* True, if recovery block was added during scheduling of current block.
Used to determine, if we need to fix INSN_TICKs. */
-static bool added_recovery_block_p;
+static bool haifa_recovery_block_was_added_during_current_schedule_block_p;
+
+/* True, if recovery block was added during this scheduling pass.
+ Used to determine if we should have empty memory pools of dependencies
+ after finishing current region. */
+bool haifa_recovery_block_was_added_during_scheduling_p;
/* Counters of different types of speculative instructions. */
static int nr_begin_data, nr_be_in_data, nr_begin_control, nr_be_in_control;
@@ -557,7 +562,7 @@ static void extend_global (rtx);
static void extend_all (rtx);
static void init_h_i_d (rtx);
static void generate_recovery_code (rtx);
-static void process_insn_forw_deps_be_in_spec (deps_list_t, rtx, ds_t);
+static void process_insn_forw_deps_be_in_spec (rtx, rtx, ds_t);
static void begin_speculative_block (rtx);
static void add_to_speculative_block (rtx);
static dw_t dep_weak (ds_t);
@@ -662,7 +667,7 @@ dep_cost (dep_t link)
else
{
rtx insn = DEP_PRO (link);
- enum reg_note dep_type = DEP_KIND (link);
+ enum reg_note dep_type = DEP_TYPE (link);
cost = insn_cost (insn);
@@ -692,7 +697,7 @@ dep_cost (dep_t link)
XEXP (dep_cost_rtx_link, 1) = dep_cost_rtx_link;
/* Targets use only REG_NOTE_KIND of the link. */
- PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_KIND (link));
+ PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_TYPE (link));
cost = targetm.sched.adjust_cost (used, dep_cost_rtx_link,
insn, cost);
@@ -734,8 +739,6 @@ contributes_to_priority_p (dep_t dep)
static int
priority (rtx insn)
{
- dep_link_t link;
-
if (! INSN_P (insn))
return 0;
@@ -746,7 +749,7 @@ priority (rtx insn)
{
int this_priority = 0;
- if (deps_list_empty_p (INSN_FORW_DEPS (insn)))
+ if (sd_lists_empty_p (insn, SD_LIST_FORW))
/* ??? We should set INSN_PRIORITY to insn_cost when and insn has
some forward deps but all of them are ignored by
contributes_to_priority hook. At the moment we set priority of
@@ -777,11 +780,13 @@ priority (rtx insn)
do
{
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (twin))
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ FOR_EACH_DEP (twin, SD_LIST_FORW, sd_it, dep)
{
rtx next;
int next_priority;
- dep_t dep = DEP_LINK_DEP (link);
next = DEP_CON (dep);
@@ -840,7 +845,6 @@ rank_for_schedule (const void *x, const
{
rtx tmp = *(const rtx *) y;
rtx tmp2 = *(const rtx *) x;
- dep_link_t link1, link2;
int tmp_class, tmp2_class;
int val, priority_val, weight_val, info_val;
@@ -893,31 +897,30 @@ rank_for_schedule (const void *x, const
/* Compare insns based on their relation to the last-scheduled-insn. */
if (INSN_P (last_scheduled_insn))
{
+ dep_t dep1;
+ dep_t dep2;
+
/* Classify the instructions into three classes:
1) Data dependent on last schedule insn.
2) Anti/Output dependent on last scheduled insn.
3) Independent of last scheduled insn, or has latency of one.
Choose the insn from the highest numbered class if different. */
- link1
- = find_link_by_con_in_deps_list (INSN_FORW_DEPS (last_scheduled_insn),
- tmp);
+ dep1 = sd_find_dep_between (last_scheduled_insn, tmp, true);
- if (link1 == NULL || dep_cost (DEP_LINK_DEP (link1)) == 1)
+ if (dep1 == NULL || dep_cost (dep1) == 1)
tmp_class = 3;
else if (/* Data dependence. */
- DEP_LINK_KIND (link1) == REG_DEP_TRUE)
+ DEP_TYPE (dep1) == REG_DEP_TRUE)
tmp_class = 1;
else
tmp_class = 2;
- link2
- = find_link_by_con_in_deps_list (INSN_FORW_DEPS (last_scheduled_insn),
- tmp2);
+ dep2 = sd_find_dep_between (last_scheduled_insn, tmp2, true);
- if (link2 == NULL || dep_cost (DEP_LINK_DEP (link2)) == 1)
+ if (dep2 == NULL || dep_cost (dep2) == 1)
tmp2_class = 3;
else if (/* Data dependence. */
- DEP_LINK_KIND (link2) == REG_DEP_TRUE)
+ DEP_TYPE (dep2) == REG_DEP_TRUE)
tmp2_class = 1;
else
tmp2_class = 2;
@@ -930,21 +933,11 @@ rank_for_schedule (const void *x, const
This gives the scheduler more freedom when scheduling later
instructions at the expense of added register pressure. */
- link1 = DEPS_LIST_FIRST (INSN_FORW_DEPS (tmp));
- link2 = DEPS_LIST_FIRST (INSN_FORW_DEPS (tmp2));
+ val = (sd_lists_size (tmp2, SD_LIST_FORW)
+ - sd_lists_size (tmp, SD_LIST_FORW));
- while (link1 != NULL && link2 != NULL)
- {
- link1 = DEP_LINK_NEXT (link1);
- link2 = DEP_LINK_NEXT (link2);
- }
-
- if (link1 != NULL && link2 == NULL)
- /* TMP (Y) has more insns that depend on it. */
- return -1;
- if (link1 == NULL && link2 != NULL)
- /* TMP2 (X) has more insns that depend on it. */
- return 1;
+ if (val != 0)
+ return val;
/* If insns are equally good, sort by INSN_LUID (original insn order),
so that we make the sort stable. This minimizes instruction movement,
@@ -1180,7 +1173,8 @@ static int last_clock_var;
static int
schedule_insn (rtx insn)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
int advance = 0;
if (sched_verbose >= 1)
@@ -1200,12 +1194,7 @@ schedule_insn (rtx insn)
/* Scheduling instruction should have all its dependencies resolved and
should have been removed from the ready list. */
- gcc_assert (INSN_DEP_COUNT (insn) == 0
- && deps_list_empty_p (INSN_BACK_DEPS (insn)));
- free_deps_list (INSN_BACK_DEPS (insn));
-
- /* Now we can free INSN_RESOLVED_BACK_DEPS list. */
- delete_deps_list (INSN_RESOLVED_BACK_DEPS (insn));
+ gcc_assert (sd_lists_empty_p (insn, SD_LIST_BACK));
gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
QUEUE_INDEX (insn) = QUEUE_SCHEDULED;
@@ -1221,19 +1210,15 @@ schedule_insn (rtx insn)
INSN_TICK (insn) = clock_var;
/* Update dependent instructions. */
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
+ for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
+ sd_iterator_cond (&sd_it, &dep);)
{
- rtx next = DEP_LINK_CON (link);
-
- /* Resolve the dependence between INSN and NEXT. */
-
- INSN_DEP_COUNT (next)--;
+ rtx next = DEP_CON (dep);
- move_dep_link (DEP_NODE_BACK (DEP_LINK_NODE (link)),
- INSN_RESOLVED_BACK_DEPS (next));
-
- gcc_assert ((INSN_DEP_COUNT (next) == 0)
- == deps_list_empty_p (INSN_BACK_DEPS (next)));
+ /* Resolve the dependence between INSN and NEXT.
+ sd_resolve_dep () moves current dep to another list thus
+ advancing the iterator. */
+ sd_resolve_dep (sd_it);
if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
{
@@ -1250,11 +1235,23 @@ schedule_insn (rtx insn)
/* Check always has only one forward dependence (to the first insn in
the recovery block), therefore, this will be executed only once. */
{
- gcc_assert (DEP_LINK_NEXT (link) == NULL);
+ gcc_assert (sd_lists_empty_p (insn, SD_LIST_FORW));
fix_recovery_deps (RECOVERY_BLOCK (insn));
}
}
+ /* This is the place where the scheduler basically doesn't need the
+ backward and forward dependencies for INSN anymore.  Nevertheless they
+ are used in heuristics in rank_for_schedule (), early_queue_to_ready ()
+ and in some targets (e.g. rs6000).  Thus the earliest place where we
+ *can* remove dependencies is after the targetm.sched.md_finish () call
+ in schedule_block ().  But, on the other hand, the safest place to
+ remove dependencies is when we finish scheduling the entire region.
+ As we don't generate [many] dependencies during scheduling itself, we
+ won't need memory until the beginning of the next region.
+ Bottom line: dependencies are removed for all insns at the end of
+ scheduling the region.  */
+
/* Annotate the instruction with issue information -- TImode
indicates that the instruction is expected not to be able
to issue on the same cycle as the previous insn. A machine
@@ -1588,15 +1585,12 @@ ok_for_early_queue_removal (rtx insn)
if (!NOTE_P (prev_insn))
{
- dep_link_t dep_link;
+ dep_t dep;
- dep_link = (find_link_by_con_in_deps_list
- (INSN_FORW_DEPS (prev_insn), insn));
+ dep = sd_find_dep_between (prev_insn, insn, true);
- if (dep_link)
+ if (dep != NULL)
{
- dep_t dep = DEP_LINK_DEP (dep_link);
-
cost = dep_cost (dep);
if (targetm.sched.is_costly_dependence (dep, cost,
@@ -2111,7 +2105,7 @@ schedule_block (basic_block *target_bb,
gcc_assert (head != tail || INSN_P (head));
- added_recovery_block_p = false;
+ haifa_recovery_block_was_added_during_current_schedule_block_p = false;
/* Debug info. */
if (sched_verbose)
@@ -2474,7 +2468,7 @@ schedule_block (basic_block *target_bb,
}
if (!current_sched_info->queue_must_finish_empty
- || added_recovery_block_p)
+ || haifa_recovery_block_was_added_during_current_schedule_block_p)
{
/* INSN_TICK (minimum clock tick at which the insn becomes
ready) may be not correct for the insn in the subsequent
@@ -2486,7 +2480,15 @@ schedule_block (basic_block *target_bb,
}
if (targetm.sched.md_finish)
- targetm.sched.md_finish (sched_dump, sched_verbose);
+ {
+ targetm.sched.md_finish (sched_dump, sched_verbose);
+
+ /* Target might have added some instructions to the scheduled block
+ in its md_finish () hook.  These new insns don't have any data
+ initialized and to identify them we extend h_i_d so that they'll
+ get zero luids.  */
+ extend_h_i_d ();
+ }
/* Update head/tail boundaries. */
head = NEXT_INSN (prev_head);
@@ -2699,6 +2701,8 @@ sched_init (void)
nr_begin_data = nr_begin_control = nr_be_in_data = nr_be_in_control = 0;
before_recovery = 0;
+ haifa_recovery_block_was_added_during_scheduling_p = false;
+
#ifdef ENABLE_CHECKING
/* This is used preferably for finding bugs in check_cfg () itself. */
check_cfg (0, 0);
@@ -2758,6 +2762,10 @@ fix_inter_tick (rtx head, rtx tail)
{
/* Set of instructions with corrected INSN_TICK. */
bitmap_head processed;
+ /* ??? It is doubtful whether we should assume that cycle advance
+ happens on basic block boundaries.  Basically insns that are
+ unconditionally ready at the start of the block are preferable to
+ those which have a one cycle dependency on an insn from the previous
+ block.  */
int next_clock = clock_var + 1;
bitmap_initialize (&processed, 0);
@@ -2770,7 +2778,8 @@ fix_inter_tick (rtx head, rtx tail)
if (INSN_P (head))
{
int tick;
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
tick = INSN_TICK (head);
gcc_assert (tick >= MIN_TICK);
@@ -2787,11 +2796,11 @@ fix_inter_tick (rtx head, rtx tail)
INSN_TICK (head) = tick;
}
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (head))
+ FOR_EACH_DEP (head, SD_LIST_RES_FORW, sd_it, dep)
{
rtx next;
- next = DEP_LINK_CON (link);
+ next = DEP_CON (dep);
tick = INSN_TICK (next);
if (tick != INVALID_TICK
@@ -2810,7 +2819,7 @@ fix_inter_tick (rtx head, rtx tail)
INTER_TICK (next) = tick;
else
tick = INTER_TICK (next);
-
+
INSN_TICK (next) = tick;
}
}
@@ -2829,7 +2838,6 @@ int
try_ready (rtx next)
{
ds_t old_ts, *ts;
- dep_link_t link;
ts = &TODO_SPEC (next);
old_ts = *ts;
@@ -2838,44 +2846,52 @@ try_ready (rtx next)
&& ((old_ts & HARD_DEP)
|| (old_ts & SPECULATIVE)));
- if (!(current_sched_info->flags & DO_SPECULATION))
+ if (sd_lists_empty_p (next, SD_LIST_BACK))
+ /* NEXT has all its dependencies resolved. */
{
- if (deps_list_empty_p (INSN_BACK_DEPS (next)))
- *ts &= ~HARD_DEP;
+ /* Remove HARD_DEP bit from NEXT's status. */
+ *ts &= ~HARD_DEP;
+
+ if (current_sched_info->flags & DO_SPECULATION)
+ /* Remove all speculative bits from NEXT's status. */
+ *ts &= ~SPECULATIVE;
}
else
{
- *ts &= ~SPECULATIVE & ~HARD_DEP;
+ /* One of NEXT's dependencies has been resolved.
+ Recalculate NEXT's status.  */
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (next));
+ *ts &= ~SPECULATIVE & ~HARD_DEP;
- if (link != NULL)
- {
- ds_t ds = DEP_LINK_STATUS (link) & SPECULATIVE;
+ if (sd_lists_empty_p (next, SD_LIST_HARD_BACK))
+ /* Now we've got NEXT with speculative deps only.
+ 1. Look at the deps to see what we have to do.
+ 2. Check if we can do 'todo'. */
+ {
+ sd_iterator_def sd_it;
+ dep_t dep;
+ bool first_p = true;
- /* Backward dependencies of the insn are maintained sorted.
- So if DEP_STATUS of the first dep is SPECULATIVE,
- than all other deps are speculative too. */
- if (ds != 0)
- {
- /* Now we've got NEXT with speculative deps only.
- 1. Look at the deps to see what we have to do.
- 2. Check if we can do 'todo'. */
- *ts = ds;
+ FOR_EACH_DEP (next, SD_LIST_BACK, sd_it, dep)
+ {
+ ds_t ds = DEP_STATUS (dep) & SPECULATIVE;
- while ((link = DEP_LINK_NEXT (link)) != NULL)
+ if (first_p)
{
- ds = DEP_LINK_STATUS (link) & SPECULATIVE;
- *ts = ds_merge (*ts, ds);
- }
+ first_p = false;
- if (dep_weak (*ts) < spec_info->weakness_cutoff)
- /* Too few points. */
- *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
+ *ts = ds;
+ }
+ else
+ *ts = ds_merge (*ts, ds);
}
- else
- *ts |= HARD_DEP;
- }
+
+ if (dep_weak (*ts) < spec_info->weakness_cutoff)
+ /* Too few points. */
+ *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
+ }
+ else
+ *ts |= HARD_DEP;
}
if (*ts & HARD_DEP)
@@ -2994,10 +3010,11 @@ fix_tick_ready (rtx next)
{
int tick, delay;
- if (!deps_list_empty_p (INSN_RESOLVED_BACK_DEPS (next)))
+ if (!sd_lists_empty_p (next, SD_LIST_RES_BACK))
{
int full_p;
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
tick = INSN_TICK (next);
/* if tick is not equal to INVALID_TICK, then update
@@ -3005,9 +3022,8 @@ fix_tick_ready (rtx next)
cost. Otherwise, recalculate from scratch. */
full_p = (tick == INVALID_TICK);
- FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (next))
+ FOR_EACH_DEP (next, SD_LIST_RES_BACK, sd_it, dep)
{
- dep_t dep = DEP_LINK_DEP (link);
rtx pro = DEP_PRO (dep);
int tick1;
@@ -3121,11 +3137,18 @@ static void
extend_global (rtx insn)
{
gcc_assert (INSN_P (insn));
+
/* These structures have scheduler scope. */
+
+ /* Init h_i_d. */
extend_h_i_d ();
init_h_i_d (insn);
- extend_dependency_caches (1, 0);
+ /* Init data handled in sched-deps.c. */
+ sd_init_insn (insn);
+
+ /* Extend dependency caches by one element. */
+ extend_dependency_caches (1, false);
}
/* Extends global and local scheduler structures to include information
@@ -3153,14 +3176,6 @@ init_h_i_d (rtx insn)
INSN_TICK (insn) = INVALID_TICK;
INTER_TICK (insn) = INVALID_TICK;
find_insn_reg_weight1 (insn);
-
- /* These two lists will be freed in schedule_insn (). */
- INSN_BACK_DEPS (insn) = create_deps_list (false);
- INSN_RESOLVED_BACK_DEPS (insn) = create_deps_list (false);
-
- /* This one should be allocated on the obstack because it should live till
- the scheduling ends. */
- INSN_FORW_DEPS (insn) = create_deps_list (true);
}
/* Generates recovery code for INSN. */
@@ -3181,18 +3196,19 @@ generate_recovery_code (rtx insn)
Tries to add speculative dependencies of type FS between instructions
in deps_list L and TWIN. */
static void
-process_insn_forw_deps_be_in_spec (deps_list_t l, rtx twin, ds_t fs)
+process_insn_forw_deps_be_in_spec (rtx insn, rtx twin, ds_t fs)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
- FOR_EACH_DEP_LINK (link, l)
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
{
ds_t ds;
rtx consumer;
- consumer = DEP_LINK_CON (link);
+ consumer = DEP_CON (dep);
- ds = DEP_LINK_STATUS (link);
+ ds = DEP_STATUS (dep);
if (/* If we want to create speculative dep. */
fs
@@ -3219,7 +3235,12 @@ process_insn_forw_deps_be_in_spec (deps_
ds |= fs;
}
- add_back_forw_dep (consumer, twin, DEP_LINK_KIND (link), ds);
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ init_dep_1 (new_dep, twin, consumer, DEP_TYPE (dep), ds);
+ sd_add_dep (new_dep, false);
+ }
}
}
@@ -3242,7 +3263,8 @@ static void
add_to_speculative_block (rtx insn)
{
ds_t ts;
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
rtx twins = NULL;
rtx_vec_t priorities_roots;
@@ -3260,50 +3282,52 @@ add_to_speculative_block (rtx insn)
DONE_SPEC (insn) |= ts;
/* First we convert all simple checks to branchy. */
- for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
+ for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
{
- rtx check = DEP_LINK_PRO (link);
+ rtx check = DEP_PRO (dep);
if (IS_SPECULATION_SIMPLE_CHECK_P (check))
{
create_check_block_twin (check, true);
/* Restart search. */
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
+ sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
}
else
/* Continue search. */
- link = DEP_LINK_NEXT (link);
+ sd_iterator_next (&sd_it);
}
priorities_roots = NULL;
clear_priorities (insn, &priorities_roots);
-
- do
+
+ while (1)
{
- dep_link_t link;
rtx check, twin;
basic_block rec;
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
+ /* Get the first backward dependency of INSN. */
+ sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ if (!sd_iterator_cond (&sd_it, &dep))
+ /* INSN has no backward dependencies left. */
+ break;
- gcc_assert ((DEP_LINK_STATUS (link) & BEGIN_SPEC) == 0
- && (DEP_LINK_STATUS (link) & BE_IN_SPEC) != 0
- && (DEP_LINK_STATUS (link) & DEP_TYPES) == DEP_TRUE);
+ gcc_assert ((DEP_STATUS (dep) & BEGIN_SPEC) == 0
+ && (DEP_STATUS (dep) & BE_IN_SPEC) != 0
+ && (DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
- check = DEP_LINK_PRO (link);
+ check = DEP_PRO (dep);
gcc_assert (!IS_SPECULATION_CHECK_P (check) && !ORIG_PAT (check)
&& QUEUE_INDEX (check) == QUEUE_NOWHERE);
-
+
rec = BLOCK_FOR_INSN (check);
-
+
twin = emit_insn_before (copy_rtx (PATTERN (insn)), BB_END (rec));
extend_global (twin);
- copy_deps_list_change_con (INSN_RESOLVED_BACK_DEPS (twin),
- INSN_RESOLVED_BACK_DEPS (insn),
- twin);
+ sd_copy_back_deps (twin, insn, true);
if (sched_verbose && spec_info->dump)
/* INSN_BB (insn) isn't determined for twin insns yet.
@@ -3315,47 +3339,38 @@ add_to_speculative_block (rtx insn)
/* Add dependences between TWIN and all appropriate
instructions from REC. */
- do
- {
- add_back_forw_dep (twin, check, REG_DEP_TRUE, DEP_TRUE);
-
- do
- {
- link = DEP_LINK_NEXT (link);
+ FOR_EACH_DEP (insn, SD_LIST_SPEC_BACK, sd_it, dep)
+ {
+ rtx pro = DEP_PRO (dep);
- if (link != NULL)
- {
- check = DEP_LINK_PRO (link);
- if (BLOCK_FOR_INSN (check) == rec)
- break;
- }
- else
- break;
+ gcc_assert (DEP_TYPE (dep) == REG_DEP_TRUE);
+
+ /* INSN might have dependencies from the instructions from
+ several recovery blocks. At this iteration we process those
+ producers that reside in REC. */
+ if (BLOCK_FOR_INSN (pro) == rec)
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ init_dep (new_dep, pro, twin, REG_DEP_TRUE);
+ sd_add_dep (new_dep, false);
}
- while (1);
}
- while (link != NULL);
- process_insn_forw_deps_be_in_spec (INSN_FORW_DEPS (insn), twin, ts);
+ process_insn_forw_deps_be_in_spec (insn, twin, ts);
/* Remove all dependencies between INSN and insns in REC. */
- for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
+ for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
{
- check = DEP_LINK_PRO (link);
+ rtx pro = DEP_PRO (dep);
- if (BLOCK_FOR_INSN (check) == rec)
- {
- delete_back_forw_dep (link);
-
- /* Restart search. */
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
- }
+ if (BLOCK_FOR_INSN (pro) == rec)
+ sd_delete_dep (sd_it);
else
- /* Continue search. */
- link = DEP_LINK_NEXT (link);
+ sd_iterator_next (&sd_it);
}
}
- while (!deps_list_empty_p (INSN_BACK_DEPS (insn)));
/* We couldn't have added the dependencies between INSN and TWINS earlier
because that would make TWINS appear in the INSN_BACK_DEPS (INSN). */
@@ -3364,7 +3379,13 @@ add_to_speculative_block (rtx insn)
rtx twin;
twin = XEXP (twins, 0);
- add_back_forw_dep (twin, insn, REG_DEP_OUTPUT, DEP_OUTPUT);
+
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
+ sd_add_dep (new_dep, false);
+ }
twin = XEXP (twins, 1);
free_INSN_LIST_node (twins);
@@ -3520,7 +3541,8 @@ create_recovery_block (void)
rtx barrier;
basic_block rec;
- added_recovery_block_p = true;
+ haifa_recovery_block_was_added_during_current_schedule_block_p = true;
+ haifa_recovery_block_was_added_during_scheduling_p = true;
if (!before_recovery)
init_before_recovery ();
@@ -3554,8 +3576,10 @@ create_check_block_twin (rtx insn, bool
{
basic_block rec;
rtx label, check, twin;
- dep_link_t link;
ds_t fs;
+ sd_iterator_def sd_it;
+ dep_t dep;
+ dep_def _new_dep, *new_dep = &_new_dep;
gcc_assert (ORIG_PAT (insn)
&& (!mutate_p
@@ -3604,14 +3628,17 @@ create_check_block_twin (rtx insn, bool
in the recovery block). */
if (rec != EXIT_BLOCK_PTR)
{
- FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (insn))
- if ((DEP_LINK_STATUS (link) & DEP_OUTPUT) != 0)
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ FOR_EACH_DEP (insn, SD_LIST_RES_BACK, sd_it, dep)
+ if ((DEP_STATUS (dep) & DEP_OUTPUT) != 0)
{
- struct _dep _dep, *dep = &_dep;
+ struct _dep _dep2, *dep2 = &_dep2;
- init_dep (dep, DEP_LINK_PRO (link), check, REG_DEP_TRUE);
+ init_dep (dep2, DEP_PRO (dep), check, REG_DEP_TRUE);
- add_back_dep_to_deps_list (INSN_RESOLVED_BACK_DEPS (check), dep);
+ sd_add_dep (dep2, true);
}
twin = emit_insn_after (ORIG_PAT (insn), BB_END (rec));
@@ -3632,9 +3659,9 @@ create_check_block_twin (rtx insn, bool
(TRUE | OUTPUT). */
}
- copy_deps_list_change_con (INSN_RESOLVED_BACK_DEPS (twin),
- INSN_RESOLVED_BACK_DEPS (insn),
- twin);
+ /* Copy all resolved back dependencies of INSN to TWIN. This will
+ provide correct value for INSN_TICK (TWIN). */
+ sd_copy_back_deps (twin, insn, true);
if (rec != EXIT_BLOCK_PTR)
/* In case of branchy check, fix CFG. */
@@ -3697,10 +3724,11 @@ create_check_block_twin (rtx insn, bool
/* Move backward dependences from INSN to CHECK and
move forward dependences from INSN to TWIN. */
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
+
+ /* First, create dependencies between INSN's producers and CHECK & TWIN. */
+ FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
{
- rtx pro = DEP_LINK_PRO (link);
- enum reg_note dk = DEP_LINK_KIND (link);
+ rtx pro = DEP_PRO (dep);
ds_t ds;
/* If BEGIN_DATA: [insn ~~TRUE~~> producer]:
@@ -3718,7 +3746,7 @@ create_check_block_twin (rtx insn, bool
twin ~~TRUE~~> producer
twin --ANTI--> check */
- ds = DEP_LINK_STATUS (link);
+ ds = DEP_STATUS (dep);
if (ds & BEGIN_SPEC)
{
@@ -3726,30 +3754,30 @@ create_check_block_twin (rtx insn, bool
ds &= ~BEGIN_SPEC;
}
+ init_dep_1 (new_dep, pro, check, DEP_TYPE (dep), ds);
+ sd_add_dep (new_dep, false);
+
if (rec != EXIT_BLOCK_PTR)
{
- add_back_forw_dep (check, pro, dk, ds);
- add_back_forw_dep (twin, pro, dk, ds);
+ DEP_CON (new_dep) = twin;
+ sd_add_dep (new_dep, false);
}
- else
- add_back_forw_dep (check, pro, dk, ds);
}
- for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
- if ((DEP_LINK_STATUS (link) & BEGIN_SPEC)
- || mutate_p)
- /* We can delete this dep only if we totally overcome it with
- BEGIN_SPECULATION. */
- {
- delete_back_forw_dep (link);
-
- /* Restart search. */
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
- }
- else
- /* Continue search. */
- link = DEP_LINK_NEXT (link);
+ /* Second, remove backward dependencies of INSN. */
+ for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
+ {
+ if ((DEP_STATUS (dep) & BEGIN_SPEC)
+ || mutate_p)
+ /* We can delete this dep because we overcome it with
+ BEGIN_SPECULATION. */
+ sd_delete_dep (sd_it);
+ else
+ sd_iterator_next (&sd_it);
+ }
+ /* Future Speculations. Determine what BE_IN speculations will be like. */
fs = 0;
/* Fields (DONE_SPEC (x) & BEGIN_SPEC) and CHECK_SPEC (x) are set only
@@ -3764,16 +3792,19 @@ create_check_block_twin (rtx insn, bool
DONE_SPEC (insn) = ts & BEGIN_SPEC;
CHECK_SPEC (check) = ts & BEGIN_SPEC;
+ /* Luckiness of future speculations solely depends upon initial
+ BEGIN speculation. */
if (ts & BEGIN_DATA)
fs = set_dep_weak (fs, BE_IN_DATA, get_dep_weak (ts, BEGIN_DATA));
if (ts & BEGIN_CONTROL)
- fs = set_dep_weak (fs, BE_IN_CONTROL, get_dep_weak (ts, BEGIN_CONTROL));
+ fs = set_dep_weak (fs, BE_IN_CONTROL,
+ get_dep_weak (ts, BEGIN_CONTROL));
}
else
CHECK_SPEC (check) = CHECK_SPEC (insn);
/* Future speculations: call the helper. */
- process_insn_forw_deps_be_in_spec (INSN_FORW_DEPS (insn), twin, fs);
+ process_insn_forw_deps_be_in_spec (insn, twin, fs);
if (rec != EXIT_BLOCK_PTR)
{
@@ -3783,35 +3814,45 @@ create_check_block_twin (rtx insn, bool
if (!mutate_p)
{
- add_back_forw_dep (check, insn, REG_DEP_TRUE, DEP_TRUE);
- add_back_forw_dep (twin, insn, REG_DEP_OUTPUT, DEP_OUTPUT);
+ init_dep (new_dep, insn, check, REG_DEP_TRUE);
+ sd_add_dep (new_dep, false);
+
+ init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
+ sd_add_dep (new_dep, false);
}
else
{
- dep_link_t link;
-
if (spec_info->dump)
fprintf (spec_info->dump, ";;\t\tRemoved simple check : %s\n",
(*current_sched_info->print_insn) (insn, 0));
- /* Remove all forward dependencies of the INSN. */
- link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
- while (link != NULL)
- {
- delete_back_forw_dep (link);
- link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
- }
+ /* Remove all dependencies of the INSN. */
+ {
+ sd_it = sd_iterator_start (insn, (SD_LIST_FORW
+ | SD_LIST_BACK
+ | SD_LIST_RES_BACK));
+ while (sd_iterator_cond (&sd_it, &dep))
+ sd_delete_dep (sd_it);
+ }
+ /* If the former check (INSN) was already moved to the ready (or queue)
+ list, add new check (CHECK) there too. */
if (QUEUE_INDEX (insn) != QUEUE_NOWHERE)
try_ready (check);
+ /* Remove old check from instruction stream and free its
+ data. */
sched_remove_insn (insn);
}
- add_back_forw_dep (twin, check, REG_DEP_ANTI, DEP_ANTI);
+ init_dep (new_dep, check, twin, REG_DEP_ANTI);
+ sd_add_dep (new_dep, false);
}
else
- add_back_forw_dep (check, insn, REG_DEP_TRUE, DEP_TRUE | DEP_OUTPUT);
+ {
+ init_dep_1 (new_dep, insn, check, REG_DEP_TRUE, DEP_TRUE | DEP_OUTPUT);
+ sd_add_dep (new_dep, false);
+ }
if (!mutate_p)
/* Fix priorities. If MUTATE_P is nonzero, this is not necessary,
@@ -3831,10 +3872,9 @@ create_check_block_twin (rtx insn, bool
static void
fix_recovery_deps (basic_block rec)
{
- dep_link_t link;
rtx note, insn, jump, ready_list = 0;
bitmap_head in_ready;
- rtx link1;
+ rtx link;
bitmap_initialize (&in_ready, 0);
@@ -3846,32 +3886,30 @@ fix_recovery_deps (basic_block rec)
insn = PREV_INSN (insn);
do
- {
- for (link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn)); link != NULL;)
- {
- rtx consumer;
+ {
+ sd_iterator_def sd_it;
+ dep_t dep;
- consumer = DEP_LINK_CON (link);
+ for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
+ sd_iterator_cond (&sd_it, &dep);)
+ {
+ rtx consumer = DEP_CON (dep);
if (BLOCK_FOR_INSN (consumer) != rec)
{
- delete_back_forw_dep (link);
+ sd_delete_dep (sd_it);
if (!bitmap_bit_p (&in_ready, INSN_LUID (consumer)))
{
ready_list = alloc_INSN_LIST (consumer, ready_list);
bitmap_set_bit (&in_ready, INSN_LUID (consumer));
}
-
- /* Restart search. */
- link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
}
else
{
- gcc_assert ((DEP_LINK_STATUS (link) & DEP_TYPES) == DEP_TRUE);
+ gcc_assert ((DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
- /* Continue search. */
- link = DEP_LINK_NEXT (link);
+ sd_iterator_next (&sd_it);
}
}
@@ -3882,8 +3920,8 @@ fix_recovery_deps (basic_block rec)
bitmap_clear (&in_ready);
/* Try to add instructions to the ready or queue list. */
- for (link1 = ready_list; link1; link1 = XEXP (link1, 1))
- try_ready (XEXP (link1, 0));
+ for (link = ready_list; link; link = XEXP (link, 1))
+ try_ready (XEXP (link, 0));
free_INSN_LIST_list (&ready_list);
/* Fixing jump's dependences. */
@@ -3912,6 +3950,31 @@ change_pattern (rtx insn, rtx new_pat)
dfa_clear_single_insn_cache (insn);
}
+/* Return true if INSN can potentially be speculated with type DS. */
+bool
+sched_insn_is_legitimate_for_speculation_p (rtx insn, ds_t ds)
+{
+ if (HAS_INTERNAL_DEP (insn))
+ return false;
+
+ if (!NONJUMP_INSN_P (insn))
+ return false;
+
+ if (SCHED_GROUP_P (insn))
+ return false;
+
+ if (IS_SPECULATION_CHECK_P (insn))
+ return false;
+
+ if (side_effects_p (PATTERN (insn)))
+ return false;
+
+ if ((ds & BE_IN_SPEC)
+ && may_trap_p (PATTERN (insn)))
+ return false;
+
+ return true;
+}
/* -1 - can't speculate,
0 - for speculation with REQUEST mode it is OK to use
@@ -3921,25 +3984,15 @@ static int
speculate_insn (rtx insn, ds_t request, rtx *new_pat)
{
gcc_assert (current_sched_info->flags & DO_SPECULATION
- && (request & SPECULATIVE));
+ && (request & SPECULATIVE)
+ && sched_insn_is_legitimate_for_speculation_p (insn, request));
- if (!NONJUMP_INSN_P (insn)
- || HAS_INTERNAL_DEP (insn)
- || SCHED_GROUP_P (insn)
- || side_effects_p (PATTERN (insn))
- || (request & spec_info->mask) != request)
+ if ((request & spec_info->mask) != request)
return -1;
-
- gcc_assert (!IS_SPECULATION_CHECK_P (insn));
- if (request & BE_IN_SPEC)
- {
- if (may_trap_p (PATTERN (insn)))
- return -1;
-
- if (!(request & BEGIN_SPEC))
- return 0;
- }
+ if (request & BE_IN_SPEC
+ && !(request & BEGIN_SPEC))
+ return 0;
return targetm.sched.speculate_insn (insn, request & BEGIN_SPEC, new_pat);
}
@@ -4303,6 +4356,8 @@ free_glat (void)
static void
sched_remove_insn (rtx insn)
{
+ sd_finish_insn (insn);
+
change_queue_index (insn, QUEUE_NOWHERE);
current_sched_info->add_remove_insn (insn, 1);
remove_insn (insn);
@@ -4314,14 +4369,14 @@ sched_remove_insn (rtx insn)
static void
clear_priorities (rtx insn, rtx_vec_t *roots_ptr)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
bool insn_is_root_p = true;
gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
{
- dep_t dep = DEP_LINK_DEP (link);
rtx pro = DEP_PRO (dep);
if (INSN_PRIORITY_KNOWN (pro) >= 0
@@ -4366,12 +4421,17 @@ add_jump_dependencies (rtx insn, rtx jum
if (insn == jump)
break;
- if (deps_list_empty_p (INSN_FORW_DEPS (insn)))
- add_back_forw_dep (jump, insn, REG_DEP_ANTI, DEP_ANTI);
+ if (sd_lists_empty_p (insn, SD_LIST_FORW))
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ init_dep (new_dep, insn, jump, REG_DEP_ANTI);
+ sd_add_dep (new_dep, false);
+ }
}
while (1);
- gcc_assert (!deps_list_empty_p (INSN_BACK_DEPS (jump)));
+ gcc_assert (!sd_lists_empty_p (jump, SD_LIST_BACK));
}
/* Return the NOTE_INSN_BASIC_BLOCK of BB. */
@@ -4389,36 +4449,6 @@ bb_note (basic_block bb)
}
#ifdef ENABLE_CHECKING
-extern void debug_spec_status (ds_t);
-
-/* Dump information about the dependence status S. */
-void
-debug_spec_status (ds_t s)
-{
- FILE *f = stderr;
-
- if (s & BEGIN_DATA)
- fprintf (f, "BEGIN_DATA: %d; ", get_dep_weak (s, BEGIN_DATA));
- if (s & BE_IN_DATA)
- fprintf (f, "BE_IN_DATA: %d; ", get_dep_weak (s, BE_IN_DATA));
- if (s & BEGIN_CONTROL)
- fprintf (f, "BEGIN_CONTROL: %d; ", get_dep_weak (s, BEGIN_CONTROL));
- if (s & BE_IN_CONTROL)
- fprintf (f, "BE_IN_CONTROL: %d; ", get_dep_weak (s, BE_IN_CONTROL));
-
- if (s & HARD_DEP)
- fprintf (f, "HARD_DEP; ");
-
- if (s & DEP_TRUE)
- fprintf (f, "DEP_TRUE; ");
- if (s & DEP_ANTI)
- fprintf (f, "DEP_ANTI; ");
- if (s & DEP_OUTPUT)
- fprintf (f, "DEP_OUTPUT; ");
-
- fprintf (f, "\n");
-}
-
/* Helper function for check_cfg.
Return nonzero, if edge vector pointed to by EL has edge with TYPE in
its flags. */
--- sched-deps.c (/mirror/endeed/calc-priorities/gcc) (revision 1077)
+++ sched-deps.c (/mirror/endeed/fdl-pool/gcc) (revision 1077)
@@ -86,12 +86,12 @@ dk_to_ds (enum reg_note dk)
/* Functions to operate with dependence information container - dep_t. */
/* Init DEP with the arguments. */
-static void
-init_dep_1 (dep_t dep, rtx pro, rtx con, enum reg_note kind, ds_t ds)
+void
+init_dep_1 (dep_t dep, rtx pro, rtx con, enum reg_note type, ds_t ds)
{
DEP_PRO (dep) = pro;
DEP_CON (dep) = con;
- DEP_KIND (dep) = kind;
+ DEP_TYPE (dep) = type;
DEP_STATUS (dep) = ds;
}
@@ -103,7 +103,7 @@ init_dep (dep_t dep, rtx pro, rtx con, e
{
ds_t ds;
- if ((current_sched_info->flags & USE_DEPS_LIST) != 0)
+ if ((current_sched_info->flags & USE_DEPS_LIST))
ds = dk_to_ds (kind);
else
ds = -1;
@@ -118,19 +118,56 @@ copy_dep (dep_t to, dep_t from)
memcpy (to, from, sizeof (*to));
}
-/* Functions to operate with a single link from the dependencies lists -
- dep_link_t. */
+static void dump_ds (FILE *, ds_t);
-/* Return true if dep_link L is consistent. */
-static bool
-dep_link_consistent_p (dep_link_t l)
+/* Define flags for dump_dep (). */
+
+#define DUMP_DEP_PRO (2)
+#define DUMP_DEP_CON (4)
+#define DUMP_DEP_STATUS (8)
+
+#define DUMP_DEP_ALL (DUMP_DEP_PRO | DUMP_DEP_CON | DUMP_DEP_STATUS)
+
+/* Dump DEP to DUMP.
+ FLAGS is a bit mask specifying what information about DEP needs
+ to be printed. If FLAGS has the very first bit set then dump
+ all fields of DEP. */
+static void
+dump_dep (FILE *dump, dep_t dep, int flags)
{
- dep_link_t next = DEP_LINK_NEXT (l);
+ if (flags & 1)
+ flags |= DUMP_DEP_ALL;
- return (next == NULL
- || &DEP_LINK_NEXT (l) == DEP_LINK_PREV_NEXTP (next));
+ fprintf (dump, "<");
+
+ if (flags & DUMP_DEP_PRO)
+ fprintf (dump, "%d; ", INSN_UID (DEP_PRO (dep)));
+
+ if (flags & DUMP_DEP_CON)
+ fprintf (dump, "%d; ", INSN_UID (DEP_CON (dep)));
+
+ if (flags & DUMP_DEP_STATUS)
+ {
+ if (current_sched_info->flags & USE_DEPS_LIST)
+ dump_ds (dump, DEP_STATUS (dep));
+ }
+
+ fprintf (dump, ">");
+}
+
+/* Default flags for dump_dep (). */
+static int dump_dep_flags = (DUMP_DEP_PRO | DUMP_DEP_CON);
+
+/* Dump all fields of DEP to STDERR. */
+void
+debug_dep (dep_t dep)
+{
+ dump_dep (stderr, dep, 1);
}
+/* Functions to operate with a single link from the dependencies lists -
+ dep_link_t. */
+
/* Attach L to appear after link X whose &DEP_LINK_NEXT (X) is given by
PREV_NEXT_P. */
static void
@@ -162,6 +199,8 @@ static void
add_to_deps_list (dep_link_t link, deps_list_t l)
{
attach_dep_link (link, &DEPS_LIST_FIRST (l));
+
+ ++DEPS_LIST_N_LINKS (l);
}
/* Detach dep_link L from the list. */
@@ -176,232 +215,142 @@ detach_dep_link (dep_link_t l)
if (next != NULL)
DEP_LINK_PREV_NEXTP (next) = prev_nextp;
- /* Though this is property is not used anywhere but in the assert in
- attach_dep_link (), this can prevent latent errors. */
DEP_LINK_PREV_NEXTP (l) = NULL;
DEP_LINK_NEXT (l) = NULL;
}
-/* Move LINK from whatever list it is now to L. */
-void
-move_dep_link (dep_link_t link, deps_list_t l)
+/* Remove link LINK from list LIST. */
+static void
+remove_from_deps_list (dep_link_t link, deps_list_t list)
{
detach_dep_link (link);
- add_to_deps_list (link, l);
+
+ --DEPS_LIST_N_LINKS (list);
}
-/* Check L's and its successors' consistency.
- This is, potentially, an expensive check, hence it should be guarded by
- ENABLE_CHECKING at all times. */
-static bool
-dep_links_consistent_p (dep_link_t l)
+/* Move link LINK from list FROM to list TO. */
+static void
+move_dep_link (dep_link_t link, deps_list_t from, deps_list_t to)
{
- while (l != NULL)
- {
- if (dep_link_consistent_p (l))
- l = DEP_LINK_NEXT (l);
- else
- return false;
- }
-
- return true;
+ remove_from_deps_list (link, from);
+ add_to_deps_list (link, to);
}
-/* Dump dep_nodes starting from l. */
-static void
-dump_dep_links (FILE *dump, dep_link_t l)
+/* Return true if LINK is not attached to any list. */
+static bool
+dep_link_is_detached_p (dep_link_t link)
{
- while (l != NULL)
+ if (DEP_LINK_PREV_NEXTP (link) == NULL)
{
- dep_t d = DEP_LINK_DEP (l);
-
- fprintf (dump, "%d%c>%d ", INSN_UID (DEP_PRO (d)),
- dep_link_consistent_p (l) ? '-' : '!', INSN_UID (DEP_CON (d)));
+ gcc_assert (DEP_LINK_NEXT (link) == NULL);
- l = DEP_LINK_NEXT (l);
+ return true;
}
- fprintf (dump, "\n");
-}
-
-/* Dump dep_nodes starting from L to stderr. */
-void
-debug_dep_links (dep_link_t l)
-{
- dump_dep_links (stderr, l);
+ return false;
}
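The DEP_LINK_PREV_NEXTP convention that dep_link_is_detached_p () relies on — each link stores the address of the pointer that points at it, so a link can be unhooked without knowing its list head — can be sketched in isolation. The demo_* names below are hypothetical, simplified stand-ins for the real dep_link_t machinery:

```c
#include <stddef.h>

/* Simplified link: NEXT plus the address of whatever points at us
   (the predecessor's next field, or the list head itself).  */
struct demo_link
{
  struct demo_link *next;
  struct demo_link **prev_nextp;
};

/* Attach L at the front of the list rooted at *HEAD.  */
void
demo_attach (struct demo_link *l, struct demo_link **head)
{
  l->next = *head;
  if (l->next)
    l->next->prev_nextp = &l->next;
  l->prev_nextp = head;
  *head = l;
}

/* Detach L from whatever list it is on; no head pointer needed.
   Clearing both fields marks it detached, cf. dep_link_is_detached_p.  */
void
demo_detach (struct demo_link *l)
{
  *l->prev_nextp = l->next;
  if (l->next)
    l->next->prev_nextp = l->prev_nextp;
  l->next = NULL;
  l->prev_nextp = NULL;
}

int
demo_detached_p (struct demo_link *l)
{
  return l->prev_nextp == NULL;
}
```

Detaching in O(1) without the head is what lets sd_delete_dep () remove a dependence given only an iterator.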
-/* Obstack to allocate dep_nodes and deps_lists on. */
-static struct obstack deps_obstack;
+/* Pool to hold all dependency nodes (dep_node_t). */
+static alloc_pool dn_pool;
-/* Obstack to hold forward dependencies lists (deps_list_t). */
-static struct obstack *dl_obstack = &deps_obstack;
+/* Number of dep_nodes out there. */
+static int dn_pool_diff = 0;
-/* Obstack to hold all dependency nodes (dep_node_t). */
-static struct obstack *dn_obstack = &deps_obstack;
+/* Create a dep_node. */
+static dep_node_t
+create_dep_node (void)
+{
+ dep_node_t n = (dep_node_t) pool_alloc (dn_pool);
+ dep_link_t back = DEP_NODE_BACK (n);
+ dep_link_t forw = DEP_NODE_FORW (n);
-/* Functions to operate with dependences lists - deps_list_t. */
+ DEP_LINK_NODE (back) = n;
+ DEP_LINK_NEXT (back) = NULL;
+ DEP_LINK_PREV_NEXTP (back) = NULL;
-/* Allocate deps_list.
+ DEP_LINK_NODE (forw) = n;
+ DEP_LINK_NEXT (forw) = NULL;
+ DEP_LINK_PREV_NEXTP (forw) = NULL;
- If ON_OBSTACK_P is true, allocate the list on the obstack. This is done for
- INSN_FORW_DEPS lists because they should live till the end of scheduling.
+ ++dn_pool_diff;
- INSN_BACK_DEPS and INSN_RESOLVED_BACK_DEPS lists are allocated on the free
- store and are being freed in haifa-sched.c: schedule_insn (). */
-static deps_list_t
-alloc_deps_list (bool on_obstack_p)
-{
- if (on_obstack_p)
- return obstack_alloc (dl_obstack, sizeof (struct _deps_list));
- else
- return xmalloc (sizeof (struct _deps_list));
+ return n;
}
-/* Initialize deps_list L. */
+/* Delete dep_node N. N must not be connected to any deps_list. */
static void
-init_deps_list (deps_list_t l)
+delete_dep_node (dep_node_t n)
{
- DEPS_LIST_FIRST (l) = NULL;
-}
+ gcc_assert (dep_link_is_detached_p (DEP_NODE_BACK (n))
+ && dep_link_is_detached_p (DEP_NODE_FORW (n)));
-/* Create (allocate and init) deps_list.
- The meaning of ON_OBSTACK_P is the same as in alloc_deps_list (). */
-deps_list_t
-create_deps_list (bool on_obstack_p)
-{
- deps_list_t l = alloc_deps_list (on_obstack_p);
+ --dn_pool_diff;
- init_deps_list (l);
- return l;
+ pool_free (dn_pool, n);
}
-/* Free dep_data_nodes that present in L. */
-static void
-clear_deps_list (deps_list_t l)
-{
- /* All dep_nodes are allocated on the dn_obstack. They'll be freed with
- the obstack. */
-
- DEPS_LIST_FIRST (l) = NULL;
-}
+/* Pool to hold dependencies lists (deps_list_t). */
+static alloc_pool dl_pool;
-/* Free deps_list L. */
-void
-free_deps_list (deps_list_t l)
-{
- gcc_assert (deps_list_empty_p (l));
- free (l);
-}
+/* Number of deps_lists out there. */
+static int dl_pool_diff = 0;
-/* Delete (clear and free) deps_list L. */
-void
-delete_deps_list (deps_list_t l)
-{
- clear_deps_list (l);
- free_deps_list (l);
-}
+/* Functions to operate with dependences lists - deps_list_t. */
-/* Return true if L is empty. */
-bool
+/* Return true if list L is empty. */
+static bool
deps_list_empty_p (deps_list_t l)
{
- return DEPS_LIST_FIRST (l) == NULL;
+ return DEPS_LIST_N_LINKS (l) == 0;
}
-/* Check L's consistency.
- This is, potentially, an expensive check, hence it should be guarded by
- ENABLE_CHECKING at all times. */
-static bool
-deps_list_consistent_p (deps_list_t l)
+/* Create a new deps_list. */
+static deps_list_t
+create_deps_list (void)
{
- dep_link_t first = DEPS_LIST_FIRST (l);
+ deps_list_t l = (deps_list_t) pool_alloc (dl_pool);
- return (first == NULL
- || (&DEPS_LIST_FIRST (l) == DEP_LINK_PREV_NEXTP (first)
- && dep_links_consistent_p (first)));
-}
-
-/* Dump L to F. */
-static void
-dump_deps_list (FILE *f, deps_list_t l)
-{
- dump_dep_links (f, DEPS_LIST_FIRST (l));
-}
+ DEPS_LIST_FIRST (l) = NULL;
+ DEPS_LIST_N_LINKS (l) = 0;
-/* Dump L to STDERR. */
-void
-debug_deps_list (deps_list_t l)
-{
- dump_deps_list (stderr, l);
+ ++dl_pool_diff;
+ return l;
}
-/* Add a dependency described by DEP to the list L.
- L should be either INSN_BACK_DEPS or INSN_RESOLVED_BACK_DEPS. */
-void
-add_back_dep_to_deps_list (deps_list_t l, dep_t dep_from)
+/* Free deps_list L. */
+static void
+free_deps_list (deps_list_t l)
{
- dep_node_t n = (dep_node_t) obstack_alloc (dn_obstack,
- sizeof (*n));
- dep_t dep_to = DEP_NODE_DEP (n);
- dep_link_t back = DEP_NODE_BACK (n);
- dep_link_t forw = DEP_NODE_FORW (n);
-
- copy_dep (dep_to, dep_from);
-
- DEP_LINK_NODE (back) = n;
- DEP_LINK_NODE (forw) = n;
+ gcc_assert (deps_list_empty_p (l));
- /* There is no particular need to initialize these four fields except to make
- assert in attach_dep_link () happy. */
- DEP_LINK_NEXT (back) = NULL;
- DEP_LINK_PREV_NEXTP (back) = NULL;
- DEP_LINK_NEXT (forw) = NULL;
- DEP_LINK_PREV_NEXTP (forw) = NULL;
+ --dl_pool_diff;
- add_to_deps_list (back, l);
+ pool_free (dl_pool, l);
}
-/* Find the dep_link with producer PRO in deps_list L. */
-dep_link_t
-find_link_by_pro_in_deps_list (deps_list_t l, rtx pro)
+/* Return true if there are no dep_nodes or deps_lists out there.
+ After the region is scheduled all the dependency nodes and lists
+ should [generally] be returned to the pool. */
+bool
+deps_pools_are_empty_p (void)
{
- dep_link_t link;
-
- FOR_EACH_DEP_LINK (link, l)
- if (DEP_LINK_PRO (link) == pro)
- return link;
-
- return NULL;
-}
-
-/* Find the dep_link with consumer CON in deps_list L. */
-dep_link_t
-find_link_by_con_in_deps_list (deps_list_t l, rtx con)
-{
- dep_link_t link;
-
- FOR_EACH_DEP_LINK (link, l)
- if (DEP_LINK_CON (link) == con)
- return link;
-
- return NULL;
+ return dn_pool_diff == 0 && dl_pool_diff == 0;
}
-/* Make a copy of FROM in TO with substituting consumer with CON.
- TO and FROM should be RESOLVED_BACK_DEPS lists. */
-void
-copy_deps_list_change_con (deps_list_t to, deps_list_t from, rtx con)
+/* Remove all elements from L. */
+static void
+clear_deps_list (deps_list_t l)
{
- dep_link_t l;
+ do
+ {
+ dep_link_t link = DEPS_LIST_FIRST (l);
- gcc_assert (deps_list_empty_p (to));
+ if (link == NULL)
+ break;
- FOR_EACH_DEP_LINK (l, from)
- {
- add_back_dep_to_deps_list (to, DEP_LINK_DEP (l));
- DEP_LINK_CON (DEPS_LIST_FIRST (to)) = con;
+ remove_from_deps_list (link, l);
}
+ while (1);
}
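The leak-accounting idea behind dn_pool_diff / dl_pool_diff — bump a counter on every create, drop it on every delete, and assert at end of region that everything was returned — can be shown with a minimal sketch. The demo_* names are illustrative; the real code uses alloc_pool, not malloc:

```c
#include <stdlib.h>

/* Number of outstanding nodes, mirroring dn_pool_diff.  */
static int demo_pool_diff = 0;

struct demo_node { int payload; };

/* Allocate a node and account for it.  */
struct demo_node *
demo_create_node (void)
{
  ++demo_pool_diff;
  return (struct demo_node *) malloc (sizeof (struct demo_node));
}

/* Return a node to the allocator and account for it.  */
void
demo_delete_node (struct demo_node *n)
{
  --demo_pool_diff;
  free (n);
}

/* Mirror of deps_pools_are_empty_p (): true iff nothing is outstanding,
   i.e. the region freed every node it created.  */
int
demo_pool_is_empty_p (void)
{
  return demo_pool_diff == 0;
}
```

With this check, per-region freeing keeps memory consumption bounded by the largest region, as the cover letter describes.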
static regset reg_pending_sets;
@@ -439,14 +388,6 @@ static bitmap_head *anti_dependency_cach
static bitmap_head *spec_dependency_cache;
static int cache_size;
-/* To speed up checking consistency of formed forward insn
- dependencies we use the following cache. Another possible solution
- could be switching off checking duplication of insns in forward
- dependencies. */
-#ifdef ENABLE_CHECKING
-static bitmap_head *forward_dependency_cache;
-#endif
-
static int deps_may_trap_p (rtx);
static void add_dependence_list (rtx, rtx, int, enum reg_note);
static void add_dependence_list_and_free (rtx, rtx *, int, enum reg_note);
@@ -461,19 +402,14 @@ static void sched_analyze_insn (struct d
static rtx sched_get_condition (rtx);
static int conditions_mutex_p (rtx, rtx);
-static enum DEPS_ADJUST_RESULT maybe_add_or_update_back_dep_1 (rtx, rtx,
- enum reg_note, ds_t, rtx, rtx, dep_link_t **);
-static enum DEPS_ADJUST_RESULT add_or_update_back_dep_1 (rtx, rtx,
- enum reg_note, ds_t, rtx, rtx, dep_link_t **);
-static void add_back_dep (rtx, rtx, enum reg_note, ds_t);
-
-static void adjust_add_sorted_back_dep (rtx, dep_link_t, dep_link_t *);
-static void adjust_back_add_forw_dep (rtx, dep_link_t *);
-static void delete_forw_dep (dep_link_t);
+static enum DEPS_ADJUST_RESULT maybe_add_or_update_dep_1 (dep_t, bool,
+ rtx, rtx);
+static enum DEPS_ADJUST_RESULT add_or_update_dep_1 (dep_t, bool, rtx, rtx);
+
static dw_t estimate_dep_weak (rtx, rtx);
#ifdef INSN_SCHEDULING
#ifdef ENABLE_CHECKING
-static void check_dep_status (enum reg_note, ds_t, bool);
+static void check_dep (dep_t, bool);
#endif
#endif
\f
@@ -569,23 +505,218 @@ sched_insns_conditions_mutex_p (rtx insn
return false;
}
\f
-/* Add ELEM wrapped in an dep_link with reg note kind DEP_TYPE to the
- INSN_BACK_DEPS (INSN), if it is not already there. DEP_TYPE indicates the
- type of dependence that this link represents. DS, if nonzero,
- indicates speculations, through which this dependence can be overcome.
- MEM1 and MEM2, if non-null, corresponds to memory locations in case of
- data speculation. The function returns a value indicating if an old entry
- has been changed or a new entry has been added to insn's backward deps.
- In case of changed entry CHANGED_LINKPP sets to its address.
- See also the definition of enum DEPS_ADJUST_RESULT in sched-int.h.
- Actual manipulation of dependence data structures is performed in
- add_or_update_back_dep_1. */
+/* Initialize LIST_PTR to point to one of the lists present in TYPES_PTR,
+ initialize RESOLVED_P_PTR with true if that list consists of resolved deps,
+ and remove the type of the list returned through LIST_PTR from TYPES_PTR.
+ This function is used to switch sd_iterator to the next list.
+ !!! For internal use only. Might consider moving it to sched-int.h. */
+void
+sd_next_list (rtx insn, sd_list_types_def *types_ptr,
+ deps_list_t *list_ptr, bool *resolved_p_ptr)
+{
+ sd_list_types_def types = *types_ptr;
+
+ if (types & SD_LIST_HARD_BACK)
+ {
+ *list_ptr = INSN_HARD_BACK_DEPS (insn);
+ *resolved_p_ptr = false;
+ *types_ptr = types & ~SD_LIST_HARD_BACK;
+ }
+ else if (types & SD_LIST_SPEC_BACK)
+ {
+ *list_ptr = INSN_SPEC_BACK_DEPS (insn);
+ *resolved_p_ptr = false;
+ *types_ptr = types & ~SD_LIST_SPEC_BACK;
+ }
+ else if (types & SD_LIST_FORW)
+ {
+ *list_ptr = INSN_FORW_DEPS (insn);
+ *resolved_p_ptr = false;
+ *types_ptr = types & ~SD_LIST_FORW;
+ }
+ else if (types & SD_LIST_RES_BACK)
+ {
+ *list_ptr = INSN_RESOLVED_BACK_DEPS (insn);
+ *resolved_p_ptr = true;
+ *types_ptr = types & ~SD_LIST_RES_BACK;
+ }
+ else if (types & SD_LIST_RES_FORW)
+ {
+ *list_ptr = INSN_RESOLVED_FORW_DEPS (insn);
+ *resolved_p_ptr = true;
+ *types_ptr = types & ~SD_LIST_RES_FORW;
+ }
+ else
+ {
+ *list_ptr = NULL;
+ *resolved_p_ptr = false;
+ *types_ptr = SD_LIST_NONE;
+ }
+}
+
+/* Return the summary size of INSN's lists defined by LIST_TYPES. */
+int
+sd_lists_size (rtx insn, sd_list_types_def list_types)
+{
+ int size = 0;
+
+ while (list_types != SD_LIST_NONE)
+ {
+ deps_list_t list;
+ bool resolved_p;
+
+ sd_next_list (insn, &list_types, &list, &resolved_p);
+ size += DEPS_LIST_N_LINKS (list);
+ }
+
+ return size;
+}
+
+/* Return true if INSN's lists defined by LIST_TYPES are all empty. */
+bool
+sd_lists_empty_p (rtx insn, sd_list_types_def list_types)
+{
+ return sd_lists_size (insn, list_types) == 0;
+}
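The bit-mask dispatch used by sd_next_list () / sd_lists_size () — OR together several SD_LIST_* flags, then peel one list type off per iteration until none remain — can be modeled with plain ints. This is a hedged sketch with hypothetical DEMO_* stand-ins, not the real deps_list_t types:

```c
/* Simplified stand-ins for the SD_LIST_* flag bits.  */
enum
{
  DEMO_LIST_NONE      = 0,
  DEMO_LIST_HARD_BACK = 1,
  DEMO_LIST_SPEC_BACK = 2,
  DEMO_LIST_FORW      = 4
};

/* Per-insn list lengths, one per flag.  */
struct demo_insn { int n_hard_back, n_spec_back, n_forw; };

/* Peel one list type off *TYPES, as sd_next_list () does: return that
   list's length and clear its bit so the caller can loop to exhaustion.  */
int
demo_next_list_size (const struct demo_insn *insn, int *types)
{
  if (*types & DEMO_LIST_HARD_BACK)
    { *types &= ~DEMO_LIST_HARD_BACK; return insn->n_hard_back; }
  if (*types & DEMO_LIST_SPEC_BACK)
    { *types &= ~DEMO_LIST_SPEC_BACK; return insn->n_spec_back; }
  if (*types & DEMO_LIST_FORW)
    { *types &= ~DEMO_LIST_FORW; return insn->n_forw; }
  *types = DEMO_LIST_NONE;
  return 0;
}

/* Summary size over a combination of lists, mirroring sd_lists_size ().  */
int
demo_lists_size (const struct demo_insn *insn, int types)
{
  int size = 0;
  while (types != DEMO_LIST_NONE)
    size += demo_next_list_size (insn, &types);
  return size;
}
```

Because each list carries an N_LINKS counter, sd_lists_empty_p () reduces to a sum of cached lengths rather than a walk.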
+
+/* Initialize data for INSN. */
+void
+sd_init_insn (rtx insn)
+{
+ INSN_HARD_BACK_DEPS (insn) = create_deps_list ();
+ INSN_SPEC_BACK_DEPS (insn) = create_deps_list ();
+ INSN_RESOLVED_BACK_DEPS (insn) = create_deps_list ();
+ INSN_FORW_DEPS (insn) = create_deps_list ();
+ INSN_RESOLVED_FORW_DEPS (insn) = create_deps_list ();
+
+ /* ??? It would be nice to allocate dependency caches here. */
+}
+
+/* Free data for INSN. */
+void
+sd_finish_insn (rtx insn)
+{
+ /* ??? It would be nice to deallocate dependency caches here. */
+
+ free_deps_list (INSN_HARD_BACK_DEPS (insn));
+ INSN_HARD_BACK_DEPS (insn) = NULL;
+
+ free_deps_list (INSN_SPEC_BACK_DEPS (insn));
+ INSN_SPEC_BACK_DEPS (insn) = NULL;
+
+ free_deps_list (INSN_RESOLVED_BACK_DEPS (insn));
+ INSN_RESOLVED_BACK_DEPS (insn) = NULL;
+
+ free_deps_list (INSN_FORW_DEPS (insn));
+ INSN_FORW_DEPS (insn) = NULL;
+
+ free_deps_list (INSN_RESOLVED_FORW_DEPS (insn));
+ INSN_RESOLVED_FORW_DEPS (insn) = NULL;
+}
+
+/* Find a dependency between producer PRO and consumer CON.
+ Search through resolved dependency lists if RESOLVED_P is true.
+ If no such dependency is found return NULL,
+ otherwise return the dependency and initialize SD_IT_PTR [if it is nonnull]
+ with an iterator pointing to it. */
+static dep_t
+sd_find_dep_between_no_cache (rtx pro, rtx con, bool resolved_p,
+ sd_iterator_def *sd_it_ptr)
+{
+ sd_list_types_def pro_list_type;
+ sd_list_types_def con_list_type;
+ sd_iterator_def sd_it;
+ dep_t dep;
+ bool found_p = false;
+
+ if (resolved_p)
+ {
+ pro_list_type = SD_LIST_RES_FORW;
+ con_list_type = SD_LIST_RES_BACK;
+ }
+ else
+ {
+ pro_list_type = SD_LIST_FORW;
+ con_list_type = SD_LIST_BACK;
+ }
+
+ /* Walk through either back list of INSN or forw list of ELEM
+ depending on which one is shorter. */
+ if (sd_lists_size (con, con_list_type) < sd_lists_size (pro, pro_list_type))
+ {
+ /* Find the dep_link with producer PRO in consumer's back_deps. */
+ FOR_EACH_DEP (con, con_list_type, sd_it, dep)
+ if (DEP_PRO (dep) == pro)
+ {
+ found_p = true;
+ break;
+ }
+ }
+ else
+ {
+ /* Find the dep_link with consumer CON in producer's forw_deps. */
+ FOR_EACH_DEP (pro, pro_list_type, sd_it, dep)
+ if (DEP_CON (dep) == con)
+ {
+ found_p = true;
+ break;
+ }
+ }
+
+ if (found_p)
+ {
+ if (sd_it_ptr != NULL)
+ *sd_it_ptr = sd_it;
+
+ return dep;
+ }
+
+ return NULL;
+}
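The compile-time win from item 1.2 of the cover letter — search whichever of the consumer's backward list or the producer's forward list is shorter — is easy to see on a flat model. The demo_* types below are hypothetical simplifications; the real code walks deps_list_t chains:

```c
#include <stdbool.h>

/* Flat stand-in for a dependency list: an array of partner-insn UIDs.  */
struct demo_list
{
  const int *uids;
  int len;
};

/* Look for a PRO->CON dependence by scanning the shorter of the two
   lists, as sd_find_dep_between_no_cache () does.  PRO_FORW holds the
   producer's consumers; CON_BACK holds the consumer's producers.  */
bool
demo_dep_present_p (struct demo_list pro_forw, int con_uid,
                    struct demo_list con_back, int pro_uid)
{
  int i;

  if (con_back.len < pro_forw.len)
    {
      /* Fewer producers than consumers: scan the back list.  */
      for (i = 0; i < con_back.len; i++)
        if (con_back.uids[i] == pro_uid)
          return true;
    }
  else
    {
      /* Otherwise scan the producer's forward list.  */
      for (i = 0; i < pro_forw.len; i++)
        if (pro_forw.uids[i] == con_uid)
          return true;
    }
  return false;
}
```

Since every edge appears in both lists, either scan gives the same answer; choosing the shorter one only changes the constant factor.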
+
+/* Find a dependency between producer PRO and consumer CON.
+ Use dependency caches [if available] to check if the dependency is present at all.
+ Search through resolved dependency lists if RESOLVED_P is true.
+ Return the dependency, or NULL if none is found. */
+dep_t
+sd_find_dep_between (rtx pro, rtx con, bool resolved_p)
+{
+ if (true_dependency_cache != NULL)
+ /* Avoiding the list walk below can cut compile times dramatically
+ for some code. */
+ {
+ int elem_luid = INSN_LUID (pro);
+ int insn_luid = INSN_LUID (con);
+
+ gcc_assert (output_dependency_cache != NULL
+ && anti_dependency_cache != NULL);
+
+ if (!bitmap_bit_p (&true_dependency_cache[insn_luid], elem_luid)
+ && !bitmap_bit_p (&output_dependency_cache[insn_luid], elem_luid)
+ && !bitmap_bit_p (&anti_dependency_cache[insn_luid], elem_luid))
+ return NULL;
+ }
+
+ return sd_find_dep_between_no_cache (pro, con, resolved_p, NULL);
+}
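The cache pre-check in sd_find_dep_between can be pictured with a much smaller model: if no bit for the producer is set in any of the consumer's per-kind bitmaps, no dependence can exist and the list walk is skipped outright. A hedged sketch, with a single unsigned word per kind standing in for GCC's bitmap_head arrays (names here are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

/* Miniature of the dependency caches: per consumer, bit P is set in
   one of the per-kind masks iff a dependence from the producer with
   luid P exists of that kind.  */
struct mini_cache
{
  unsigned true_bits, output_bits, anti_bits;
};

/* Return false when the caches prove no dependence exists, so the
   list walk can be skipped entirely.  */
static bool
dep_may_exist_p (const struct mini_cache *c, unsigned pro_luid)
{
  unsigned bit = 1u << pro_luid;

  return ((c->true_bits | c->output_bits | c->anti_bits) & bit) != 0;
}
```

A set bit only means "may exist" for the purposes of the full search; the caches are exact here, but the point is that a clear bit is a definitive "no", which is what makes the early return sound.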
+
+/* Add or update a dependence described by DEP.
+ MEM1 and MEM2, if non-null, correspond to memory locations in case of
+ data speculation.
+
+ The function returns a value indicating if an old entry has been changed
+ or a new entry has been added to insn's backward deps.
+
+ This function merely checks if producer and consumer is the same insn
+ and doesn't create a dep in this case. Actual manipulation of
+ dependence data structures is performed in add_or_update_dep_1. */
static enum DEPS_ADJUST_RESULT
-maybe_add_or_update_back_dep_1 (rtx insn, rtx elem, enum reg_note dep_type,
- ds_t ds, rtx mem1, rtx mem2,
- dep_link_t **changed_linkpp)
+maybe_add_or_update_dep_1 (dep_t dep, bool resolved_p, rtx mem1, rtx mem2)
{
+ rtx elem = DEP_PRO (dep);
+ rtx insn = DEP_CON (dep);
+
gcc_assert (INSN_P (insn) && INSN_P (elem));
/* Don't depend an insn on itself. */
@@ -596,319 +727,571 @@ maybe_add_or_update_back_dep_1 (rtx insn
/* INSN has an internal dependence, which we can't overcome. */
HAS_INTERNAL_DEP (insn) = 1;
#endif
- return 0;
+
+ return DEP_NODEP;
}
- return add_or_update_back_dep_1 (insn, elem, dep_type,
- ds, mem1, mem2, changed_linkpp);
+ return add_or_update_dep_1 (dep, resolved_p, mem1, mem2);
}
-/* This function has the same meaning of parameters and return values
- as maybe_add_or_update_back_dep_1. The only difference between these
- two functions is that INSN and ELEM are guaranteed not to be the same
- in this one. */
+#ifdef INSN_SCHEDULING
+/* Ask the dependency caches what needs to be done for dependence DEP.
+ Return DEP_CREATED if a new dependence should be created and there is no
+ need to try to find one by searching the dependency lists.
+ Return DEP_PRESENT if there already is a dependence described by DEP and
+ hence nothing is to be done.
+ Return DEP_CHANGED if there already is a dependence, but it should be
+ updated to incorporate additional information from DEP. */
static enum DEPS_ADJUST_RESULT
-add_or_update_back_dep_1 (rtx insn, rtx elem, enum reg_note dep_type,
- ds_t ds ATTRIBUTE_UNUSED,
- rtx mem1 ATTRIBUTE_UNUSED, rtx mem2 ATTRIBUTE_UNUSED,
- dep_link_t **changed_linkpp ATTRIBUTE_UNUSED)
+ask_dependency_caches (dep_t dep)
{
- bool maybe_present_p = true, present_p = false;
-
- gcc_assert (INSN_P (insn) && INSN_P (elem) && insn != elem);
-
-#ifdef INSN_SCHEDULING
+ int elem_luid = INSN_LUID (DEP_PRO (dep));
+ int insn_luid = INSN_LUID (DEP_CON (dep));
-#ifdef ENABLE_CHECKING
- check_dep_status (dep_type, ds, mem1 != NULL);
-#endif
+ gcc_assert (true_dependency_cache != NULL
+ && output_dependency_cache != NULL
+ && anti_dependency_cache != NULL);
- /* If we already have a dependency for ELEM, then we do not need to
- do anything. Avoiding the list walk below can cut compile times
- dramatically for some code. */
- if (true_dependency_cache != NULL)
- {
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
+ {
enum reg_note present_dep_type;
-
- gcc_assert (output_dependency_cache);
- gcc_assert (anti_dependency_cache);
- if (!(current_sched_info->flags & USE_DEPS_LIST))
- {
- if (bitmap_bit_p (&true_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_type = REG_DEP_TRUE;
- else if (bitmap_bit_p (&output_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_type = REG_DEP_OUTPUT;
- else if (bitmap_bit_p (&anti_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_type = REG_DEP_ANTI;
- else
- maybe_present_p = false;
- if (maybe_present_p)
- {
- if ((int) dep_type >= (int) present_dep_type)
- return DEP_PRESENT;
-
- present_p = true;
- }
- }
+ if (bitmap_bit_p (&true_dependency_cache[insn_luid], elem_luid))
+ present_dep_type = REG_DEP_TRUE;
+ else if (bitmap_bit_p (&output_dependency_cache[insn_luid], elem_luid))
+ present_dep_type = REG_DEP_OUTPUT;
+ else if (bitmap_bit_p (&anti_dependency_cache[insn_luid], elem_luid))
+ present_dep_type = REG_DEP_ANTI;
else
- {
- ds_t present_dep_types = 0;
+ /* There is no existing dep so it should be created. */
+ return DEP_CREATED;
+
+ if ((int) DEP_TYPE (dep) >= (int) present_dep_type)
+ /* DEP does not add anything to the existing dependence. */
+ return DEP_PRESENT;
+ }
+ else
+ {
+ ds_t present_dep_types = 0;
- if (bitmap_bit_p (&true_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_types |= DEP_TRUE;
- if (bitmap_bit_p (&output_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_types |= DEP_OUTPUT;
- if (bitmap_bit_p (&anti_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_types |= DEP_ANTI;
+ if (bitmap_bit_p (&true_dependency_cache[insn_luid], elem_luid))
+ present_dep_types |= DEP_TRUE;
+ if (bitmap_bit_p (&output_dependency_cache[insn_luid], elem_luid))
+ present_dep_types |= DEP_OUTPUT;
+ if (bitmap_bit_p (&anti_dependency_cache[insn_luid], elem_luid))
+ present_dep_types |= DEP_ANTI;
+
+ if (present_dep_types == 0)
+ /* There is no existing dep so it should be created. */
+ return DEP_CREATED;
+
+ if (!(current_sched_info->flags & DO_SPECULATION)
+ || !bitmap_bit_p (&spec_dependency_cache[insn_luid], elem_luid))
+ {
+ if ((present_dep_types | (DEP_STATUS (dep) & DEP_TYPES))
+ == present_dep_types)
+ /* DEP does not add anything to the existing dependence. */
+ return DEP_PRESENT;
+ }
+ else
+ {
+ /* Only true dependencies can be data speculative and
+ only anti dependencies can be control speculative. */
+ gcc_assert ((present_dep_types & (DEP_TRUE | DEP_ANTI))
+ == present_dep_types);
+
+ /* if (DEP is SPECULATIVE) then
+ ..we should update DEP_STATUS
+ else
+ ..we should reset existing dep to non-speculative. */
+ }
+ }
+
+ return DEP_CHANGED;
+}
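The non-USE_DEPS_LIST branch of ask_dependency_caches hinges on the reg_note ordering: REG_DEP_TRUE < REG_DEP_OUTPUT < REG_DEP_ANTI, so a numerically smaller value is a more restrictive dependence, and a new dep only adds information when its value is strictly smaller than the cached one. A standalone sketch of that three-way decision (hypothetical enum names, not the GCC ones):

```c
#include <assert.h>

/* Mirror of the reg_note ordering: smaller value = more restrictive.  */
enum mini_dep_type { DT_TRUE, DT_OUTPUT, DT_ANTI, DT_NONE };
enum mini_result { R_CREATED, R_PRESENT, R_CHANGED };

static enum mini_result
ask_cache (enum mini_dep_type present, enum mini_dep_type new_type)
{
  if (present == DT_NONE)
    return R_CREATED;          /* no existing dep: create one */

  if ((int) new_type >= (int) present)
    return R_PRESENT;          /* new dep adds nothing */

  return R_CHANGED;            /* existing dep must be strengthened */
}
```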
+
+/* Set dependency caches according to DEP. */
+static void
+set_dependency_caches (dep_t dep)
+{
+ int elem_luid = INSN_LUID (DEP_PRO (dep));
+ int insn_luid = INSN_LUID (DEP_CON (dep));
+
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
+ {
+ switch (DEP_TYPE (dep))
+ {
+ case REG_DEP_TRUE:
+ bitmap_set_bit (&true_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ case REG_DEP_OUTPUT:
+ bitmap_set_bit (&output_dependency_cache[insn_luid], elem_luid);
+ break;
- if (present_dep_types)
+ case REG_DEP_ANTI:
+ bitmap_set_bit (&anti_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+ }
+ else
+ {
+ ds_t ds = DEP_STATUS (dep);
+
+ if (ds & DEP_TRUE)
+ bitmap_set_bit (&true_dependency_cache[insn_luid], elem_luid);
+ if (ds & DEP_OUTPUT)
+ bitmap_set_bit (&output_dependency_cache[insn_luid], elem_luid);
+ if (ds & DEP_ANTI)
+ bitmap_set_bit (&anti_dependency_cache[insn_luid], elem_luid);
+
+ if (ds & SPECULATIVE)
+ {
+ gcc_assert (current_sched_info->flags & DO_SPECULATION);
+ bitmap_set_bit (&spec_dependency_cache[insn_luid], elem_luid);
+ }
+ }
+}
+
+/* The type of dependence DEP has changed from OLD_TYPE. Update the
+ dependency caches accordingly. */
+static void
+update_dependency_caches (dep_t dep, enum reg_note old_type)
+{
+ int elem_luid = INSN_LUID (DEP_PRO (dep));
+ int insn_luid = INSN_LUID (DEP_CON (dep));
+
+ /* Clear corresponding cache entry because type of the link
+ may have changed. Keep them if we use_deps_list. */
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
+ {
+ switch (old_type)
+ {
+ case REG_DEP_OUTPUT:
+ bitmap_clear_bit (&output_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ case REG_DEP_ANTI:
+ bitmap_clear_bit (&anti_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+
+ switch (DEP_TYPE (dep))
+ {
+ case REG_DEP_TRUE:
+ bitmap_set_bit (&true_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ case REG_DEP_OUTPUT:
+ bitmap_set_bit (&output_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ case REG_DEP_ANTI:
+ bitmap_set_bit (&anti_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+ }
+ else
+ {
+ ds_t ds = DEP_STATUS (dep);
+
+ switch (old_type)
+ {
+ case REG_DEP_TRUE:
+ if (ds & DEP_OUTPUT)
+ bitmap_set_bit (&output_dependency_cache[insn_luid], elem_luid);
+ if (ds & DEP_ANTI)
+ bitmap_set_bit (&anti_dependency_cache[insn_luid], elem_luid);
+
+ break;
+
+ case REG_DEP_OUTPUT:
+ if (ds & DEP_TRUE)
+ bitmap_set_bit (&true_dependency_cache[insn_luid], elem_luid);
+ if (ds & DEP_ANTI)
+ bitmap_set_bit (&anti_dependency_cache[insn_luid], elem_luid);
+
+ break;
+
+ case REG_DEP_ANTI:
+ if (ds & DEP_TRUE)
+ bitmap_set_bit (&true_dependency_cache[insn_luid], elem_luid);
+ if (ds & DEP_OUTPUT)
+ bitmap_set_bit (&output_dependency_cache[insn_luid], elem_luid);
+
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+
+ /* Note that a dep can become speculative only at the moment
+ of creation; thus we don't need to check for it during an update. */
+ }
+}
+
+/* Convert a dependence pointed to by SD_IT to be non-speculative. */
+static void
+change_spec_dep_to_hard (sd_iterator_def sd_it)
+{
+ dep_node_t node = DEP_LINK_NODE (*sd_it.linkp);
+ dep_link_t link = DEP_NODE_BACK (node);
+ dep_t dep = DEP_NODE_DEP (node);
+ rtx elem = DEP_PRO (dep);
+ rtx insn = DEP_CON (dep);
+
+ move_dep_link (link, INSN_SPEC_BACK_DEPS (insn), INSN_HARD_BACK_DEPS (insn));
+
+ DEP_STATUS (dep) &= ~SPECULATIVE;
+
+ if (true_dependency_cache != NULL)
+ /* Clear the cache entry. */
+ bitmap_clear_bit (&spec_dependency_cache[INSN_LUID (insn)],
+ INSN_LUID (elem));
+}
+#endif
+
+/* Update DEP to incorporate information from NEW_DEP.
+ SD_IT points to DEP in case it should be moved to another list.
+ MEM1 and MEM2, if nonnull, correspond to memory locations in case a
+ data-speculative dependence should be updated. */
+static enum DEPS_ADJUST_RESULT
+update_dep (dep_t dep, dep_t new_dep,
+ sd_iterator_def sd_it, rtx mem1, rtx mem2)
+{
+ enum DEPS_ADJUST_RESULT res = DEP_PRESENT;
+ enum reg_note old_type = DEP_TYPE (dep);
+
+ /* If this is a more restrictive type of dependence than the
+ existing one, then change the existing dependence to this
+ type. */
+ if ((int) DEP_TYPE (new_dep) < (int) old_type)
+ {
+ DEP_TYPE (dep) = DEP_TYPE (new_dep);
+ res = DEP_CHANGED;
+ }
+
+#ifdef INSN_SCHEDULING
+ if (current_sched_info->flags & USE_DEPS_LIST)
+ /* Update DEP_STATUS. */
+ {
+ ds_t dep_status = DEP_STATUS (dep);
+ ds_t ds = DEP_STATUS (new_dep);
+ ds_t new_status = ds | dep_status;
+
+ if (new_status & SPECULATIVE)
+ /* Either existing dep or a dep we're adding or both are
+ speculative. */
+ {
+ if (!(ds & SPECULATIVE)
+ || !(dep_status & SPECULATIVE))
+ /* The new dep can't be speculative. */
+ {
+ new_status &= ~SPECULATIVE;
+
+ if (dep_status & SPECULATIVE)
+ /* The old dep was speculative, but now it
+ isn't. */
+ change_spec_dep_to_hard (sd_it);
+ }
+ else
{
- if (!(current_sched_info->flags & DO_SPECULATION)
- || !bitmap_bit_p (&spec_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- {
- if ((present_dep_types | (ds & DEP_TYPES))
- == present_dep_types)
- /* We already have all these bits. */
- return DEP_PRESENT;
- }
- else
+ /* Both are speculative. Merge probabilities. */
+ if (mem1 != NULL)
{
- /* Only true dependencies can be data speculative and
- only anti dependencies can be control speculative. */
- gcc_assert ((present_dep_types & (DEP_TRUE | DEP_ANTI))
- == present_dep_types);
-
- /* if (additional dep is SPECULATIVE) then
- we should update DEP_STATUS
- else
- we should reset existing dep to non-speculative. */
+ dw_t dw;
+
+ dw = estimate_dep_weak (mem1, mem2);
+ ds = set_dep_weak (ds, BEGIN_DATA, dw);
}
-
- present_p = true;
+
+ new_status = ds_merge (dep_status, ds);
}
- else
- maybe_present_p = false;
- }
+ }
+
+ ds = new_status;
+
+ if (dep_status != ds)
+ {
+ DEP_STATUS (dep) = ds;
+ res = DEP_CHANGED;
+ }
}
-#endif
- /* Check that we don't already have this dependence. */
- if (maybe_present_p)
- {
- dep_link_t *linkp;
+ if (true_dependency_cache != NULL
+ && res == DEP_CHANGED)
+ update_dependency_caches (dep, old_type);
+#endif
- for (linkp = &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
- *linkp != NULL;
- linkp = &DEP_LINK_NEXT (*linkp))
- {
- dep_t link = DEP_LINK_DEP (*linkp);
+ return res;
+}
- gcc_assert (true_dependency_cache == 0 || present_p);
-
- if (DEP_PRO (link) == elem)
- {
- enum DEPS_ADJUST_RESULT changed_p = DEP_PRESENT;
+/* Add or update a dependence described by DEP.
+ MEM1 and MEM2, if non-null, correspond to memory locations in case of
+ data speculation.
-#ifdef INSN_SCHEDULING
- if (current_sched_info->flags & USE_DEPS_LIST)
- {
- ds_t new_status = ds | DEP_STATUS (link);
-
- if (new_status & SPECULATIVE)
- {
- if (!(ds & SPECULATIVE)
- || !(DEP_STATUS (link) & SPECULATIVE))
- /* Then this dep can't be speculative. */
- {
- new_status &= ~SPECULATIVE;
- if (true_dependency_cache
- && (DEP_STATUS (link) & SPECULATIVE))
- bitmap_clear_bit (&spec_dependency_cache
- [INSN_LUID (insn)],
- INSN_LUID (elem));
- }
- else
- {
- /* Both are speculative. Merging probabilities. */
- if (mem1)
- {
- dw_t dw;
-
- dw = estimate_dep_weak (mem1, mem2);
- ds = set_dep_weak (ds, BEGIN_DATA, dw);
- }
-
- new_status = ds_merge (DEP_STATUS (link), ds);
- }
- }
-
- ds = new_status;
- }
-
- /* Clear corresponding cache entry because type of the link
- may have changed. Keep them if we use_deps_list. */
- if (true_dependency_cache != NULL
- && !(current_sched_info->flags & USE_DEPS_LIST))
- {
- enum reg_note kind = DEP_KIND (link);
+ The function returns a value indicating whether an old entry has been
+ changed, a new entry has been added to the insn's backward deps, or
+ nothing has been updated at all. */
+static enum DEPS_ADJUST_RESULT
+add_or_update_dep_1 (dep_t new_dep, bool resolved_p,
+ rtx mem1 ATTRIBUTE_UNUSED, rtx mem2 ATTRIBUTE_UNUSED)
+{
+ bool maybe_present_p = true;
+ bool present_p = false;
- switch (kind)
- {
- case REG_DEP_OUTPUT:
- bitmap_clear_bit (&output_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- break;
- case REG_DEP_ANTI:
- bitmap_clear_bit (&anti_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- break;
- default:
- gcc_unreachable ();
- }
- }
+ gcc_assert (INSN_P (DEP_PRO (new_dep)) && INSN_P (DEP_CON (new_dep))
+ && DEP_PRO (new_dep) != DEP_CON (new_dep));
+
+#ifdef INSN_SCHEDULING
- if ((current_sched_info->flags & USE_DEPS_LIST)
- && DEP_STATUS (link) != ds)
- {
- DEP_STATUS (link) = ds;
- changed_p = DEP_CHANGED;
- }
+#ifdef ENABLE_CHECKING
+ check_dep (new_dep, mem1 != NULL);
#endif
- /* If this is a more restrictive type of dependence than the
- existing one, then change the existing dependence to this
- type. */
- if ((int) dep_type < (int) DEP_KIND (link))
- {
- DEP_KIND (link) = dep_type;
- changed_p = DEP_CHANGED;
- }
+ if (true_dependency_cache != NULL)
+ {
+ switch (ask_dependency_caches (new_dep))
+ {
+ case DEP_PRESENT:
+ return DEP_PRESENT;
-#ifdef INSN_SCHEDULING
- /* If we are adding a dependency to INSN's LOG_LINKs, then
- note that in the bitmap caches of dependency information. */
- if (true_dependency_cache != NULL)
- {
- if (!(current_sched_info->flags & USE_DEPS_LIST))
- {
- if (DEP_KIND (link) == REG_DEP_TRUE)
- bitmap_set_bit (&true_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- else if (DEP_KIND (link) == REG_DEP_OUTPUT)
- bitmap_set_bit (&output_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- else if (DEP_KIND (link) == REG_DEP_ANTI)
- bitmap_set_bit (&anti_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- }
- else
- {
- if (ds & DEP_TRUE)
- bitmap_set_bit (&true_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- if (ds & DEP_OUTPUT)
- bitmap_set_bit (&output_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- if (ds & DEP_ANTI)
- bitmap_set_bit (&anti_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- /* Note, that dep can become speculative only
- at the moment of creation. Thus, we don't need to
- check for it here. */
- }
- }
-
- if (changed_linkpp && changed_p == DEP_CHANGED)
- *changed_linkpp = linkp;
+ case DEP_CHANGED:
+ maybe_present_p = true;
+ present_p = true;
+ break;
+
+ case DEP_CREATED:
+ maybe_present_p = false;
+ present_p = false;
+ break;
+
+ default:
+ gcc_unreachable ();
+ break;
+ }
+ }
#endif
- return changed_p;
- }
- }
- /* We didn't find a dep. It shouldn't be present in the cache. */
- gcc_assert (!present_p);
+
+ /* Check that we don't already have this dependence. */
+ if (maybe_present_p)
+ {
+ dep_t present_dep;
+ sd_iterator_def sd_it;
+
+ gcc_assert (true_dependency_cache == NULL || present_p);
+
+ present_dep = sd_find_dep_between_no_cache (DEP_PRO (new_dep),
+ DEP_CON (new_dep),
+ resolved_p, &sd_it);
+
+ if (present_dep != NULL)
+ /* We found an existing dependency between the producer and consumer. */
+ return update_dep (present_dep, new_dep, sd_it, mem1, mem2);
+ else
+ /* We didn't find a dep; it shouldn't be present in the cache. */
+ gcc_assert (!present_p);
}
/* Might want to check one level of transitivity to save conses.
- This check should be done in maybe_add_or_update_back_dep_1.
- Since we made it to add_or_update_back_dep_1, we must create
+ This check should be done in maybe_add_or_update_dep_1.
+ Since we made it to add_or_update_dep_1, we must create
(or update) a link. */
- if (mem1)
+ if (mem1 != NULL_RTX)
{
gcc_assert (current_sched_info->flags & DO_SPECULATION);
- ds = set_dep_weak (ds, BEGIN_DATA, estimate_dep_weak (mem1, mem2));
+ DEP_STATUS (new_dep) = set_dep_weak (DEP_STATUS (new_dep), BEGIN_DATA,
+ estimate_dep_weak (mem1, mem2));
}
-
- add_back_dep (insn, elem, dep_type, ds);
+
+ sd_add_dep (new_dep, resolved_p);
return DEP_CREATED;
}
-/* This function creates a link between INSN and ELEM under any
- conditions. DS describes speculative status of the link. */
+/* Initialize BACK_LIST_PTR with consumer's backward list and
+ FORW_LIST_PTR with producer's forward list. If RESOLVED_P is true
+ initialize with lists that hold resolved deps. */
static void
-add_back_dep (rtx insn, rtx elem, enum reg_note dep_type, ds_t ds)
+get_back_and_forw_lists (dep_t dep, bool resolved_p,
+ deps_list_t *back_list_ptr,
+ deps_list_t *forw_list_ptr)
{
- struct _dep _dep, *dep = &_dep;
+ rtx con = DEP_CON (dep);
- gcc_assert (INSN_P (insn) && INSN_P (elem) && insn != elem);
+ if (!resolved_p)
+ {
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && (DEP_STATUS (dep) & SPECULATIVE))
+ *back_list_ptr = INSN_SPEC_BACK_DEPS (con);
+ else
+ *back_list_ptr = INSN_HARD_BACK_DEPS (con);
- if (current_sched_info->flags & USE_DEPS_LIST)
- init_dep_1 (dep, elem, insn, dep_type, ds);
+ *forw_list_ptr = INSN_FORW_DEPS (DEP_PRO (dep));
+ }
else
- init_dep_1 (dep, elem, insn, dep_type, -1);
+ {
+ *back_list_ptr = INSN_RESOLVED_BACK_DEPS (con);
+ *forw_list_ptr = INSN_RESOLVED_FORW_DEPS (DEP_PRO (dep));
+ }
+}
+
+/* Add dependence described by DEP.
+ If RESOLVED_P is true treat the dependence as a resolved one. */
+void
+sd_add_dep (dep_t dep, bool resolved_p)
+{
+ dep_node_t n = create_dep_node ();
+ deps_list_t con_back_deps;
+ deps_list_t pro_forw_deps;
+ rtx elem = DEP_PRO (dep);
+ rtx insn = DEP_CON (dep);
+
+ gcc_assert (INSN_P (insn) && INSN_P (elem) && insn != elem);
+
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && !sched_insn_is_legitimate_for_speculation_p (insn, DEP_STATUS (dep)))
+ DEP_STATUS (dep) &= ~SPECULATIVE;
+
+ copy_dep (DEP_NODE_DEP (n), dep);
+
+ get_back_and_forw_lists (dep, resolved_p, &con_back_deps, &pro_forw_deps);
- add_back_dep_to_deps_list (INSN_BACK_DEPS (insn), dep);
+ add_to_deps_list (DEP_NODE_BACK (n), con_back_deps);
#ifdef INSN_SCHEDULING
#ifdef ENABLE_CHECKING
- check_dep_status (dep_type, ds, false);
+ check_dep (dep, false);
#endif
+ add_to_deps_list (DEP_NODE_FORW (n), pro_forw_deps);
+
/* If we are adding a dependency to INSN's LOG_LINKs, then note that
in the bitmap caches of dependency information. */
if (true_dependency_cache != NULL)
+ set_dependency_caches (dep);
+#endif
+}
+
+/* Add or update a dependence described by DEP. If RESOLVED_P is true
+ treat the dependence as a resolved one.
+ This function is a convenience wrapper around add_or_update_dep_1. */
+enum DEPS_ADJUST_RESULT
+sd_add_or_update_dep (dep_t dep, bool resolved_p)
+{
+ return add_or_update_dep_1 (dep, resolved_p, NULL_RTX, NULL_RTX);
+}
+
+/* Resolve the dependence pointed to by SD_IT.
+ SD_IT will advance to the next element. */
+void
+sd_resolve_dep (sd_iterator_def sd_it)
+{
+ dep_node_t node = DEP_LINK_NODE (*sd_it.linkp);
+ dep_t dep = DEP_NODE_DEP (node);
+ rtx pro = DEP_PRO (dep);
+ rtx con = DEP_CON (dep);
+
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && (DEP_STATUS (dep) & SPECULATIVE))
+ move_dep_link (DEP_NODE_BACK (node), INSN_SPEC_BACK_DEPS (con),
+ INSN_RESOLVED_BACK_DEPS (con));
+ else
+ move_dep_link (DEP_NODE_BACK (node), INSN_HARD_BACK_DEPS (con),
+ INSN_RESOLVED_BACK_DEPS (con));
+
+ move_dep_link (DEP_NODE_FORW (node), INSN_FORW_DEPS (pro),
+ INSN_RESOLVED_FORW_DEPS (pro));
+}
+
+/* Make TO depend on all of FROM's producers.
+ If RESOLVED_P is true add dependencies to the resolved lists. */
+void
+sd_copy_back_deps (rtx to, rtx from, bool resolved_p)
+{
+ sd_list_types_def list_type;
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ list_type = resolved_p ? SD_LIST_RES_BACK : SD_LIST_BACK;
+
+ FOR_EACH_DEP (from, list_type, sd_it, dep)
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ copy_dep (new_dep, dep);
+ DEP_CON (new_dep) = to;
+ sd_add_dep (new_dep, resolved_p);
+ }
+}
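The shape of sd_copy_back_deps is simple: every producer feeding FROM is made to feed TO as well, by copying each dep and retargeting its consumer. A hedged sketch with arrays standing in for the linked deps_lists (struct and function names are hypothetical):

```c
#include <assert.h>

struct mini_dep { int pro, con; };

/* Copy FROM_BACK's N back deps into OUT, retargeting the consumer
   to TO.  Returns the number of deps written.  */
static int
copy_back_deps (struct mini_dep *out, const struct mini_dep *from_back,
                int n, int to)
{
  for (int i = 0; i < n; i++)
    {
      out[i] = from_back[i];   /* copy the producer side wholesale */
      out[i].con = to;         /* ...but the new consumer is TO */
    }

  return n;
}
```

The real function also re-runs sd_add_dep on each copy so the new deps land in the proper (possibly resolved) lists and caches, which the array sketch elides.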
+
+/* Remove a dependency referred to by SD_IT.
+ SD_IT will point to the next dependence after removal. */
+void
+sd_delete_dep (sd_iterator_def sd_it)
+{
+ dep_node_t n = DEP_LINK_NODE (*sd_it.linkp);
+ dep_t dep = DEP_NODE_DEP (n);
+ rtx pro = DEP_PRO (dep);
+ rtx con = DEP_CON (dep);
+ deps_list_t con_back_deps;
+ deps_list_t pro_forw_deps;
+
+ if (true_dependency_cache != NULL)
{
- if (!(current_sched_info->flags & USE_DEPS_LIST))
- {
- if (dep_type == REG_DEP_TRUE)
- bitmap_set_bit (&true_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- else if (dep_type == REG_DEP_OUTPUT)
- bitmap_set_bit (&output_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- else if (dep_type == REG_DEP_ANTI)
- bitmap_set_bit (&anti_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- }
- else
- {
- if (ds & DEP_TRUE)
- bitmap_set_bit (&true_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- if (ds & DEP_OUTPUT)
- bitmap_set_bit (&output_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- if (ds & DEP_ANTI)
- bitmap_set_bit (&anti_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- if (ds & SPECULATIVE)
- {
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
- bitmap_set_bit (&spec_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- }
- }
+ int elem_luid = INSN_LUID (pro);
+ int insn_luid = INSN_LUID (con);
+
+ bitmap_clear_bit (&true_dependency_cache[insn_luid], elem_luid);
+ bitmap_clear_bit (&anti_dependency_cache[insn_luid], elem_luid);
+ bitmap_clear_bit (&output_dependency_cache[insn_luid], elem_luid);
+
+ if (current_sched_info->flags & DO_SPECULATION)
+ bitmap_clear_bit (&spec_dependency_cache[insn_luid], elem_luid);
}
-#endif
+
+ get_back_and_forw_lists (dep, sd_it.resolved_p,
+ &con_back_deps, &pro_forw_deps);
+
+ remove_from_deps_list (DEP_NODE_BACK (n), con_back_deps);
+ remove_from_deps_list (DEP_NODE_FORW (n), pro_forw_deps);
+
+ delete_dep_node (n);
+}
+
+/* Dump deps_lists of INSN specified by TYPES. */
+static void
+sd_dump_lists (FILE *dump, rtx insn, sd_list_types_def types, int flags)
+{
+ sd_iterator_def sd_it;
+ dep_t dep;
+ int all;
+
+ all = (flags & 1);
+
+ fprintf (dump, "%d: ", sd_lists_size (insn, types));
+
+ FOR_EACH_DEP (insn, types, sd_it, dep)
+ dump_dep (dump, dep, dump_dep_flags | all);
+
+ fprintf (dump, "\n");
+}
+
+/* Dump deps_lists of INSN specified by TYPES to STDERR. */
+void
+sd_debug_lists (rtx insn, sd_list_types_def types)
+{
+ sd_dump_lists (stderr, insn, types, 1);
}
/* A convenience wrapper to operate on an entire list. */
@@ -940,26 +1323,19 @@ add_dependence_list_and_free (rtx insn,
}
/* Clear all dependencies for an insn. */
-
static void
delete_all_dependences (rtx insn)
{
- /* Clear caches, if they exist, as well as free the dependence. */
-
-#ifdef INSN_SCHEDULING
- if (true_dependency_cache != NULL)
- {
- bitmap_clear (&true_dependency_cache[INSN_LUID (insn)]);
- bitmap_clear (&output_dependency_cache[INSN_LUID (insn)]);
- bitmap_clear (&anti_dependency_cache[INSN_LUID (insn)]);
- /* We don't have to clear forward_dependency_cache here,
- because it is formed later. */
- if (current_sched_info->flags & DO_SPECULATION)
- bitmap_clear (&spec_dependency_cache[INSN_LUID (insn)]);
- }
-#endif
+ sd_iterator_def sd_it;
+ dep_t dep;
- clear_deps_list (INSN_BACK_DEPS (insn));
+ /* The loop below could be optimized to clear the caches and the
+ back_deps list in one call, but that would duplicate code from
+ sd_delete_dep (). */
+
+ for (sd_it = sd_iterator_start (insn, SD_LIST_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
+ sd_delete_dep (sd_it);
}
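The deletion loop above has no explicit increment because the iterator is a pointer to the link that points at the current node: unlinking the node automatically makes the iterator name the next element. A minimal standalone sketch of that pointer-to-pointer pattern (not the scheduler's code; names are hypothetical):

```c
#include <assert.h>
#include <stdlib.h>

struct node { int val; struct node *next; };

/* Delete every node; LINKP always points at the link naming the
   current node, so unlinking advances the "iterator" for free.  */
static void
delete_all (struct node **headp)
{
  struct node **linkp = headp;

  while (*linkp != NULL)
    {
      struct node *n = *linkp;

      *linkp = n->next;   /* unlink: *linkp now names the next node */
      free (n);
    }
}
```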
/* All insns in a scheduling group except the first should only have
@@ -971,13 +1347,13 @@ delete_all_dependences (rtx insn)
static void
fixup_sched_groups (rtx insn)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
rtx prev_nonnote;
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
{
rtx i = insn;
- dep_t dep = DEP_LINK_DEP (link);
rtx pro = DEP_PRO (dep);
do
@@ -989,7 +1365,7 @@ fixup_sched_groups (rtx insn)
} while (SCHED_GROUP_P (i));
if (! sched_insns_conditions_mutex_p (i, pro))
- add_dependence (i, pro, DEP_KIND (dep));
+ add_dependence (i, pro, DEP_TYPE (dep));
next_link:;
}
@@ -1361,11 +1737,19 @@ sched_analyze_2 (struct deps *deps, rtx
t, rtx_varies_p)
&& ! sched_insns_conditions_mutex_p (insn, XEXP (pending, 0)))
{
- if (current_sched_info->flags & DO_SPECULATION)
- maybe_add_or_update_back_dep_1 (insn, XEXP (pending, 0),
- REG_DEP_TRUE,
- BEGIN_DATA | DEP_TRUE,
- XEXP (pending_mem, 0), t, 0);
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && (spec_info->mask & BEGIN_DATA))
+ /* Create a data-speculative dependence between producer
+ and consumer. */
+ {
+ dep_def _dep, *dep = &_dep;
+
+ init_dep_1 (dep, XEXP (pending, 0), insn, REG_DEP_TRUE,
+ BEGIN_DATA | DEP_TRUE);
+
+ maybe_add_or_update_dep_1 (dep, false,
+ XEXP (pending_mem, 0), t);
+ }
else
add_dependence (insn, XEXP (pending, 0), REG_DEP_TRUE);
}
@@ -1795,6 +2179,19 @@ sched_analyze_insn (struct deps *deps, r
/* Fixup the dependencies in the sched group. */
if (SCHED_GROUP_P (insn))
fixup_sched_groups (insn);
+
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && !sched_insn_is_legitimate_for_speculation_p (insn, 0))
+ /* INSN has an internal dependency (e.g. r14 = [r14]) and thus cannot
+ be speculated. */
+ {
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
+ change_spec_dep_to_hard (sd_it);
+ }
}
/* Analyze every insn between HEAD and TAIL inclusive, creating backward
@@ -1826,13 +2223,8 @@ sched_analyze (struct deps *deps, rtx he
/* Clear out the stale LOG_LINKS from flow. */
free_INSN_LIST_list (&LOG_LINKS (insn));
- /* These two lists will be freed in schedule_insn (). */
- INSN_BACK_DEPS (insn) = create_deps_list (false);
- INSN_RESOLVED_BACK_DEPS (insn) = create_deps_list (false);
-
- /* This one should be allocated on the obstack because it should live
- till the scheduling ends. */
- INSN_FORW_DEPS (insn) = create_deps_list (true);
+ /* And initialize deps_lists. */
+ sd_init_insn (insn);
}
if (NONJUMP_INSN_P (insn) || JUMP_P (insn))
@@ -1974,94 +2366,58 @@ sched_analyze (struct deps *deps, rtx he
}
gcc_unreachable ();
}
-\f
-/* The following function adds forward dependence (FROM, TO) with
- given DEP_TYPE. The forward dependence should be not exist before. */
-
-void
-add_forw_dep (dep_link_t link)
+/* Helper for sched_free_deps ().
+ Delete INSN's (RESOLVED_P) backward dependencies. */
+static void
+delete_dep_nodes_in_back_deps (rtx insn, bool resolved_p)
{
- dep_t dep = DEP_LINK_DEP (link);
- rtx to = DEP_CON (dep);
- rtx from = DEP_PRO (dep);
-
-#ifdef ENABLE_CHECKING
- /* If add_dependence is working properly there should never
- be notes, deleted insns or duplicates in the backward
- links. Thus we need not check for them here.
-
- However, if we have enabled checking we might as well go
- ahead and verify that add_dependence worked properly. */
- gcc_assert (INSN_P (from));
- gcc_assert (!INSN_DELETED_P (from));
- if (true_dependency_cache)
- {
- gcc_assert (!bitmap_bit_p (&forward_dependency_cache[INSN_LUID (from)],
- INSN_LUID (to)));
- bitmap_set_bit (&forward_dependency_cache[INSN_LUID (from)],
- INSN_LUID (to));
- }
+ sd_iterator_def sd_it;
+ dep_t dep;
+ sd_list_types_def types;
- gcc_assert (find_link_by_con_in_deps_list (INSN_FORW_DEPS (from), to)
- == NULL);
-#endif
+ if (resolved_p)
+ types = SD_LIST_RES_BACK;
+ else
+ types = SD_LIST_BACK;
- add_to_deps_list (DEP_NODE_FORW (DEP_LINK_NODE (link)),
- INSN_FORW_DEPS (from));
+ for (sd_it = sd_iterator_start (insn, types);
+ sd_iterator_cond (&sd_it, &dep);)
+ {
+ dep_link_t link = *sd_it.linkp;
+ dep_node_t node = DEP_LINK_NODE (link);
+ deps_list_t back_list;
+ deps_list_t forw_list;
- INSN_DEP_COUNT (to) += 1;
+ get_back_and_forw_lists (dep, resolved_p, &back_list, &forw_list);
+ remove_from_deps_list (link, back_list);
+ delete_dep_node (node);
+ }
}
-/* Examine insns in the range [ HEAD, TAIL ] and Use the backward
- dependences from INSN_BACK_DEPS list to build forward dependences in
- INSN_FORW_DEPS. */
-
+/* Delete (RESOLVED_P) dependencies between HEAD and TAIL together with
+ deps_lists. */
void
-compute_forward_dependences (rtx head, rtx tail)
+sched_free_deps (rtx head, rtx tail, bool resolved_p)
{
rtx insn;
- rtx next_tail;
+ rtx next_tail = NEXT_INSN (tail);
- next_tail = NEXT_INSN (tail);
for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
- {
- dep_link_t link;
-
- if (! INSN_P (insn))
- continue;
-
- if (current_sched_info->flags & DO_SPECULATION)
- {
- /* We will add links, preserving order, from INSN_BACK_DEPS to
- NEW. */
- dep_link_t new = NULL;
-
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
-
- while (link != NULL)
- {
- dep_link_t next = DEP_LINK_NEXT (link);
-
- detach_dep_link (link);
- adjust_add_sorted_back_dep (insn, link, &new);
-
- link = next;
- }
-
- /* Attach NEW to be the list of backward dependencies. */
- if (new != NULL)
- {
- DEP_LINK_PREV_NEXTP (new)
- = &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
+ if (INSN_P (insn) && INSN_LUID (insn) > 0)
+ {
+ /* Clear the (resolved) back deps together with their dep_nodes. */
+ delete_dep_nodes_in_back_deps (insn, resolved_p);
- DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)) = new;
- }
- }
+ /* Clear forward deps; the dep_nodes themselves are freed via the
+ corresponding back_deps lists. */
+ if (resolved_p)
+ clear_deps_list (INSN_RESOLVED_FORW_DEPS (insn));
+ else
+ clear_deps_list (INSN_FORW_DEPS (insn));
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
- add_forw_dep (link);
- }
+ sd_finish_insn (insn);
+ }
}
\f
/* Initialize variables for region data dependence analysis.
@@ -2130,27 +2486,31 @@ free_deps (struct deps *deps)
void
init_dependency_caches (int luid)
{
+ /* Average number of insns per basic block. '+ 1' is used to make
+ it nonzero. */
+ int insns_in_block = luid / n_basic_blocks + 1;
+
/* ?!? We could save some memory by computing a per-region luid mapping
which could reduce both the number of vectors in the cache and the size
of each vector. Instead we just avoid the cache entirely unless the
average number of instructions in a basic block is very high. See
the comment before the declaration of true_dependency_cache for
what we consider "very high". */
- if (luid / n_basic_blocks > 100 * 5)
+ if (insns_in_block > 100 * 5)
{
cache_size = 0;
extend_dependency_caches (luid, true);
}
- /* Lifetime of this obstack is whole function scheduling (not single region
- scheduling) because some dependencies can be manually generated for
- outside regions. See dont_calc_deps in sched-{rgn, ebb}.c .
-
- Possible solution would be to have two obstacks:
- * the big one for regular dependencies with region scheduling lifetime,
- * and the small one for manually generated dependencies with function
- scheduling lifetime. */
- gcc_obstack_init (&deps_obstack);
+ dl_pool = create_alloc_pool ("deps_list", sizeof (struct _deps_list),
+ /* Allocate lists for one block at a time. */
+ insns_in_block);
+
+ dn_pool = create_alloc_pool ("dep_node", sizeof (struct _dep_node),
+ /* Allocate nodes for one block at a time.
+ We assume that an average insn has
+ 5 producers. */
+ 5 * insns_in_block);
}
/* Create or extend (depending on CREATE_P) dependency caches to
@@ -2168,10 +2528,7 @@ extend_dependency_caches (int n, bool cr
output_dependency_cache, luid);
anti_dependency_cache = XRESIZEVEC (bitmap_head, anti_dependency_cache,
luid);
-#ifdef ENABLE_CHECKING
- forward_dependency_cache = XRESIZEVEC (bitmap_head,
- forward_dependency_cache, luid);
-#endif
+
if (current_sched_info->flags & DO_SPECULATION)
spec_dependency_cache = XRESIZEVEC (bitmap_head, spec_dependency_cache,
luid);
@@ -2181,9 +2538,7 @@ extend_dependency_caches (int n, bool cr
bitmap_initialize (&true_dependency_cache[i], 0);
bitmap_initialize (&output_dependency_cache[i], 0);
bitmap_initialize (&anti_dependency_cache[i], 0);
-#ifdef ENABLE_CHECKING
- bitmap_initialize (&forward_dependency_cache[i], 0);
-#endif
+
if (current_sched_info->flags & DO_SPECULATION)
bitmap_initialize (&spec_dependency_cache[i], 0);
}
@@ -2196,7 +2551,10 @@ extend_dependency_caches (int n, bool cr
void
free_dependency_caches (void)
{
- obstack_free (&deps_obstack, NULL);
+ gcc_assert (deps_pools_are_empty_p ());
+ free_alloc_pool_if_empty (&dn_pool);
+ free_alloc_pool_if_empty (&dl_pool);
+ gcc_assert (dn_pool == NULL && dl_pool == NULL);
if (true_dependency_cache)
{
@@ -2207,9 +2565,7 @@ free_dependency_caches (void)
bitmap_clear (&true_dependency_cache[i]);
bitmap_clear (&output_dependency_cache[i]);
bitmap_clear (&anti_dependency_cache[i]);
-#ifdef ENABLE_CHECKING
- bitmap_clear (&forward_dependency_cache[i]);
-#endif
+
if (current_sched_info->flags & DO_SPECULATION)
bitmap_clear (&spec_dependency_cache[i]);
}
@@ -2219,10 +2575,7 @@ free_dependency_caches (void)
output_dependency_cache = NULL;
free (anti_dependency_cache);
anti_dependency_cache = NULL;
-#ifdef ENABLE_CHECKING
- free (forward_dependency_cache);
- forward_dependency_cache = NULL;
-#endif
+
if (current_sched_info->flags & DO_SPECULATION)
{
free (spec_dependency_cache);
@@ -2253,72 +2606,6 @@ finish_deps_global (void)
FREE_REG_SET (reg_pending_uses);
}
-/* Insert LINK into the dependence chain pointed to by LINKP and
- maintain the sort order. */
-static void
-adjust_add_sorted_back_dep (rtx insn, dep_link_t link, dep_link_t *linkp)
-{
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
-
- /* If the insn cannot move speculatively, but the link is speculative,
- make it hard dependence. */
- if (HAS_INTERNAL_DEP (insn)
- && (DEP_LINK_STATUS (link) & SPECULATIVE))
- {
- DEP_LINK_STATUS (link) &= ~SPECULATIVE;
-
- if (true_dependency_cache)
- bitmap_clear_bit (&spec_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (DEP_LINK_PRO (link)));
- }
-
- /* Non-speculative links go at the head of deps_list, followed by
- speculative links. */
- if (DEP_LINK_STATUS (link) & SPECULATIVE)
- while (*linkp && !(DEP_LINK_STATUS (*linkp) & SPECULATIVE))
- linkp = &DEP_LINK_NEXT (*linkp);
-
- attach_dep_link (link, linkp);
-
- if (CHECK)
- gcc_assert (deps_list_consistent_p (INSN_BACK_DEPS (insn)));
-}
-
-/* Move the dependence pointed to by LINKP to the back dependencies
- of INSN, and also add this dependence to the forward ones. All dep_links,
- except one pointed to by LINKP, must be sorted. */
-static void
-adjust_back_add_forw_dep (rtx insn, dep_link_t *linkp)
-{
- dep_link_t link;
-
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
-
- link = *linkp;
- detach_dep_link (link);
-
- adjust_add_sorted_back_dep (insn, link,
- &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)));
- add_forw_dep (link);
-}
-
-/* Remove forward dependence described by L. */
-static void
-delete_forw_dep (dep_link_t l)
-{
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
-
-#ifdef ENABLE_CHECKING
- if (true_dependency_cache)
- bitmap_clear_bit (&forward_dependency_cache[INSN_LUID (DEP_LINK_PRO (l))],
- INSN_LUID (DEP_LINK_CON (l)));
-#endif
-
- detach_dep_link (l);
-
- INSN_DEP_COUNT (DEP_LINK_CON (l))--;
-}
-
/* Estimate the weakness of dependence between MEM1 and MEM2. */
static dw_t
estimate_dep_weak (rtx mem1, rtx mem2)
@@ -2353,90 +2640,15 @@ estimate_dep_weak (rtx mem1, rtx mem2)
void
add_dependence (rtx insn, rtx elem, enum reg_note dep_type)
{
- ds_t ds;
-
- if (dep_type == REG_DEP_TRUE)
- ds = DEP_TRUE;
- else if (dep_type == REG_DEP_OUTPUT)
- ds = DEP_OUTPUT;
- else if (dep_type == REG_DEP_ANTI)
- ds = DEP_ANTI;
- else
- gcc_unreachable ();
-
- maybe_add_or_update_back_dep_1 (insn, elem, dep_type, ds, 0, 0, 0);
-}
-
-/* Add or update backward dependence between INSN and ELEM
- with given type DEP_TYPE and dep_status DS.
- This function is a convenience wrapper. */
-enum DEPS_ADJUST_RESULT
-add_or_update_back_dep (rtx insn, rtx elem, enum reg_note dep_type, ds_t ds)
-{
- return add_or_update_back_dep_1 (insn, elem, dep_type, ds, 0, 0, 0);
-}
-
-/* Add or update both backward and forward dependencies between INSN and ELEM
- with given type DEP_TYPE and dep_status DS. */
-void
-add_or_update_back_forw_dep (rtx insn, rtx elem, enum reg_note dep_type,
- ds_t ds)
-{
- enum DEPS_ADJUST_RESULT res;
- dep_link_t *linkp;
-
- res = add_or_update_back_dep_1 (insn, elem, dep_type, ds, 0, 0, &linkp);
-
- if (res == DEP_CHANGED || res == DEP_CREATED)
- {
- if (res == DEP_CHANGED)
- delete_forw_dep (DEP_NODE_FORW (DEP_LINK_NODE (*linkp)));
- else if (res == DEP_CREATED)
- linkp = &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
-
- adjust_back_add_forw_dep (insn, linkp);
- }
-}
-
-/* Add both backward and forward dependencies between INSN and ELEM
- with given type DEP_TYPE and dep_status DS. */
-void
-add_back_forw_dep (rtx insn, rtx elem, enum reg_note dep_type, ds_t ds)
-{
- add_back_dep (insn, elem, dep_type, ds);
- adjust_back_add_forw_dep (insn, &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)));
-
- if (CHECK)
- gcc_assert (deps_list_consistent_p (INSN_BACK_DEPS (insn)));
-}
-
-/* Remove a dependency referred to by L. */
-void
-delete_back_forw_dep (dep_link_t l)
-{
- dep_node_t n = DEP_LINK_NODE (l);
-
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
-
- if (true_dependency_cache != NULL)
- {
- dep_t dep = DEP_NODE_DEP (n);
- int elem_luid = INSN_LUID (DEP_PRO (dep));
- int insn_luid = INSN_LUID (DEP_CON (dep));
-
- bitmap_clear_bit (&true_dependency_cache[insn_luid], elem_luid);
- bitmap_clear_bit (&anti_dependency_cache[insn_luid], elem_luid);
- bitmap_clear_bit (&output_dependency_cache[insn_luid], elem_luid);
- bitmap_clear_bit (&spec_dependency_cache[insn_luid], elem_luid);
- }
+ dep_def _dep, *dep = &_dep;
- delete_forw_dep (DEP_NODE_FORW (n));
- detach_dep_link (DEP_NODE_BACK (n));
+ init_dep (dep, elem, insn, dep_type);
+ maybe_add_or_update_dep_1 (dep, false, NULL_RTX, NULL_RTX);
}
/* Return weakness of speculative type TYPE in the dep_status DS. */
-dw_t
-get_dep_weak (ds_t ds, ds_t type)
+static dw_t
+get_dep_weak_1 (ds_t ds, ds_t type)
{
ds = ds & type;
switch (type)
@@ -2448,10 +2660,20 @@ get_dep_weak (ds_t ds, ds_t type)
default: gcc_unreachable ();
}
- gcc_assert (MIN_DEP_WEAK <= ds && ds <= MAX_DEP_WEAK);
return (dw_t) ds;
}
+/* Return weakness of speculative type TYPE in the dep_status DS. */
+dw_t
+get_dep_weak (ds_t ds, ds_t type)
+{
+ dw_t dw = get_dep_weak_1 (ds, type);
+
+ gcc_assert (MIN_DEP_WEAK <= dw && dw <= MAX_DEP_WEAK);
+
+ return dw;
+}
+
/* Return the dep_status, which has the same parameters as DS, except for
speculative type TYPE, that will have weakness DW. */
ds_t
@@ -2509,13 +2731,59 @@ ds_merge (ds_t ds1, ds_t ds2)
return ds;
}
+/* Dump information about the dependence status S. */
+static void
+dump_ds (FILE *f, ds_t s)
+{
+ fprintf (f, "{");
+
+ if (s & BEGIN_DATA)
+ fprintf (f, "BEGIN_DATA: %d; ", get_dep_weak_1 (s, BEGIN_DATA));
+ if (s & BE_IN_DATA)
+ fprintf (f, "BE_IN_DATA: %d; ", get_dep_weak_1 (s, BE_IN_DATA));
+ if (s & BEGIN_CONTROL)
+ fprintf (f, "BEGIN_CONTROL: %d; ", get_dep_weak_1 (s, BEGIN_CONTROL));
+ if (s & BE_IN_CONTROL)
+ fprintf (f, "BE_IN_CONTROL: %d; ", get_dep_weak_1 (s, BE_IN_CONTROL));
+
+ if (s & HARD_DEP)
+ fprintf (f, "HARD_DEP; ");
+
+ if (s & DEP_TRUE)
+ fprintf (f, "DEP_TRUE; ");
+ if (s & DEP_ANTI)
+ fprintf (f, "DEP_ANTI; ");
+ if (s & DEP_OUTPUT)
+ fprintf (f, "DEP_OUTPUT; ");
+
+ fprintf (f, "}");
+}
+
+void
+debug_ds (ds_t s)
+{
+ dump_ds (stderr, s);
+ fprintf (stderr, "\n");
+}
+
#ifdef INSN_SCHEDULING
#ifdef ENABLE_CHECKING
/* Verify that dependence type and status are consistent.
If RELAXED_P is true, then skip dep_weakness checks. */
static void
-check_dep_status (enum reg_note dt, ds_t ds, bool relaxed_p)
+check_dep (dep_t dep, bool relaxed_p)
{
+ enum reg_note dt = DEP_TYPE (dep);
+ ds_t ds = DEP_STATUS (dep);
+
+ gcc_assert (DEP_PRO (dep) != DEP_CON (dep));
+
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
+ {
+ gcc_assert (ds == -1);
+ return;
+ }
+
/* Check that dependence type contains the same bits as the status. */
if (dt == REG_DEP_TRUE)
gcc_assert (ds & DEP_TRUE);
--- sched-int.h (/mirror/endeed/calc-priorities/gcc) (revision 1077)
+++ sched-int.h (/mirror/endeed/fdl-pool/gcc) (revision 1077)
@@ -54,35 +54,40 @@ struct _dep
/* Consumer. */
rtx con;
- /* Dependency kind (aka dependency major type). This field is superseded
- by STATUS below. Though, it is still in place because all the backends
- use it. */
- enum reg_note kind;
+ /* Dependency major type. This field is superseded by STATUS below.
+ Though, it is still in place because all the backends use it. */
+ enum reg_note type;
/* Dependency status. This field holds all dependency types and additional
information for speculative dependencies. */
ds_t status;
};
-typedef struct _dep *dep_t;
+
+typedef struct _dep dep_def;
+typedef dep_def *dep_t;
#define DEP_PRO(D) ((D)->pro)
#define DEP_CON(D) ((D)->con)
-#define DEP_KIND(D) ((D)->kind)
+#define DEP_TYPE(D) ((D)->type)
#define DEP_STATUS(D) ((D)->status)
/* Functions to work with dep. */
+extern void init_dep_1 (dep_t, rtx, rtx, enum reg_note, ds_t);
extern void init_dep (dep_t, rtx, rtx, enum reg_note);
+extern void debug_dep (dep_t);
+
/* Definition of this struct resides below. */
struct _dep_node;
+typedef struct _dep_node *dep_node_t;
/* A link in the dependency list. This is essentially an equivalent of a
single {INSN, DEPS}_LIST rtx. */
struct _dep_link
{
/* Dep node with all the data. */
- struct _dep_node *node;
+ dep_node_t node;
/* Next link in the list. For the last one it is NULL. */
struct _dep_link *next;
@@ -107,40 +112,23 @@ typedef struct _dep_link *dep_link_t;
#define DEP_LINK_DEP(N) (DEP_NODE_DEP (DEP_LINK_NODE (N)))
#define DEP_LINK_PRO(N) (DEP_PRO (DEP_LINK_DEP (N)))
#define DEP_LINK_CON(N) (DEP_CON (DEP_LINK_DEP (N)))
-#define DEP_LINK_KIND(N) (DEP_KIND (DEP_LINK_DEP (N)))
+#define DEP_LINK_TYPE(N) (DEP_TYPE (DEP_LINK_DEP (N)))
#define DEP_LINK_STATUS(N) (DEP_STATUS (DEP_LINK_DEP (N)))
-void debug_dep_links (dep_link_t);
-
/* A list of dep_links. Lists of this type are now used instead of rtx
LOG_LINKS and alike lists. */
struct _deps_list
{
+ /* First element. */
dep_link_t first;
+
+ /* Total number of elements in the list. */
+ int n_links;
};
typedef struct _deps_list *deps_list_t;
#define DEPS_LIST_FIRST(L) ((L)->first)
-
-/* Macro to walk through deps_list. */
-#define FOR_EACH_DEP_LINK(LINK, LIST) \
- for ((LINK) = DEPS_LIST_FIRST (LIST); \
- (LINK) != NULL; \
- (LINK) = DEP_LINK_NEXT (LINK))
-
-/* Functions to work with deps_list. */
-
-deps_list_t create_deps_list (bool);
-void free_deps_list (deps_list_t);
-void delete_deps_list (deps_list_t);
-bool deps_list_empty_p (deps_list_t);
-void debug_deps_list (deps_list_t);
-void add_back_dep_to_deps_list (deps_list_t, dep_t);
-dep_link_t find_link_by_pro_in_deps_list (deps_list_t, rtx);
-dep_link_t find_link_by_con_in_deps_list (deps_list_t, rtx);
-void copy_deps_list_change_con (deps_list_t, deps_list_t, rtx);
-
-void move_dep_link (dep_link_t, deps_list_t);
+#define DEPS_LIST_N_LINKS(L) ((L)->n_links)
/* Suppose we have a dependence Y between insn pro1 and con1, where pro1 has
additional dependents con0 and con2, and con1 is dependent on additional
@@ -248,7 +236,6 @@ struct _dep_node
/* Forward link. */
struct _dep_link forw;
};
-typedef struct _dep_node *dep_node_t;
#define DEP_NODE_BACK(N) (&(N)->back)
#define DEP_NODE_DEP(N) (&(N)->dep)
@@ -475,15 +462,17 @@ extern struct sched_info *current_sched_
struct haifa_insn_data
{
- /* NB: We can't place 'struct _deps_list' here instead of deps_list_t into
- h_i_d because when h_i_d extends, addresses of the deps_list->first
- change without updating deps_list->first->next->prev_nextp. Thus
- BACK_DEPS and RESOLVED_BACK_DEPS are allocated on the heap and FORW_DEPS
- list is allocated on the obstack. */
+ /* We can't place 'struct _deps_list' into h_i_d instead of deps_list_t
+ because when h_i_d extends, addresses of the deps_list->first
+ change without updating deps_list->first->next->prev_nextp. */
- /* A list of backward dependencies. The insn is a consumer of all the
+ /* A list of hard backward dependencies. The insn is a consumer of all the
deps mentioned here. */
- deps_list_t back_deps;
+ deps_list_t hard_back_deps;
+
+ /* A list of speculative (weak) dependencies. The insn is a consumer of all
+ the deps mentioned here. */
+ deps_list_t spec_back_deps;
/* A list of insns which depend on the instruction. Unlike 'back_deps',
it represents forward dependencies. */
@@ -492,6 +481,11 @@ struct haifa_insn_data
/* A list of scheduled producers of the instruction. Links are being moved
from 'back_deps' to 'resolved_back_deps' while scheduling. */
deps_list_t resolved_back_deps;
+
+ /* A list of scheduled consumers of the instruction. Links are being moved
+ from 'forw_deps' to 'resolved_forw_deps' while scheduling to speed up the
+ search in 'forw_deps'. */
+ deps_list_t resolved_forw_deps;
/* Logical uid gives the original ordering of the insns. */
int luid;
@@ -499,11 +493,6 @@ struct haifa_insn_data
/* A priority for each insn. */
int priority;
- /* The number of incoming edges in the forward dependency graph.
- As scheduling proceeds, counts are decreased. An insn moves to
- the ready queue when its counter reaches zero. */
- int dep_count;
-
/* Number of instructions referring to this insn. */
int ref_count;
@@ -563,13 +552,16 @@ extern regset *glat_start, *glat_end;
/* Accessor macros for h_i_d. There are more in haifa-sched.c and
sched-rgn.c. */
-#define INSN_BACK_DEPS(INSN) (h_i_d[INSN_UID (INSN)].back_deps)
+
+#define INSN_HARD_BACK_DEPS(INSN) (h_i_d[INSN_UID (INSN)].hard_back_deps)
+#define INSN_SPEC_BACK_DEPS(INSN) (h_i_d[INSN_UID (INSN)].spec_back_deps)
#define INSN_FORW_DEPS(INSN) (h_i_d[INSN_UID (INSN)].forw_deps)
#define INSN_RESOLVED_BACK_DEPS(INSN) \
(h_i_d[INSN_UID (INSN)].resolved_back_deps)
+#define INSN_RESOLVED_FORW_DEPS(INSN) \
+ (h_i_d[INSN_UID (INSN)].resolved_forw_deps)
#define INSN_LUID(INSN) (h_i_d[INSN_UID (INSN)].luid)
#define CANT_MOVE(insn) (h_i_d[INSN_UID (insn)].cant_move)
-#define INSN_DEP_COUNT(INSN) (h_i_d[INSN_UID (INSN)].dep_count)
#define INSN_PRIORITY(INSN) (h_i_d[INSN_UID (INSN)].priority)
#define INSN_PRIORITY_KNOWN(INSN) (h_i_d[INSN_UID (INSN)].priority_known)
#define INSN_PRIORITY_KNOWN_P(INSN) \
@@ -705,13 +697,16 @@ enum SPEC_TYPES_OFFSETS {
#define HARD_DEP (DEP_ANTI << 1)
/* This represents the results of calling sched-deps.c functions,
- which modify dependencies. Possible choices are: a dependence
- is already present and nothing has been changed; a dependence type
- has been changed; brand new dependence has been created. */
+ which modify dependencies. */
enum DEPS_ADJUST_RESULT {
- DEP_PRESENT = 1,
- DEP_CHANGED = 2,
- DEP_CREATED = 3
+ /* No dependence needed (e.g. producer == consumer). */
+ DEP_NODEP,
+ /* Dependence is already present and wasn't modified. */
+ DEP_PRESENT,
+ /* Existing dependence was modified to include additional information. */
+ DEP_CHANGED,
+ /* New dependence has been created. */
+ DEP_CREATED
};
/* Represents the bits that can be set in the flags field of the
@@ -747,6 +742,9 @@ enum SPEC_SCHED_FLAGS {
extern FILE *sched_dump;
extern int sched_verbose;
+extern spec_info_t spec_info;
+extern bool haifa_recovery_block_was_added_during_scheduling_p;
+
/* Exception Free Loads:
We define five classes of speculative loads: IFREE, IRISKY,
@@ -832,23 +830,19 @@ extern void print_insn (char *, rtx, int
extern bool sched_insns_conditions_mutex_p (rtx, rtx);
extern void add_dependence (rtx, rtx, enum reg_note);
extern void sched_analyze (struct deps *, rtx, rtx);
+extern bool deps_pools_are_empty_p (void);
+extern void sched_free_deps (rtx, rtx, bool);
extern void init_deps (struct deps *);
extern void free_deps (struct deps *);
extern void init_deps_global (void);
extern void finish_deps_global (void);
-extern void add_forw_dep (dep_link_t);
-extern void compute_forward_dependences (rtx, rtx);
extern void init_dependency_caches (int);
extern void free_dependency_caches (void);
extern void extend_dependency_caches (int, bool);
-extern enum DEPS_ADJUST_RESULT add_or_update_back_dep (rtx, rtx,
- enum reg_note, ds_t);
-extern void add_or_update_back_forw_dep (rtx, rtx, enum reg_note, ds_t);
-extern void add_back_forw_dep (rtx, rtx, enum reg_note, ds_t);
-extern void delete_back_forw_dep (dep_link_t);
extern dw_t get_dep_weak (ds_t, ds_t);
extern ds_t set_dep_weak (ds_t, ds_t, dw_t);
extern ds_t ds_merge (ds_t, ds_t);
+extern void debug_ds (ds_t);
/* Functions in haifa-sched.c. */
extern int haifa_classify_insn (rtx);
@@ -867,6 +861,7 @@ extern void sched_finish (void);
extern int try_ready (rtx);
extern void * xrecalloc (void *, size_t, size_t, size_t);
+extern bool sched_insn_is_legitimate_for_speculation_p (rtx, ds_t);
extern void unlink_bb_notes (basic_block, basic_block);
extern void add_block (basic_block, basic_block);
extern void attach_life_info (void);
@@ -877,6 +872,145 @@ extern void check_reg_live (bool);
#endif
/* Functions in sched-rgn.c. */
+
extern void debug_dependencies (rtx, rtx);
+/* sched-deps.c interface to walk, add, search, update, resolve, delete
+ and debug instruction dependencies. */
+
+/* Constants defining dependences lists. */
+
+/* No list. */
+#define SD_LIST_NONE (0)
+
+/* hard_back_deps. */
+#define SD_LIST_HARD_BACK (1)
+
+/* spec_back_deps. */
+#define SD_LIST_SPEC_BACK (2)
+
+/* forw_deps. */
+#define SD_LIST_FORW (4)
+
+/* resolved_back_deps. */
+#define SD_LIST_RES_BACK (8)
+
+/* resolved_forw_deps. */
+#define SD_LIST_RES_FORW (16)
+
+#define SD_LIST_BACK (SD_LIST_HARD_BACK | SD_LIST_SPEC_BACK)
+
+/* A type to hold above flags. */
+typedef int sd_list_types_def;
+
+extern void sd_next_list (rtx, sd_list_types_def *, deps_list_t *, bool *);
+
+/* Iterator to walk through, resolve and delete dependencies. */
+struct _sd_iterator
+{
+ /* What lists to walk. Can be any combination of SD_LIST_* flags. */
+ sd_list_types_def types;
+
+ /* Instruction whose dependence lists are walked. */
+ rtx insn;
+
+ /* Pointer to the next field of the previous element. This is not
+ simply a pointer to the next element to allow easy deletion from the
+ list. When a dep is being removed from the list, the iterator
+ will automatically advance because the value in *linkp will start
+ referring to the next element. */
+ dep_link_t *linkp;
+
+ /* True if the current list is a resolved one. */
+ bool resolved_p;
+};
+
+typedef struct _sd_iterator sd_iterator_def;
+
+/* ??? We can move some definitions that are used in below inline functions
+ out of sched-int.h to sched-deps.c provided that the below functions will
+ become global externals.
+ These definitions include:
+ * struct _deps_list: opaque pointer is needed at global scope.
+ * struct _dep_link: opaque pointer is needed at scope of sd_iterator_def.
+ * struct _dep_node: opaque pointer is needed at scope of
+ struct _deps_link. */
+
+/* Return initialized iterator. */
+static inline sd_iterator_def
+sd_iterator_start (rtx insn, sd_list_types_def types)
+{
+ /* A NULL dep_link; pointing LINKP at it makes *linkp yield NULL. */
+ static dep_link_t null_link = NULL;
+
+ sd_iterator_def i;
+
+ i.types = types;
+ i.insn = insn;
+ i.linkp = &null_link;
+
+ /* Avoid an 'uninitialized' warning. */
+ i.resolved_p = false;
+
+ return i;
+}
+
+/* Return true if there is a current dependence, storing it in *DEP_PTR. */
+static inline bool
+sd_iterator_cond (sd_iterator_def *it_ptr, dep_t *dep_ptr)
+{
+ dep_link_t link = *it_ptr->linkp;
+
+ if (link != NULL)
+ {
+ *dep_ptr = DEP_LINK_DEP (link);
+ return true;
+ }
+ else
+ {
+ sd_list_types_def types = it_ptr->types;
+
+ if (types != SD_LIST_NONE)
+ /* Switch to next list. */
+ {
+ deps_list_t list;
+
+ sd_next_list (it_ptr->insn,
+ &it_ptr->types, &list, &it_ptr->resolved_p);
+
+ it_ptr->linkp = &DEPS_LIST_FIRST (list);
+
+ return sd_iterator_cond (it_ptr, dep_ptr);
+ }
+
+ *dep_ptr = NULL;
+ return false;
+ }
+}
+
+/* Advance iterator. */
+static inline void
+sd_iterator_next (sd_iterator_def *it_ptr)
+{
+ it_ptr->linkp = &DEP_LINK_NEXT (*it_ptr->linkp);
+}
+
+/* A loop wrapper for walking dependence lists. */
+#define FOR_EACH_DEP(INSN, LIST_TYPES, ITER, DEP) \
+ for ((ITER) = sd_iterator_start ((INSN), (LIST_TYPES)); \
+ sd_iterator_cond (&(ITER), &(DEP)); \
+ sd_iterator_next (&(ITER)))
+
+extern int sd_lists_size (rtx, sd_list_types_def);
+extern bool sd_lists_empty_p (rtx, sd_list_types_def);
+extern void sd_init_insn (rtx);
+extern void sd_finish_insn (rtx);
+extern dep_t sd_find_dep_between (rtx, rtx, bool);
+extern void sd_add_dep (dep_t, bool);
+extern enum DEPS_ADJUST_RESULT sd_add_or_update_dep (dep_t, bool);
+extern void sd_resolve_dep (sd_iterator_def);
+extern void sd_copy_back_deps (rtx, rtx, bool);
+extern void sd_delete_dep (sd_iterator_def);
+extern void sd_debug_lists (rtx, sd_list_types_def);
+
#endif /* GCC_SCHED_INT_H */
--- sched-rgn.c (/mirror/endeed/calc-priorities/gcc) (revision 1077)
+++ sched-rgn.c (/mirror/endeed/fdl-pool/gcc) (revision 1077)
@@ -287,7 +287,7 @@ static int is_exception_free (rtx, int,
static bool sets_likely_spilled (rtx);
static void sets_likely_spilled_1 (rtx, rtx, void *);
static void add_branch_dependences (rtx, rtx);
-static void compute_block_backward_dependences (int);
+static void compute_block_dependences (int);
static void init_regions (void);
static void schedule_region (int);
@@ -1710,11 +1710,12 @@ update_live (rtx insn, int src)
static void
set_spec_fed (rtx load_insn)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (load_insn))
- if (DEP_LINK_KIND (link) == REG_DEP_TRUE)
- FED_BY_SPEC_LOAD (DEP_LINK_CON (link)) = 1;
+ FOR_EACH_DEP (load_insn, SD_LIST_FORW, sd_it, dep)
+ if (DEP_TYPE (dep) == REG_DEP_TRUE)
+ FED_BY_SPEC_LOAD (DEP_CON (dep)) = 1;
}
/* On the path from the insn to load_insn_bb, find a conditional
@@ -1723,18 +1724,19 @@ branch depending on insn, that guards th
static int
find_conditional_protection (rtx insn, int load_insn_bb)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
/* Iterate through DEF-USE forward dependences. */
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
{
- rtx next = DEP_LINK_CON (link);
+ rtx next = DEP_CON (dep);
if ((CONTAINING_RGN (BLOCK_NUM (next)) ==
CONTAINING_RGN (BB_TO_BLOCK (load_insn_bb)))
&& IS_REACHABLE (INSN_BB (next), load_insn_bb)
&& load_insn_bb != INSN_BB (next)
- && DEP_LINK_KIND (link) == REG_DEP_TRUE
+ && DEP_TYPE (dep) == REG_DEP_TRUE
&& (JUMP_P (next)
|| find_conditional_protection (next, load_insn_bb)))
return 1;
@@ -1759,14 +1761,15 @@ find_conditional_protection (rtx insn, i
static int
is_conditionally_protected (rtx load_insn, int bb_src, int bb_trg)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (load_insn))
+ FOR_EACH_DEP (load_insn, SD_LIST_BACK, sd_it, dep)
{
- rtx insn1 = DEP_LINK_PRO (link);
+ rtx insn1 = DEP_PRO (dep);
/* Must be a DEF-USE dependence upon non-branch. */
- if (DEP_LINK_KIND (link) != REG_DEP_TRUE
+ if (DEP_TYPE (dep) != REG_DEP_TRUE
|| JUMP_P (insn1))
continue;
@@ -1809,27 +1812,29 @@ is_conditionally_protected (rtx load_ins
static int
is_pfree (rtx load_insn, int bb_src, int bb_trg)
{
- dep_link_t back_link;
+ sd_iterator_def back_sd_it;
+ dep_t back_dep;
candidate *candp = candidate_table + bb_src;
if (candp->split_bbs.nr_members != 1)
/* Must have exactly one escape block. */
return 0;
- FOR_EACH_DEP_LINK (back_link, INSN_BACK_DEPS (load_insn))
+ FOR_EACH_DEP (load_insn, SD_LIST_BACK, back_sd_it, back_dep)
{
- rtx insn1 = DEP_LINK_PRO (back_link);
+ rtx insn1 = DEP_PRO (back_dep);
- if (DEP_LINK_KIND (back_link) == REG_DEP_TRUE)
+ if (DEP_TYPE (back_dep) == REG_DEP_TRUE)
+ /* Found a DEF-USE dependence (insn1, load_insn). */
{
- /* Found a DEF-USE dependence (insn1, load_insn). */
- dep_link_t fore_link;
+ sd_iterator_def fore_sd_it;
+ dep_t fore_dep;
- FOR_EACH_DEP_LINK (fore_link, INSN_FORW_DEPS (insn1))
+ FOR_EACH_DEP (insn1, SD_LIST_FORW, fore_sd_it, fore_dep)
{
- rtx insn2 = DEP_LINK_CON (fore_link);
+ rtx insn2 = DEP_CON (fore_dep);
- if (DEP_LINK_KIND (fore_link) == REG_DEP_TRUE)
+ if (DEP_TYPE (fore_dep) == REG_DEP_TRUE)
{
/* Found a DEF-USE dependence (insn1, insn2). */
if (haifa_classify_insn (insn2) != PFREE_CANDIDATE)
@@ -1862,7 +1867,7 @@ is_prisky (rtx load_insn, int bb_src, in
if (FED_BY_SPEC_LOAD (load_insn))
return 1;
- if (deps_list_empty_p (INSN_BACK_DEPS (load_insn)))
+ if (sd_lists_empty_p (load_insn, SD_LIST_BACK))
/* Dependence may 'hide' out of the region. */
return 1;
@@ -2098,7 +2103,8 @@ new_ready (rtx next, ds_t ts)
if (not_ex_free
/* We are here because is_exception_free () == false.
But we possibly can handle that with control speculation. */
- && current_sched_info->flags & DO_SPECULATION)
+ && (current_sched_info->flags & DO_SPECULATION)
+ && (spec_info->mask & BEGIN_CONTROL))
/* Here we got new control-speculative instruction. */
ts = set_dep_weak (ts, BEGIN_CONTROL, MAX_DEP_WEAK);
else
@@ -2286,8 +2292,7 @@ add_branch_dependences (rtx head, rtx ta
if (!NOTE_P (insn))
{
if (last != 0
- && (find_link_by_pro_in_deps_list (INSN_BACK_DEPS (last), insn)
- == NULL))
+ && sd_find_dep_between (insn, last, false) == NULL)
{
if (! sched_insns_conditions_mutex_p (last, insn))
add_dependence (last, insn, REG_DEP_ANTI);
@@ -2492,7 +2497,7 @@ propagate_deps (int bb, struct deps *pre
pred_deps->pending_write_mems = 0;
}
-/* Compute backward dependences inside bb. In a multiple blocks region:
+/* Compute dependences inside bb. In a multiple blocks region:
(1) a bb is analyzed after its predecessors, and (2) the lists in
effect at the end of bb (after analyzing for bb) are inherited by
bb's successors.
@@ -2510,7 +2515,7 @@ propagate_deps (int bb, struct deps *pre
similar, and the result is interblock dependences in the region. */
static void
-compute_block_backward_dependences (int bb)
+compute_block_dependences (int bb)
{
rtx head, tail;
struct deps tmp_deps;
@@ -2520,6 +2525,7 @@ compute_block_backward_dependences (int
/* Do the analysis for this block. */
gcc_assert (EBB_FIRST_BB (bb) == EBB_LAST_BB (bb));
get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
+
sched_analyze (&tmp_deps, head, tail);
add_branch_dependences (head, tail);
@@ -2528,6 +2534,20 @@ compute_block_backward_dependences (int
/* Free up the INSN_LISTs. */
free_deps (&tmp_deps);
+
+ if (targetm.sched.dependencies_evaluation_hook)
+ targetm.sched.dependencies_evaluation_hook (head, tail);
+}
+
+static void
+free_block_dependencies (int bb)
+{
+ rtx head;
+ rtx tail;
+
+ get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
+
+ sched_free_deps (head, tail, true);
}
/* Remove all INSN_LISTs and EXPR_LISTs from the pending lists and add
@@ -2547,7 +2567,6 @@ free_pending_lists (void)
}
}
\f
-
/* Print dependences for debugging starting from FROM_BB.
Callable from debugger. */
void
@@ -2591,8 +2610,6 @@ void debug_dependencies (rtx head, rtx t
for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
{
- dep_link_t link;
-
if (! INSN_P (insn))
{
int n;
@@ -2614,7 +2631,7 @@ void debug_dependencies (rtx head, rtx t
INSN_UID (insn),
INSN_CODE (insn),
BLOCK_NUM (insn),
- INSN_DEP_COUNT (insn),
+ sd_lists_size (insn, SD_LIST_BACK),
INSN_PRIORITY (insn),
insn_cost (insn));
@@ -2624,8 +2641,13 @@ void debug_dependencies (rtx head, rtx t
print_reservation (sched_dump, insn);
fprintf (sched_dump, "\t: ");
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
- fprintf (sched_dump, "%d ", INSN_UID (DEP_LINK_CON (link)));
+ {
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
+ fprintf (sched_dump, "%d ", INSN_UID (DEP_CON (dep)));
+ }
fprintf (sched_dump, "\n");
}
}
@@ -2683,23 +2705,9 @@ schedule_region (int rgn)
for (bb = 0; bb < current_nr_blocks; bb++)
init_deps (bb_deps + bb);
- /* Compute backward dependencies. */
+ /* Compute dependencies. */
for (bb = 0; bb < current_nr_blocks; bb++)
- compute_block_backward_dependences (bb);
-
- /* Compute forward dependencies. */
- for (bb = current_nr_blocks - 1; bb >= 0; bb--)
- {
- rtx head, tail;
-
- gcc_assert (EBB_FIRST_BB (bb) == EBB_LAST_BB (bb));
- get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
-
- compute_forward_dependences (head, tail);
-
- if (targetm.sched.dependencies_evaluation_hook)
- targetm.sched.dependencies_evaluation_hook (head, tail);
- }
+ compute_block_dependences (bb);
free_pending_lists ();
@@ -2844,7 +2852,6 @@ schedule_region (int rgn)
/* Sanity check: verify that all region insns were scheduled. */
gcc_assert (sched_rgn_n_insns == rgn_n_insns);
-
/* Done with this region. */
if (current_nr_blocks > 1)
@@ -2855,6 +2862,13 @@ schedule_region (int rgn)
sbitmap_vector_free (ancestor_edges);
free (rgn_edges);
}
+
+ /* Free dependencies. */
+ for (bb = 0; bb < current_nr_blocks; ++bb)
+ free_block_dependencies (bb);
+
+ gcc_assert (haifa_recovery_block_was_added_during_scheduling_p
+ || deps_pools_are_empty_p ());
}
/* Indexed by region, holds the number of death notes found in that region.
--- config/ia64/ia64.c (/mirror/endeed/calc-priorities/gcc) (revision 1077)
+++ config/ia64/ia64.c (/mirror/endeed/fdl-pool/gcc) (revision 1077)
@@ -6234,28 +6234,37 @@ ia64_dependencies_evaluation_hook (rtx h
if (INSN_P (insn)
&& ia64_safe_itanium_class (insn) == ITANIUM_CLASS_IALU)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
+ bool has_mem_op_consumer_p = false;
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
{
enum attr_itanium_class c;
- if (DEP_LINK_KIND (link) != REG_DEP_TRUE)
+ if (DEP_TYPE (dep) != REG_DEP_TRUE)
continue;
- next = DEP_LINK_CON (link);
+ next = DEP_CON (dep);
c = ia64_safe_itanium_class (next);
if ((c == ITANIUM_CLASS_ST
|| c == ITANIUM_CLASS_STF)
&& ia64_st_address_bypass_p (insn, next))
- break;
+ {
+ has_mem_op_consumer_p = true;
+ break;
+ }
else if ((c == ITANIUM_CLASS_LD
|| c == ITANIUM_CLASS_FLD
|| c == ITANIUM_CLASS_FLDP)
&& ia64_ld_address_bypass_p (insn, next))
- break;
+ {
+ has_mem_op_consumer_p = true;
+ break;
+ }
}
- insn->call = link != 0;
+
+ insn->call = has_mem_op_consumer_p;
}
}
@@ -6532,14 +6541,15 @@ ia64_dfa_new_cycle (FILE *dump, int verb
if (c != ITANIUM_CLASS_MMMUL && c != ITANIUM_CLASS_MMSHF)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
int d = -1;
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
- if (DEP_LINK_KIND (link) == REG_DEP_TRUE)
+ FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
+ if (DEP_TYPE (dep) == REG_DEP_TRUE)
{
enum attr_itanium_class dep_class;
- rtx dep_insn = DEP_LINK_PRO (link);
+ rtx dep_insn = DEP_PRO (dep);
dep_class = ia64_safe_itanium_class (dep_insn);
if ((dep_class == ITANIUM_CLASS_MMMUL
@@ -7069,13 +7079,14 @@ ia64_gen_check (rtx insn, rtx label, boo
As long as patterns are unique for each instruction, this can be
accomplished by matching ORIG_PAT fields. */
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
int check_no = 0;
rtx orig_pat = ORIG_PAT (insn);
- FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_RES_BACK, sd_it, dep)
{
- rtx x = DEP_LINK_PRO (link);
+ rtx x = DEP_PRO (dep);
if (ORIG_PAT (x) == orig_pat)
check_no = spec_check_no[INSN_UID (x)];
--- config/rs6000/rs6000.c (/mirror/endeed/calc-priorities/gcc) (revision 1077)
+++ config/rs6000/rs6000.c (/mirror/endeed/fdl-pool/gcc) (revision 1077)
@@ -17691,7 +17691,7 @@ rs6000_is_costly_dependence (dep_t dep,
if (rs6000_sched_costly_dep == true_store_to_load_dep_costly
&& is_load_insn (next)
&& is_store_insn (insn)
- && DEP_KIND (dep) == REG_DEP_TRUE)
+ && DEP_TYPE (dep) == REG_DEP_TRUE)
/* Prevent load after store in the same group if it is a true
dependence. */
return true;
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PING][PATCH]: Improvements to sched-deps.c interface
2007-04-13 7:00 [PATCH]: Improvements to sched-deps.c interface Maxim Kuvyrkov
@ 2007-06-07 8:27 ` Maxim Kuvyrkov
2007-06-14 21:43 ` Vladimir Makarov
0 siblings, 1 reply; 15+ messages in thread
From: Maxim Kuvyrkov @ 2007-06-07 8:27 UTC (permalink / raw)
To: Maxim Kuvyrkov; +Cc: Vladimir Makarov, James E Wilson, Ayal Zaks, gcc-patches
Maxim Kuvyrkov wrote:
> Hi!
>
> This patch concludes stages <1>-<3> in the scheduler improvements plan
> posted at http://gcc.gnu.org/ml/gcc/2007-03/msg00105.html :
>
> 1. Define clear interfaces to sched-deps.c
...
>
> 2. Use alloc_pools for dependencies creation / deletion.
...
> 3. Generate forward dependencies simultaneously with backward ones.
...
PING!
--
Maxim
* Re: [PING][PATCH]: Improvements to sched-deps.c interface
2007-06-07 8:27 ` [PING][PATCH]: " Maxim Kuvyrkov
@ 2007-06-14 21:43 ` Vladimir Makarov
2007-06-14 22:49 ` David Edelsohn
2007-07-01 18:20 ` Maxim Kuvyrkov
0 siblings, 2 replies; 15+ messages in thread
From: Vladimir Makarov @ 2007-06-14 21:43 UTC (permalink / raw)
To: Maxim Kuvyrkov; +Cc: James E Wilson, Ayal Zaks, gcc-patches, David Edelsohn
Maxim Kuvyrkov wrote:
> Maxim Kuvyrkov wrote:
>> Hi!
>>
>> This patch concludes stages <1>-<3> in the scheduler improvements
>> plan posted at http://gcc.gnu.org/ml/gcc/2007-03/msg00105.html :
>>
>> 1. Define clear interfaces to sched-deps.c
>
> ...
>
>>
>> 2. Use alloc_pools for dependencies creation / deletion.
>
> ...
>
>> 3. Generate forward dependencies simultaneously with backward ones.
>
> ...
>
> PING!
>
>
Maxim, sorry for the delay with the review. As I see, that was your second ping.
In general, the patch is ok, although I have minor comments (see below).
It is very hard to read the sched-deps.c patch; please next time check the
patches, and if they look like a mess, then use context format
instead of unified.
I don't think that you need approval for the rs6000.c patch. It is so
obvious (just renaming). In any case, I CCed this email to the rs6000
maintainer David Edelsohn in case he wants to comment on the rs6000.c part
(David, rs6000.c is at the end of the email).
Please wait for approval from Jim Wilson for ia64.c (this part is at
the end of the email) and from Ayal Zaks for ddg.c (although ddg.c uses
functions from the scheduler, I believe it is a part of the modulo scheduler.
Ayal, the ddg.c part is right after the ChangeLog.). They look ok to me, but
I am not a maintainer of these files.
The patch is ok with me once the following comments are fixed.
Vlad
ChangeLog:
...
> * sched-deps.c (init_dep_1): Make global.
> (deps_list_empty_p): Use 'n_links' field.
Please add 'Make it static.'
...
> forward_dependency_cache.
Please add an entry for the missing free_dependency_caches.
...
>
> * haifa-sched.c (spec_info): Make global.
...
> (create_recovery_block): Set new variable.
^ variables
Two variables are set up.
...
> (create_check_block_twin, fix_recovery_deps): Update to use new
> interfaces.
> (sched_insn_is_legitimate_for_speculation_p): New function.
> (speculate_insn): Move checking logic to
> sched_insn_is_legitimate_for_speculation_p ().
Please add the missing entry for sched_remove_insn.
...
>
> * ddg.c (create_ddg_dependence, build_intra_loop_deps): Update to use
> new interfaces. Remove separate computation of forward dependencies.
> Free dependencies sched-deps dependencies.
>
Please change onto
* ddg.c (create_ddg_dependence): Update to use
new interfaces.
(build_intra_loop_deps): Ditto. Remove separate computation
of forward dependencies. Free sched-deps
dependencies.
...
> --- ddg.c (/mirror/endeed/calc-priorities/gcc) (revision 1077)
> +++ ddg.c (/mirror/endeed/fdl-pool/gcc) (revision 1077)
> @@ -166,9 +166,9 @@ create_ddg_dependence (ddg_ptr g, ddg_no
> gcc_assert (link);
>
> /* Note: REG_DEP_ANTI applies to MEM ANTI_DEP as well!! */
> - if (DEP_KIND (link) == REG_DEP_ANTI)
> + if (DEP_TYPE (link) == REG_DEP_ANTI)
> t = ANTI_DEP;
> - else if (DEP_KIND (link) == REG_DEP_OUTPUT)
> + else if (DEP_TYPE (link) == REG_DEP_OUTPUT)
> t = OUTPUT_DEP;
> latency = dep_cost (link);
>
> @@ -384,7 +384,6 @@ build_intra_loop_deps (ddg_ptr g)
> /* Hold the dependency analysis state during dependency calculations. */
> struct deps tmp_deps;
> rtx head, tail;
> - dep_link_t link;
>
> /* Build the dependence information, using the sched_analyze function. */
> init_deps_global ();
> @@ -399,19 +398,19 @@ build_intra_loop_deps (ddg_ptr g)
> for (i = 0; i < g->num_nodes; i++)
> {
> ddg_node_ptr dest_node = &g->nodes[i];
> + sd_iterator_def sd_it;
> + dep_t dep;
>
> if (! INSN_P (dest_node->insn))
> continue;
>
> - FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (dest_node->insn))
> + FOR_EACH_DEP (dest_node->insn, SD_LIST_BACK, sd_it, dep)
> {
> - dep_t dep = DEP_LINK_DEP (link);
> ddg_node_ptr src_node = get_node_of_insn (g, DEP_PRO (dep));
>
> if (!src_node)
> continue;
>
> - add_forw_dep (link);
> create_ddg_dependence (g, src_node, dest_node, dep);
> }
>
> @@ -436,6 +435,9 @@ build_intra_loop_deps (ddg_ptr g)
> /* Free the INSN_LISTs. */
> finish_deps_global ();
> free_deps (&tmp_deps);
> +
> + /* Free dependencies. */
> + sched_free_deps (head, tail, false);
> }
>
>
> --- haifa-sched.c (/mirror/endeed/calc-priorities/gcc) (revision 1077)
> +++ haifa-sched.c (/mirror/endeed/fdl-pool/gcc) (revision 1077)
...
>
> + /* This is the place where scheduler doesn't *basically* need backward and
> + forward dependencies for INSN anymore. Nevertheless their are used in
they
> + heuristics in rank_for_schedule (), early_queue_to_ready () and in
> + some targets (e.g. rs6000). Thus the earliest place where we *can*
> + remove dependencies is after targetm.sched.md_finish () call in
> + schedule_block (). But, on the other side, the safest place to remove
> + dependencies is when we finishing scheduling entire region. As we
^ are
...
> --- sched-deps.c (/mirror/endeed/calc-priorities/gcc) (revision 1077)
> +++ sched-deps.c (/mirror/endeed/fdl-pool/gcc) (revision 1077)
...
> +/* Define flags for dump_dep (). */
> +
Please describe the constants. Their meaning is pretty obvious to
you and me, but may not be to others.
> +#define DUMP_DEP_PRO (2)
> +#define DUMP_DEP_CON (4)
> +#define DUMP_DEP_STATUS (8)
> +
> +#define DUMP_DEP_ALL (DUMP_DEP_PRO | DUMP_DEP_CON | DUMP_DEP_STATUS)
> +
> +/* Dump DEP to DUMP.
> + FLAGS is a bit mask specifying what information about DEP needs
> + to be printed. If FLAGS has the very first bit set then dump
> + all fields of DEP. */
I don't understand why you need a special case with 1. You can and
should use DUMP_DEP_ALL and you should start the enumeration of the
above constants with 1 (not 2).
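The renumbering being asked for can be shown in a small stand-alone sketch. The constant names match the patch, but the stub helper and the values starting at 1 are illustrative only, not the actual sched-deps.c code:

```c
/* Hypothetical renumbering per the review: start the flag values at 1
   (not 2), so that DUMP_DEP_ALL is simply the union of all flags and
   no special-cased "very first bit" value is needed.  Stub code.  */
#include <assert.h>

#define DUMP_DEP_PRO    (1)
#define DUMP_DEP_CON    (2)
#define DUMP_DEP_STATUS (4)

#define DUMP_DEP_ALL    (DUMP_DEP_PRO | DUMP_DEP_CON | DUMP_DEP_STATUS)

/* Count how many dep fields a FLAGS mask asks a dumper to print.  */
static int
fields_requested (int flags)
{
  int n = 0;

  if (flags & DUMP_DEP_PRO)
    n++;
  if (flags & DUMP_DEP_CON)
    n++;
  if (flags & DUMP_DEP_STATUS)
    n++;

  return n;
}
```

With this layout, debug_dep can pass DUMP_DEP_ALL instead of the magic constant 1.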
> +static void
> +dump_dep (FILE *dump, dep_t dep, int flags)
> {
...
> +
> +/* Dump all fields of DEP to STDERR. */
> +void
> +debug_dep (dep_t dep)
> +{
> + dump_dep (stderr, dep, 1);
^ DUMP_DEP_ALL
> }
...
> -/* Dump dep_nodes starting from l. */
> -static void
> -dump_dep_links (FILE *dump, dep_link_t l)
> +/* Return true if LINK is not attached to any list. */
> +static bool
> +dep_link_is_detached_p (dep_link_t link)
> {
> - while (l != NULL)
> + if (DEP_LINK_PREV_NEXTP (link) == NULL)
> {
> - dep_t d = DEP_LINK_DEP (l);
> -
> - fprintf (dump, "%d%c>%d ", INSN_UID (DEP_PRO (d)),
> - dep_link_consistent_p (l) ? '-' : '!', INSN_UID (DEP_CON (d)));
> + gcc_assert (DEP_LINK_NEXT (link) == NULL);
>
> - l = DEP_LINK_NEXT (l);
> + return true;
> }
>
> - fprintf (dump, "\n");
> -}
> -
> -/* Dump dep_nodes starting from L to stderr. */
> -void
> -debug_dep_links (dep_link_t l)
> -{
> - dump_dep_links (stderr, l);
> + return false;
> }
>
Why not just 'return DEP_LINK_PREV_NEXTP (link) == NULL;'?
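A compilable sketch of that one-line form, with simplified stand-in types (the real dep_link_t structures live in sched-int.h and are not reproduced here):

```c
/* Stub stand-ins for the sched-int.h dep_link structures, just enough
   to illustrate the suggested one-line predicate.  */
#include <assert.h>
#include <stddef.h>

struct dep_link
{
  struct dep_link *next;        /* Next link on the list.  */
  struct dep_link **prev_nextp; /* Address of the pointer to us.  */
};
typedef struct dep_link *dep_link_t;

#define DEP_LINK_NEXT(L)       ((L)->next)
#define DEP_LINK_PREV_NEXTP(L) ((L)->prev_nextp)

/* A link is detached when it sits on no list, i.e. its back pointer is
   clear.  The invariant that NEXT is then also NULL can stay in a
   checking-only assert; the predicate needs one comparison.  */
static int
dep_link_is_detached_p (dep_link_t link)
{
  return DEP_LINK_PREV_NEXTP (link) == NULL;
}
```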
...
> +/* Ask dependency caches what needs to be done for dependence DEP.
> + Return DEP_CREATED if new dependence should be created and there is no
> + need to try to find one searching the dependencies lists.
> + Return DEP_PRESENT if there already is a dependence described by DEP and
> + hence nothing is to be done.
> + Return DEP_CHANGED if we there already is a dependence but it should be
^ typo (please remove 'we').
...
...
> +/* Type of dependence DEP has changed from OLD_TYPE. Update dependency
> + caches accordingly. */
> +static void
> +update_dependency_caches (dep_t dep, enum reg_note old_type)
> +{
> + int elem_luid = INSN_LUID (DEP_PRO (dep));
> + int insn_luid = INSN_LUID (DEP_CON (dep));
> +
> + /* Clear corresponding cache entry because type of the link
> + may have changed. Keep them if we use_deps_list. */
> + if (!(current_sched_info->flags & USE_DEPS_LIST))
> + {
> + switch (old_type)
> + {
> + case REG_DEP_OUTPUT:
> + bitmap_clear_bit (&output_dependency_cache[insn_luid], elem_luid);
> + break;
> +
> + case REG_DEP_ANTI:
> + bitmap_clear_bit (&anti_dependency_cache[insn_luid], elem_luid);
> + break;
> +
> + default:
> + gcc_unreachable ();
> + }
> +
> + switch (DEP_TYPE (dep))
> + {
> + case REG_DEP_TRUE:
> + bitmap_set_bit (&true_dependency_cache[insn_luid], elem_luid);
> + break;
> +
> + case REG_DEP_OUTPUT:
> + bitmap_set_bit (&output_dependency_cache[insn_luid], elem_luid);
> + break;
> +
> + case REG_DEP_ANTI:
> + bitmap_set_bit (&anti_dependency_cache[insn_luid], elem_luid);
> + break;
> +
> + default:
> + gcc_unreachable ();
> + }
Why not just call set_dependency_caches instead of the second switch?
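A rough sketch of that factoring. The function names come from the patch, but the bodies here are simplified stand-ins (plain unsigned bit masks instead of the per-insn bitmaps the real code uses):

```c
/* Illustrative model: the second switch in update_dependency_caches
   only re-sets the bit for the new dependence type, which is exactly
   what a set_dependency_caches helper does.  Caches are modeled as
   plain unsigned masks indexed by producer luid.  */
#include <assert.h>

enum dep_type { DEP_TRUE, DEP_OUTPUT, DEP_ANTI };

static unsigned true_cache, output_cache, anti_cache;

static void
set_dependency_caches (enum dep_type type, int elem_luid)
{
  switch (type)
    {
    case DEP_TRUE:   true_cache   |= 1u << elem_luid; break;
    case DEP_OUTPUT: output_cache |= 1u << elem_luid; break;
    case DEP_ANTI:   anti_cache   |= 1u << elem_luid; break;
    }
}

static void
update_dependency_caches (enum dep_type old_type, enum dep_type new_type,
			  int elem_luid)
{
  /* Clear the bit for the old type (a true dep never weakens, so
     only output/anti can be the old type)...  */
  switch (old_type)
    {
    case DEP_OUTPUT: output_cache &= ~(1u << elem_luid); break;
    case DEP_ANTI:   anti_cache   &= ~(1u << elem_luid); break;
    default: break; /* The real code would gcc_unreachable () here.  */
    }

  /* ...then reuse the setter instead of duplicating the switch.  */
  set_dependency_caches (new_type, elem_luid);
}
```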
...
> --- sched-int.h (/mirror/endeed/calc-priorities/gcc) (revision 1077)
> +++ sched-int.h (/mirror/endeed/fdl-pool/gcc) (revision 1077)
> @@ -54,35 +54,40 @@ struct _dep
> /* Consumer. */
> rtx con;
>
> - /* Dependency kind (aka dependency major type). This field is superseded
> - by STATUS below. Though, it is still in place because all the backends
> - use it. */
> - enum reg_note kind;
> + /* Dependency major type. This field is superseded by STATUS below.
> + Though, it is still in place because all the backends use it. */
Please change 'all the backends' to 'some backends'.
...
> +extern bool haifa_recovery_block_was_added_during_scheduling_p;
Personally, I like longer names because they behave like comments, but this
one is probably too long. Another one is
haifa_recovery_block_was_added_during_current_schedule_block_p.
Moreover, both have the same 38-character prefix. As I
remember, the ANSI C standard distinguishes only the first 31 characters
of external identifiers.
Please make them shorter.
...
> --- sched-rgn.c (/mirror/endeed/calc-priorities/gcc) (revision 1077)
> +++ sched-rgn.c (/mirror/endeed/fdl-pool/gcc) (revision 1077)
> +}
> +
Please add a comment. It is obvious to you and me what the function
does, but it may not be obvious to others.
...
> /* Indexed by region, holds the number of death notes found in that region.
> --- config/ia64/ia64.c (/mirror/endeed/calc-priorities/gcc) (revision 1077)
> +++ config/ia64/ia64.c (/mirror/endeed/fdl-pool/gcc) (revision 1077)
> @@ -6234,28 +6234,37 @@ ia64_dependencies_evaluation_hook (rtx h
> if (INSN_P (insn)
> && ia64_safe_itanium_class (insn) == ITANIUM_CLASS_IALU)
> {
> - dep_link_t link;
> + sd_iterator_def sd_it;
> + dep_t dep;
> + bool has_mem_op_consumer_p = false;
>
> - FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
> + FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
> {
> enum attr_itanium_class c;
>
> - if (DEP_LINK_KIND (link) != REG_DEP_TRUE)
> + if (DEP_TYPE (dep) != REG_DEP_TRUE)
> continue;
>
> - next = DEP_LINK_CON (link);
> + next = DEP_CON (dep);
> c = ia64_safe_itanium_class (next);
> if ((c == ITANIUM_CLASS_ST
> || c == ITANIUM_CLASS_STF)
> && ia64_st_address_bypass_p (insn, next))
> - break;
> + {
> + has_mem_op_consumer_p = true;
> + break;
> + }
> else if ((c == ITANIUM_CLASS_LD
> || c == ITANIUM_CLASS_FLD
> || c == ITANIUM_CLASS_FLDP)
> && ia64_ld_address_bypass_p (insn, next))
> - break;
> + {
> + has_mem_op_consumer_p = true;
> + break;
> + }
> }
> - insn->call = link != 0;
> +
> + insn->call = has_mem_op_consumer_p;
> }
> }
>
> @@ -6532,14 +6541,15 @@ ia64_dfa_new_cycle (FILE *dump, int verb
>
> if (c != ITANIUM_CLASS_MMMUL && c != ITANIUM_CLASS_MMSHF)
> {
> - dep_link_t link;
> + sd_iterator_def sd_it;
> + dep_t dep;
> int d = -1;
>
> - FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
> - if (DEP_LINK_KIND (link) == REG_DEP_TRUE)
> + FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
> + if (DEP_TYPE (dep) == REG_DEP_TRUE)
> {
> enum attr_itanium_class dep_class;
> - rtx dep_insn = DEP_LINK_PRO (link);
> + rtx dep_insn = DEP_PRO (dep);
>
> dep_class = ia64_safe_itanium_class (dep_insn);
> if ((dep_class == ITANIUM_CLASS_MMMUL
> @@ -7069,13 +7079,14 @@ ia64_gen_check (rtx insn, rtx label, boo
> As long as patterns are unique for each instruction, this can be
> accomplished by matching ORIG_PAT fields. */
> {
> - dep_link_t link;
> + sd_iterator_def sd_it;
> + dep_t dep;
> int check_no = 0;
> rtx orig_pat = ORIG_PAT (insn);
>
> - FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (insn))
> + FOR_EACH_DEP (insn, SD_LIST_RES_BACK, sd_it, dep)
> {
> - rtx x = DEP_LINK_PRO (link);
> + rtx x = DEP_PRO (dep);
>
> if (ORIG_PAT (x) == orig_pat)
> check_no = spec_check_no[INSN_UID (x)];
> --- config/rs6000/rs6000.c (/mirror/endeed/calc-priorities/gcc) (revision 1077)
> +++ config/rs6000/rs6000.c (/mirror/endeed/fdl-pool/gcc) (revision 1077)
> @@ -17691,7 +17691,7 @@ rs6000_is_costly_dependence (dep_t dep,
> if (rs6000_sched_costly_dep == true_store_to_load_dep_costly
> && is_load_insn (next)
> && is_store_insn (insn)
> - && DEP_KIND (dep) == REG_DEP_TRUE)
> + && DEP_TYPE (dep) == REG_DEP_TRUE)
> /* Prevent load after store in the same group if it is a true
> dependence. */
> return true;
>
* Re: [PING][PATCH]: Improvements to sched-deps.c interface
2007-06-14 21:43 ` Vladimir Makarov
@ 2007-06-14 22:49 ` David Edelsohn
2007-06-15 11:51 ` Ayal Zaks
2007-07-01 18:20 ` Maxim Kuvyrkov
1 sibling, 1 reply; 15+ messages in thread
From: David Edelsohn @ 2007-06-14 22:49 UTC (permalink / raw)
To: Vladimir Makarov, Maxim Kuvyrkov; +Cc: James E Wilson, Ayal Zaks, gcc-patches
The rs6000 part of the patch is fine with me.
Thanks, David
* Re: [PING][PATCH]: Improvements to sched-deps.c interface
2007-06-14 22:49 ` David Edelsohn
@ 2007-06-15 11:51 ` Ayal Zaks
2007-06-15 13:10 ` Maxim Kuvyrkov
0 siblings, 1 reply; 15+ messages in thread
From: Ayal Zaks @ 2007-06-15 11:51 UTC (permalink / raw)
To: Maxim Kuvyrkov
Cc: gcc-patches, Vladimir Makarov, James E Wilson, David Edelsohn
The ddg.c part of the patch is fine with me.
Have you tried running it with -fmodulo-sched? I'll give it a try on
powerpc.
Thanks, Ayal.
* Re: [PING][PATCH]: Improvements to sched-deps.c interface
2007-06-15 11:51 ` Ayal Zaks
@ 2007-06-15 13:10 ` Maxim Kuvyrkov
2007-06-15 22:08 ` Ayal Zaks
0 siblings, 1 reply; 15+ messages in thread
From: Maxim Kuvyrkov @ 2007-06-15 13:10 UTC (permalink / raw)
To: Ayal Zaks; +Cc: gcc-patches, Vladimir Makarov, James E Wilson, David Edelsohn
Ayal Zaks wrote:
> The ddg.c part of the patch is fine with me.
>
> Have you tried running it with -fmodulo-sched? I'll give it a try on
> powerpc.
No, I didn't test the patch on powerpc and would appreciate it if you
bootstrapped it on that platform.
Thanks,
Maxim
* Re: [PING][PATCH]: Improvements to sched-deps.c interface
2007-06-15 13:10 ` Maxim Kuvyrkov
@ 2007-06-15 22:08 ` Ayal Zaks
2007-06-17 20:02 ` Maxim Kuvyrkov
0 siblings, 1 reply; 15+ messages in thread
From: Ayal Zaks @ 2007-06-15 22:08 UTC (permalink / raw)
To: Maxim Kuvyrkov
Cc: David Edelsohn, gcc-patches, Vladimir Makarov, James E Wilson
Maxim Kuvyrkov <mkuvyrkov@ispras.ru> wrote on 15/06/2007 15:17:05:
> Ayal Zaks wrote:
> > The ddg.c part of the patch is fine with me.
> >
> > Have you tried running it with -fmodulo-sched? I'll give it a try on
> > powerpc.
>
> No, I didn't test the patch on powerpc and would appreciate it if you
> bootstrapped it on that platform.
Sure. Do you have an updated patch I can use? I get a couple of conflicts
trying to apply the original patch to current trunk.
Ayal.
* Re: [PING][PATCH]: Improvements to sched-deps.c interface
2007-06-15 22:08 ` Ayal Zaks
@ 2007-06-17 20:02 ` Maxim Kuvyrkov
0 siblings, 0 replies; 15+ messages in thread
From: Maxim Kuvyrkov @ 2007-06-17 20:02 UTC (permalink / raw)
To: Ayal Zaks; +Cc: David Edelsohn, gcc-patches, Vladimir Makarov, James E Wilson
[-- Attachment #1: Type: text/plain, Size: 656 bytes --]
Ayal Zaks wrote:
> Maxim Kuvyrkov <mkuvyrkov@ispras.ru> wrote on 15/06/2007 15:17:05:
>
>> Ayal Zaks wrote:
...
>>> Have you tried running it with -fmodulo-sched? I'll give it a try on
>>> powerpc.
>> No, I didn't test the patch on powerpc and would appreciate it if you
>> bootstrapped it on that platform.
>
> Sure. Do you have an updated patch I can use? I get a couple of conflicts
> trying to apply the original patch to current trunk.
> Ayal.
>
Thanks Ayal. Here is an updated patch - it was made against trunk dated
15/06/07.
It looks, though, like I will now have access to a powerpc box and
will test the patch there too.
Thanks,
Maxim
[-- Attachment #2: fdl-pool.patch --]
[-- Type: text/x-patch, Size: 130638 bytes --]
--- sched-ebb.c (/mirror/endeed/trunk/gcc) (revision 1284)
+++ sched-ebb.c (/mirror/endeed/fdl-pool/gcc) (revision 1284)
@@ -304,24 +304,26 @@ static struct sched_info ebb_sched_info
static basic_block
earliest_block_with_similiar_load (basic_block last_block, rtx load_insn)
{
- dep_link_t back_link;
+ sd_iterator_def back_sd_it;
+ dep_t back_dep;
basic_block bb, earliest_block = NULL;
- FOR_EACH_DEP_LINK (back_link, INSN_BACK_DEPS (load_insn))
+ FOR_EACH_DEP (load_insn, SD_LIST_BACK, back_sd_it, back_dep)
{
- rtx insn1 = DEP_LINK_PRO (back_link);
+ rtx insn1 = DEP_PRO (back_dep);
- if (DEP_LINK_KIND (back_link) == REG_DEP_TRUE)
+ if (DEP_TYPE (back_dep) == REG_DEP_TRUE)
+ /* Found a DEF-USE dependence (insn1, load_insn). */
{
- /* Found a DEF-USE dependence (insn1, load_insn). */
- dep_link_t fore_link;
+ sd_iterator_def fore_sd_it;
+ dep_t fore_dep;
- FOR_EACH_DEP_LINK (fore_link, INSN_FORW_DEPS (insn1))
+ FOR_EACH_DEP (insn1, SD_LIST_FORW, fore_sd_it, fore_dep)
{
- rtx insn2 = DEP_LINK_CON (fore_link);
+ rtx insn2 = DEP_CON (fore_dep);
basic_block insn2_block = BLOCK_FOR_INSN (insn2);
- if (DEP_LINK_KIND (fore_link) == REG_DEP_TRUE)
+ if (DEP_TYPE (fore_dep) == REG_DEP_TRUE)
{
if (earliest_block != NULL
&& earliest_block->index < insn2_block->index)
@@ -396,23 +398,32 @@ add_deps_for_risky_insns (rtx head, rtx
rank. */
if (! sched_insns_conditions_mutex_p (insn, prev))
{
- if (!(current_sched_info->flags & DO_SPECULATION))
+ dep_def _dep, *dep = &_dep;
+
+ init_dep (dep, prev, insn, REG_DEP_ANTI);
+
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
{
enum DEPS_ADJUST_RESULT res;
-
- res = add_or_update_back_dep (insn, prev,
- REG_DEP_ANTI, DEP_ANTI);
-
- if (res == DEP_CREATED)
- add_forw_dep (DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)));
- else
- gcc_assert (res != DEP_CHANGED);
+
+ res = sd_add_or_update_dep (dep, false);
+
+ /* We can't change an existing dependency with
+ DEP_ANTI. */
+ gcc_assert (res != DEP_CHANGED);
}
else
- add_or_update_back_forw_dep (insn, prev, REG_DEP_ANTI,
- set_dep_weak (DEP_ANTI,
- BEGIN_CONTROL,
- MAX_DEP_WEAK));
+ {
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && (spec_info->mask & BEGIN_CONTROL))
+ DEP_STATUS (dep) = set_dep_weak (DEP_ANTI, BEGIN_CONTROL,
+ MAX_DEP_WEAK);
+
+ sd_add_or_update_dep (dep, false);
+
+ /* Dep_status could have been changed.
+ No assertion here. */
+ }
}
break;
@@ -451,14 +462,11 @@ schedule_ebb (rtx head, rtx tail)
{
init_deps_global ();
- /* Compute backward dependencies. */
+ /* Compute dependencies. */
init_deps (&tmp_deps);
sched_analyze (&tmp_deps, head, tail);
free_deps (&tmp_deps);
- /* Compute forward dependencies. */
- compute_forward_dependences (head, tail);
-
add_deps_for_risky_insns (head, tail);
if (targetm.sched.dependencies_evaluation_hook)
@@ -511,8 +519,12 @@ schedule_ebb (rtx head, rtx tail)
/* Sanity check: verify that all region insns were scheduled. */
gcc_assert (sched_n_insns == n_insns);
- head = current_sched_info->head;
- tail = current_sched_info->tail;
+
+ /* Free dependencies. */
+ sched_free_deps (current_sched_info->head, current_sched_info->tail, true);
+
+ gcc_assert (haifa_recovery_block_was_added_during_scheduling_p
+ || deps_pools_are_empty_p ());
if (EDGE_COUNT (last_bb->preds) == 0)
/* LAST_BB is unreachable. */
--- ddg.c (/mirror/endeed/trunk/gcc) (revision 1284)
+++ ddg.c (/mirror/endeed/fdl-pool/gcc) (revision 1284)
@@ -165,9 +165,9 @@ create_ddg_dependence (ddg_ptr g, ddg_no
gcc_assert (link);
/* Note: REG_DEP_ANTI applies to MEM ANTI_DEP as well!! */
- if (DEP_KIND (link) == REG_DEP_ANTI)
+ if (DEP_TYPE (link) == REG_DEP_ANTI)
t = ANTI_DEP;
- else if (DEP_KIND (link) == REG_DEP_OUTPUT)
+ else if (DEP_TYPE (link) == REG_DEP_OUTPUT)
t = OUTPUT_DEP;
latency = dep_cost (link);
@@ -383,7 +383,6 @@ build_intra_loop_deps (ddg_ptr g)
/* Hold the dependency analysis state during dependency calculations. */
struct deps tmp_deps;
rtx head, tail;
- dep_link_t link;
/* Build the dependence information, using the sched_analyze function. */
init_deps_global ();
@@ -398,19 +397,19 @@ build_intra_loop_deps (ddg_ptr g)
for (i = 0; i < g->num_nodes; i++)
{
ddg_node_ptr dest_node = &g->nodes[i];
+ sd_iterator_def sd_it;
+ dep_t dep;
if (! INSN_P (dest_node->insn))
continue;
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (dest_node->insn))
+ FOR_EACH_DEP (dest_node->insn, SD_LIST_BACK, sd_it, dep)
{
- dep_t dep = DEP_LINK_DEP (link);
ddg_node_ptr src_node = get_node_of_insn (g, DEP_PRO (dep));
if (!src_node)
continue;
- add_forw_dep (link);
create_ddg_dependence (g, src_node, dest_node, dep);
}
@@ -435,6 +434,9 @@ build_intra_loop_deps (ddg_ptr g)
/* Free the INSN_LISTs. */
finish_deps_global ();
free_deps (&tmp_deps);
+
+ /* Free dependencies. */
+ sched_free_deps (head, tail, false);
}
--- haifa-sched.c (/mirror/endeed/trunk/gcc) (revision 1284)
+++ haifa-sched.c (/mirror/endeed/fdl-pool/gcc) (revision 1284)
@@ -207,11 +207,16 @@ static rtx note_list;
static struct spec_info_def spec_info_var;
/* Description of the speculative part of the scheduling.
If NULL - no speculation. */
-static spec_info_t spec_info;
+spec_info_t spec_info;
/* True, if recovery block was added during scheduling of current block.
Used to determine, if we need to fix INSN_TICKs. */
-static bool added_recovery_block_p;
+static bool haifa_recovery_block_was_added_during_current_schedule_block_p;
+
+/* True, if recovery block was added during this scheduling pass.
+ Used to determine if we should have empty memory pools of dependencies
+ after finishing current region. */
+bool haifa_recovery_block_was_added_during_scheduling_p;
/* Counters of different types of speculative instructions. */
static int nr_begin_data, nr_be_in_data, nr_begin_control, nr_be_in_control;
@@ -554,7 +559,7 @@ static void extend_global (rtx);
static void extend_all (rtx);
static void init_h_i_d (rtx);
static void generate_recovery_code (rtx);
-static void process_insn_forw_deps_be_in_spec (deps_list_t, rtx, ds_t);
+static void process_insn_forw_deps_be_in_spec (rtx, rtx, ds_t);
static void begin_speculative_block (rtx);
static void add_to_speculative_block (rtx);
static dw_t dep_weak (ds_t);
@@ -655,7 +660,7 @@ dep_cost (dep_t link)
else
{
rtx insn = DEP_PRO (link);
- enum reg_note dep_type = DEP_KIND (link);
+ enum reg_note dep_type = DEP_TYPE (link);
cost = insn_cost (insn);
@@ -685,7 +690,7 @@ dep_cost (dep_t link)
XEXP (dep_cost_rtx_link, 1) = dep_cost_rtx_link;
/* Targets use only REG_NOTE_KIND of the link. */
- PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_KIND (link));
+ PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_TYPE (link));
cost = targetm.sched.adjust_cost (used, dep_cost_rtx_link,
insn, cost);
@@ -727,8 +732,6 @@ contributes_to_priority_p (dep_t dep)
static int
priority (rtx insn)
{
- dep_link_t link;
-
if (! INSN_P (insn))
return 0;
@@ -739,7 +742,7 @@ priority (rtx insn)
{
int this_priority = 0;
- if (deps_list_empty_p (INSN_FORW_DEPS (insn)))
+ if (sd_lists_empty_p (insn, SD_LIST_FORW))
/* ??? We should set INSN_PRIORITY to insn_cost when and insn has
some forward deps but all of them are ignored by
contributes_to_priority hook. At the moment we set priority of
@@ -770,11 +773,13 @@ priority (rtx insn)
do
{
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (twin))
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ FOR_EACH_DEP (twin, SD_LIST_FORW, sd_it, dep)
{
rtx next;
int next_priority;
- dep_t dep = DEP_LINK_DEP (link);
next = DEP_CON (dep);
@@ -833,7 +838,6 @@ rank_for_schedule (const void *x, const
{
rtx tmp = *(const rtx *) y;
rtx tmp2 = *(const rtx *) x;
- dep_link_t link1, link2;
int tmp_class, tmp2_class;
int val, priority_val, weight_val, info_val;
@@ -886,31 +890,30 @@ rank_for_schedule (const void *x, const
/* Compare insns based on their relation to the last-scheduled-insn. */
if (INSN_P (last_scheduled_insn))
{
+ dep_t dep1;
+ dep_t dep2;
+
/* Classify the instructions into three classes:
1) Data dependent on last schedule insn.
2) Anti/Output dependent on last scheduled insn.
3) Independent of last scheduled insn, or has latency of one.
Choose the insn from the highest numbered class if different. */
- link1
- = find_link_by_con_in_deps_list (INSN_FORW_DEPS (last_scheduled_insn),
- tmp);
+ dep1 = sd_find_dep_between (last_scheduled_insn, tmp, true);
- if (link1 == NULL || dep_cost (DEP_LINK_DEP (link1)) == 1)
+ if (dep1 == NULL || dep_cost (dep1) == 1)
tmp_class = 3;
else if (/* Data dependence. */
- DEP_LINK_KIND (link1) == REG_DEP_TRUE)
+ DEP_TYPE (dep1) == REG_DEP_TRUE)
tmp_class = 1;
else
tmp_class = 2;
- link2
- = find_link_by_con_in_deps_list (INSN_FORW_DEPS (last_scheduled_insn),
- tmp2);
+ dep2 = sd_find_dep_between (last_scheduled_insn, tmp2, true);
- if (link2 == NULL || dep_cost (DEP_LINK_DEP (link2)) == 1)
+ if (dep2 == NULL || dep_cost (dep2) == 1)
tmp2_class = 3;
else if (/* Data dependence. */
- DEP_LINK_KIND (link2) == REG_DEP_TRUE)
+ DEP_TYPE (dep2) == REG_DEP_TRUE)
tmp2_class = 1;
else
tmp2_class = 2;
@@ -923,21 +926,11 @@ rank_for_schedule (const void *x, const
This gives the scheduler more freedom when scheduling later
instructions at the expense of added register pressure. */
- link1 = DEPS_LIST_FIRST (INSN_FORW_DEPS (tmp));
- link2 = DEPS_LIST_FIRST (INSN_FORW_DEPS (tmp2));
+ val = (sd_lists_size (tmp2, SD_LIST_FORW)
+ - sd_lists_size (tmp, SD_LIST_FORW));
- while (link1 != NULL && link2 != NULL)
- {
- link1 = DEP_LINK_NEXT (link1);
- link2 = DEP_LINK_NEXT (link2);
- }
-
- if (link1 != NULL && link2 == NULL)
- /* TMP (Y) has more insns that depend on it. */
- return -1;
- if (link1 == NULL && link2 != NULL)
- /* TMP2 (X) has more insns that depend on it. */
- return 1;
+ if (val != 0)
+ return val;
/* If insns are equally good, sort by INSN_LUID (original insn order),
so that we make the sort stable. This minimizes instruction movement,
@@ -1173,7 +1166,8 @@ static int last_clock_var;
static int
schedule_insn (rtx insn)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
int advance = 0;
if (sched_verbose >= 1)
@@ -1193,12 +1187,7 @@ schedule_insn (rtx insn)
/* Scheduling instruction should have all its dependencies resolved and
should have been removed from the ready list. */
- gcc_assert (INSN_DEP_COUNT (insn) == 0
- && deps_list_empty_p (INSN_BACK_DEPS (insn)));
- free_deps_list (INSN_BACK_DEPS (insn));
-
- /* Now we can free INSN_RESOLVED_BACK_DEPS list. */
- delete_deps_list (INSN_RESOLVED_BACK_DEPS (insn));
+ gcc_assert (sd_lists_empty_p (insn, SD_LIST_BACK));
gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
QUEUE_INDEX (insn) = QUEUE_SCHEDULED;
@@ -1214,19 +1203,15 @@ schedule_insn (rtx insn)
INSN_TICK (insn) = clock_var;
/* Update dependent instructions. */
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
+ for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
+ sd_iterator_cond (&sd_it, &dep);)
{
- rtx next = DEP_LINK_CON (link);
-
- /* Resolve the dependence between INSN and NEXT. */
-
- INSN_DEP_COUNT (next)--;
+ rtx next = DEP_CON (dep);
- move_dep_link (DEP_NODE_BACK (DEP_LINK_NODE (link)),
- INSN_RESOLVED_BACK_DEPS (next));
-
- gcc_assert ((INSN_DEP_COUNT (next) == 0)
- == deps_list_empty_p (INSN_BACK_DEPS (next)));
+ /* Resolve the dependence between INSN and NEXT.
+ sd_resolve_dep () moves current dep to another list thus
+ advancing the iterator. */
+ sd_resolve_dep (sd_it);
if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
{
@@ -1243,11 +1228,23 @@ schedule_insn (rtx insn)
/* Check always has only one forward dependence (to the first insn in
the recovery block), therefore, this will be executed only once. */
{
- gcc_assert (DEP_LINK_NEXT (link) == NULL);
+ gcc_assert (sd_lists_empty_p (insn, SD_LIST_FORW));
fix_recovery_deps (RECOVERY_BLOCK (insn));
}
}
+ /* This is the place where scheduler doesn't *basically* need backward and
+ forward dependencies for INSN anymore. Nevertheless they are used in
+ heuristics in rank_for_schedule (), early_queue_to_ready () and in
+ some targets (e.g. rs6000). Thus the earliest place where we *can*
+ remove dependencies is after targetm.sched.md_finish () call in
+ schedule_block (). But, on the other side, the safest place to remove
+ dependencies is when we finish scheduling the entire region. As we
+ don't generate [many] dependencies during scheduling itself, we won't
+ need memory until beginning of next region.
+ Bottom line: Dependencies are removed for all insns in the end of
+ scheduling the region. */
+
/* Annotate the instruction with issue information -- TImode
indicates that the instruction is expected not to be able
to issue on the same cycle as the previous insn. A machine
@@ -1581,15 +1578,12 @@ ok_for_early_queue_removal (rtx insn)
if (!NOTE_P (prev_insn))
{
- dep_link_t dep_link;
+ dep_t dep;
- dep_link = (find_link_by_con_in_deps_list
- (INSN_FORW_DEPS (prev_insn), insn));
+ dep = sd_find_dep_between (prev_insn, insn, true);
- if (dep_link)
+ if (dep != NULL)
{
- dep_t dep = DEP_LINK_DEP (dep_link);
-
cost = dep_cost (dep);
if (targetm.sched.is_costly_dependence (dep, cost,
@@ -2105,7 +2099,7 @@ schedule_block (basic_block *target_bb,
gcc_assert (head != tail || INSN_P (head));
- added_recovery_block_p = false;
+ haifa_recovery_block_was_added_during_current_schedule_block_p = false;
/* Debug info. */
if (sched_verbose)
@@ -2486,7 +2480,7 @@ bail_out:
}
if (!current_sched_info->queue_must_finish_empty
- || added_recovery_block_p)
+ || haifa_recovery_block_was_added_during_current_schedule_block_p)
{
/* INSN_TICK (minimum clock tick at which the insn becomes
ready) may be not correct for the insn in the subsequent
@@ -2498,7 +2492,15 @@ bail_out:
}
if (targetm.sched.md_finish)
- targetm.sched.md_finish (sched_dump, sched_verbose);
+ {
+ targetm.sched.md_finish (sched_dump, sched_verbose);
+
+ /* Target might have added some instructions to the scheduled block
+ in its md_finish () hook.  These new insns don't have any data
+ initialized and to identify them we extend h_i_d so that they'll
+ get zero luids.  */
+ extend_h_i_d ();
+ }
/* Update head/tail boundaries. */
head = NEXT_INSN (prev_head);
@@ -2706,6 +2708,8 @@ sched_init (void)
nr_begin_data = nr_begin_control = nr_be_in_data = nr_be_in_control = 0;
before_recovery = 0;
+ haifa_recovery_block_was_added_during_scheduling_p = false;
+
#ifdef ENABLE_CHECKING
/* This is used preferably for finding bugs in check_cfg () itself. */
check_cfg (0, 0);
@@ -2764,6 +2768,10 @@ fix_inter_tick (rtx head, rtx tail)
{
/* Set of instructions with corrected INSN_TICK. */
bitmap_head processed;
+ /* ??? It is doubtful whether we should assume that cycle advance happens on
+ basic block boundaries.  Basically insns that are unconditionally ready
+ at the start of the block are preferable to those which have
+ a one cycle dependency on an insn from the previous block.  */
int next_clock = clock_var + 1;
bitmap_initialize (&processed, 0);
@@ -2776,7 +2784,8 @@ fix_inter_tick (rtx head, rtx tail)
if (INSN_P (head))
{
int tick;
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
tick = INSN_TICK (head);
gcc_assert (tick >= MIN_TICK);
@@ -2793,11 +2802,11 @@ fix_inter_tick (rtx head, rtx tail)
INSN_TICK (head) = tick;
}
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (head))
+ FOR_EACH_DEP (head, SD_LIST_RES_FORW, sd_it, dep)
{
rtx next;
- next = DEP_LINK_CON (link);
+ next = DEP_CON (dep);
tick = INSN_TICK (next);
if (tick != INVALID_TICK
@@ -2816,7 +2825,7 @@ fix_inter_tick (rtx head, rtx tail)
INTER_TICK (next) = tick;
else
tick = INTER_TICK (next);
-
+
INSN_TICK (next) = tick;
}
}
@@ -2835,7 +2844,6 @@ int
try_ready (rtx next)
{
ds_t old_ts, *ts;
- dep_link_t link;
ts = &TODO_SPEC (next);
old_ts = *ts;
@@ -2844,44 +2852,52 @@ try_ready (rtx next)
&& ((old_ts & HARD_DEP)
|| (old_ts & SPECULATIVE)));
- if (!(current_sched_info->flags & DO_SPECULATION))
+ if (sd_lists_empty_p (next, SD_LIST_BACK))
+ /* NEXT has all its dependencies resolved. */
{
- if (deps_list_empty_p (INSN_BACK_DEPS (next)))
- *ts &= ~HARD_DEP;
+ /* Remove HARD_DEP bit from NEXT's status. */
+ *ts &= ~HARD_DEP;
+
+ if (current_sched_info->flags & DO_SPECULATION)
+ /* Remove all speculative bits from NEXT's status. */
+ *ts &= ~SPECULATIVE;
}
else
{
- *ts &= ~SPECULATIVE & ~HARD_DEP;
+ /* One of the NEXT's dependencies has been resolved.
+ Recalculate NEXT's status.  */
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (next));
+ *ts &= ~SPECULATIVE & ~HARD_DEP;
- if (link != NULL)
- {
- ds_t ds = DEP_LINK_STATUS (link) & SPECULATIVE;
+ if (sd_lists_empty_p (next, SD_LIST_HARD_BACK))
+ /* Now we've got NEXT with speculative deps only.
+ 1. Look at the deps to see what we have to do.
+ 2. Check if we can do 'todo'. */
+ {
+ sd_iterator_def sd_it;
+ dep_t dep;
+ bool first_p = true;
- /* Backward dependencies of the insn are maintained sorted.
- So if DEP_STATUS of the first dep is SPECULATIVE,
- than all other deps are speculative too. */
- if (ds != 0)
- {
- /* Now we've got NEXT with speculative deps only.
- 1. Look at the deps to see what we have to do.
- 2. Check if we can do 'todo'. */
- *ts = ds;
+ FOR_EACH_DEP (next, SD_LIST_BACK, sd_it, dep)
+ {
+ ds_t ds = DEP_STATUS (dep) & SPECULATIVE;
- while ((link = DEP_LINK_NEXT (link)) != NULL)
+ if (first_p)
{
- ds = DEP_LINK_STATUS (link) & SPECULATIVE;
- *ts = ds_merge (*ts, ds);
- }
+ first_p = false;
- if (dep_weak (*ts) < spec_info->weakness_cutoff)
- /* Too few points. */
- *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
+ *ts = ds;
+ }
+ else
+ *ts = ds_merge (*ts, ds);
}
- else
- *ts |= HARD_DEP;
- }
+
+ if (dep_weak (*ts) < spec_info->weakness_cutoff)
+ /* Too few points. */
+ *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
+ }
+ else
+ *ts |= HARD_DEP;
}
if (*ts & HARD_DEP)
@@ -3000,10 +3016,11 @@ fix_tick_ready (rtx next)
{
int tick, delay;
- if (!deps_list_empty_p (INSN_RESOLVED_BACK_DEPS (next)))
+ if (!sd_lists_empty_p (next, SD_LIST_RES_BACK))
{
int full_p;
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
tick = INSN_TICK (next);
/* if tick is not equal to INVALID_TICK, then update
@@ -3011,9 +3028,8 @@ fix_tick_ready (rtx next)
cost. Otherwise, recalculate from scratch. */
full_p = (tick == INVALID_TICK);
- FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (next))
+ FOR_EACH_DEP (next, SD_LIST_RES_BACK, sd_it, dep)
{
- dep_t dep = DEP_LINK_DEP (link);
rtx pro = DEP_PRO (dep);
int tick1;
@@ -3127,11 +3143,18 @@ static void
extend_global (rtx insn)
{
gcc_assert (INSN_P (insn));
+
/* These structures have scheduler scope. */
+
+ /* Init h_i_d. */
extend_h_i_d ();
init_h_i_d (insn);
- extend_dependency_caches (1, 0);
+ /* Init data handled in sched-deps.c. */
+ sd_init_insn (insn);
+
+ /* Extend dependency caches by one element. */
+ extend_dependency_caches (1, false);
}
/* Extends global and local scheduler structures to include information
@@ -3159,14 +3182,6 @@ init_h_i_d (rtx insn)
INSN_TICK (insn) = INVALID_TICK;
INTER_TICK (insn) = INVALID_TICK;
find_insn_reg_weight1 (insn);
-
- /* These two lists will be freed in schedule_insn (). */
- INSN_BACK_DEPS (insn) = create_deps_list (false);
- INSN_RESOLVED_BACK_DEPS (insn) = create_deps_list (false);
-
- /* This one should be allocated on the obstack because it should live till
- the scheduling ends. */
- INSN_FORW_DEPS (insn) = create_deps_list (true);
}
/* Generates recovery code for INSN. */
@@ -3187,18 +3202,19 @@ generate_recovery_code (rtx insn)
Tries to add speculative dependencies of type FS between instructions
in deps_list L and TWIN. */
static void
-process_insn_forw_deps_be_in_spec (deps_list_t l, rtx twin, ds_t fs)
+process_insn_forw_deps_be_in_spec (rtx insn, rtx twin, ds_t fs)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
- FOR_EACH_DEP_LINK (link, l)
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
{
ds_t ds;
rtx consumer;
- consumer = DEP_LINK_CON (link);
+ consumer = DEP_CON (dep);
- ds = DEP_LINK_STATUS (link);
+ ds = DEP_STATUS (dep);
if (/* If we want to create speculative dep. */
fs
@@ -3225,7 +3241,12 @@ process_insn_forw_deps_be_in_spec (deps_
ds |= fs;
}
- add_back_forw_dep (consumer, twin, DEP_LINK_KIND (link), ds);
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ init_dep_1 (new_dep, twin, consumer, DEP_TYPE (dep), ds);
+ sd_add_dep (new_dep, false);
+ }
}
}
@@ -3248,7 +3269,8 @@ static void
add_to_speculative_block (rtx insn)
{
ds_t ts;
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
rtx twins = NULL;
rtx_vec_t priorities_roots;
@@ -3266,50 +3288,52 @@ add_to_speculative_block (rtx insn)
DONE_SPEC (insn) |= ts;
/* First we convert all simple checks to branchy. */
- for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
+ for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
{
- rtx check = DEP_LINK_PRO (link);
+ rtx check = DEP_PRO (dep);
if (IS_SPECULATION_SIMPLE_CHECK_P (check))
{
create_check_block_twin (check, true);
/* Restart search. */
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
+ sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
}
else
/* Continue search. */
- link = DEP_LINK_NEXT (link);
+ sd_iterator_next (&sd_it);
}
priorities_roots = NULL;
clear_priorities (insn, &priorities_roots);
-
- do
+
+ while (1)
{
- dep_link_t link;
rtx check, twin;
basic_block rec;
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
+ /* Get the first backward dependency of INSN. */
+ sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ if (!sd_iterator_cond (&sd_it, &dep))
+ /* INSN has no backward dependencies left. */
+ break;
- gcc_assert ((DEP_LINK_STATUS (link) & BEGIN_SPEC) == 0
- && (DEP_LINK_STATUS (link) & BE_IN_SPEC) != 0
- && (DEP_LINK_STATUS (link) & DEP_TYPES) == DEP_TRUE);
+ gcc_assert ((DEP_STATUS (dep) & BEGIN_SPEC) == 0
+ && (DEP_STATUS (dep) & BE_IN_SPEC) != 0
+ && (DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
- check = DEP_LINK_PRO (link);
+ check = DEP_PRO (dep);
gcc_assert (!IS_SPECULATION_CHECK_P (check) && !ORIG_PAT (check)
&& QUEUE_INDEX (check) == QUEUE_NOWHERE);
-
+
rec = BLOCK_FOR_INSN (check);
-
+
twin = emit_insn_before (copy_rtx (PATTERN (insn)), BB_END (rec));
extend_global (twin);
- copy_deps_list_change_con (INSN_RESOLVED_BACK_DEPS (twin),
- INSN_RESOLVED_BACK_DEPS (insn),
- twin);
+ sd_copy_back_deps (twin, insn, true);
if (sched_verbose && spec_info->dump)
/* INSN_BB (insn) isn't determined for twin insns yet.
@@ -3321,47 +3345,38 @@ add_to_speculative_block (rtx insn)
/* Add dependences between TWIN and all appropriate
instructions from REC. */
- do
- {
- add_back_forw_dep (twin, check, REG_DEP_TRUE, DEP_TRUE);
-
- do
- {
- link = DEP_LINK_NEXT (link);
+ FOR_EACH_DEP (insn, SD_LIST_SPEC_BACK, sd_it, dep)
+ {
+ rtx pro = DEP_PRO (dep);
- if (link != NULL)
- {
- check = DEP_LINK_PRO (link);
- if (BLOCK_FOR_INSN (check) == rec)
- break;
- }
- else
- break;
+ gcc_assert (DEP_TYPE (dep) == REG_DEP_TRUE);
+
+ /* INSN might have dependencies on instructions from
+ several recovery blocks.  At this iteration we process those
+ producers that reside in REC.  */
+ if (BLOCK_FOR_INSN (pro) == rec)
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ init_dep (new_dep, pro, twin, REG_DEP_TRUE);
+ sd_add_dep (new_dep, false);
}
- while (1);
}
- while (link != NULL);
- process_insn_forw_deps_be_in_spec (INSN_FORW_DEPS (insn), twin, ts);
+ process_insn_forw_deps_be_in_spec (insn, twin, ts);
/* Remove all dependencies between INSN and insns in REC. */
- for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
+ for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
{
- check = DEP_LINK_PRO (link);
+ rtx pro = DEP_PRO (dep);
- if (BLOCK_FOR_INSN (check) == rec)
- {
- delete_back_forw_dep (link);
-
- /* Restart search. */
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
- }
+ if (BLOCK_FOR_INSN (pro) == rec)
+ sd_delete_dep (sd_it);
else
- /* Continue search. */
- link = DEP_LINK_NEXT (link);
+ sd_iterator_next (&sd_it);
}
}
- while (!deps_list_empty_p (INSN_BACK_DEPS (insn)));
/* We couldn't have added the dependencies between INSN and TWINS earlier
because that would make TWINS appear in the INSN_BACK_DEPS (INSN). */
@@ -3370,7 +3385,13 @@ add_to_speculative_block (rtx insn)
rtx twin;
twin = XEXP (twins, 0);
- add_back_forw_dep (twin, insn, REG_DEP_OUTPUT, DEP_OUTPUT);
+
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
+ sd_add_dep (new_dep, false);
+ }
twin = XEXP (twins, 1);
free_INSN_LIST_node (twins);
@@ -3526,7 +3547,8 @@ create_recovery_block (void)
rtx barrier;
basic_block rec;
- added_recovery_block_p = true;
+ haifa_recovery_block_was_added_during_current_schedule_block_p = true;
+ haifa_recovery_block_was_added_during_scheduling_p = true;
if (!before_recovery)
init_before_recovery ();
@@ -3560,8 +3582,10 @@ create_check_block_twin (rtx insn, bool
{
basic_block rec;
rtx label, check, twin;
- dep_link_t link;
ds_t fs;
+ sd_iterator_def sd_it;
+ dep_t dep;
+ dep_def _new_dep, *new_dep = &_new_dep;
gcc_assert (ORIG_PAT (insn)
&& (!mutate_p
@@ -3610,14 +3634,17 @@ create_check_block_twin (rtx insn, bool
in the recovery block). */
if (rec != EXIT_BLOCK_PTR)
{
- FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (insn))
- if ((DEP_LINK_STATUS (link) & DEP_OUTPUT) != 0)
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ FOR_EACH_DEP (insn, SD_LIST_RES_BACK, sd_it, dep)
+ if ((DEP_STATUS (dep) & DEP_OUTPUT) != 0)
{
- struct _dep _dep, *dep = &_dep;
+ struct _dep _dep2, *dep2 = &_dep2;
- init_dep (dep, DEP_LINK_PRO (link), check, REG_DEP_TRUE);
+ init_dep (dep2, DEP_PRO (dep), check, REG_DEP_TRUE);
- add_back_dep_to_deps_list (INSN_RESOLVED_BACK_DEPS (check), dep);
+ sd_add_dep (dep2, true);
}
twin = emit_insn_after (ORIG_PAT (insn), BB_END (rec));
@@ -3638,9 +3665,9 @@ create_check_block_twin (rtx insn, bool
(TRUE | OUTPUT). */
}
- copy_deps_list_change_con (INSN_RESOLVED_BACK_DEPS (twin),
- INSN_RESOLVED_BACK_DEPS (insn),
- twin);
+ /* Copy all resolved back dependencies of INSN to TWIN.  This will
+ provide the correct value for INSN_TICK (TWIN).  */
+ sd_copy_back_deps (twin, insn, true);
if (rec != EXIT_BLOCK_PTR)
/* In case of branchy check, fix CFG. */
@@ -3703,10 +3730,11 @@ create_check_block_twin (rtx insn, bool
/* Move backward dependences from INSN to CHECK and
move forward dependences from INSN to TWIN. */
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
+
+ /* First, create dependencies between INSN's producers and CHECK & TWIN. */
+ FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
{
- rtx pro = DEP_LINK_PRO (link);
- enum reg_note dk = DEP_LINK_KIND (link);
+ rtx pro = DEP_PRO (dep);
ds_t ds;
/* If BEGIN_DATA: [insn ~~TRUE~~> producer]:
@@ -3724,7 +3752,7 @@ create_check_block_twin (rtx insn, bool
twin ~~TRUE~~> producer
twin --ANTI--> check */
- ds = DEP_LINK_STATUS (link);
+ ds = DEP_STATUS (dep);
if (ds & BEGIN_SPEC)
{
@@ -3732,30 +3760,30 @@ create_check_block_twin (rtx insn, bool
ds &= ~BEGIN_SPEC;
}
+ init_dep_1 (new_dep, pro, check, DEP_TYPE (dep), ds);
+ sd_add_dep (new_dep, false);
+
if (rec != EXIT_BLOCK_PTR)
{
- add_back_forw_dep (check, pro, dk, ds);
- add_back_forw_dep (twin, pro, dk, ds);
+ DEP_CON (new_dep) = twin;
+ sd_add_dep (new_dep, false);
}
- else
- add_back_forw_dep (check, pro, dk, ds);
}
- for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
- if ((DEP_LINK_STATUS (link) & BEGIN_SPEC)
- || mutate_p)
- /* We can delete this dep only if we totally overcome it with
- BEGIN_SPECULATION. */
- {
- delete_back_forw_dep (link);
-
- /* Restart search. */
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
- }
- else
- /* Continue search. */
- link = DEP_LINK_NEXT (link);
+ /* Second, remove backward dependencies of INSN. */
+ for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
+ {
+ if ((DEP_STATUS (dep) & BEGIN_SPEC)
+ || mutate_p)
+ /* We can delete this dep because we overcome it with
+ BEGIN_SPECULATION. */
+ sd_delete_dep (sd_it);
+ else
+ sd_iterator_next (&sd_it);
+ }
+ /* Future Speculations. Determine what BE_IN speculations will be like. */
fs = 0;
/* Fields (DONE_SPEC (x) & BEGIN_SPEC) and CHECK_SPEC (x) are set only
@@ -3770,16 +3798,19 @@ create_check_block_twin (rtx insn, bool
DONE_SPEC (insn) = ts & BEGIN_SPEC;
CHECK_SPEC (check) = ts & BEGIN_SPEC;
+ /* Luckiness of future speculations solely depends upon the initial
+ BEGIN speculation.  */
if (ts & BEGIN_DATA)
fs = set_dep_weak (fs, BE_IN_DATA, get_dep_weak (ts, BEGIN_DATA));
if (ts & BEGIN_CONTROL)
- fs = set_dep_weak (fs, BE_IN_CONTROL, get_dep_weak (ts, BEGIN_CONTROL));
+ fs = set_dep_weak (fs, BE_IN_CONTROL,
+ get_dep_weak (ts, BEGIN_CONTROL));
}
else
CHECK_SPEC (check) = CHECK_SPEC (insn);
/* Future speculations: call the helper. */
- process_insn_forw_deps_be_in_spec (INSN_FORW_DEPS (insn), twin, fs);
+ process_insn_forw_deps_be_in_spec (insn, twin, fs);
if (rec != EXIT_BLOCK_PTR)
{
@@ -3789,35 +3820,45 @@ create_check_block_twin (rtx insn, bool
if (!mutate_p)
{
- add_back_forw_dep (check, insn, REG_DEP_TRUE, DEP_TRUE);
- add_back_forw_dep (twin, insn, REG_DEP_OUTPUT, DEP_OUTPUT);
+ init_dep (new_dep, insn, check, REG_DEP_TRUE);
+ sd_add_dep (new_dep, false);
+
+ init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
+ sd_add_dep (new_dep, false);
}
else
{
- dep_link_t link;
-
if (spec_info->dump)
fprintf (spec_info->dump, ";;\t\tRemoved simple check : %s\n",
(*current_sched_info->print_insn) (insn, 0));
- /* Remove all forward dependencies of the INSN. */
- link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
- while (link != NULL)
- {
- delete_back_forw_dep (link);
- link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
- }
+ /* Remove all dependencies of the INSN. */
+ {
+ sd_it = sd_iterator_start (insn, (SD_LIST_FORW
+ | SD_LIST_BACK
+ | SD_LIST_RES_BACK));
+ while (sd_iterator_cond (&sd_it, &dep))
+ sd_delete_dep (sd_it);
+ }
+ /* If the former check (INSN) was already moved to the ready (or queue)
+ list, add the new check (CHECK) there too.  */
if (QUEUE_INDEX (insn) != QUEUE_NOWHERE)
try_ready (check);
+ /* Remove old check from instruction stream and free its
+ data. */
sched_remove_insn (insn);
}
- add_back_forw_dep (twin, check, REG_DEP_ANTI, DEP_ANTI);
+ init_dep (new_dep, check, twin, REG_DEP_ANTI);
+ sd_add_dep (new_dep, false);
}
else
- add_back_forw_dep (check, insn, REG_DEP_TRUE, DEP_TRUE | DEP_OUTPUT);
+ {
+ init_dep_1 (new_dep, insn, check, REG_DEP_TRUE, DEP_TRUE | DEP_OUTPUT);
+ sd_add_dep (new_dep, false);
+ }
if (!mutate_p)
/* Fix priorities. If MUTATE_P is nonzero, this is not necessary,
@@ -3837,10 +3878,9 @@ create_check_block_twin (rtx insn, bool
static void
fix_recovery_deps (basic_block rec)
{
- dep_link_t link;
rtx note, insn, jump, ready_list = 0;
bitmap_head in_ready;
- rtx link1;
+ rtx link;
bitmap_initialize (&in_ready, 0);
@@ -3852,32 +3892,30 @@ fix_recovery_deps (basic_block rec)
insn = PREV_INSN (insn);
do
- {
- for (link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn)); link != NULL;)
- {
- rtx consumer;
+ {
+ sd_iterator_def sd_it;
+ dep_t dep;
- consumer = DEP_LINK_CON (link);
+ for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
+ sd_iterator_cond (&sd_it, &dep);)
+ {
+ rtx consumer = DEP_CON (dep);
if (BLOCK_FOR_INSN (consumer) != rec)
{
- delete_back_forw_dep (link);
+ sd_delete_dep (sd_it);
if (!bitmap_bit_p (&in_ready, INSN_LUID (consumer)))
{
ready_list = alloc_INSN_LIST (consumer, ready_list);
bitmap_set_bit (&in_ready, INSN_LUID (consumer));
}
-
- /* Restart search. */
- link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
}
else
{
- gcc_assert ((DEP_LINK_STATUS (link) & DEP_TYPES) == DEP_TRUE);
+ gcc_assert ((DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
- /* Continue search. */
- link = DEP_LINK_NEXT (link);
+ sd_iterator_next (&sd_it);
}
}
@@ -3888,8 +3926,8 @@ fix_recovery_deps (basic_block rec)
bitmap_clear (&in_ready);
/* Try to add instructions to the ready or queue list. */
- for (link1 = ready_list; link1; link1 = XEXP (link1, 1))
- try_ready (XEXP (link1, 0));
+ for (link = ready_list; link; link = XEXP (link, 1))
+ try_ready (XEXP (link, 0));
free_INSN_LIST_list (&ready_list);
/* Fixing jump's dependences. */
@@ -3918,6 +3956,31 @@ change_pattern (rtx insn, rtx new_pat)
dfa_clear_single_insn_cache (insn);
}
+/* Return true if INSN can potentially be speculated with type DS. */
+bool
+sched_insn_is_legitimate_for_speculation_p (rtx insn, ds_t ds)
+{
+ if (HAS_INTERNAL_DEP (insn))
+ return false;
+
+ if (!NONJUMP_INSN_P (insn))
+ return false;
+
+ if (SCHED_GROUP_P (insn))
+ return false;
+
+ if (IS_SPECULATION_CHECK_P (insn))
+ return false;
+
+ if (side_effects_p (PATTERN (insn)))
+ return false;
+
+ if ((ds & BE_IN_SPEC)
+ && may_trap_p (PATTERN (insn)))
+ return false;
+
+ return true;
+}
/* -1 - can't speculate,
0 - for speculation with REQUEST mode it is OK to use
@@ -3927,25 +3990,15 @@ static int
speculate_insn (rtx insn, ds_t request, rtx *new_pat)
{
gcc_assert (current_sched_info->flags & DO_SPECULATION
- && (request & SPECULATIVE));
+ && (request & SPECULATIVE)
+ && sched_insn_is_legitimate_for_speculation_p (insn, request));
- if (!NONJUMP_INSN_P (insn)
- || HAS_INTERNAL_DEP (insn)
- || SCHED_GROUP_P (insn)
- || side_effects_p (PATTERN (insn))
- || (request & spec_info->mask) != request)
+ if ((request & spec_info->mask) != request)
return -1;
-
- gcc_assert (!IS_SPECULATION_CHECK_P (insn));
- if (request & BE_IN_SPEC)
- {
- if (may_trap_p (PATTERN (insn)))
- return -1;
-
- if (!(request & BEGIN_SPEC))
- return 0;
- }
+ if (request & BE_IN_SPEC
+ && !(request & BEGIN_SPEC))
+ return 0;
return targetm.sched.speculate_insn (insn, request & BEGIN_SPEC, new_pat);
}
@@ -4191,6 +4244,8 @@ move_succs (VEC(edge,gc) **succsp, basic
static void
sched_remove_insn (rtx insn)
{
+ sd_finish_insn (insn);
+
change_queue_index (insn, QUEUE_NOWHERE);
current_sched_info->add_remove_insn (insn, 1);
remove_insn (insn);
@@ -4202,14 +4257,14 @@ sched_remove_insn (rtx insn)
static void
clear_priorities (rtx insn, rtx_vec_t *roots_ptr)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
bool insn_is_root_p = true;
gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
{
- dep_t dep = DEP_LINK_DEP (link);
rtx pro = DEP_PRO (dep);
if (INSN_PRIORITY_STATUS (pro) >= 0
@@ -4254,12 +4309,17 @@ add_jump_dependencies (rtx insn, rtx jum
if (insn == jump)
break;
- if (deps_list_empty_p (INSN_FORW_DEPS (insn)))
- add_back_forw_dep (jump, insn, REG_DEP_ANTI, DEP_ANTI);
+ if (sd_lists_empty_p (insn, SD_LIST_FORW))
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ init_dep (new_dep, insn, jump, REG_DEP_ANTI);
+ sd_add_dep (new_dep, false);
+ }
}
while (1);
- gcc_assert (!deps_list_empty_p (INSN_BACK_DEPS (jump)));
+ gcc_assert (!sd_lists_empty_p (jump, SD_LIST_BACK));
}
/* Return the NOTE_INSN_BASIC_BLOCK of BB. */
@@ -4277,36 +4337,6 @@ bb_note (basic_block bb)
}
#ifdef ENABLE_CHECKING
-extern void debug_spec_status (ds_t);
-
-/* Dump information about the dependence status S. */
-void
-debug_spec_status (ds_t s)
-{
- FILE *f = stderr;
-
- if (s & BEGIN_DATA)
- fprintf (f, "BEGIN_DATA: %d; ", get_dep_weak (s, BEGIN_DATA));
- if (s & BE_IN_DATA)
- fprintf (f, "BE_IN_DATA: %d; ", get_dep_weak (s, BE_IN_DATA));
- if (s & BEGIN_CONTROL)
- fprintf (f, "BEGIN_CONTROL: %d; ", get_dep_weak (s, BEGIN_CONTROL));
- if (s & BE_IN_CONTROL)
- fprintf (f, "BE_IN_CONTROL: %d; ", get_dep_weak (s, BE_IN_CONTROL));
-
- if (s & HARD_DEP)
- fprintf (f, "HARD_DEP; ");
-
- if (s & DEP_TRUE)
- fprintf (f, "DEP_TRUE; ");
- if (s & DEP_ANTI)
- fprintf (f, "DEP_ANTI; ");
- if (s & DEP_OUTPUT)
- fprintf (f, "DEP_OUTPUT; ");
-
- fprintf (f, "\n");
-}
-
/* Helper function for check_cfg.
Return nonzero, if edge vector pointed to by EL has edge with TYPE in
its flags. */
--- sched-deps.c (/mirror/endeed/trunk/gcc) (revision 1284)
+++ sched-deps.c (/mirror/endeed/fdl-pool/gcc) (revision 1284)
@@ -85,12 +85,12 @@ dk_to_ds (enum reg_note dk)
/* Functions to operate with dependence information container - dep_t. */
/* Init DEP with the arguments. */
-static void
-init_dep_1 (dep_t dep, rtx pro, rtx con, enum reg_note kind, ds_t ds)
+void
+init_dep_1 (dep_t dep, rtx pro, rtx con, enum reg_note type, ds_t ds)
{
DEP_PRO (dep) = pro;
DEP_CON (dep) = con;
- DEP_KIND (dep) = kind;
+ DEP_TYPE (dep) = type;
DEP_STATUS (dep) = ds;
}
@@ -102,7 +102,7 @@ init_dep (dep_t dep, rtx pro, rtx con, e
{
ds_t ds;
- if ((current_sched_info->flags & USE_DEPS_LIST) != 0)
+ if ((current_sched_info->flags & USE_DEPS_LIST))
ds = dk_to_ds (kind);
else
ds = -1;
@@ -117,19 +117,85 @@ copy_dep (dep_t to, dep_t from)
memcpy (to, from, sizeof (*to));
}
-/* Functions to operate with a single link from the dependencies lists -
- dep_link_t. */
+static void dump_ds (FILE *, ds_t);
-/* Return true if dep_link L is consistent. */
-static bool
-dep_link_consistent_p (dep_link_t l)
+/* Define flags for dump_dep (). */
+
+#define DUMP_DEP_PRO (2)
+#define DUMP_DEP_CON (4)
+#define DUMP_DEP_TYPE (8)
+#define DUMP_DEP_STATUS (16)
+
+#define DUMP_DEP_ALL (DUMP_DEP_PRO | DUMP_DEP_CON | DUMP_DEP_TYPE \
+ |DUMP_DEP_STATUS)
+
+/* Dump DEP to DUMP.
+ FLAGS is a bit mask specifying what information about DEP needs
+ to be printed. If FLAGS has the very first bit set then dump
+ all fields of DEP. */
+static void
+dump_dep (FILE *dump, dep_t dep, int flags)
{
- dep_link_t next = DEP_LINK_NEXT (l);
+ if (flags & 1)
+ flags |= DUMP_DEP_ALL;
+
+ fprintf (dump, "<");
+
+ if (flags & DUMP_DEP_PRO)
+ fprintf (dump, "%d; ", INSN_UID (DEP_PRO (dep)));
+
+ if (flags & DUMP_DEP_CON)
+ fprintf (dump, "%d; ", INSN_UID (DEP_CON (dep)));
+
+ if (flags & DUMP_DEP_TYPE)
+ {
+ char t;
+ enum reg_note type = DEP_TYPE (dep);
+
+ switch (type)
+ {
+ case REG_DEP_TRUE:
+ t = 't';
+ break;
+
+ case REG_DEP_OUTPUT:
+ t = 'o';
+ break;
+
+ case REG_DEP_ANTI:
+ t = 'a';
+ break;
+
+ default:
+ gcc_unreachable ();
+ break;
+ }
- return (next == NULL
- || &DEP_LINK_NEXT (l) == DEP_LINK_PREV_NEXTP (next));
+ fprintf (dump, "%c; ", t);
+ }
+
+ if (flags & DUMP_DEP_STATUS)
+ {
+ if (current_sched_info->flags & USE_DEPS_LIST)
+ dump_ds (dump, DEP_STATUS (dep));
+ }
+
+ fprintf (dump, ">");
+}
+
+/* Default flags for dump_dep (). */
+static int dump_dep_flags = (DUMP_DEP_PRO | DUMP_DEP_CON);
+
+/* Dump all fields of DEP to STDERR. */
+void
+debug_dep (dep_t dep)
+{
+ dump_dep (stderr, dep, 1);
}
+/* Functions to operate with a single link from the dependencies lists -
+ dep_link_t. */
+
/* Attach L to appear after link X whose &DEP_LINK_NEXT (X) is given by
PREV_NEXT_P. */
static void
@@ -161,6 +227,8 @@ static void
add_to_deps_list (dep_link_t link, deps_list_t l)
{
attach_dep_link (link, &DEPS_LIST_FIRST (l));
+
+ ++DEPS_LIST_N_LINKS (l);
}
/* Detach dep_link L from the list. */
@@ -175,232 +243,142 @@ detach_dep_link (dep_link_t l)
if (next != NULL)
DEP_LINK_PREV_NEXTP (next) = prev_nextp;
- /* Though this is property is not used anywhere but in the assert in
- attach_dep_link (), this can prevent latent errors. */
DEP_LINK_PREV_NEXTP (l) = NULL;
DEP_LINK_NEXT (l) = NULL;
}
-/* Move LINK from whatever list it is now to L. */
-void
-move_dep_link (dep_link_t link, deps_list_t l)
+/* Remove link LINK from list LIST. */
+static void
+remove_from_deps_list (dep_link_t link, deps_list_t list)
{
detach_dep_link (link);
- add_to_deps_list (link, l);
+
+ --DEPS_LIST_N_LINKS (list);
}
-/* Check L's and its successors' consistency.
- This is, potentially, an expensive check, hence it should be guarded by
- ENABLE_CHECKING at all times. */
-static bool
-dep_links_consistent_p (dep_link_t l)
+/* Move link LINK from list FROM to list TO. */
+static void
+move_dep_link (dep_link_t link, deps_list_t from, deps_list_t to)
{
- while (l != NULL)
- {
- if (dep_link_consistent_p (l))
- l = DEP_LINK_NEXT (l);
- else
- return false;
- }
-
- return true;
+ remove_from_deps_list (link, from);
+ add_to_deps_list (link, to);
}
-/* Dump dep_nodes starting from l. */
-static void
-dump_dep_links (FILE *dump, dep_link_t l)
+/* Return true if LINK is not attached to any list.  */
+static bool
+dep_link_is_detached_p (dep_link_t link)
{
- while (l != NULL)
+ if (DEP_LINK_PREV_NEXTP (link) == NULL)
{
- dep_t d = DEP_LINK_DEP (l);
-
- fprintf (dump, "%d%c>%d ", INSN_UID (DEP_PRO (d)),
- dep_link_consistent_p (l) ? '-' : '!', INSN_UID (DEP_CON (d)));
+ gcc_assert (DEP_LINK_NEXT (link) == NULL);
- l = DEP_LINK_NEXT (l);
+ return true;
}
- fprintf (dump, "\n");
-}
-
-/* Dump dep_nodes starting from L to stderr. */
-void
-debug_dep_links (dep_link_t l)
-{
- dump_dep_links (stderr, l);
+ return false;
}
-/* Obstack to allocate dep_nodes and deps_lists on. */
-static struct obstack deps_obstack;
+/* Pool to hold all dependency nodes (dep_node_t). */
+static alloc_pool dn_pool;
-/* Obstack to hold forward dependencies lists (deps_list_t). */
-static struct obstack *dl_obstack = &deps_obstack;
+/* Number of dep_nodes out there. */
+static int dn_pool_diff = 0;
-/* Obstack to hold all dependency nodes (dep_node_t). */
-static struct obstack *dn_obstack = &deps_obstack;
+/* Create a dep_node. */
+static dep_node_t
+create_dep_node (void)
+{
+ dep_node_t n = (dep_node_t) pool_alloc (dn_pool);
+ dep_link_t back = DEP_NODE_BACK (n);
+ dep_link_t forw = DEP_NODE_FORW (n);
-/* Functions to operate with dependences lists - deps_list_t. */
+ DEP_LINK_NODE (back) = n;
+ DEP_LINK_NEXT (back) = NULL;
+ DEP_LINK_PREV_NEXTP (back) = NULL;
-/* Allocate deps_list.
+ DEP_LINK_NODE (forw) = n;
+ DEP_LINK_NEXT (forw) = NULL;
+ DEP_LINK_PREV_NEXTP (forw) = NULL;
- If ON_OBSTACK_P is true, allocate the list on the obstack. This is done for
- INSN_FORW_DEPS lists because they should live till the end of scheduling.
+ ++dn_pool_diff;
- INSN_BACK_DEPS and INSN_RESOLVED_BACK_DEPS lists are allocated on the free
- store and are being freed in haifa-sched.c: schedule_insn (). */
-static deps_list_t
-alloc_deps_list (bool on_obstack_p)
-{
- if (on_obstack_p)
- return obstack_alloc (dl_obstack, sizeof (struct _deps_list));
- else
- return xmalloc (sizeof (struct _deps_list));
+ return n;
}
-/* Initialize deps_list L. */
+/* Delete dep_node N. N must not be connected to any deps_list. */
static void
-init_deps_list (deps_list_t l)
+delete_dep_node (dep_node_t n)
{
- DEPS_LIST_FIRST (l) = NULL;
-}
+ gcc_assert (dep_link_is_detached_p (DEP_NODE_BACK (n))
+ && dep_link_is_detached_p (DEP_NODE_FORW (n)));
-/* Create (allocate and init) deps_list.
- The meaning of ON_OBSTACK_P is the same as in alloc_deps_list (). */
-deps_list_t
-create_deps_list (bool on_obstack_p)
-{
- deps_list_t l = alloc_deps_list (on_obstack_p);
+ --dn_pool_diff;
- init_deps_list (l);
- return l;
+ pool_free (dn_pool, n);
}
-/* Free dep_data_nodes that present in L. */
-static void
-clear_deps_list (deps_list_t l)
-{
- /* All dep_nodes are allocated on the dn_obstack. They'll be freed with
- the obstack. */
-
- DEPS_LIST_FIRST (l) = NULL;
-}
+/* Pool to hold dependencies lists (deps_list_t). */
+static alloc_pool dl_pool;
-/* Free deps_list L. */
-void
-free_deps_list (deps_list_t l)
-{
- gcc_assert (deps_list_empty_p (l));
- free (l);
-}
+/* Number of deps_lists out there. */
+static int dl_pool_diff = 0;
-/* Delete (clear and free) deps_list L. */
-void
-delete_deps_list (deps_list_t l)
-{
- clear_deps_list (l);
- free_deps_list (l);
-}
+/* Functions to operate with dependences lists - deps_list_t. */
-/* Return true if L is empty. */
-bool
+/* Return true if list L is empty. */
+static bool
deps_list_empty_p (deps_list_t l)
{
- return DEPS_LIST_FIRST (l) == NULL;
+ return DEPS_LIST_N_LINKS (l) == 0;
}
-/* Check L's consistency.
- This is, potentially, an expensive check, hence it should be guarded by
- ENABLE_CHECKING at all times. */
-static bool
-deps_list_consistent_p (deps_list_t l)
+/* Create a new deps_list. */
+static deps_list_t
+create_deps_list (void)
{
- dep_link_t first = DEPS_LIST_FIRST (l);
+ deps_list_t l = (deps_list_t) pool_alloc (dl_pool);
- return (first == NULL
- || (&DEPS_LIST_FIRST (l) == DEP_LINK_PREV_NEXTP (first)
- && dep_links_consistent_p (first)));
-}
-
-/* Dump L to F. */
-static void
-dump_deps_list (FILE *f, deps_list_t l)
-{
- dump_dep_links (f, DEPS_LIST_FIRST (l));
-}
+ DEPS_LIST_FIRST (l) = NULL;
+ DEPS_LIST_N_LINKS (l) = 0;
-/* Dump L to STDERR. */
-void
-debug_deps_list (deps_list_t l)
-{
- dump_deps_list (stderr, l);
+ ++dl_pool_diff;
+ return l;
}
-/* Add a dependency described by DEP to the list L.
- L should be either INSN_BACK_DEPS or INSN_RESOLVED_BACK_DEPS. */
-void
-add_back_dep_to_deps_list (deps_list_t l, dep_t dep_from)
+/* Free deps_list L. */
+static void
+free_deps_list (deps_list_t l)
{
- dep_node_t n = (dep_node_t) obstack_alloc (dn_obstack,
- sizeof (*n));
- dep_t dep_to = DEP_NODE_DEP (n);
- dep_link_t back = DEP_NODE_BACK (n);
- dep_link_t forw = DEP_NODE_FORW (n);
-
- copy_dep (dep_to, dep_from);
-
- DEP_LINK_NODE (back) = n;
- DEP_LINK_NODE (forw) = n;
+ gcc_assert (deps_list_empty_p (l));
- /* There is no particular need to initialize these four fields except to make
- assert in attach_dep_link () happy. */
- DEP_LINK_NEXT (back) = NULL;
- DEP_LINK_PREV_NEXTP (back) = NULL;
- DEP_LINK_NEXT (forw) = NULL;
- DEP_LINK_PREV_NEXTP (forw) = NULL;
+ --dl_pool_diff;
- add_to_deps_list (back, l);
+ pool_free (dl_pool, l);
}
-/* Find the dep_link with producer PRO in deps_list L. */
-dep_link_t
-find_link_by_pro_in_deps_list (deps_list_t l, rtx pro)
+/* Return true if there are no dep_nodes and deps_lists out there.
+ After the region is scheduled all the dependency nodes and lists
+ should [generally] be returned to the pool.  */
+bool
+deps_pools_are_empty_p (void)
{
- dep_link_t link;
-
- FOR_EACH_DEP_LINK (link, l)
- if (DEP_LINK_PRO (link) == pro)
- return link;
-
- return NULL;
+ return dn_pool_diff == 0 && dl_pool_diff == 0;
}
-/* Find the dep_link with consumer CON in deps_list L. */
-dep_link_t
-find_link_by_con_in_deps_list (deps_list_t l, rtx con)
-{
- dep_link_t link;
-
- FOR_EACH_DEP_LINK (link, l)
- if (DEP_LINK_CON (link) == con)
- return link;
-
- return NULL;
-}
-
-/* Make a copy of FROM in TO with substituting consumer with CON.
- TO and FROM should be RESOLVED_BACK_DEPS lists. */
-void
-copy_deps_list_change_con (deps_list_t to, deps_list_t from, rtx con)
+/* Remove all elements from L. */
+static void
+clear_deps_list (deps_list_t l)
{
- dep_link_t l;
+ do
+ {
+ dep_link_t link = DEPS_LIST_FIRST (l);
- gcc_assert (deps_list_empty_p (to));
+ if (link == NULL)
+ break;
- FOR_EACH_DEP_LINK (l, from)
- {
- add_back_dep_to_deps_list (to, DEP_LINK_DEP (l));
- DEP_LINK_CON (DEPS_LIST_FIRST (to)) = con;
+ remove_from_deps_list (link, l);
}
+ while (1);
}
static regset reg_pending_sets;
@@ -438,14 +416,6 @@ static bitmap_head *anti_dependency_cach
static bitmap_head *spec_dependency_cache;
static int cache_size;
-/* To speed up checking consistency of formed forward insn
- dependencies we use the following cache. Another possible solution
- could be switching off checking duplication of insns in forward
- dependencies. */
-#ifdef ENABLE_CHECKING
-static bitmap_head *forward_dependency_cache;
-#endif
-
static int deps_may_trap_p (rtx);
static void add_dependence_list (rtx, rtx, int, enum reg_note);
static void add_dependence_list_and_free (rtx, rtx *, int, enum reg_note);
@@ -460,19 +430,14 @@ static void sched_analyze_insn (struct d
static rtx sched_get_condition (rtx);
static int conditions_mutex_p (rtx, rtx);
-static enum DEPS_ADJUST_RESULT maybe_add_or_update_back_dep_1 (rtx, rtx,
- enum reg_note, ds_t, rtx, rtx, dep_link_t **);
-static enum DEPS_ADJUST_RESULT add_or_update_back_dep_1 (rtx, rtx,
- enum reg_note, ds_t, rtx, rtx, dep_link_t **);
-static void add_back_dep (rtx, rtx, enum reg_note, ds_t);
-
-static void adjust_add_sorted_back_dep (rtx, dep_link_t, dep_link_t *);
-static void adjust_back_add_forw_dep (rtx, dep_link_t *);
-static void delete_forw_dep (dep_link_t);
+static enum DEPS_ADJUST_RESULT maybe_add_or_update_dep_1 (dep_t, bool,
+ rtx, rtx);
+static enum DEPS_ADJUST_RESULT add_or_update_dep_1 (dep_t, bool, rtx, rtx);
+
static dw_t estimate_dep_weak (rtx, rtx);
#ifdef INSN_SCHEDULING
#ifdef ENABLE_CHECKING
-static void check_dep_status (enum reg_note, ds_t, bool);
+static void check_dep (dep_t, bool);
#endif
#endif
\f
@@ -568,23 +533,218 @@ sched_insns_conditions_mutex_p (rtx insn
return false;
}
\f
-/* Add ELEM wrapped in an dep_link with reg note kind DEP_TYPE to the
- INSN_BACK_DEPS (INSN), if it is not already there. DEP_TYPE indicates the
- type of dependence that this link represents. DS, if nonzero,
- indicates speculations, through which this dependence can be overcome.
- MEM1 and MEM2, if non-null, corresponds to memory locations in case of
- data speculation. The function returns a value indicating if an old entry
- has been changed or a new entry has been added to insn's backward deps.
- In case of changed entry CHANGED_LINKPP sets to its address.
- See also the definition of enum DEPS_ADJUST_RESULT in sched-int.h.
- Actual manipulation of dependence data structures is performed in
- add_or_update_back_dep_1. */
+/* Initialize LIST_PTR to point to one of the lists present in TYPES_PTR,
+ initialize RESOLVED_P_PTR with true if that list consists of resolved deps,
+   and remove the type of the returned [through LIST_PTR] list from TYPES_PTR.
+ This function is used to switch sd_iterator to the next list.
+ !!! For internal use only. Might consider moving it to sched-int.h. */
+void
+sd_next_list (rtx insn, sd_list_types_def *types_ptr,
+ deps_list_t *list_ptr, bool *resolved_p_ptr)
+{
+ sd_list_types_def types = *types_ptr;
+
+ if (types & SD_LIST_HARD_BACK)
+ {
+ *list_ptr = INSN_HARD_BACK_DEPS (insn);
+ *resolved_p_ptr = false;
+ *types_ptr = types & ~SD_LIST_HARD_BACK;
+ }
+ else if (types & SD_LIST_SPEC_BACK)
+ {
+ *list_ptr = INSN_SPEC_BACK_DEPS (insn);
+ *resolved_p_ptr = false;
+ *types_ptr = types & ~SD_LIST_SPEC_BACK;
+ }
+ else if (types & SD_LIST_FORW)
+ {
+ *list_ptr = INSN_FORW_DEPS (insn);
+ *resolved_p_ptr = false;
+ *types_ptr = types & ~SD_LIST_FORW;
+ }
+ else if (types & SD_LIST_RES_BACK)
+ {
+ *list_ptr = INSN_RESOLVED_BACK_DEPS (insn);
+ *resolved_p_ptr = true;
+ *types_ptr = types & ~SD_LIST_RES_BACK;
+ }
+ else if (types & SD_LIST_RES_FORW)
+ {
+ *list_ptr = INSN_RESOLVED_FORW_DEPS (insn);
+ *resolved_p_ptr = true;
+ *types_ptr = types & ~SD_LIST_RES_FORW;
+ }
+ else
+ {
+ *list_ptr = NULL;
+ *resolved_p_ptr = false;
+ *types_ptr = SD_LIST_NONE;
+ }
+}
+
+/* Return the combined size of INSN's lists defined by LIST_TYPES.  */
+int
+sd_lists_size (rtx insn, sd_list_types_def list_types)
+{
+ int size = 0;
+
+ while (list_types != SD_LIST_NONE)
+ {
+ deps_list_t list;
+ bool resolved_p;
+
+ sd_next_list (insn, &list_types, &list, &resolved_p);
+ size += DEPS_LIST_N_LINKS (list);
+ }
+
+ return size;
+}
+
+/* Return true if INSN's lists defined by LIST_TYPES are all empty. */
+bool
+sd_lists_empty_p (rtx insn, sd_list_types_def list_types)
+{
+ return sd_lists_size (insn, list_types) == 0;
+}
+
+/* Initialize data for INSN. */
+void
+sd_init_insn (rtx insn)
+{
+ INSN_HARD_BACK_DEPS (insn) = create_deps_list ();
+ INSN_SPEC_BACK_DEPS (insn) = create_deps_list ();
+ INSN_RESOLVED_BACK_DEPS (insn) = create_deps_list ();
+ INSN_FORW_DEPS (insn) = create_deps_list ();
+ INSN_RESOLVED_FORW_DEPS (insn) = create_deps_list ();
+
+ /* ??? It would be nice to allocate dependency caches here. */
+}
+
+/* Free data for INSN. */
+void
+sd_finish_insn (rtx insn)
+{
+ /* ??? It would be nice to deallocate dependency caches here. */
+
+ free_deps_list (INSN_HARD_BACK_DEPS (insn));
+ INSN_HARD_BACK_DEPS (insn) = NULL;
+
+ free_deps_list (INSN_SPEC_BACK_DEPS (insn));
+ INSN_SPEC_BACK_DEPS (insn) = NULL;
+
+ free_deps_list (INSN_RESOLVED_BACK_DEPS (insn));
+ INSN_RESOLVED_BACK_DEPS (insn) = NULL;
+
+ free_deps_list (INSN_FORW_DEPS (insn));
+ INSN_FORW_DEPS (insn) = NULL;
+
+ free_deps_list (INSN_RESOLVED_FORW_DEPS (insn));
+ INSN_RESOLVED_FORW_DEPS (insn) = NULL;
+}
+
+/* Find a dependency between producer PRO and consumer CON.
+ Search through resolved dependency lists if RESOLVED_P is true.
+ If no such dependency is found return NULL,
+   otherwise return the dependency and initialize SD_IT_PTR [if it is nonnull]
+ with an iterator pointing to it. */
+static dep_t
+sd_find_dep_between_no_cache (rtx pro, rtx con, bool resolved_p,
+ sd_iterator_def *sd_it_ptr)
+{
+ sd_list_types_def pro_list_type;
+ sd_list_types_def con_list_type;
+ sd_iterator_def sd_it;
+ dep_t dep;
+ bool found_p = false;
+
+ if (resolved_p)
+ {
+ pro_list_type = SD_LIST_RES_FORW;
+ con_list_type = SD_LIST_RES_BACK;
+ }
+ else
+ {
+ pro_list_type = SD_LIST_FORW;
+ con_list_type = SD_LIST_BACK;
+ }
+
+  /* Walk through either the back list of CON or the forw list of PRO,
+     depending on which one is shorter.  */
+ if (sd_lists_size (con, con_list_type) < sd_lists_size (pro, pro_list_type))
+ {
+ /* Find the dep_link with producer PRO in consumer's back_deps. */
+ FOR_EACH_DEP (con, con_list_type, sd_it, dep)
+ if (DEP_PRO (dep) == pro)
+ {
+ found_p = true;
+ break;
+ }
+ }
+ else
+ {
+ /* Find the dep_link with consumer CON in producer's forw_deps. */
+ FOR_EACH_DEP (pro, pro_list_type, sd_it, dep)
+ if (DEP_CON (dep) == con)
+ {
+ found_p = true;
+ break;
+ }
+ }
+
+ if (found_p)
+ {
+ if (sd_it_ptr != NULL)
+ *sd_it_ptr = sd_it;
+
+ return dep;
+ }
+
+ return NULL;
+}
+
+/* Find a dependency between producer PRO and consumer CON.
+   Use dependency caches [if available] to check whether the dependency
+   is present at all.
+   Search through resolved dependency lists if RESOLVED_P is true.
+   Return the dependency or NULL if none found.  */
+dep_t
+sd_find_dep_between (rtx pro, rtx con, bool resolved_p)
+{
+ if (true_dependency_cache != NULL)
+ /* Avoiding the list walk below can cut compile times dramatically
+ for some code. */
+ {
+ int elem_luid = INSN_LUID (pro);
+ int insn_luid = INSN_LUID (con);
+
+ gcc_assert (output_dependency_cache != NULL
+ && anti_dependency_cache != NULL);
+
+ if (!bitmap_bit_p (&true_dependency_cache[insn_luid], elem_luid)
+ && !bitmap_bit_p (&output_dependency_cache[insn_luid], elem_luid)
+ && !bitmap_bit_p (&anti_dependency_cache[insn_luid], elem_luid))
+ return NULL;
+ }
+
+ return sd_find_dep_between_no_cache (pro, con, resolved_p, NULL);
+}
+
+/* Add or update a dependence described by DEP.
+ MEM1 and MEM2, if non-null, correspond to memory locations in case of
+ data speculation.
+
+ The function returns a value indicating if an old entry has been changed
+ or a new entry has been added to insn's backward deps.
+
+   This function merely checks whether the producer and the consumer are
+   the same insn and doesn't create a dep in this case.  Actual manipulation of
+ dependence data structures is performed in add_or_update_dep_1. */
static enum DEPS_ADJUST_RESULT
-maybe_add_or_update_back_dep_1 (rtx insn, rtx elem, enum reg_note dep_type,
- ds_t ds, rtx mem1, rtx mem2,
- dep_link_t **changed_linkpp)
+maybe_add_or_update_dep_1 (dep_t dep, bool resolved_p, rtx mem1, rtx mem2)
{
+ rtx elem = DEP_PRO (dep);
+ rtx insn = DEP_CON (dep);
+
gcc_assert (INSN_P (insn) && INSN_P (elem));
/* Don't depend an insn on itself. */
@@ -595,319 +755,574 @@ maybe_add_or_update_back_dep_1 (rtx insn
/* INSN has an internal dependence, which we can't overcome. */
HAS_INTERNAL_DEP (insn) = 1;
#endif
- return 0;
+
+ return DEP_NODEP;
}
- return add_or_update_back_dep_1 (insn, elem, dep_type,
- ds, mem1, mem2, changed_linkpp);
+ return add_or_update_dep_1 (dep, resolved_p, mem1, mem2);
}
-/* This function has the same meaning of parameters and return values
- as maybe_add_or_update_back_dep_1. The only difference between these
- two functions is that INSN and ELEM are guaranteed not to be the same
- in this one. */
+#ifdef INSN_SCHEDULING
+/* Ask dependency caches what needs to be done for dependence DEP.
+   Return DEP_CREATED if a new dependence should be created and there is no
+   need to try to find one by searching the dependency lists.
+   Return DEP_PRESENT if there already is a dependence described by DEP and
+   hence nothing is to be done.
+   Return DEP_CHANGED if there already is a dependence but it should be
+   updated to incorporate additional information from DEP.  */
static enum DEPS_ADJUST_RESULT
-add_or_update_back_dep_1 (rtx insn, rtx elem, enum reg_note dep_type,
- ds_t ds ATTRIBUTE_UNUSED,
- rtx mem1 ATTRIBUTE_UNUSED, rtx mem2 ATTRIBUTE_UNUSED,
- dep_link_t **changed_linkpp ATTRIBUTE_UNUSED)
+ask_dependency_caches (dep_t dep)
{
- bool maybe_present_p = true, present_p = false;
-
- gcc_assert (INSN_P (insn) && INSN_P (elem) && insn != elem);
-
-#ifdef INSN_SCHEDULING
+ int elem_luid = INSN_LUID (DEP_PRO (dep));
+ int insn_luid = INSN_LUID (DEP_CON (dep));
-#ifdef ENABLE_CHECKING
- check_dep_status (dep_type, ds, mem1 != NULL);
-#endif
+ gcc_assert (true_dependency_cache != NULL
+ && output_dependency_cache != NULL
+ && anti_dependency_cache != NULL);
- /* If we already have a dependency for ELEM, then we do not need to
- do anything. Avoiding the list walk below can cut compile times
- dramatically for some code. */
- if (true_dependency_cache != NULL)
- {
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
+ {
enum reg_note present_dep_type;
-
- gcc_assert (output_dependency_cache);
- gcc_assert (anti_dependency_cache);
- if (!(current_sched_info->flags & USE_DEPS_LIST))
- {
- if (bitmap_bit_p (&true_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_type = REG_DEP_TRUE;
- else if (bitmap_bit_p (&output_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_type = REG_DEP_OUTPUT;
- else if (bitmap_bit_p (&anti_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_type = REG_DEP_ANTI;
- else
- maybe_present_p = false;
- if (maybe_present_p)
- {
- if ((int) dep_type >= (int) present_dep_type)
- return DEP_PRESENT;
-
- present_p = true;
- }
- }
+ if (bitmap_bit_p (&true_dependency_cache[insn_luid], elem_luid))
+ present_dep_type = REG_DEP_TRUE;
+ else if (bitmap_bit_p (&output_dependency_cache[insn_luid], elem_luid))
+ present_dep_type = REG_DEP_OUTPUT;
+ else if (bitmap_bit_p (&anti_dependency_cache[insn_luid], elem_luid))
+ present_dep_type = REG_DEP_ANTI;
else
- {
- ds_t present_dep_types = 0;
+ /* There is no existing dep so it should be created. */
+ return DEP_CREATED;
+
+ if ((int) DEP_TYPE (dep) >= (int) present_dep_type)
+ /* DEP does not add anything to the existing dependence. */
+ return DEP_PRESENT;
+ }
+ else
+ {
+ ds_t present_dep_types = 0;
- if (bitmap_bit_p (&true_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_types |= DEP_TRUE;
- if (bitmap_bit_p (&output_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_types |= DEP_OUTPUT;
- if (bitmap_bit_p (&anti_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_types |= DEP_ANTI;
+ if (bitmap_bit_p (&true_dependency_cache[insn_luid], elem_luid))
+ present_dep_types |= DEP_TRUE;
+ if (bitmap_bit_p (&output_dependency_cache[insn_luid], elem_luid))
+ present_dep_types |= DEP_OUTPUT;
+ if (bitmap_bit_p (&anti_dependency_cache[insn_luid], elem_luid))
+ present_dep_types |= DEP_ANTI;
+
+ if (present_dep_types == 0)
+ /* There is no existing dep so it should be created. */
+ return DEP_CREATED;
+
+ if (!(current_sched_info->flags & DO_SPECULATION)
+ || !bitmap_bit_p (&spec_dependency_cache[insn_luid], elem_luid))
+ {
+ if ((present_dep_types | (DEP_STATUS (dep) & DEP_TYPES))
+ == present_dep_types)
+ /* DEP does not add anything to the existing dependence. */
+ return DEP_PRESENT;
+ }
+ else
+ {
+ /* Only true dependencies can be data speculative and
+ only anti dependencies can be control speculative. */
+ gcc_assert ((present_dep_types & (DEP_TRUE | DEP_ANTI))
+ == present_dep_types);
+
+ /* if (DEP is SPECULATIVE) then
+ ..we should update DEP_STATUS
+ else
+ ..we should reset existing dep to non-speculative. */
+ }
+ }
+
+ return DEP_CHANGED;
+}
+
+/* Set dependency caches according to DEP. */
+static void
+set_dependency_caches (dep_t dep)
+{
+ int elem_luid = INSN_LUID (DEP_PRO (dep));
+ int insn_luid = INSN_LUID (DEP_CON (dep));
+
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
+ {
+ switch (DEP_TYPE (dep))
+ {
+ case REG_DEP_TRUE:
+ bitmap_set_bit (&true_dependency_cache[insn_luid], elem_luid);
+ break;
- if (present_dep_types)
+ case REG_DEP_OUTPUT:
+ bitmap_set_bit (&output_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ case REG_DEP_ANTI:
+ bitmap_set_bit (&anti_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+ }
+ else
+ {
+ ds_t ds = DEP_STATUS (dep);
+
+ if (ds & DEP_TRUE)
+ bitmap_set_bit (&true_dependency_cache[insn_luid], elem_luid);
+ if (ds & DEP_OUTPUT)
+ bitmap_set_bit (&output_dependency_cache[insn_luid], elem_luid);
+ if (ds & DEP_ANTI)
+ bitmap_set_bit (&anti_dependency_cache[insn_luid], elem_luid);
+
+ if (ds & SPECULATIVE)
+ {
+ gcc_assert (current_sched_info->flags & DO_SPECULATION);
+ bitmap_set_bit (&spec_dependency_cache[insn_luid], elem_luid);
+ }
+ }
+}
+
+/* The type of dependence DEP has changed from OLD_TYPE.  Update the
+   dependency caches accordingly.  */
+static void
+update_dependency_caches (dep_t dep, enum reg_note old_type)
+{
+ int elem_luid = INSN_LUID (DEP_PRO (dep));
+ int insn_luid = INSN_LUID (DEP_CON (dep));
+
+  /* Clear the corresponding cache entry because the type of the link
+     may have changed.  Keep the entries if we use_deps_list.  */
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
+ {
+ switch (old_type)
+ {
+ case REG_DEP_OUTPUT:
+ bitmap_clear_bit (&output_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ case REG_DEP_ANTI:
+ bitmap_clear_bit (&anti_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+
+ switch (DEP_TYPE (dep))
+ {
+ case REG_DEP_TRUE:
+ bitmap_set_bit (&true_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ case REG_DEP_OUTPUT:
+ bitmap_set_bit (&output_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ case REG_DEP_ANTI:
+ bitmap_set_bit (&anti_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+ }
+ else
+ {
+ ds_t ds = DEP_STATUS (dep);
+
+ switch (old_type)
+ {
+ case REG_DEP_TRUE:
+ if (ds & DEP_OUTPUT)
+ bitmap_set_bit (&output_dependency_cache[insn_luid], elem_luid);
+ if (ds & DEP_ANTI)
+ bitmap_set_bit (&anti_dependency_cache[insn_luid], elem_luid);
+
+ break;
+
+ case REG_DEP_OUTPUT:
+ if (ds & DEP_TRUE)
+ bitmap_set_bit (&true_dependency_cache[insn_luid], elem_luid);
+ if (ds & DEP_ANTI)
+ bitmap_set_bit (&anti_dependency_cache[insn_luid], elem_luid);
+
+ break;
+
+ case REG_DEP_ANTI:
+ if (ds & DEP_TRUE)
+ bitmap_set_bit (&true_dependency_cache[insn_luid], elem_luid);
+ if (ds & DEP_OUTPUT)
+ bitmap_set_bit (&output_dependency_cache[insn_luid], elem_luid);
+
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+
+      /* Note that the dep can become speculative only at the moment
+	 of creation; thus we don't need to check for it during an update.  */
+ }
+}
+
+/* Convert a dependence pointed to by SD_IT to be non-speculative. */
+static void
+change_spec_dep_to_hard (sd_iterator_def sd_it)
+{
+ dep_node_t node = DEP_LINK_NODE (*sd_it.linkp);
+ dep_link_t link = DEP_NODE_BACK (node);
+ dep_t dep = DEP_NODE_DEP (node);
+ rtx elem = DEP_PRO (dep);
+ rtx insn = DEP_CON (dep);
+
+ move_dep_link (link, INSN_SPEC_BACK_DEPS (insn), INSN_HARD_BACK_DEPS (insn));
+
+ DEP_STATUS (dep) &= ~SPECULATIVE;
+
+ if (true_dependency_cache != NULL)
+ /* Clear the cache entry. */
+ bitmap_clear_bit (&spec_dependency_cache[INSN_LUID (insn)],
+ INSN_LUID (elem));
+}
+#endif
+
+/* Update DEP to incorporate information from NEW_DEP.
+ SD_IT points to DEP in case it should be moved to another list.
+ MEM1 and MEM2, if nonnull, correspond to memory locations in case if
+ data-speculative dependence should be updated. */
+static enum DEPS_ADJUST_RESULT
+update_dep (dep_t dep, dep_t new_dep,
+ sd_iterator_def sd_it, rtx mem1, rtx mem2)
+{
+ enum DEPS_ADJUST_RESULT res = DEP_PRESENT;
+ enum reg_note old_type = DEP_TYPE (dep);
+
+ /* If this is a more restrictive type of dependence than the
+ existing one, then change the existing dependence to this
+ type. */
+ if ((int) DEP_TYPE (new_dep) < (int) old_type)
+ {
+ DEP_TYPE (dep) = DEP_TYPE (new_dep);
+ res = DEP_CHANGED;
+ }
+
+#ifdef INSN_SCHEDULING
+ if (current_sched_info->flags & USE_DEPS_LIST)
+ /* Update DEP_STATUS. */
+ {
+ ds_t dep_status = DEP_STATUS (dep);
+ ds_t ds = DEP_STATUS (new_dep);
+ ds_t new_status = ds | dep_status;
+
+ if (new_status & SPECULATIVE)
+ /* Either existing dep or a dep we're adding or both are
+ speculative. */
+ {
+ if (!(ds & SPECULATIVE)
+ || !(dep_status & SPECULATIVE))
+ /* The new dep can't be speculative. */
+ {
+ new_status &= ~SPECULATIVE;
+
+ if (dep_status & SPECULATIVE)
+ /* The old dep was speculative, but now it
+ isn't. */
+ change_spec_dep_to_hard (sd_it);
+ }
+ else
{
- if (!(current_sched_info->flags & DO_SPECULATION)
- || !bitmap_bit_p (&spec_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- {
- if ((present_dep_types | (ds & DEP_TYPES))
- == present_dep_types)
- /* We already have all these bits. */
- return DEP_PRESENT;
- }
- else
+ /* Both are speculative. Merge probabilities. */
+ if (mem1 != NULL)
{
- /* Only true dependencies can be data speculative and
- only anti dependencies can be control speculative. */
- gcc_assert ((present_dep_types & (DEP_TRUE | DEP_ANTI))
- == present_dep_types);
-
- /* if (additional dep is SPECULATIVE) then
- we should update DEP_STATUS
- else
- we should reset existing dep to non-speculative. */
+ dw_t dw;
+
+ dw = estimate_dep_weak (mem1, mem2);
+ ds = set_dep_weak (ds, BEGIN_DATA, dw);
}
-
- present_p = true;
+
+ new_status = ds_merge (dep_status, ds);
}
- else
- maybe_present_p = false;
- }
+ }
+
+ ds = new_status;
+
+ if (dep_status != ds)
+ {
+ DEP_STATUS (dep) = ds;
+ res = DEP_CHANGED;
+ }
}
+
+ if (true_dependency_cache != NULL
+ && res == DEP_CHANGED)
+ update_dependency_caches (dep, old_type);
#endif
- /* Check that we don't already have this dependence. */
- if (maybe_present_p)
- {
- dep_link_t *linkp;
+ return res;
+}
- for (linkp = &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
- *linkp != NULL;
- linkp = &DEP_LINK_NEXT (*linkp))
- {
- dep_t link = DEP_LINK_DEP (*linkp);
+/* Add or update a dependence described by DEP.
+ MEM1 and MEM2, if non-null, correspond to memory locations in case of
+ data speculation.
- gcc_assert (true_dependency_cache == 0 || present_p);
-
- if (DEP_PRO (link) == elem)
- {
- enum DEPS_ADJUST_RESULT changed_p = DEP_PRESENT;
+ The function returns a value indicating if an old entry has been changed
+ or a new entry has been added to insn's backward deps or nothing has
+ been updated at all. */
+static enum DEPS_ADJUST_RESULT
+add_or_update_dep_1 (dep_t new_dep, bool resolved_p,
+ rtx mem1 ATTRIBUTE_UNUSED, rtx mem2 ATTRIBUTE_UNUSED)
+{
+ bool maybe_present_p = true;
+ bool present_p = false;
+ gcc_assert (INSN_P (DEP_PRO (new_dep)) && INSN_P (DEP_CON (new_dep))
+ && DEP_PRO (new_dep) != DEP_CON (new_dep));
+
#ifdef INSN_SCHEDULING
- if (current_sched_info->flags & USE_DEPS_LIST)
- {
- ds_t new_status = ds | DEP_STATUS (link);
-
- if (new_status & SPECULATIVE)
- {
- if (!(ds & SPECULATIVE)
- || !(DEP_STATUS (link) & SPECULATIVE))
- /* Then this dep can't be speculative. */
- {
- new_status &= ~SPECULATIVE;
- if (true_dependency_cache
- && (DEP_STATUS (link) & SPECULATIVE))
- bitmap_clear_bit (&spec_dependency_cache
- [INSN_LUID (insn)],
- INSN_LUID (elem));
- }
- else
- {
- /* Both are speculative. Merging probabilities. */
- if (mem1)
- {
- dw_t dw;
-
- dw = estimate_dep_weak (mem1, mem2);
- ds = set_dep_weak (ds, BEGIN_DATA, dw);
- }
-
- new_status = ds_merge (DEP_STATUS (link), ds);
- }
- }
-
- ds = new_status;
- }
-
- /* Clear corresponding cache entry because type of the link
- may have changed. Keep them if we use_deps_list. */
- if (true_dependency_cache != NULL
- && !(current_sched_info->flags & USE_DEPS_LIST))
- {
- enum reg_note kind = DEP_KIND (link);
- switch (kind)
- {
- case REG_DEP_OUTPUT:
- bitmap_clear_bit (&output_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- break;
- case REG_DEP_ANTI:
- bitmap_clear_bit (&anti_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- break;
- default:
- gcc_unreachable ();
- }
- }
-
- if ((current_sched_info->flags & USE_DEPS_LIST)
- && DEP_STATUS (link) != ds)
- {
- DEP_STATUS (link) = ds;
- changed_p = DEP_CHANGED;
- }
+#ifdef ENABLE_CHECKING
+ check_dep (new_dep, mem1 != NULL);
#endif
- /* If this is a more restrictive type of dependence than the
- existing one, then change the existing dependence to this
- type. */
- if ((int) dep_type < (int) DEP_KIND (link))
- {
- DEP_KIND (link) = dep_type;
- changed_p = DEP_CHANGED;
- }
+ if (true_dependency_cache != NULL)
+ {
+ switch (ask_dependency_caches (new_dep))
+ {
+ case DEP_PRESENT:
+ return DEP_PRESENT;
-#ifdef INSN_SCHEDULING
- /* If we are adding a dependency to INSN's LOG_LINKs, then
- note that in the bitmap caches of dependency information. */
- if (true_dependency_cache != NULL)
- {
- if (!(current_sched_info->flags & USE_DEPS_LIST))
- {
- if (DEP_KIND (link) == REG_DEP_TRUE)
- bitmap_set_bit (&true_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- else if (DEP_KIND (link) == REG_DEP_OUTPUT)
- bitmap_set_bit (&output_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- else if (DEP_KIND (link) == REG_DEP_ANTI)
- bitmap_set_bit (&anti_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- }
- else
- {
- if (ds & DEP_TRUE)
- bitmap_set_bit (&true_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- if (ds & DEP_OUTPUT)
- bitmap_set_bit (&output_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- if (ds & DEP_ANTI)
- bitmap_set_bit (&anti_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- /* Note, that dep can become speculative only
- at the moment of creation. Thus, we don't need to
- check for it here. */
- }
- }
-
- if (changed_linkpp && changed_p == DEP_CHANGED)
- *changed_linkpp = linkp;
+ case DEP_CHANGED:
+ maybe_present_p = true;
+ present_p = true;
+ break;
+
+ case DEP_CREATED:
+ maybe_present_p = false;
+ present_p = false;
+ break;
+
+ default:
+ gcc_unreachable ();
+ break;
+ }
+ }
#endif
- return changed_p;
- }
- }
- /* We didn't find a dep. It shouldn't be present in the cache. */
- gcc_assert (!present_p);
+
+ /* Check that we don't already have this dependence. */
+ if (maybe_present_p)
+ {
+ dep_t present_dep;
+ sd_iterator_def sd_it;
+
+ gcc_assert (true_dependency_cache == NULL || present_p);
+
+ present_dep = sd_find_dep_between_no_cache (DEP_PRO (new_dep),
+ DEP_CON (new_dep),
+ resolved_p, &sd_it);
+
+ if (present_dep != NULL)
+ /* We found an existing dependency between ELEM and INSN. */
+ return update_dep (present_dep, new_dep, sd_it, mem1, mem2);
+ else
+    /* We didn't find a dep, so it shouldn't be present in the cache.  */
+ gcc_assert (!present_p);
}
/* Might want to check one level of transitivity to save conses.
- This check should be done in maybe_add_or_update_back_dep_1.
- Since we made it to add_or_update_back_dep_1, we must create
+ This check should be done in maybe_add_or_update_dep_1.
+ Since we made it to add_or_update_dep_1, we must create
(or update) a link. */
- if (mem1)
+ if (mem1 != NULL_RTX)
{
gcc_assert (current_sched_info->flags & DO_SPECULATION);
- ds = set_dep_weak (ds, BEGIN_DATA, estimate_dep_weak (mem1, mem2));
+ DEP_STATUS (new_dep) = set_dep_weak (DEP_STATUS (new_dep), BEGIN_DATA,
+ estimate_dep_weak (mem1, mem2));
}
-
- add_back_dep (insn, elem, dep_type, ds);
+
+ sd_add_dep (new_dep, resolved_p);
return DEP_CREATED;
}
-/* This function creates a link between INSN and ELEM under any
- conditions. DS describes speculative status of the link. */
+/* Initialize BACK_LIST_PTR with consumer's backward list and
+ FORW_LIST_PTR with producer's forward list. If RESOLVED_P is true
+ initialize with lists that hold resolved deps. */
static void
-add_back_dep (rtx insn, rtx elem, enum reg_note dep_type, ds_t ds)
+get_back_and_forw_lists (dep_t dep, bool resolved_p,
+ deps_list_t *back_list_ptr,
+ deps_list_t *forw_list_ptr)
{
- struct _dep _dep, *dep = &_dep;
+ rtx con = DEP_CON (dep);
- gcc_assert (INSN_P (insn) && INSN_P (elem) && insn != elem);
+ if (!resolved_p)
+ {
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && (DEP_STATUS (dep) & SPECULATIVE))
+ *back_list_ptr = INSN_SPEC_BACK_DEPS (con);
+ else
+ *back_list_ptr = INSN_HARD_BACK_DEPS (con);
- if (current_sched_info->flags & USE_DEPS_LIST)
- init_dep_1 (dep, elem, insn, dep_type, ds);
+ *forw_list_ptr = INSN_FORW_DEPS (DEP_PRO (dep));
+ }
else
- init_dep_1 (dep, elem, insn, dep_type, -1);
+ {
+ *back_list_ptr = INSN_RESOLVED_BACK_DEPS (con);
+ *forw_list_ptr = INSN_RESOLVED_FORW_DEPS (DEP_PRO (dep));
+ }
+}
+
+/* Add dependence described by DEP.
+ If RESOLVED_P is true treat the dependence as a resolved one. */
+void
+sd_add_dep (dep_t dep, bool resolved_p)
+{
+ dep_node_t n = create_dep_node ();
+ deps_list_t con_back_deps;
+ deps_list_t pro_forw_deps;
+ rtx elem = DEP_PRO (dep);
+ rtx insn = DEP_CON (dep);
+
+ gcc_assert (INSN_P (insn) && INSN_P (elem) && insn != elem);
+
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && !sched_insn_is_legitimate_for_speculation_p (insn, DEP_STATUS (dep)))
+ DEP_STATUS (dep) &= ~SPECULATIVE;
- add_back_dep_to_deps_list (INSN_BACK_DEPS (insn), dep);
+ copy_dep (DEP_NODE_DEP (n), dep);
+
+ get_back_and_forw_lists (dep, resolved_p, &con_back_deps, &pro_forw_deps);
+
+ add_to_deps_list (DEP_NODE_BACK (n), con_back_deps);
#ifdef INSN_SCHEDULING
#ifdef ENABLE_CHECKING
- check_dep_status (dep_type, ds, false);
+ check_dep (dep, false);
#endif
+ add_to_deps_list (DEP_NODE_FORW (n), pro_forw_deps);
+
/* If we are adding a dependency to INSN's LOG_LINKs, then note that
in the bitmap caches of dependency information. */
if (true_dependency_cache != NULL)
+ set_dependency_caches (dep);
+#endif
+}
+
+/* Add or update a dependence described by DEP.
+   If RESOLVED_P is true treat the dependence as a resolved one.
+   This function is a convenience wrapper around add_or_update_dep_1.  */
+enum DEPS_ADJUST_RESULT
+sd_add_or_update_dep (dep_t dep, bool resolved_p)
+{
+ return add_or_update_dep_1 (dep, resolved_p, NULL_RTX, NULL_RTX);
+}
+
+/* Resolve the dependence pointed to by SD_IT.
+ SD_IT will advance to the next element. */
+void
+sd_resolve_dep (sd_iterator_def sd_it)
+{
+ dep_node_t node = DEP_LINK_NODE (*sd_it.linkp);
+ dep_t dep = DEP_NODE_DEP (node);
+ rtx pro = DEP_PRO (dep);
+ rtx con = DEP_CON (dep);
+
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && (DEP_STATUS (dep) & SPECULATIVE))
+ move_dep_link (DEP_NODE_BACK (node), INSN_SPEC_BACK_DEPS (con),
+ INSN_RESOLVED_BACK_DEPS (con));
+ else
+ move_dep_link (DEP_NODE_BACK (node), INSN_HARD_BACK_DEPS (con),
+ INSN_RESOLVED_BACK_DEPS (con));
+
+ move_dep_link (DEP_NODE_FORW (node), INSN_FORW_DEPS (pro),
+ INSN_RESOLVED_FORW_DEPS (pro));
+}
+
+/* Make TO depend on all of FROM's producers.
+ If RESOLVED_P is true add dependencies to the resolved lists. */
+void
+sd_copy_back_deps (rtx to, rtx from, bool resolved_p)
+{
+ sd_list_types_def list_type;
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ list_type = resolved_p ? SD_LIST_RES_BACK : SD_LIST_BACK;
+
+ FOR_EACH_DEP (from, list_type, sd_it, dep)
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ copy_dep (new_dep, dep);
+ DEP_CON (new_dep) = to;
+ sd_add_dep (new_dep, resolved_p);
+ }
+}
+
+/* Remove a dependency referred to by SD_IT.
+ SD_IT will point to the next dependence after removal. */
+void
+sd_delete_dep (sd_iterator_def sd_it)
+{
+ dep_node_t n = DEP_LINK_NODE (*sd_it.linkp);
+ dep_t dep = DEP_NODE_DEP (n);
+ rtx pro = DEP_PRO (dep);
+ rtx con = DEP_CON (dep);
+ deps_list_t con_back_deps;
+ deps_list_t pro_forw_deps;
+
+ if (true_dependency_cache != NULL)
{
- if (!(current_sched_info->flags & USE_DEPS_LIST))
- {
- if (dep_type == REG_DEP_TRUE)
- bitmap_set_bit (&true_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- else if (dep_type == REG_DEP_OUTPUT)
- bitmap_set_bit (&output_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- else if (dep_type == REG_DEP_ANTI)
- bitmap_set_bit (&anti_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- }
- else
- {
- if (ds & DEP_TRUE)
- bitmap_set_bit (&true_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- if (ds & DEP_OUTPUT)
- bitmap_set_bit (&output_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- if (ds & DEP_ANTI)
- bitmap_set_bit (&anti_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- if (ds & SPECULATIVE)
- {
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
- bitmap_set_bit (&spec_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- }
- }
+ int elem_luid = INSN_LUID (pro);
+ int insn_luid = INSN_LUID (con);
+
+ bitmap_clear_bit (&true_dependency_cache[insn_luid], elem_luid);
+ bitmap_clear_bit (&anti_dependency_cache[insn_luid], elem_luid);
+ bitmap_clear_bit (&output_dependency_cache[insn_luid], elem_luid);
+
+ if (current_sched_info->flags & DO_SPECULATION)
+ bitmap_clear_bit (&spec_dependency_cache[insn_luid], elem_luid);
}
-#endif
+
+ get_back_and_forw_lists (dep, sd_it.resolved_p,
+ &con_back_deps, &pro_forw_deps);
+
+ remove_from_deps_list (DEP_NODE_BACK (n), con_back_deps);
+ remove_from_deps_list (DEP_NODE_FORW (n), pro_forw_deps);
+
+ delete_dep_node (n);
+}
+
+/* Dump deps_lists of INSN specified by TYPES. */
+static void
+sd_dump_lists (FILE *dump, rtx insn, sd_list_types_def types, int flags)
+{
+ sd_iterator_def sd_it;
+ dep_t dep;
+ int all;
+
+ all = (flags & 1);
+
+ fprintf (dump, "%d: ", sd_lists_size (insn, types));
+
+ FOR_EACH_DEP (insn, types, sd_it, dep)
+ {
+ dump_dep (dump, dep, dump_dep_flags | all);
+ fprintf (dump, " ");
+ }
+
+ fprintf (dump, "\n");
+}
+
+/* Dump deps_lists of INSN specified by TYPES to STDERR. */
+void
+sd_debug_lists (rtx insn, sd_list_types_def types)
+{
+ sd_dump_lists (stderr, insn, types, 1);
}
/* A convenience wrapper to operate on an entire list. */
@@ -939,26 +1354,19 @@ add_dependence_list_and_free (rtx insn,
}
/* Clear all dependencies for an insn. */
-
static void
delete_all_dependences (rtx insn)
{
- /* Clear caches, if they exist, as well as free the dependence. */
-
-#ifdef INSN_SCHEDULING
- if (true_dependency_cache != NULL)
- {
- bitmap_clear (&true_dependency_cache[INSN_LUID (insn)]);
- bitmap_clear (&output_dependency_cache[INSN_LUID (insn)]);
- bitmap_clear (&anti_dependency_cache[INSN_LUID (insn)]);
- /* We don't have to clear forward_dependency_cache here,
- because it is formed later. */
- if (current_sched_info->flags & DO_SPECULATION)
- bitmap_clear (&spec_dependency_cache[INSN_LUID (insn)]);
- }
-#endif
+ sd_iterator_def sd_it;
+ dep_t dep;
- clear_deps_list (INSN_BACK_DEPS (insn));
+ /* The loop below could be optimized to clear the caches and back_deps
+ in one call, but that would duplicate code from sd_delete_dep (). */
+
+ for (sd_it = sd_iterator_start (insn, SD_LIST_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
+ sd_delete_dep (sd_it);
}
/* All insns in a scheduling group except the first should only have
@@ -970,13 +1378,13 @@ delete_all_dependences (rtx insn)
static void
fixup_sched_groups (rtx insn)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
rtx prev_nonnote;
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
{
rtx i = insn;
- dep_t dep = DEP_LINK_DEP (link);
rtx pro = DEP_PRO (dep);
do
@@ -988,7 +1396,7 @@ fixup_sched_groups (rtx insn)
} while (SCHED_GROUP_P (i));
if (! sched_insns_conditions_mutex_p (i, pro))
- add_dependence (i, pro, DEP_KIND (dep));
+ add_dependence (i, pro, DEP_TYPE (dep));
next_link:;
}
@@ -1374,11 +1782,19 @@ sched_analyze_2 (struct deps *deps, rtx
t, rtx_varies_p)
&& ! sched_insns_conditions_mutex_p (insn, XEXP (pending, 0)))
{
- if (current_sched_info->flags & DO_SPECULATION)
- maybe_add_or_update_back_dep_1 (insn, XEXP (pending, 0),
- REG_DEP_TRUE,
- BEGIN_DATA | DEP_TRUE,
- XEXP (pending_mem, 0), t, 0);
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && (spec_info->mask & BEGIN_DATA))
+ /* Create a data-speculative dependence between producer
+ and consumer. */
+ {
+ dep_def _dep, *dep = &_dep;
+
+ init_dep_1 (dep, XEXP (pending, 0), insn, REG_DEP_TRUE,
+ BEGIN_DATA | DEP_TRUE);
+
+ maybe_add_or_update_dep_1 (dep, false,
+ XEXP (pending_mem, 0), t);
+ }
else
add_dependence (insn, XEXP (pending, 0), REG_DEP_TRUE);
}
@@ -1811,6 +2227,19 @@ sched_analyze_insn (struct deps *deps, r
/* Fixup the dependencies in the sched group. */
if (SCHED_GROUP_P (insn))
fixup_sched_groups (insn);
+
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && !sched_insn_is_legitimate_for_speculation_p (insn, 0))
+ /* INSN has an internal dependency (e.g. r14 = [r14]) and thus cannot
+ be speculated. */
+ {
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
+ change_spec_dep_to_hard (sd_it);
+ }
}
/* Analyze every insn between HEAD and TAIL inclusive, creating backward
@@ -1839,13 +2268,8 @@ sched_analyze (struct deps *deps, rtx he
if (INSN_P (insn))
{
- /* These two lists will be freed in schedule_insn (). */
- INSN_BACK_DEPS (insn) = create_deps_list (false);
- INSN_RESOLVED_BACK_DEPS (insn) = create_deps_list (false);
-
- /* This one should be allocated on the obstack because it should live
- till the scheduling ends. */
- INSN_FORW_DEPS (insn) = create_deps_list (true);
+ /* Initialize the insn's deps_lists. */
+ sd_init_insn (insn);
}
if (NONJUMP_INSN_P (insn) || JUMP_P (insn))
@@ -1987,94 +2411,58 @@ sched_analyze (struct deps *deps, rtx he
}
gcc_unreachable ();
}
-\f
-/* The following function adds forward dependence (FROM, TO) with
- given DEP_TYPE. The forward dependence should be not exist before. */
-
-void
-add_forw_dep (dep_link_t link)
+/* Helper for sched_free_deps ().
+ Delete INSN's (RESOLVED_P) backward dependencies. */
+static void
+delete_dep_nodes_in_back_deps (rtx insn, bool resolved_p)
{
- dep_t dep = DEP_LINK_DEP (link);
- rtx to = DEP_CON (dep);
- rtx from = DEP_PRO (dep);
-
-#ifdef ENABLE_CHECKING
- /* If add_dependence is working properly there should never
- be notes, deleted insns or duplicates in the backward
- links. Thus we need not check for them here.
-
- However, if we have enabled checking we might as well go
- ahead and verify that add_dependence worked properly. */
- gcc_assert (INSN_P (from));
- gcc_assert (!INSN_DELETED_P (from));
- if (true_dependency_cache)
- {
- gcc_assert (!bitmap_bit_p (&forward_dependency_cache[INSN_LUID (from)],
- INSN_LUID (to)));
- bitmap_set_bit (&forward_dependency_cache[INSN_LUID (from)],
- INSN_LUID (to));
- }
+ sd_iterator_def sd_it;
+ dep_t dep;
+ sd_list_types_def types;
- gcc_assert (find_link_by_con_in_deps_list (INSN_FORW_DEPS (from), to)
- == NULL);
-#endif
+ if (resolved_p)
+ types = SD_LIST_RES_BACK;
+ else
+ types = SD_LIST_BACK;
- add_to_deps_list (DEP_NODE_FORW (DEP_LINK_NODE (link)),
- INSN_FORW_DEPS (from));
+ for (sd_it = sd_iterator_start (insn, types);
+ sd_iterator_cond (&sd_it, &dep);)
+ {
+ dep_link_t link = *sd_it.linkp;
+ dep_node_t node = DEP_LINK_NODE (link);
+ deps_list_t back_list;
+ deps_list_t forw_list;
- INSN_DEP_COUNT (to) += 1;
+ get_back_and_forw_lists (dep, resolved_p, &back_list, &forw_list);
+ remove_from_deps_list (link, back_list);
+ delete_dep_node (node);
+ }
}
-/* Examine insns in the range [ HEAD, TAIL ] and Use the backward
- dependences from INSN_BACK_DEPS list to build forward dependences in
- INSN_FORW_DEPS. */
-
+/* Delete (RESOLVED_P) dependencies between HEAD and TAIL together with
+ their deps_lists. */
void
-compute_forward_dependences (rtx head, rtx tail)
+sched_free_deps (rtx head, rtx tail, bool resolved_p)
{
rtx insn;
- rtx next_tail;
+ rtx next_tail = NEXT_INSN (tail);
- next_tail = NEXT_INSN (tail);
for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
- {
- dep_link_t link;
-
- if (! INSN_P (insn))
- continue;
-
- if (current_sched_info->flags & DO_SPECULATION)
- {
- /* We will add links, preserving order, from INSN_BACK_DEPS to
- NEW. */
- dep_link_t new = NULL;
-
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
-
- while (link != NULL)
- {
- dep_link_t next = DEP_LINK_NEXT (link);
-
- detach_dep_link (link);
- adjust_add_sorted_back_dep (insn, link, &new);
-
- link = next;
- }
-
- /* Attach NEW to be the list of backward dependencies. */
- if (new != NULL)
- {
- DEP_LINK_PREV_NEXTP (new)
- = &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
+ if (INSN_P (insn) && INSN_LUID (insn) > 0)
+ {
+ /* Clear the (RESOLVED_P) back deps together with their dep_nodes. */
+ delete_dep_nodes_in_back_deps (insn, resolved_p);
- DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)) = new;
- }
- }
+ /* Clear forward deps, leaving the dep_nodes to the
+ corresponding back_deps lists. */
+ if (resolved_p)
+ clear_deps_list (INSN_RESOLVED_FORW_DEPS (insn));
+ else
+ clear_deps_list (INSN_FORW_DEPS (insn));
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
- add_forw_dep (link);
- }
+ sd_finish_insn (insn);
+ }
}
\f
/* Initialize variables for region data dependence analysis.
@@ -2144,27 +2532,31 @@ free_deps (struct deps *deps)
void
init_dependency_caches (int luid)
{
+ /* Average number of insns per basic block.
+ '+ 1' is used to make it nonzero. */
+ int insns_in_block = luid / n_basic_blocks + 1;
+
/* ?!? We could save some memory by computing a per-region luid mapping
which could reduce both the number of vectors in the cache and the size
of each vector. Instead we just avoid the cache entirely unless the
average number of instructions in a basic block is very high. See
the comment before the declaration of true_dependency_cache for
what we consider "very high". */
- if (luid / n_basic_blocks > 100 * 5)
+ if (insns_in_block > 100 * 5)
{
cache_size = 0;
extend_dependency_caches (luid, true);
}
- /* Lifetime of this obstack is whole function scheduling (not single region
- scheduling) because some dependencies can be manually generated for
- outside regions. See dont_calc_deps in sched-{rgn, ebb}.c .
-
- Possible solution would be to have two obstacks:
- * the big one for regular dependencies with region scheduling lifetime,
- * and the small one for manually generated dependencies with function
- scheduling lifetime. */
- gcc_obstack_init (&deps_obstack);
+ dl_pool = create_alloc_pool ("deps_list", sizeof (struct _deps_list),
+ /* Allocate lists for one block at a time. */
+ insns_in_block);
+
+ dn_pool = create_alloc_pool ("dep_node", sizeof (struct _dep_node),
+ /* Allocate nodes for one block at a time.
+ We assume that average insn has
+ 5 producers. */
+ 5 * insns_in_block);
}
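The pool sizing above replaces the old function-lifetime obstack: dep nodes freed after one region are recycled for the next, so peak memory tracks the largest region rather than the whole function. A minimal standalone sketch of that free-list idea follows; the names and the 32-byte object size are illustrative, not GCC's alloc-pool API.

```c
#include <stdlib.h>

/* An object returned to the pool is reused as a free-list link. */
union pool_obj { union pool_obj *next_free; char data[32]; };

struct pool
{
  union pool_obj *free_list;  /* Chain of returned objects. */
  int n_mallocs;              /* Fresh allocations performed. */
};

static void *pool_alloc (struct pool *p)
{
  if (p->free_list != NULL)
    {
      union pool_obj *o = p->free_list;
      p->free_list = o->next_free;
      return o;               /* Reuse a previously freed object. */
    }
  p->n_mallocs++;
  return malloc (sizeof (union pool_obj));
}

static void pool_free (struct pool *p, void *obj)
{
  union pool_obj *o = obj;
  o->next_free = p->free_list;
  p->free_list = o;           /* Make it available for reuse. */
}

/* Simulate two "regions" of 3 dependencies each.  The second region
   is served entirely from the free list, so only 3 mallocs happen. */
static int pool_demo (void)
{
  struct pool p = { NULL, 0 };
  void *objs[3];
  for (int r = 0; r < 2; r++)
    {
      for (int i = 0; i < 3; i++)
        objs[i] = pool_alloc (&p);
      for (int i = 0; i < 3; i++)
        pool_free (&p, objs[i]);
    }
  return p.n_mallocs;
}
```

Peak allocation equals the largest region's demand; later regions reuse the same nodes, which is the memory saving point 2 of the cover letter describes.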
/* Create or extend (depending on CREATE_P) dependency caches to
@@ -2182,10 +2574,7 @@ extend_dependency_caches (int n, bool cr
output_dependency_cache, luid);
anti_dependency_cache = XRESIZEVEC (bitmap_head, anti_dependency_cache,
luid);
-#ifdef ENABLE_CHECKING
- forward_dependency_cache = XRESIZEVEC (bitmap_head,
- forward_dependency_cache, luid);
-#endif
+
if (current_sched_info->flags & DO_SPECULATION)
spec_dependency_cache = XRESIZEVEC (bitmap_head, spec_dependency_cache,
luid);
@@ -2195,9 +2584,7 @@ extend_dependency_caches (int n, bool cr
bitmap_initialize (&true_dependency_cache[i], 0);
bitmap_initialize (&output_dependency_cache[i], 0);
bitmap_initialize (&anti_dependency_cache[i], 0);
-#ifdef ENABLE_CHECKING
- bitmap_initialize (&forward_dependency_cache[i], 0);
-#endif
+
if (current_sched_info->flags & DO_SPECULATION)
bitmap_initialize (&spec_dependency_cache[i], 0);
}
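The bitmaps initialized above implement point 1.1 of the cover letter: each consumer luid owns one bitmap per dependence kind, indexed by producer luid, giving an O(1) "does this dep already exist?" test before any list walk. A standalone sketch of that membership test; the fixed size and names are illustrative, not GCC's bitmap API.

```c
#define MAX_LUID 64

/* One bit per (consumer, producer) luid pair. */
static unsigned char true_dep_cache[MAX_LUID][MAX_LUID / 8];

static void cache_set (int con_luid, int pro_luid)
{
  true_dep_cache[con_luid][pro_luid / 8] |= 1u << (pro_luid % 8);
}

static int cache_test (int con_luid, int pro_luid)
{
  return (true_dep_cache[con_luid][pro_luid / 8] >> (pro_luid % 8)) & 1;
}

/* Record a true dependence 5 -> 3 and probe the cache. */
static int cache_demo (void)
{
  cache_set (3, 5);
  return cache_test (3, 5) && !cache_test (5, 3) && !cache_test (3, 4);
}
```

Only when the bit is set does the (slower) search of the actual deps_list pay off, which is why keeping the caches live during scheduling itself, not just DDG construction, speeds up dependency lookup.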
@@ -2210,7 +2597,10 @@ extend_dependency_caches (int n, bool cr
void
free_dependency_caches (void)
{
- obstack_free (&deps_obstack, NULL);
+ gcc_assert (deps_pools_are_empty_p ());
+ free_alloc_pool_if_empty (&dn_pool);
+ free_alloc_pool_if_empty (&dl_pool);
+ gcc_assert (dn_pool == NULL && dl_pool == NULL);
if (true_dependency_cache)
{
@@ -2221,9 +2611,7 @@ free_dependency_caches (void)
bitmap_clear (&true_dependency_cache[i]);
bitmap_clear (&output_dependency_cache[i]);
bitmap_clear (&anti_dependency_cache[i]);
-#ifdef ENABLE_CHECKING
- bitmap_clear (&forward_dependency_cache[i]);
-#endif
+
if (current_sched_info->flags & DO_SPECULATION)
bitmap_clear (&spec_dependency_cache[i]);
}
@@ -2233,10 +2621,7 @@ free_dependency_caches (void)
output_dependency_cache = NULL;
free (anti_dependency_cache);
anti_dependency_cache = NULL;
-#ifdef ENABLE_CHECKING
- free (forward_dependency_cache);
- forward_dependency_cache = NULL;
-#endif
+
if (current_sched_info->flags & DO_SPECULATION)
{
free (spec_dependency_cache);
@@ -2267,72 +2652,6 @@ finish_deps_global (void)
FREE_REG_SET (reg_pending_uses);
}
-/* Insert LINK into the dependence chain pointed to by LINKP and
- maintain the sort order. */
-static void
-adjust_add_sorted_back_dep (rtx insn, dep_link_t link, dep_link_t *linkp)
-{
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
-
- /* If the insn cannot move speculatively, but the link is speculative,
- make it hard dependence. */
- if (HAS_INTERNAL_DEP (insn)
- && (DEP_LINK_STATUS (link) & SPECULATIVE))
- {
- DEP_LINK_STATUS (link) &= ~SPECULATIVE;
-
- if (true_dependency_cache)
- bitmap_clear_bit (&spec_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (DEP_LINK_PRO (link)));
- }
-
- /* Non-speculative links go at the head of deps_list, followed by
- speculative links. */
- if (DEP_LINK_STATUS (link) & SPECULATIVE)
- while (*linkp && !(DEP_LINK_STATUS (*linkp) & SPECULATIVE))
- linkp = &DEP_LINK_NEXT (*linkp);
-
- attach_dep_link (link, linkp);
-
- if (CHECK)
- gcc_assert (deps_list_consistent_p (INSN_BACK_DEPS (insn)));
-}
-
-/* Move the dependence pointed to by LINKP to the back dependencies
- of INSN, and also add this dependence to the forward ones. All dep_links,
- except one pointed to by LINKP, must be sorted. */
-static void
-adjust_back_add_forw_dep (rtx insn, dep_link_t *linkp)
-{
- dep_link_t link;
-
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
-
- link = *linkp;
- detach_dep_link (link);
-
- adjust_add_sorted_back_dep (insn, link,
- &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)));
- add_forw_dep (link);
-}
-
-/* Remove forward dependence described by L. */
-static void
-delete_forw_dep (dep_link_t l)
-{
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
-
-#ifdef ENABLE_CHECKING
- if (true_dependency_cache)
- bitmap_clear_bit (&forward_dependency_cache[INSN_LUID (DEP_LINK_PRO (l))],
- INSN_LUID (DEP_LINK_CON (l)));
-#endif
-
- detach_dep_link (l);
-
- INSN_DEP_COUNT (DEP_LINK_CON (l))--;
-}
-
/* Estimate the weakness of dependence between MEM1 and MEM2. */
static dw_t
estimate_dep_weak (rtx mem1, rtx mem2)
@@ -2367,90 +2686,15 @@ estimate_dep_weak (rtx mem1, rtx mem2)
void
add_dependence (rtx insn, rtx elem, enum reg_note dep_type)
{
- ds_t ds;
-
- if (dep_type == REG_DEP_TRUE)
- ds = DEP_TRUE;
- else if (dep_type == REG_DEP_OUTPUT)
- ds = DEP_OUTPUT;
- else if (dep_type == REG_DEP_ANTI)
- ds = DEP_ANTI;
- else
- gcc_unreachable ();
-
- maybe_add_or_update_back_dep_1 (insn, elem, dep_type, ds, 0, 0, 0);
-}
-
-/* Add or update backward dependence between INSN and ELEM
- with given type DEP_TYPE and dep_status DS.
- This function is a convenience wrapper. */
-enum DEPS_ADJUST_RESULT
-add_or_update_back_dep (rtx insn, rtx elem, enum reg_note dep_type, ds_t ds)
-{
- return add_or_update_back_dep_1 (insn, elem, dep_type, ds, 0, 0, 0);
-}
-
-/* Add or update both backward and forward dependencies between INSN and ELEM
- with given type DEP_TYPE and dep_status DS. */
-void
-add_or_update_back_forw_dep (rtx insn, rtx elem, enum reg_note dep_type,
- ds_t ds)
-{
- enum DEPS_ADJUST_RESULT res;
- dep_link_t *linkp;
-
- res = add_or_update_back_dep_1 (insn, elem, dep_type, ds, 0, 0, &linkp);
-
- if (res == DEP_CHANGED || res == DEP_CREATED)
- {
- if (res == DEP_CHANGED)
- delete_forw_dep (DEP_NODE_FORW (DEP_LINK_NODE (*linkp)));
- else if (res == DEP_CREATED)
- linkp = &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
-
- adjust_back_add_forw_dep (insn, linkp);
- }
-}
-
-/* Add both backward and forward dependencies between INSN and ELEM
- with given type DEP_TYPE and dep_status DS. */
-void
-add_back_forw_dep (rtx insn, rtx elem, enum reg_note dep_type, ds_t ds)
-{
- add_back_dep (insn, elem, dep_type, ds);
- adjust_back_add_forw_dep (insn, &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)));
-
- if (CHECK)
- gcc_assert (deps_list_consistent_p (INSN_BACK_DEPS (insn)));
-}
-
-/* Remove a dependency referred to by L. */
-void
-delete_back_forw_dep (dep_link_t l)
-{
- dep_node_t n = DEP_LINK_NODE (l);
-
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
-
- if (true_dependency_cache != NULL)
- {
- dep_t dep = DEP_NODE_DEP (n);
- int elem_luid = INSN_LUID (DEP_PRO (dep));
- int insn_luid = INSN_LUID (DEP_CON (dep));
-
- bitmap_clear_bit (&true_dependency_cache[insn_luid], elem_luid);
- bitmap_clear_bit (&anti_dependency_cache[insn_luid], elem_luid);
- bitmap_clear_bit (&output_dependency_cache[insn_luid], elem_luid);
- bitmap_clear_bit (&spec_dependency_cache[insn_luid], elem_luid);
- }
+ dep_def _dep, *dep = &_dep;
- delete_forw_dep (DEP_NODE_FORW (n));
- detach_dep_link (DEP_NODE_BACK (n));
+ init_dep (dep, elem, insn, dep_type);
+ maybe_add_or_update_dep_1 (dep, false, NULL_RTX, NULL_RTX);
}
/* Return weakness of speculative type TYPE in the dep_status DS. */
-dw_t
-get_dep_weak (ds_t ds, ds_t type)
+static dw_t
+get_dep_weak_1 (ds_t ds, ds_t type)
{
ds = ds & type;
switch (type)
@@ -2462,10 +2706,20 @@ get_dep_weak (ds_t ds, ds_t type)
default: gcc_unreachable ();
}
- gcc_assert (MIN_DEP_WEAK <= ds && ds <= MAX_DEP_WEAK);
return (dw_t) ds;
}
+/* Return weakness of speculative type TYPE in the dep_status DS. */
+dw_t
+get_dep_weak (ds_t ds, ds_t type)
+{
+ dw_t dw = get_dep_weak_1 (ds, type);
+
+ gcc_assert (MIN_DEP_WEAK <= dw && dw <= MAX_DEP_WEAK);
+
+ return dw;
+}
+
/* Return the dep_status, which has the same parameters as DS, except for
speculative type TYPE, that will have weakness DW. */
ds_t
@@ -2523,13 +2777,59 @@ ds_merge (ds_t ds1, ds_t ds2)
return ds;
}
+/* Dump information about the dependence status S. */
+static void
+dump_ds (FILE *f, ds_t s)
+{
+ fprintf (f, "{");
+
+ if (s & BEGIN_DATA)
+ fprintf (f, "BEGIN_DATA: %d; ", get_dep_weak_1 (s, BEGIN_DATA));
+ if (s & BE_IN_DATA)
+ fprintf (f, "BE_IN_DATA: %d; ", get_dep_weak_1 (s, BE_IN_DATA));
+ if (s & BEGIN_CONTROL)
+ fprintf (f, "BEGIN_CONTROL: %d; ", get_dep_weak_1 (s, BEGIN_CONTROL));
+ if (s & BE_IN_CONTROL)
+ fprintf (f, "BE_IN_CONTROL: %d; ", get_dep_weak_1 (s, BE_IN_CONTROL));
+
+ if (s & HARD_DEP)
+ fprintf (f, "HARD_DEP; ");
+
+ if (s & DEP_TRUE)
+ fprintf (f, "DEP_TRUE; ");
+ if (s & DEP_ANTI)
+ fprintf (f, "DEP_ANTI; ");
+ if (s & DEP_OUTPUT)
+ fprintf (f, "DEP_OUTPUT; ");
+
+ fprintf (f, "}");
+}
+
+void
+debug_ds (ds_t s)
+{
+ dump_ds (stderr, s);
+ fprintf (stderr, "\n");
+}
+
#ifdef INSN_SCHEDULING
#ifdef ENABLE_CHECKING
/* Verify that dependence type and status are consistent.
If RELAXED_P is true, then skip dep_weakness checks. */
static void
-check_dep_status (enum reg_note dt, ds_t ds, bool relaxed_p)
+check_dep (dep_t dep, bool relaxed_p)
{
+ enum reg_note dt = DEP_TYPE (dep);
+ ds_t ds = DEP_STATUS (dep);
+
+ gcc_assert (DEP_PRO (dep) != DEP_CON (dep));
+
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
+ {
+ gcc_assert (ds == -1);
+ return;
+ }
+
/* Check that dependence type contains the same bits as the status. */
if (dt == REG_DEP_TRUE)
gcc_assert (ds & DEP_TRUE);
--- sched-int.h (/mirror/endeed/trunk/gcc) (revision 1284)
+++ sched-int.h (/mirror/endeed/fdl-pool/gcc) (revision 1284)
@@ -55,35 +55,40 @@ struct _dep
/* Consumer. */
rtx con;
- /* Dependency kind (aka dependency major type). This field is superseded
- by STATUS below. Though, it is still in place because all the backends
- use it. */
- enum reg_note kind;
+ /* Dependency major type. This field is superseded by STATUS below.
+ Though, it is still in place because all the backends use it. */
+ enum reg_note type;
/* Dependency status. This field holds all dependency types and additional
information for speculative dependencies. */
ds_t status;
};
-typedef struct _dep *dep_t;
+
+typedef struct _dep dep_def;
+typedef dep_def *dep_t;
#define DEP_PRO(D) ((D)->pro)
#define DEP_CON(D) ((D)->con)
-#define DEP_KIND(D) ((D)->kind)
+#define DEP_TYPE(D) ((D)->type)
#define DEP_STATUS(D) ((D)->status)
/* Functions to work with dep. */
+extern void init_dep_1 (dep_t, rtx, rtx, enum reg_note, ds_t);
extern void init_dep (dep_t, rtx, rtx, enum reg_note);
+extern void debug_dep (dep_t);
+
/* Definition of this struct resides below. */
struct _dep_node;
+typedef struct _dep_node *dep_node_t;
/* A link in the dependency list. This is essentially an equivalent of a
single {INSN, DEPS}_LIST rtx. */
struct _dep_link
{
/* Dep node with all the data. */
- struct _dep_node *node;
+ dep_node_t node;
/* Next link in the list. For the last one it is NULL. */
struct _dep_link *next;
@@ -108,39 +113,22 @@ typedef struct _dep_link *dep_link_t;
#define DEP_LINK_DEP(N) (DEP_NODE_DEP (DEP_LINK_NODE (N)))
#define DEP_LINK_PRO(N) (DEP_PRO (DEP_LINK_DEP (N)))
#define DEP_LINK_CON(N) (DEP_CON (DEP_LINK_DEP (N)))
-#define DEP_LINK_KIND(N) (DEP_KIND (DEP_LINK_DEP (N)))
+#define DEP_LINK_TYPE(N) (DEP_TYPE (DEP_LINK_DEP (N)))
#define DEP_LINK_STATUS(N) (DEP_STATUS (DEP_LINK_DEP (N)))
-void debug_dep_links (dep_link_t);
-
/* A list of dep_links. */
struct _deps_list
{
+ /* First element. */
dep_link_t first;
+
+ /* Total number of elements in the list. */
+ int n_links;
};
typedef struct _deps_list *deps_list_t;
#define DEPS_LIST_FIRST(L) ((L)->first)
-
-/* Macro to walk through deps_list. */
-#define FOR_EACH_DEP_LINK(LINK, LIST) \
- for ((LINK) = DEPS_LIST_FIRST (LIST); \
- (LINK) != NULL; \
- (LINK) = DEP_LINK_NEXT (LINK))
-
-/* Functions to work with deps_list. */
-
-deps_list_t create_deps_list (bool);
-void free_deps_list (deps_list_t);
-void delete_deps_list (deps_list_t);
-bool deps_list_empty_p (deps_list_t);
-void debug_deps_list (deps_list_t);
-void add_back_dep_to_deps_list (deps_list_t, dep_t);
-dep_link_t find_link_by_pro_in_deps_list (deps_list_t, rtx);
-dep_link_t find_link_by_con_in_deps_list (deps_list_t, rtx);
-void copy_deps_list_change_con (deps_list_t, deps_list_t, rtx);
-
-void move_dep_link (dep_link_t, deps_list_t);
+#define DEPS_LIST_N_LINKS(L) ((L)->n_links)
/* Suppose we have a dependence Y between insn pro1 and con1, where pro1 has
additional dependents con0 and con2, and con1 is dependent on additional
@@ -248,7 +236,6 @@ struct _dep_node
/* Forward link. */
struct _dep_link forw;
};
-typedef struct _dep_node *dep_node_t;
#define DEP_NODE_BACK(N) (&(N)->back)
#define DEP_NODE_DEP(N) (&(N)->dep)
@@ -470,15 +457,17 @@ extern struct sched_info *current_sched_
struct haifa_insn_data
{
- /* NB: We can't place 'struct _deps_list' here instead of deps_list_t into
- h_i_d because when h_i_d extends, addresses of the deps_list->first
- change without updating deps_list->first->next->prev_nextp. Thus
- BACK_DEPS and RESOLVED_BACK_DEPS are allocated on the heap and FORW_DEPS
- list is allocated on the obstack. */
+ /* We can't place 'struct _deps_list' into h_i_d instead of deps_list_t
+ because when h_i_d extends, addresses of the deps_list->first
+ change without updating deps_list->first->next->prev_nextp. */
- /* A list of backward dependencies. The insn is a consumer of all the
+ /* A list of hard backward dependencies. The insn is a consumer of all the
deps mentioned here. */
- deps_list_t back_deps;
+ deps_list_t hard_back_deps;
+
+ /* A list of speculative (weak) dependencies. The insn is a consumer of all
+ the deps mentioned here. */
+ deps_list_t spec_back_deps;
/* A list of insns which depend on the instruction. Unlike 'back_deps',
it represents forward dependencies. */
@@ -487,6 +476,11 @@ struct haifa_insn_data
/* A list of scheduled producers of the instruction. Links are being moved
from 'back_deps' to 'resolved_back_deps' while scheduling. */
deps_list_t resolved_back_deps;
+
+ /* A list of scheduled consumers of the instruction. Links are being moved
+ from 'forw_deps' to 'resolved_forw_deps' while scheduling to speed up
+ the search in 'forw_deps'. */
+ deps_list_t resolved_forw_deps;
/* Logical uid gives the original ordering of the insns. */
int luid;
@@ -494,11 +488,6 @@ struct haifa_insn_data
/* A priority for each insn. */
int priority;
- /* The number of incoming edges in the forward dependency graph.
- As scheduling proceeds, counts are decreased. An insn moves to
- the ready queue when its counter reaches zero. */
- int dep_count;
-
/* Number of instructions referring to this insn. */
int ref_count;
@@ -554,13 +543,16 @@ extern struct haifa_insn_data *h_i_d;
/* Accessor macros for h_i_d. There are more in haifa-sched.c and
sched-rgn.c. */
-#define INSN_BACK_DEPS(INSN) (h_i_d[INSN_UID (INSN)].back_deps)
+
+#define INSN_HARD_BACK_DEPS(INSN) (h_i_d[INSN_UID (INSN)].hard_back_deps)
+#define INSN_SPEC_BACK_DEPS(INSN) (h_i_d[INSN_UID (INSN)].spec_back_deps)
#define INSN_FORW_DEPS(INSN) (h_i_d[INSN_UID (INSN)].forw_deps)
#define INSN_RESOLVED_BACK_DEPS(INSN) \
(h_i_d[INSN_UID (INSN)].resolved_back_deps)
+#define INSN_RESOLVED_FORW_DEPS(INSN) \
+ (h_i_d[INSN_UID (INSN)].resolved_forw_deps)
#define INSN_LUID(INSN) (h_i_d[INSN_UID (INSN)].luid)
#define CANT_MOVE(insn) (h_i_d[INSN_UID (insn)].cant_move)
-#define INSN_DEP_COUNT(INSN) (h_i_d[INSN_UID (INSN)].dep_count)
#define INSN_PRIORITY(INSN) (h_i_d[INSN_UID (INSN)].priority)
#define INSN_PRIORITY_STATUS(INSN) (h_i_d[INSN_UID (INSN)].priority_status)
#define INSN_PRIORITY_KNOWN(INSN) (INSN_PRIORITY_STATUS (INSN) > 0)
@@ -695,13 +687,16 @@ enum SPEC_TYPES_OFFSETS {
#define HARD_DEP (DEP_ANTI << 1)
/* This represents the results of calling sched-deps.c functions,
- which modify dependencies. Possible choices are: a dependence
- is already present and nothing has been changed; a dependence type
- has been changed; brand new dependence has been created. */
+ which modify dependencies. */
enum DEPS_ADJUST_RESULT {
- DEP_PRESENT = 1,
- DEP_CHANGED = 2,
- DEP_CREATED = 3
+ /* No dependence needed (e.g. producer == consumer). */
+ DEP_NODEP,
+ /* Dependence is already present and wasn't modified. */
+ DEP_PRESENT,
+ /* Existing dependence was modified to include additional information. */
+ DEP_CHANGED,
+ /* New dependence has been created. */
+ DEP_CREATED
};
/* Represents the bits that can be set in the flags field of the
@@ -732,6 +727,9 @@ enum SPEC_SCHED_FLAGS {
extern FILE *sched_dump;
extern int sched_verbose;
+extern spec_info_t spec_info;
+extern bool haifa_recovery_block_was_added_during_scheduling_p;
+
/* Exception Free Loads:
We define five classes of speculative loads: IFREE, IRISKY,
@@ -817,23 +815,19 @@ extern void print_insn (char *, rtx, int
extern bool sched_insns_conditions_mutex_p (rtx, rtx);
extern void add_dependence (rtx, rtx, enum reg_note);
extern void sched_analyze (struct deps *, rtx, rtx);
+extern bool deps_pools_are_empty_p (void);
+extern void sched_free_deps (rtx, rtx, bool);
extern void init_deps (struct deps *);
extern void free_deps (struct deps *);
extern void init_deps_global (void);
extern void finish_deps_global (void);
-extern void add_forw_dep (dep_link_t);
-extern void compute_forward_dependences (rtx, rtx);
extern void init_dependency_caches (int);
extern void free_dependency_caches (void);
extern void extend_dependency_caches (int, bool);
-extern enum DEPS_ADJUST_RESULT add_or_update_back_dep (rtx, rtx,
- enum reg_note, ds_t);
-extern void add_or_update_back_forw_dep (rtx, rtx, enum reg_note, ds_t);
-extern void add_back_forw_dep (rtx, rtx, enum reg_note, ds_t);
-extern void delete_back_forw_dep (dep_link_t);
extern dw_t get_dep_weak (ds_t, ds_t);
extern ds_t set_dep_weak (ds_t, ds_t, dw_t);
extern ds_t ds_merge (ds_t, ds_t);
+extern void debug_ds (ds_t);
/* Functions in haifa-sched.c. */
extern int haifa_classify_insn (rtx);
@@ -852,11 +846,151 @@ extern void sched_finish (void);
extern int try_ready (rtx);
extern void * xrecalloc (void *, size_t, size_t, size_t);
+extern bool sched_insn_is_legitimate_for_speculation_p (rtx, ds_t);
extern void unlink_bb_notes (basic_block, basic_block);
extern void add_block (basic_block, basic_block);
extern rtx bb_note (basic_block);
/* Functions in sched-rgn.c. */
+
extern void debug_dependencies (rtx, rtx);
+/* sched-deps.c interface to walk, add, search, update, resolve, delete
+ and debug instruction dependencies. */
+
+/* Constants defining dependences lists. */
+
+/* No list. */
+#define SD_LIST_NONE (0)
+
+/* hard_back_deps. */
+#define SD_LIST_HARD_BACK (1)
+
+/* spec_back_deps. */
+#define SD_LIST_SPEC_BACK (2)
+
+/* forw_deps. */
+#define SD_LIST_FORW (4)
+
+/* resolved_back_deps. */
+#define SD_LIST_RES_BACK (8)
+
+/* resolved_forw_deps. */
+#define SD_LIST_RES_FORW (16)
+
+#define SD_LIST_BACK (SD_LIST_HARD_BACK | SD_LIST_SPEC_BACK)
+
+/* A type to hold the above flags. */
+typedef int sd_list_types_def;
+
+extern void sd_next_list (rtx, sd_list_types_def *, deps_list_t *, bool *);
+
+/* Iterator to walk through, resolve and delete dependencies. */
+struct _sd_iterator
+{
+ /* What lists to walk. Can be any combination of SD_LIST_* flags. */
+ sd_list_types_def types;
+
+ /* The instruction whose dependency lists are walked. */
+ rtx insn;
+
+ /* Pointer to the next field of the previous element. This is not
+ simply a pointer to the next element to allow easy deletion from the
+ list. When a dep is removed from the list, the iterator advances
+ automatically because the value in *linkp starts referring to the
+ next element. */
+ dep_link_t *linkp;
+
+ /* True if the current list is a resolved one. */
+ bool resolved_p;
+};
+
+typedef struct _sd_iterator sd_iterator_def;
+
+/* ??? We can move some definitions that are used in below inline functions
+ out of sched-int.h to sched-deps.c provided that the below functions will
+ become global externals.
+ These definitions include:
+ * struct _deps_list: opaque pointer is needed at global scope.
+ * struct _dep_link: opaque pointer is needed at scope of sd_iterator_def.
+ * struct _dep_node: opaque pointer is needed at scope of
+ struct _deps_link. */
+
+/* Return initialized iterator. */
+static inline sd_iterator_def
+sd_iterator_start (rtx insn, sd_list_types_def types)
+{
+ /* A dummy dep_link; its address gives the iterator a valid LINKP
+ that dereferences to NULL. */
+ static dep_link_t null_link = NULL;
+
+ sd_iterator_def i;
+
+ i.types = types;
+ i.insn = insn;
+ i.linkp = &null_link;
+
+ /* Avoid an 'uninitialized variable' warning. */
+ i.resolved_p = false;
+
+ return i;
+}
+
+/* If the iterator has more deps, store the current one in *DEP_PTR
+ and return true. */
+static inline bool
+sd_iterator_cond (sd_iterator_def *it_ptr, dep_t *dep_ptr)
+{
+ dep_link_t link = *it_ptr->linkp;
+
+ if (link != NULL)
+ {
+ *dep_ptr = DEP_LINK_DEP (link);
+ return true;
+ }
+ else
+ {
+ sd_list_types_def types = it_ptr->types;
+
+ if (types != SD_LIST_NONE)
+ /* Switch to next list. */
+ {
+ deps_list_t list;
+
+ sd_next_list (it_ptr->insn,
+ &it_ptr->types, &list, &it_ptr->resolved_p);
+
+ it_ptr->linkp = &DEPS_LIST_FIRST (list);
+
+ return sd_iterator_cond (it_ptr, dep_ptr);
+ }
+
+ *dep_ptr = NULL;
+ return false;
+ }
+}
+
+/* Advance iterator. */
+static inline void
+sd_iterator_next (sd_iterator_def *it_ptr)
+{
+ it_ptr->linkp = &DEP_LINK_NEXT (*it_ptr->linkp);
+}
+
+/* A loop wrapper over the dependencies of INSN. */
+#define FOR_EACH_DEP(INSN, LIST_TYPES, ITER, DEP) \
+ for ((ITER) = sd_iterator_start ((INSN), (LIST_TYPES)); \
+ sd_iterator_cond (&(ITER), &(DEP)); \
+ sd_iterator_next (&(ITER)))
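The 'linkp' field commented above (a pointer to the previous element's next field) is what lets callers such as delete_all_dependences () call sd_delete_dep () inside a walk without ever calling sd_iterator_next (): unlinking the current element advances the iteration by itself. A self-contained sketch of the same idiom, outside GCC's data structures (names here are illustrative):

```c
#include <stdlib.h>

struct link { int val; struct link *next; };

/* Push a new link holding VAL onto *HEADP. */
static void push (struct link **headp, int val)
{
  struct link *l = malloc (sizeof *l);
  l->val = val;
  l->next = *headp;
  *headp = l;
}

/* Delete every even element.  LINKP always points at the previous
   element's 'next' field (or at the list head), so unlinking *LINKP
   advances the walk automatically -- the sd_iterator's 'linkp' trick. */
static void delete_even (struct link **headp)
{
  struct link **linkp = headp;
  while (*linkp != NULL)
    {
      struct link *l = *linkp;
      if (l->val % 2 == 0)
        {
          *linkp = l->next;   /* Unlink; iteration advances by itself. */
          free (l);
        }
      else
        linkp = &l->next;     /* Keep; step to this element's next field. */
    }
}

/* Build the list 0 1 2 3 4 5, drop the evens, and check 1 3 5 remain. */
static int linkp_demo (void)
{
  struct link *head = NULL;
  for (int i = 5; i >= 0; i--)
    push (&head, i);
  delete_even (&head);
  return head->val == 1
         && head->next->val == 3
         && head->next->next->val == 5
         && head->next->next->next == NULL;
}
```

The same shape appears in the patch wherever a loop calls sd_delete_dep (sd_it) with an empty increment clause instead of using FOR_EACH_DEP.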
+
+extern int sd_lists_size (rtx, sd_list_types_def);
+extern bool sd_lists_empty_p (rtx, sd_list_types_def);
+extern void sd_init_insn (rtx);
+extern void sd_finish_insn (rtx);
+extern dep_t sd_find_dep_between (rtx, rtx, bool);
+extern void sd_add_dep (dep_t, bool);
+extern enum DEPS_ADJUST_RESULT sd_add_or_update_dep (dep_t, bool);
+extern void sd_resolve_dep (sd_iterator_def);
+extern void sd_copy_back_deps (rtx, rtx, bool);
+extern void sd_delete_dep (sd_iterator_def);
+extern void sd_debug_lists (rtx, sd_list_types_def);
+
#endif /* GCC_SCHED_INT_H */
--- sched-rgn.c (/mirror/endeed/trunk/gcc) (revision 1284)
+++ sched-rgn.c (/mirror/endeed/fdl-pool/gcc) (revision 1284)
@@ -278,7 +278,7 @@ static int is_exception_free (rtx, int,
static bool sets_likely_spilled (rtx);
static void sets_likely_spilled_1 (rtx, rtx, void *);
static void add_branch_dependences (rtx, rtx);
-static void compute_block_backward_dependences (int);
+static void compute_block_dependences (int);
static void init_regions (void);
static void schedule_region (int);
@@ -1698,11 +1698,12 @@ update_live (rtx insn, int src)
static void
set_spec_fed (rtx load_insn)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (load_insn))
- if (DEP_LINK_KIND (link) == REG_DEP_TRUE)
- FED_BY_SPEC_LOAD (DEP_LINK_CON (link)) = 1;
+ FOR_EACH_DEP (load_insn, SD_LIST_FORW, sd_it, dep)
+ if (DEP_TYPE (dep) == REG_DEP_TRUE)
+ FED_BY_SPEC_LOAD (DEP_CON (dep)) = 1;
}
/* On the path from the insn to load_insn_bb, find a conditional
@@ -1711,18 +1712,19 @@ branch depending on insn, that guards th
static int
find_conditional_protection (rtx insn, int load_insn_bb)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
/* Iterate through DEF-USE forward dependences. */
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
{
- rtx next = DEP_LINK_CON (link);
+ rtx next = DEP_CON (dep);
if ((CONTAINING_RGN (BLOCK_NUM (next)) ==
CONTAINING_RGN (BB_TO_BLOCK (load_insn_bb)))
&& IS_REACHABLE (INSN_BB (next), load_insn_bb)
&& load_insn_bb != INSN_BB (next)
- && DEP_LINK_KIND (link) == REG_DEP_TRUE
+ && DEP_TYPE (dep) == REG_DEP_TRUE
&& (JUMP_P (next)
|| find_conditional_protection (next, load_insn_bb)))
return 1;
@@ -1747,14 +1749,15 @@ find_conditional_protection (rtx insn, i
static int
is_conditionally_protected (rtx load_insn, int bb_src, int bb_trg)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (load_insn))
+ FOR_EACH_DEP (load_insn, SD_LIST_BACK, sd_it, dep)
{
- rtx insn1 = DEP_LINK_PRO (link);
+ rtx insn1 = DEP_PRO (dep);
/* Must be a DEF-USE dependence upon non-branch. */
- if (DEP_LINK_KIND (link) != REG_DEP_TRUE
+ if (DEP_TYPE (dep) != REG_DEP_TRUE
|| JUMP_P (insn1))
continue;
@@ -1797,27 +1800,29 @@ is_conditionally_protected (rtx load_ins
static int
is_pfree (rtx load_insn, int bb_src, int bb_trg)
{
- dep_link_t back_link;
+ sd_iterator_def back_sd_it;
+ dep_t back_dep;
candidate *candp = candidate_table + bb_src;
if (candp->split_bbs.nr_members != 1)
/* Must have exactly one escape block. */
return 0;
- FOR_EACH_DEP_LINK (back_link, INSN_BACK_DEPS (load_insn))
+ FOR_EACH_DEP (load_insn, SD_LIST_BACK, back_sd_it, back_dep)
{
- rtx insn1 = DEP_LINK_PRO (back_link);
+ rtx insn1 = DEP_PRO (back_dep);
- if (DEP_LINK_KIND (back_link) == REG_DEP_TRUE)
+ if (DEP_TYPE (back_dep) == REG_DEP_TRUE)
+ /* Found a DEF-USE dependence (insn1, load_insn). */
{
- /* Found a DEF-USE dependence (insn1, load_insn). */
- dep_link_t fore_link;
+ sd_iterator_def fore_sd_it;
+ dep_t fore_dep;
- FOR_EACH_DEP_LINK (fore_link, INSN_FORW_DEPS (insn1))
+ FOR_EACH_DEP (insn1, SD_LIST_FORW, fore_sd_it, fore_dep)
{
- rtx insn2 = DEP_LINK_CON (fore_link);
+ rtx insn2 = DEP_CON (fore_dep);
- if (DEP_LINK_KIND (fore_link) == REG_DEP_TRUE)
+ if (DEP_TYPE (fore_dep) == REG_DEP_TRUE)
{
/* Found a DEF-USE dependence (insn1, insn2). */
if (haifa_classify_insn (insn2) != PFREE_CANDIDATE)
@@ -1850,7 +1855,7 @@ is_prisky (rtx load_insn, int bb_src, in
if (FED_BY_SPEC_LOAD (load_insn))
return 1;
- if (deps_list_empty_p (INSN_BACK_DEPS (load_insn)))
+ if (sd_lists_empty_p (load_insn, SD_LIST_BACK))
/* Dependence may 'hide' out of the region. */
return 1;
@@ -2082,7 +2087,8 @@ new_ready (rtx next, ds_t ts)
if (not_ex_free
/* We are here because is_exception_free () == false.
But we possibly can handle that with control speculation. */
- && current_sched_info->flags & DO_SPECULATION)
+ && (current_sched_info->flags & DO_SPECULATION)
+ && (spec_info->mask & BEGIN_CONTROL))
/* Here we got new control-speculative instruction. */
ts = set_dep_weak (ts, BEGIN_CONTROL, MAX_DEP_WEAK);
else
@@ -2264,8 +2270,7 @@ add_branch_dependences (rtx head, rtx ta
if (!NOTE_P (insn))
{
if (last != 0
- && (find_link_by_pro_in_deps_list (INSN_BACK_DEPS (last), insn)
- == NULL))
+ && sd_find_dep_between (insn, last, false) == NULL)
{
if (! sched_insns_conditions_mutex_p (last, insn))
add_dependence (last, insn, REG_DEP_ANTI);
@@ -2473,7 +2478,7 @@ propagate_deps (int bb, struct deps *pre
pred_deps->pending_write_mems = 0;
}
-/* Compute backward dependences inside bb. In a multiple blocks region:
+/* Compute dependences inside bb. In a multiple blocks region:
(1) a bb is analyzed after its predecessors, and (2) the lists in
effect at the end of bb (after analyzing for bb) are inherited by
bb's successors.
@@ -2491,7 +2496,7 @@ propagate_deps (int bb, struct deps *pre
similar, and the result is interblock dependences in the region. */
static void
-compute_block_backward_dependences (int bb)
+compute_block_dependences (int bb)
{
rtx head, tail;
struct deps tmp_deps;
@@ -2501,6 +2506,7 @@ compute_block_backward_dependences (int
/* Do the analysis for this block. */
gcc_assert (EBB_FIRST_BB (bb) == EBB_LAST_BB (bb));
get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
+
sched_analyze (&tmp_deps, head, tail);
add_branch_dependences (head, tail);
@@ -2509,6 +2515,20 @@ compute_block_backward_dependences (int
/* Free up the INSN_LISTs. */
free_deps (&tmp_deps);
+
+ if (targetm.sched.dependencies_evaluation_hook)
+ targetm.sched.dependencies_evaluation_hook (head, tail);
+}
+
+static void
+free_block_dependencies (int bb)
+{
+ rtx head;
+ rtx tail;
+
+ get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
+
+ sched_free_deps (head, tail, true);
}
/* Remove all INSN_LISTs and EXPR_LISTs from the pending lists and add
@@ -2528,7 +2548,8 @@ free_pending_lists (void)
}
}
\f
-
+/* Print dependences for debugging starting from FROM_BB.
+ Callable from debugger. */
/* Print dependences for debugging starting from FROM_BB.
Callable from debugger. */
void
@@ -2568,8 +2589,6 @@ void debug_dependencies (rtx head, rtx t
for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
{
- dep_link_t link;
-
if (! INSN_P (insn))
{
int n;
@@ -2590,7 +2609,7 @@ void debug_dependencies (rtx head, rtx t
INSN_UID (insn),
INSN_CODE (insn),
BLOCK_NUM (insn),
- INSN_DEP_COUNT (insn),
+ sd_lists_size (insn, SD_LIST_BACK),
INSN_PRIORITY (insn),
insn_cost (insn));
@@ -2600,8 +2619,13 @@ void debug_dependencies (rtx head, rtx t
print_reservation (sched_dump, insn);
fprintf (sched_dump, "\t: ");
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
- fprintf (sched_dump, "%d ", INSN_UID (DEP_LINK_CON (link)));
+ {
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
+ fprintf (sched_dump, "%d ", INSN_UID (DEP_CON (dep)));
+ }
fprintf (sched_dump, "\n");
}
@@ -2659,23 +2683,9 @@ schedule_region (int rgn)
for (bb = 0; bb < current_nr_blocks; bb++)
init_deps (bb_deps + bb);
- /* Compute backward dependencies. */
+ /* Compute dependencies. */
for (bb = 0; bb < current_nr_blocks; bb++)
- compute_block_backward_dependences (bb);
-
- /* Compute forward dependencies. */
- for (bb = current_nr_blocks - 1; bb >= 0; bb--)
- {
- rtx head, tail;
-
- gcc_assert (EBB_FIRST_BB (bb) == EBB_LAST_BB (bb));
- get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
-
- compute_forward_dependences (head, tail);
-
- if (targetm.sched.dependencies_evaluation_hook)
- targetm.sched.dependencies_evaluation_hook (head, tail);
- }
+ compute_block_dependences (bb);
free_pending_lists ();
@@ -2827,7 +2837,6 @@ schedule_region (int rgn)
/* Sanity check: verify that all region insns were scheduled. */
gcc_assert (sched_rgn_n_insns == rgn_n_insns);
-
/* Done with this region. */
if (current_nr_blocks > 1)
@@ -2838,6 +2847,13 @@ schedule_region (int rgn)
sbitmap_vector_free (ancestor_edges);
free (rgn_edges);
}
+
+ /* Free dependencies. */
+ for (bb = 0; bb < current_nr_blocks; ++bb)
+ free_block_dependencies (bb);
+
+ gcc_assert (haifa_recovery_block_was_added_during_scheduling_p
+ || deps_pools_are_empty_p ());
}
/* Initialize data structures for region scheduling. */
--- config/ia64/ia64.c (/mirror/endeed/trunk/gcc) (revision 1284)
+++ config/ia64/ia64.c (/mirror/endeed/fdl-pool/gcc) (revision 1284)
@@ -6324,28 +6324,37 @@ ia64_dependencies_evaluation_hook (rtx h
if (INSN_P (insn)
&& ia64_safe_itanium_class (insn) == ITANIUM_CLASS_IALU)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
+ bool has_mem_op_consumer_p = false;
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
{
enum attr_itanium_class c;
- if (DEP_LINK_KIND (link) != REG_DEP_TRUE)
+ if (DEP_TYPE (dep) != REG_DEP_TRUE)
continue;
- next = DEP_LINK_CON (link);
+ next = DEP_CON (dep);
c = ia64_safe_itanium_class (next);
if ((c == ITANIUM_CLASS_ST
|| c == ITANIUM_CLASS_STF)
&& ia64_st_address_bypass_p (insn, next))
- break;
+ {
+ has_mem_op_consumer_p = true;
+ break;
+ }
else if ((c == ITANIUM_CLASS_LD
|| c == ITANIUM_CLASS_FLD
|| c == ITANIUM_CLASS_FLDP)
&& ia64_ld_address_bypass_p (insn, next))
- break;
+ {
+ has_mem_op_consumer_p = true;
+ break;
+ }
}
- insn->call = link != 0;
+
+ insn->call = has_mem_op_consumer_p;
}
}
@@ -6622,14 +6631,15 @@ ia64_dfa_new_cycle (FILE *dump, int verb
if (c != ITANIUM_CLASS_MMMUL && c != ITANIUM_CLASS_MMSHF)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
int d = -1;
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
- if (DEP_LINK_KIND (link) == REG_DEP_TRUE)
+ FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
+ if (DEP_TYPE (dep) == REG_DEP_TRUE)
{
enum attr_itanium_class dep_class;
- rtx dep_insn = DEP_LINK_PRO (link);
+ rtx dep_insn = DEP_PRO (dep);
dep_class = ia64_safe_itanium_class (dep_insn);
if ((dep_class == ITANIUM_CLASS_MMMUL
@@ -7160,13 +7170,14 @@ ia64_gen_check (rtx insn, rtx label, boo
As long as patterns are unique for each instruction, this can be
accomplished by matching ORIG_PAT fields. */
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
int check_no = 0;
rtx orig_pat = ORIG_PAT (insn);
- FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_RES_BACK, sd_it, dep)
{
- rtx x = DEP_LINK_PRO (link);
+ rtx x = DEP_PRO (dep);
if (ORIG_PAT (x) == orig_pat)
check_no = spec_check_no[INSN_UID (x)];
--- config/rs6000/rs6000.c (/mirror/endeed/trunk/gcc) (revision 1284)
+++ config/rs6000/rs6000.c (/mirror/endeed/fdl-pool/gcc) (revision 1284)
@@ -17806,7 +17806,7 @@ rs6000_is_costly_dependence (dep_t dep,
if (rs6000_sched_costly_dep == true_store_to_load_dep_costly
&& is_load_insn (next)
&& is_store_insn (insn)
- && DEP_KIND (dep) == REG_DEP_TRUE)
+ && DEP_TYPE (dep) == REG_DEP_TRUE)
/* Prevent load after store in the same group if it is a true
dependence. */
return true;
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PING][PATCH]: Improvements to sched-deps.c interface
2007-06-14 21:43 ` Vladimir Makarov
2007-06-14 22:49 ` David Edelsohn
@ 2007-07-01 18:20 ` Maxim Kuvyrkov
2007-07-02 22:00 ` Jim Wilson
2007-08-14 6:18 ` Maxim Kuvyrkov
1 sibling, 2 replies; 15+ messages in thread
From: Maxim Kuvyrkov @ 2007-07-01 18:20 UTC (permalink / raw)
To: Vladimir Makarov; +Cc: James E Wilson, Ayal Zaks, gcc-patches, David Edelsohn
Vladimir Makarov wrote:
...
> In general, the patch is ok, although I have minor comments (see below).
Vladimir, sorry for the delay with my reply - I've had little time
available for my own projects these last two weeks. Your comments have
helped make this patch better, and below are the hunks I've changed -
please take a look at them and see if they are ok.
> It is very hard to read the sched-deps.c patch; please next time check the
> patches, and if they look like a mess, use context format
> instead of unified.
Sorry for that, next time I'll be more considerate.
> I don't think that you need approval for the rs6000.c patch. It is so
> obvious (just renaming). In any case, I CCed this email to the rs6000
> maintainer David Edelsohn in case he wants to comment on the rs6000.c part
> (David, rs6000.c is at the end of the email).
>
> Please wait for approval from Jim Wilson for ia64.c (this part is at
> the end of the email) and from Ayal Zaks for ddg.c (although ddg.c uses
> functions from the scheduler, I believe it is part of the modulo scheduler.
> Ayal, the ddg.c part is right after the ChangeLog.). They look ok to me,
> but I am not a maintainer of these files.
The only part still unapproved is ia64 changes. Jim?
> The patch is ok for me with the following comments fixed.
...
> Please describe the constants. Their meaning is pretty obvious to
> you and me, but maybe not to others.
>
>> +#define DUMP_DEP_PRO (2)
>> +#define DUMP_DEP_CON (4)
>> +#define DUMP_DEP_STATUS (8)
>> +
>> +#define DUMP_DEP_ALL (DUMP_DEP_PRO | DUMP_DEP_CON | DUMP_DEP_STATUS)
>> +
>> +/* Dump DEP to DUMP.
>> + FLAGS is a bit mask specifying what information about DEP needs
>> + to be printed. If FLAGS has the very first bit set then dump
>> + all fields of DEP. */
>
> I don't understand why you need a special case with 1. You can and
> should use DUMP_DEP_ALL, and you should start the enumeration of the
> above constants with 1 (not 2).
I should have written better comments on this, and now I have. I've
refactored that part but left the special case with '1'. Please see the
explanation after the hunk. If you still don't like the '1' case, I'll
remove it.
+/* Define flags for dump_dep (). */
+
+/* Dump producer of the dependence. */
+#define DUMP_DEP_PRO (2)
+
+/* Dump consumer of the dependence. */
+#define DUMP_DEP_CON (4)
+
+/* Dump type of the dependence. */
+#define DUMP_DEP_TYPE (8)
+
+/* Dump status of the dependence. */
+#define DUMP_DEP_STATUS (16)
+
+/* Dump all information about the dependence. */
+#define DUMP_DEP_ALL (DUMP_DEP_PRO | DUMP_DEP_CON | DUMP_DEP_TYPE \
+ |DUMP_DEP_STATUS)
+
+/* Dump DEP to DUMP.
+ FLAGS is a bit mask specifying what information about DEP needs
+ to be printed.
+ If FLAGS has the very first bit set, then dump all information about DEP
+ and propagate this bit into the callee dump functions. */
+static void
+dump_dep (FILE *dump, dep_t dep, int flags)
{
+ if (flags & 1)
+ flags |= DUMP_DEP_ALL;
+
+ fprintf (dump, "<");
+
+ if (flags & DUMP_DEP_PRO)
+ fprintf (dump, "%d; ", INSN_UID (DEP_PRO (dep)));
+
+ if (flags & DUMP_DEP_CON)
+ fprintf (dump, "%d; ", INSN_UID (DEP_CON (dep)));
+
+ if (flags & DUMP_DEP_TYPE)
+ {
+ char t;
+ enum reg_note type = DEP_TYPE (dep);
+
+ switch (type)
+ {
+ case REG_DEP_TRUE:
+ t = 't';
+ break;
+
+ case REG_DEP_OUTPUT:
+ t = 'o';
+ break;
+
+ case REG_DEP_ANTI:
+ t = 'a';
+ break;
+ default:
+ gcc_unreachable ();
+ break;
+ }
+
+ fprintf (dump, "%c; ", t);
+ }
+
+ if (flags & DUMP_DEP_STATUS)
+ {
+ if (current_sched_info->flags & USE_DEPS_LIST)
+ dump_ds (dump, DEP_STATUS (dep));
+ }
+
+ fprintf (dump, ">");
}
+/* Default flags for dump_dep (). */
+static int dump_dep_flags = (DUMP_DEP_PRO | DUMP_DEP_CON);
+
+/* Dump all fields of DEP to STDERR. */
+void
+sd_debug_dep (dep_t dep)
+{
+ dump_dep (stderr, dep, 1);
+ fprintf (stderr, "\n");
+}
+
+/* Dump size of the lists. */
+#define DUMP_LISTS_SIZE (2)
+
+/* Dump dependencies of the lists. */
+#define DUMP_LISTS_DEPS (4)
+
+/* Dump all information about the lists. */
+#define DUMP_LISTS_ALL (DUMP_LISTS_SIZE | DUMP_LISTS_DEPS)
+
+/* Dump deps_lists of INSN specified by TYPES to DUMP.
+ FLAGS is a bit mask specifying what information about the lists needs
+ to be printed.
+ If FLAGS has the very first bit set, then dump all information about
+ the lists and propagate this bit into the callee dump functions. */
+static void
+dump_lists (FILE *dump, rtx insn, sd_list_types_def types, int flags)
+{
+ sd_iterator_def sd_it;
+ dep_t dep;
+ int all;
+
+ all = (flags & 1);
+
+ if (all)
+ flags |= DUMP_LISTS_ALL;
+
+ fprintf (dump, "[");
+
+ if (flags & DUMP_LISTS_SIZE)
+ fprintf (dump, "%d; ", sd_lists_size (insn, types));
+
+ if (flags & DUMP_LISTS_DEPS)
+ {
+ FOR_EACH_DEP (insn, types, sd_it, dep)
+ {
+ dump_dep (dump, dep, dump_dep_flags | all);
+ fprintf (dump, " ");
+ }
+ }
+}
+
+/* Dump all information about deps_lists of INSN specified by TYPES
+ to STDERR. */
+void
+sd_debug_lists (rtx insn, sd_list_types_def types)
+{
+ dump_lists (stderr, insn, types, 1);
+ fprintf (stderr, "\n");
}
The reason for using '1' is that it is propagated to the inner callees.
The dumping functions have two main use cases: dumping info to the log
and being invoked from a debugger. The difference between the two is in
the amount of information we wish to get: for the logs we want general
info without details, while in a debugger session we want every little
detail printed to the console. As both functions - dump_dep () and
dump_lists () - have their own sets of flags and one gets called within
the other, there should be a convenient way to pass information through
dump_lists () to dump_dep () about how much to dump. That is what the
'1' handling does.
...
>> +extern bool haifa_recovery_block_was_added_during_scheduling_p;
>
> Personally, I like longer names because they behave like comments. But
> this one is probably too long. Another one is
>
> haifa_recovery_block_was_added_during_current_schedule_block_p
>
> Moreover, both have the same 38-character prefix. As I
> remember, the ANSI C standard distinguishes only the first 31 characters
> of external identifiers.
>
> Please make them shorter.
They are haifa_recovery_bb_recently_added_p and
haifa_recovery_bb_ever_added_p now.
-----------------
I'm now retesting the patch on ppc64 (with -fmodulo-sched), x86_64 and
ia64 to make sure the patch doesn't change scheduler behavior.
Thanks,
Maxim
* Re: [PING][PATCH]: Improvements to sched-deps.c interface
2007-07-01 18:20 ` Maxim Kuvyrkov
@ 2007-07-02 22:00 ` Jim Wilson
2007-08-14 6:18 ` Maxim Kuvyrkov
1 sibling, 0 replies; 15+ messages in thread
From: Jim Wilson @ 2007-07-02 22:00 UTC (permalink / raw)
To: Maxim Kuvyrkov; +Cc: Vladimir Makarov, Ayal Zaks, gcc-patches, David Edelsohn
On Sun, 2007-07-01 at 22:20 +0400, Maxim Kuvyrkov wrote:
> The only part still unapproved is ia64 changes. Jim?
The ia64 part is OK.
--
Jim Wilson, GNU Tools Support, http://www.specifix.com
* Re: [PING][PATCH]: Improvements to sched-deps.c interface
2007-07-01 18:20 ` Maxim Kuvyrkov
2007-07-02 22:00 ` Jim Wilson
@ 2007-08-14 6:18 ` Maxim Kuvyrkov
2007-08-22 12:16 ` Roman Zippel
1 sibling, 1 reply; 15+ messages in thread
From: Maxim Kuvyrkov @ 2007-08-14 6:18 UTC (permalink / raw)
To: Vladimir Makarov; +Cc: James E Wilson, Ayal Zaks, gcc-patches, David Edelsohn
[-- Attachment #1: Type: text/plain, Size: 802 bytes --]
Maxim Kuvyrkov wrote:
...
> I'm now retesting the patch on ppc64 (with -fmodulo-sched), x86_64 and
> ia64 to make sure the patch doesn't change scheduler behavior.
Ok. All the parts are approved now (I've got offline approval from
Vladimir for the pretty-printing functions).
I successfully tested (bootstrap + regtest) the patch on ia64-linux-gnu,
ppc64-linux-gnu (normal and with '-fmodulo-sched
-fmodulo-sched-allow-regmoves' set to 1 in common.opt) and x86_64-linux-gnu.
Plus I verified that the patch doesn't change scheduler behavior on ia64
and ppc64 (again, normal and with modulo scheduling enabled by default)
by compiling SPECCPU2000 with dumping enabled and checking that all
ready lists after sorting are the same.
Thanks all. Will commit to the mainline today.
--
Maxim
[-- Attachment #2: fdl-pool.clg --]
[-- Type: text/plain, Size: 8268 bytes --]
2007-07-02 Maxim Kuvyrkov <maxim@codesourcery.com>
* sched-int.h (struct _dep): Rename field 'kind' to 'type'.
(DEP_KIND): Rename to DEP_TYPE. Update all uses.
(dep_def): New typedef.
(init_dep_1, sd_debug_dep): Declare functions.
(DEP_LINK_KIND): Rename to DEP_LINK_TYPE.
(debug_dep_links): Remove.
(struct _deps_list): New field 'n_links'.
(DEPS_LIST_N_LINKS): New macro.
(FOR_EACH_DEP_LINK): Remove.
(create_deps_list, free_deps_list, delete_deps_list): Remove
declaration.
(deps_list_empty_p, debug_deps_list, add_back_dep_to_deps_list): Ditto.
(find_link_by_pro_in_deps_list, find_link_by_con_in_deps_list): Ditto.
(copy_deps_list_change_con, move_dep_link): Ditto.
(struct haifa_insn_data): Split field 'back_deps' into 'hard_back_deps'
and 'spec_back_deps'. New field 'resolved_forw_deps'. Remove field
'dep_count'.
(INSN_BACK_DEPS): Remove.
(INSN_HARD_BACK_DEPS, INSN_SPEC_BACK_DEPS, INSN_RESOLVED_FORW_DEPS):
New macros.
(INSN_DEP_COUNT): Remove.
(enum DEPS_ADJUST_RESULT): Add new constant DEP_NODEP. Fix comments.
(spec_info, haifa_recovery_block_was_added_during_scheduling_p):
Declare global variables.
(deps_pools_are_empty_p, sched_free_deps): Declare functions.
(add_forw_dep, compute_forward_dependences): Remove declarations.
(add_or_update_back_dep, add_or_update_back_forw_dep): Ditto.
(add_back_forw_dep, delete_back_forw_dep): Ditto.
(debug_ds, sched_insn_is_legitimate_for_speculation_p): Declare
functions.
(SD_LIST_NONE, SD_LIST_HARD_BACK, SD_LIST_SPEC_BACK, SD_LIST_FORW): New
constants.
(SD_LIST_RES_BACK, SD_LIST_RES_FORW, SD_LIST_BACK): Ditto.
(sd_list_types_def): New typedef.
(sd_next_list): Declare function.
(struct _sd_iterator): New type.
(sd_iterator_def): New typedef.
(sd_iterator_start, sd_iterator_cond, sd_iterator_next): New inline
functions.
(FOR_EACH_DEP): New cycle wrapper.
(sd_lists_size, sd_lists_empty_p, sd_init_insn, sd_finish_insn):
Declare functions.
(sd_find_dep_between, sd_add_dep, sd_add_or_update_dep): Ditto.
(sd_resolve_dep, sd_copy_back_deps, sd_delete_dep, sd_debug_lists):
Ditto.
* sched-deps.c (init_dep_1): Make global.
(DUMP_DEP_PRO, DUMP_DEP_CON, DUMP_DEP_STATUS, DUMP_DEP_ALL): New
constants.
(dump_dep): New static function.
(dump_dep_flags): New static variable.
(sd_debug_dep): New function.
(add_to_deps_list, remove_from_deps_list): Update 'n_links' field of
the list.
(move_dep_link): Use remove_from_deps_list (), instead of
detach_dep_link ().
(dep_links_consistent_p, dump_dep_links, debug_dep_links): Remove.
(dep_link_is_detached_p): New static function.
(deps_obstack, dl_obstack, dn_obstack): Remove. Use dn_pool, dl_pool
instead.
(dn_pool, dl_pool): New alloc_pools.
(dn_pool_diff, dl_pool_diff): New static variables.
(create_dep_node, delete_dep_node): New static function.
(create_deps_list): Make it static. Use alloc_pool 'dl_pool'.
(deps_list_empty_p): Make it static. Use 'n_links' field.
(deps_pools_are_empty_p): New static function.
(alloc_deps_list, delete_deps_list): Remove.
(dump_deps_list, debug_deps_list, add_back_dep_to_deps_list): Remove.
(find_link_by_pro_in_deps_list, find_link_by_con_in_deps_list): Ditto.
(copy_deps_list_change_con): Remove. Use sd_copy_back_deps () instead.
(forward_dependency_cache): Remove.
(maybe_add_or_update_back_dep_1, add_or_update_back_dep_1): Remove
'back' from the names. Change signature to use dep_t instead of
equivalent quad.
(add_back_dep): Ditto. Make global.
(check_dep_status): Rename to check_dep ().
(sd_next_list, sd_lists_size, sd_lists_empty_p, sd_init_insn):
New functions.
(sd_finish_insn): Ditto.
(sd_find_dep_between_no_cache): New static function.
(sd_find_dep_between): New function.
(ask_dependency_caches, set_dependency_caches): New static functions.
(update_dependency_caches, change_spec_dep_to_hard, update_dep): Ditto.
(add_or_update_dep_1): Separate pieces of functionality into
ask_dependency_caches (), update_dependency_caches (),
change_spec_dep_to_hard (), update_dep ().
(get_back_and_forw_lists): New static function.
(sd_add_dep): Separate setting of dependency caches into
set_dependency_caches ().
(sd_add_or_update_dep, sd_resolve_dep, sd_copy_back_deps):
New functions.
(sd_delete_dep): Ditto.
(DUMP_LISTS_SIZE, DUMP_LISTS_DEPS, DUMP_LISTS_ALL): New constants.
(dump_lists): New static function.
(sd_debug_lists): New debug function.
(delete_all_dependences, fixup_sched_groups): Update to use
sd_* infrastructure.
(sched_analyze_2): Create data-speculative dependency only if
data-speculation is enabled.
(sched_analyze_insn): If insn cannot be speculative, make all its
dependencies non-speculative.
(sched_analyze): Use sd_init_insn ().
(add_forw_dep, compute_forward_dependences): Remove.
(delete_dep_nodes_in_back_deps): New static function.
(sched_free_deps): New function.
(init_dependency_caches): Init alloc_pools.
(extend_dependency_caches): Update after removal of
forward_dependency_cache.
(free_dependency_caches): Ditto. Free alloc_pools.
(adjust_add_sorted_back_dep, adjust_back_add_forw_dep): Remove.
(delete_forw_dep, add_or_update_back_dep, add_or_update_back_forw_dep):
Ditto.
(add_back_forw_dep, delete_back_forw_dep): Ditto.
(add_dependence): Use init_dep ().
(get_dep_weak_1): New static function.
(get_dep_weak): Move logic to get_dep_weak_1 ().
(dump_ds): New static function moved from haifa-sched.c:
debug_spec_status ().
(debug_ds): New debug function.
(check_dep_status): Rename to check_dep (). Update to check whole
dependencies.
* haifa-sched.c (spec_info): Make global.
(added_recovery_block_p): Rename to
'haifa_recovery_block_was_added_during_current_schedule_block_p'.
(haifa_recovery_block_was_added_during_scheduling_p): New variable.
(dep_cost, priority, rank_for_schedule, schedule_insn): Update
to use new interfaces.
(ok_for_early_queue_removal): Ditto.
(schedule_block): Initialize logical uids of insns emitted by the
target.
(sched_init): Initialize new variable.
(fix_inter_tick, try_ready, fix_tick_ready): Update to use new
interfaces.
(extend_global): Initialize insn data.
(init_h_i_d): Remove code that is now handled in sd_init_insn ().
(process_insn_forw_deps_be_in_spec): Change signature. Update to use
new interfaces.
(add_to_speculative_block): Update to use new interfaces.
(create_recovery_block): Set new variables.
(create_check_block_twin, fix_recovery_deps): Update to use new
interfaces.
(sched_insn_is_legitimate_for_speculation_p): New function.
(speculate_insn): Move checking logic to
sched_insn_is_legitimate_for_speculation_p ().
(sched_remove_insn): Finalize sched-deps information of instruction.
(clear_priorities, add_jump_dependencies): Update to use new
interfaces.
(debug_spec_status): Rename to dump_ds () and move to sched-deps.c.
* sched-rgn.c (set_spec_fed, find_conditional_protection): Update
to use new interfaces.
(is_conditionally_protected, is_pfree, is_prisky): Ditto.
(new_ready): Try to use control speculation only if it is available.
(add_branch_dependences): Update to use new interfaces.
(compute_block_backward_dependences): Rename to
compute_block_dependences (). Call
targetm.sched.dependencies_evaluation_hook ().
(free_block_dependencies): New static function.
(debug_dependencies): Update to use new interfaces.
(schedule_region): Remove separate computation of forward dependencies.
Move call of targetm.sched.dependencies_evaluation_hook () to
compute_block_dependences (). Free dependencies at the end of
scheduling the region.
* sched-ebb.c (earliest_block_with_similiar_load): Update to use
new interfaces.
(add_deps_for_risky_insns): Ditto.
(schedule_ebb): Remove separate computation of forward dependencies.
Free dependencies at the end of scheduling the ebb.
* ddg.c (create_ddg_dependence): Update to use new interfaces.
(build_intra_loop_deps): Ditto. Remove separate computation of forward
dependencies. Free sched-deps dependencies.
* config/ia64/ia64.c (ia64_dependencies_evaluation_hook): Update
to use new interfaces.
(ia64_dfa_new_cycle, ia64_gen_check): Ditto.
* config/rs6000/rs6000.c (rs6000_is_costly_dependence): Update to use
new interfaces.
(is_costly_group): Ditto.
[-- Attachment #3: fdl-pool.patch --]
[-- Type: text/x-patch, Size: 130422 bytes --]
--- sched-int.h (/local/trunk/gcc) (revision 962)
+++ sched-int.h (/local/fdl-pool/gcc) (revision 962)
@@ -54,35 +54,40 @@ struct _dep
/* Consumer. */
rtx con;
- /* Dependency kind (aka dependency major type). This field is superseded
- by STATUS below. Though, it is still in place because all the backends
- use it. */
- enum reg_note kind;
+ /* Dependency major type. This field is superseded by STATUS below.
+ Though, it is still in place because some targets use it. */
+ enum reg_note type;
/* Dependency status. This field holds all dependency types and additional
information for speculative dependencies. */
ds_t status;
};
-typedef struct _dep *dep_t;
+
+typedef struct _dep dep_def;
+typedef dep_def *dep_t;
#define DEP_PRO(D) ((D)->pro)
#define DEP_CON(D) ((D)->con)
-#define DEP_KIND(D) ((D)->kind)
+#define DEP_TYPE(D) ((D)->type)
#define DEP_STATUS(D) ((D)->status)
/* Functions to work with dep. */
+extern void init_dep_1 (dep_t, rtx, rtx, enum reg_note, ds_t);
extern void init_dep (dep_t, rtx, rtx, enum reg_note);
+extern void sd_debug_dep (dep_t);
+
/* Definition of this struct resides below. */
struct _dep_node;
+typedef struct _dep_node *dep_node_t;
/* A link in the dependency list. This is essentially an equivalent of a
single {INSN, DEPS}_LIST rtx. */
struct _dep_link
{
/* Dep node with all the data. */
- struct _dep_node *node;
+ dep_node_t node;
/* Next link in the list. For the last one it is NULL. */
struct _dep_link *next;
@@ -107,39 +112,22 @@ typedef struct _dep_link *dep_link_t;
#define DEP_LINK_DEP(N) (DEP_NODE_DEP (DEP_LINK_NODE (N)))
#define DEP_LINK_PRO(N) (DEP_PRO (DEP_LINK_DEP (N)))
#define DEP_LINK_CON(N) (DEP_CON (DEP_LINK_DEP (N)))
-#define DEP_LINK_KIND(N) (DEP_KIND (DEP_LINK_DEP (N)))
+#define DEP_LINK_TYPE(N) (DEP_TYPE (DEP_LINK_DEP (N)))
#define DEP_LINK_STATUS(N) (DEP_STATUS (DEP_LINK_DEP (N)))
-void debug_dep_links (dep_link_t);
-
/* A list of dep_links. */
struct _deps_list
{
+ /* First element. */
dep_link_t first;
+
+ /* Total number of elements in the list. */
+ int n_links;
};
typedef struct _deps_list *deps_list_t;
#define DEPS_LIST_FIRST(L) ((L)->first)
-
-/* Macro to walk through deps_list. */
-#define FOR_EACH_DEP_LINK(LINK, LIST) \
- for ((LINK) = DEPS_LIST_FIRST (LIST); \
- (LINK) != NULL; \
- (LINK) = DEP_LINK_NEXT (LINK))
-
-/* Functions to work with deps_list. */
-
-deps_list_t create_deps_list (bool);
-void free_deps_list (deps_list_t);
-void delete_deps_list (deps_list_t);
-bool deps_list_empty_p (deps_list_t);
-void debug_deps_list (deps_list_t);
-void add_back_dep_to_deps_list (deps_list_t, dep_t);
-dep_link_t find_link_by_pro_in_deps_list (deps_list_t, rtx);
-dep_link_t find_link_by_con_in_deps_list (deps_list_t, rtx);
-void copy_deps_list_change_con (deps_list_t, deps_list_t, rtx);
-
-void move_dep_link (dep_link_t, deps_list_t);
+#define DEPS_LIST_N_LINKS(L) ((L)->n_links)
/* Suppose we have a dependence Y between insn pro1 and con1, where pro1 has
additional dependents con0 and con2, and con1 is dependent on additional
@@ -247,7 +235,6 @@ struct _dep_node
/* Forward link. */
struct _dep_link forw;
};
-typedef struct _dep_node *dep_node_t;
#define DEP_NODE_BACK(N) (&(N)->back)
#define DEP_NODE_DEP(N) (&(N)->dep)
@@ -469,15 +456,17 @@ extern struct sched_info *current_sched_
struct haifa_insn_data
{
- /* NB: We can't place 'struct _deps_list' here instead of deps_list_t into
- h_i_d because when h_i_d extends, addresses of the deps_list->first
- change without updating deps_list->first->next->prev_nextp. Thus
- BACK_DEPS and RESOLVED_BACK_DEPS are allocated on the heap and FORW_DEPS
- list is allocated on the obstack. */
+ /* We can't place 'struct _deps_list' into h_i_d instead of deps_list_t
+ because when h_i_d extends, addresses of the deps_list->first
+ change without updating deps_list->first->next->prev_nextp. */
- /* A list of backward dependencies. The insn is a consumer of all the
+ /* A list of hard backward dependencies. The insn is a consumer of all the
deps mentioned here. */
- deps_list_t back_deps;
+ deps_list_t hard_back_deps;
+
+ /* A list of speculative (weak) dependencies. The insn is a consumer of all
+ the deps mentioned here. */
+ deps_list_t spec_back_deps;
/* A list of insns which depend on the instruction. Unlike 'back_deps',
it represents forward dependencies. */
@@ -486,6 +475,11 @@ struct haifa_insn_data
/* A list of scheduled producers of the instruction. Links are being moved
from 'back_deps' to 'resolved_back_deps' while scheduling. */
deps_list_t resolved_back_deps;
+
+ /* A list of scheduled consumers of the instruction. Links are being moved
+ from 'forw_deps' to 'resolved_forw_deps' while scheduling to fasten the
+ search in 'forw_deps'. */
+ deps_list_t resolved_forw_deps;
/* Logical uid gives the original ordering of the insns. */
int luid;
@@ -493,11 +487,6 @@ struct haifa_insn_data
/* A priority for each insn. */
int priority;
- /* The number of incoming edges in the forward dependency graph.
- As scheduling proceeds, counts are decreased. An insn moves to
- the ready queue when its counter reaches zero. */
- int dep_count;
-
/* Number of instructions referring to this insn. */
int ref_count;
@@ -553,13 +542,16 @@ extern struct haifa_insn_data *h_i_d;
/* Accessor macros for h_i_d. There are more in haifa-sched.c and
sched-rgn.c. */
-#define INSN_BACK_DEPS(INSN) (h_i_d[INSN_UID (INSN)].back_deps)
+
+#define INSN_HARD_BACK_DEPS(INSN) (h_i_d[INSN_UID (INSN)].hard_back_deps)
+#define INSN_SPEC_BACK_DEPS(INSN) (h_i_d[INSN_UID (INSN)].spec_back_deps)
#define INSN_FORW_DEPS(INSN) (h_i_d[INSN_UID (INSN)].forw_deps)
#define INSN_RESOLVED_BACK_DEPS(INSN) \
(h_i_d[INSN_UID (INSN)].resolved_back_deps)
+#define INSN_RESOLVED_FORW_DEPS(INSN) \
+ (h_i_d[INSN_UID (INSN)].resolved_forw_deps)
#define INSN_LUID(INSN) (h_i_d[INSN_UID (INSN)].luid)
#define CANT_MOVE(insn) (h_i_d[INSN_UID (insn)].cant_move)
-#define INSN_DEP_COUNT(INSN) (h_i_d[INSN_UID (INSN)].dep_count)
#define INSN_PRIORITY(INSN) (h_i_d[INSN_UID (INSN)].priority)
#define INSN_PRIORITY_STATUS(INSN) (h_i_d[INSN_UID (INSN)].priority_status)
#define INSN_PRIORITY_KNOWN(INSN) (INSN_PRIORITY_STATUS (INSN) > 0)
@@ -694,13 +686,16 @@ enum SPEC_TYPES_OFFSETS {
#define HARD_DEP (DEP_ANTI << 1)
/* This represents the results of calling sched-deps.c functions,
- which modify dependencies. Possible choices are: a dependence
- is already present and nothing has been changed; a dependence type
- has been changed; brand new dependence has been created. */
+ which modify dependencies. */
enum DEPS_ADJUST_RESULT {
- DEP_PRESENT = 1,
- DEP_CHANGED = 2,
- DEP_CREATED = 3
+ /* No dependence needed (e.g. producer == consumer). */
+ DEP_NODEP,
+ /* Dependence is already present and wasn't modified. */
+ DEP_PRESENT,
+ /* Existing dependence was modified to include additional information. */
+ DEP_CHANGED,
+ /* New dependence has been created. */
+ DEP_CREATED
};
/* Represents the bits that can be set in the flags field of the
@@ -731,6 +726,9 @@ enum SPEC_SCHED_FLAGS {
extern FILE *sched_dump;
extern int sched_verbose;
+extern spec_info_t spec_info;
+extern bool haifa_recovery_bb_ever_added_p;
+
/* Exception Free Loads:
We define five classes of speculative loads: IFREE, IRISKY,
@@ -816,23 +814,19 @@ extern void print_insn (char *, rtx, int
extern bool sched_insns_conditions_mutex_p (rtx, rtx);
extern void add_dependence (rtx, rtx, enum reg_note);
extern void sched_analyze (struct deps *, rtx, rtx);
+extern bool deps_pools_are_empty_p (void);
+extern void sched_free_deps (rtx, rtx, bool);
extern void init_deps (struct deps *);
extern void free_deps (struct deps *);
extern void init_deps_global (void);
extern void finish_deps_global (void);
-extern void add_forw_dep (dep_link_t);
-extern void compute_forward_dependences (rtx, rtx);
extern void init_dependency_caches (int);
extern void free_dependency_caches (void);
extern void extend_dependency_caches (int, bool);
-extern enum DEPS_ADJUST_RESULT add_or_update_back_dep (rtx, rtx,
- enum reg_note, ds_t);
-extern void add_or_update_back_forw_dep (rtx, rtx, enum reg_note, ds_t);
-extern void add_back_forw_dep (rtx, rtx, enum reg_note, ds_t);
-extern void delete_back_forw_dep (dep_link_t);
extern dw_t get_dep_weak (ds_t, ds_t);
extern ds_t set_dep_weak (ds_t, ds_t, dw_t);
extern ds_t ds_merge (ds_t, ds_t);
+extern void debug_ds (ds_t);
/* Functions in haifa-sched.c. */
extern int haifa_classify_insn (rtx);
@@ -851,11 +845,151 @@ extern void sched_finish (void);
extern int try_ready (rtx);
extern void * xrecalloc (void *, size_t, size_t, size_t);
+extern bool sched_insn_is_legitimate_for_speculation_p (rtx, ds_t);
extern void unlink_bb_notes (basic_block, basic_block);
extern void add_block (basic_block, basic_block);
extern rtx bb_note (basic_block);
/* Functions in sched-rgn.c. */
+
extern void debug_dependencies (rtx, rtx);
+/* sched-deps.c interface to walk, add, search, update, resolve, delete
+ and debug instruction dependencies. */
+
+/* Constants defining dependences lists. */
+
+/* No list. */
+#define SD_LIST_NONE (0)
+
+/* hard_back_deps. */
+#define SD_LIST_HARD_BACK (1)
+
+/* spec_back_deps. */
+#define SD_LIST_SPEC_BACK (2)
+
+/* forw_deps. */
+#define SD_LIST_FORW (4)
+
+/* resolved_back_deps. */
+#define SD_LIST_RES_BACK (8)
+
+/* resolved_forw_deps. */
+#define SD_LIST_RES_FORW (16)
+
+#define SD_LIST_BACK (SD_LIST_HARD_BACK | SD_LIST_SPEC_BACK)
+
+/* A type to hold the above flags. */
+typedef int sd_list_types_def;
+
+extern void sd_next_list (rtx, sd_list_types_def *, deps_list_t *, bool *);
+
+/* Iterator to walk through, resolve and delete dependencies. */
+struct _sd_iterator
+{
+ /* What lists to walk. Can be any combination of SD_LIST_* flags. */
+ sd_list_types_def types;
+
+ /* The instruction whose dependence lists will be walked. */
+ rtx insn;
+
+ /* Pointer to the next field of the previous element. This is not
+ simply a pointer to the next element, to allow easy deletion from the
+ list. When a dep is being removed from the list, the iterator
+ will automatically advance because the value in *linkp will start
+ referring to the next element. */
+ dep_link_t *linkp;
+
+ /* True if the current list is a resolved one. */
+ bool resolved_p;
+};
+
+typedef struct _sd_iterator sd_iterator_def;
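The 'linkp' field above is what makes deletion during iteration safe. A minimal standalone sketch of the same pointer-to-next-field technique (hypothetical 'struct link' and helpers, not GCC's dep_link_t):

```c
#include <assert.h>
#include <stddef.h>

/* Walk a singly linked list through a pointer to the previous node's
   'next' field.  Deleting the current node then "advances" the iterator
   automatically, because *linkp starts referring to the following node.  */

struct link
{
  int uid;
  struct link *next;
};

/* Unlink *LINKP from its list; *LINKP now points at the successor, so no
   explicit iterator advance is needed after a deletion.  */
static void
delete_at (struct link **linkp)
{
  *linkp = (*linkp)->next;
}

/* Remove every node with UID from the list rooted at *HEADP and return
   the number of deleted nodes.  */
static int
remove_uid (struct link **headp, int uid)
{
  int removed = 0;
  struct link **linkp = headp;

  while (*linkp != NULL)
    {
      if ((*linkp)->uid == uid)
	{
	  delete_at (linkp);	/* *linkp already refers to the next node.  */
	  ++removed;
	}
      else
	linkp = &(*linkp)->next;	/* Ordinary advance, as in sd_iterator_next.  */
    }

  return removed;
}
```

This mirrors why sd_delete_dep can take an sd_iterator_def by value and why sd_iterator_next is a one-liner over *linkp.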
+
+/* ??? We can move some definitions that are used in below inline functions
+ out of sched-int.h to sched-deps.c, provided that the below functions
+ become global externals.
+ These definitions include:
+ * struct _deps_list: opaque pointer is needed at global scope.
+ * struct _dep_link: opaque pointer is needed at scope of sd_iterator_def.
+ * struct _dep_node: opaque pointer is needed at scope of
+ struct _deps_link. */
+
+/* Return initialized iterator. */
+static inline sd_iterator_def
+sd_iterator_start (rtx insn, sd_list_types_def types)
+{
+ /* A dep_link a pointer to which yields NULL when dereferenced. */
+ static dep_link_t null_link = NULL;
+
+ sd_iterator_def i;
+
+ i.types = types;
+ i.insn = insn;
+ i.linkp = &null_link;
+
+ /* Avoid an 'uninitialized' warning. */
+ i.resolved_p = false;
+
+ return i;
+}
+
+/* Return true and set *DEP_PTR to the current dependence if the iterator
+ has more elements to walk; otherwise return false. */
+static inline bool
+sd_iterator_cond (sd_iterator_def *it_ptr, dep_t *dep_ptr)
+{
+ dep_link_t link = *it_ptr->linkp;
+
+ if (link != NULL)
+ {
+ *dep_ptr = DEP_LINK_DEP (link);
+ return true;
+ }
+ else
+ {
+ sd_list_types_def types = it_ptr->types;
+
+ if (types != SD_LIST_NONE)
+ /* Switch to next list. */
+ {
+ deps_list_t list;
+
+ sd_next_list (it_ptr->insn,
+ &it_ptr->types, &list, &it_ptr->resolved_p);
+
+ it_ptr->linkp = &DEPS_LIST_FIRST (list);
+
+ return sd_iterator_cond (it_ptr, dep_ptr);
+ }
+
+ *dep_ptr = NULL;
+ return false;
+ }
+}
+
+/* Advance iterator. */
+static inline void
+sd_iterator_next (sd_iterator_def *it_ptr)
+{
+ it_ptr->linkp = &DEP_LINK_NEXT (*it_ptr->linkp);
+}
+
+/* A loop wrapper for walking dependencies with the iterator above. */
+#define FOR_EACH_DEP(INSN, LIST_TYPES, ITER, DEP) \
+ for ((ITER) = sd_iterator_start ((INSN), (LIST_TYPES)); \
+ sd_iterator_cond (&(ITER), &(DEP)); \
+ sd_iterator_next (&(ITER)))
+
+extern int sd_lists_size (rtx, sd_list_types_def);
+extern bool sd_lists_empty_p (rtx, sd_list_types_def);
+extern void sd_init_insn (rtx);
+extern void sd_finish_insn (rtx);
+extern dep_t sd_find_dep_between (rtx, rtx, bool);
+extern void sd_add_dep (dep_t, bool);
+extern enum DEPS_ADJUST_RESULT sd_add_or_update_dep (dep_t, bool);
+extern void sd_resolve_dep (sd_iterator_def);
+extern void sd_copy_back_deps (rtx, rtx, bool);
+extern void sd_delete_dep (sd_iterator_def);
+extern void sd_debug_lists (rtx, sd_list_types_def);
+
#endif /* GCC_SCHED_INT_H */
--- sched-deps.c (/local/trunk/gcc) (revision 962)
+++ sched-deps.c (/local/fdl-pool/gcc) (revision 962)
@@ -84,12 +84,12 @@ dk_to_ds (enum reg_note dk)
/* Functions to operate with dependence information container - dep_t. */
/* Init DEP with the arguments. */
-static void
-init_dep_1 (dep_t dep, rtx pro, rtx con, enum reg_note kind, ds_t ds)
+void
+init_dep_1 (dep_t dep, rtx pro, rtx con, enum reg_note type, ds_t ds)
{
DEP_PRO (dep) = pro;
DEP_CON (dep) = con;
- DEP_KIND (dep) = kind;
+ DEP_TYPE (dep) = type;
DEP_STATUS (dep) = ds;
}
@@ -101,7 +101,7 @@ init_dep (dep_t dep, rtx pro, rtx con, e
{
ds_t ds;
- if ((current_sched_info->flags & USE_DEPS_LIST) != 0)
+ if ((current_sched_info->flags & USE_DEPS_LIST))
ds = dk_to_ds (kind);
else
ds = -1;
@@ -116,19 +116,95 @@ copy_dep (dep_t to, dep_t from)
memcpy (to, from, sizeof (*to));
}
-/* Functions to operate with a single link from the dependencies lists -
- dep_link_t. */
+static void dump_ds (FILE *, ds_t);
-/* Return true if dep_link L is consistent. */
-static bool
-dep_link_consistent_p (dep_link_t l)
+/* Define flags for dump_dep (). */
+
+/* Dump producer of the dependence. */
+#define DUMP_DEP_PRO (2)
+
+/* Dump consumer of the dependence. */
+#define DUMP_DEP_CON (4)
+
+/* Dump type of the dependence. */
+#define DUMP_DEP_TYPE (8)
+
+/* Dump status of the dependence. */
+#define DUMP_DEP_STATUS (16)
+
+/* Dump all information about the dependence. */
+#define DUMP_DEP_ALL (DUMP_DEP_PRO | DUMP_DEP_CON | DUMP_DEP_TYPE \
+ |DUMP_DEP_STATUS)
+
+/* Dump DEP to DUMP.
+ FLAGS is a bit mask specifying what information about DEP needs
+ to be printed.
+ If FLAGS has the very first bit set, then dump all information about DEP
+ and propagate this bit into the callee dump functions. */
+static void
+dump_dep (FILE *dump, dep_t dep, int flags)
{
- dep_link_t next = DEP_LINK_NEXT (l);
+ if (flags & 1)
+ flags |= DUMP_DEP_ALL;
+
+ fprintf (dump, "<");
+
+ if (flags & DUMP_DEP_PRO)
+ fprintf (dump, "%d; ", INSN_UID (DEP_PRO (dep)));
+
+ if (flags & DUMP_DEP_CON)
+ fprintf (dump, "%d; ", INSN_UID (DEP_CON (dep)));
+
+ if (flags & DUMP_DEP_TYPE)
+ {
+ char t;
+ enum reg_note type = DEP_TYPE (dep);
+
+ switch (type)
+ {
+ case REG_DEP_TRUE:
+ t = 't';
+ break;
+
+ case REG_DEP_OUTPUT:
+ t = 'o';
+ break;
- return (next == NULL
- || &DEP_LINK_NEXT (l) == DEP_LINK_PREV_NEXTP (next));
+ case REG_DEP_ANTI:
+ t = 'a';
+ break;
+
+ default:
+ gcc_unreachable ();
+ break;
+ }
+
+ fprintf (dump, "%c; ", t);
+ }
+
+ if (flags & DUMP_DEP_STATUS)
+ {
+ if (current_sched_info->flags & USE_DEPS_LIST)
+ dump_ds (dump, DEP_STATUS (dep));
+ }
+
+ fprintf (dump, ">");
+}
+
+/* Default flags for dump_dep (). */
+static int dump_dep_flags = (DUMP_DEP_PRO | DUMP_DEP_CON);
+
+/* Dump all fields of DEP to STDERR. */
+void
+sd_debug_dep (dep_t dep)
+{
+ dump_dep (stderr, dep, 1);
+ fprintf (stderr, "\n");
}
+/* Functions to operate with a single link from the dependencies lists -
+ dep_link_t. */
+
/* Attach L to appear after link X whose &DEP_LINK_NEXT (X) is given by
PREV_NEXT_P. */
static void
@@ -160,6 +236,8 @@ static void
add_to_deps_list (dep_link_t link, deps_list_t l)
{
attach_dep_link (link, &DEPS_LIST_FIRST (l));
+
+ ++DEPS_LIST_N_LINKS (l);
}
/* Detach dep_link L from the list. */
@@ -174,232 +252,135 @@ detach_dep_link (dep_link_t l)
if (next != NULL)
DEP_LINK_PREV_NEXTP (next) = prev_nextp;
- /* Though this is property is not used anywhere but in the assert in
- attach_dep_link (), this can prevent latent errors. */
DEP_LINK_PREV_NEXTP (l) = NULL;
DEP_LINK_NEXT (l) = NULL;
}
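The detach logic above works uniformly for the first link and for middle links because each link stores the address of the field that points at it, instead of a 'prev' pointer. A standalone sketch of that linking scheme (hypothetical names, not the DEP_LINK_* macros):

```c
#include <assert.h>
#include <stddef.h>

/* Intrusive list where each node keeps 'prev_nextp', the address of the
   'next' field (or list head) pointing at it.  Attach and detach are O(1)
   and need no special case for the head of the list.  */

struct link
{
  struct link *next;
  struct link **prev_nextp;
};

/* Attach L so that it appears where *PREV_NEXTP currently points.  */
static void
attach (struct link *l, struct link **prev_nextp)
{
  struct link *next = *prev_nextp;

  l->next = next;
  if (next != NULL)
    next->prev_nextp = &l->next;

  l->prev_nextp = prev_nextp;
  *prev_nextp = l;
}

/* Detach L from whatever list it is on and mark it detached by clearing
   prev_nextp, as dep_link_is_detached_p checks in the patch.  */
static void
detach (struct link *l)
{
  struct link **prev_nextp = l->prev_nextp;
  struct link *next = l->next;

  *prev_nextp = next;
  if (next != NULL)
    next->prev_nextp = prev_nextp;

  l->next = NULL;
  l->prev_nextp = NULL;
}
```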
-/* Move LINK from whatever list it is now to L. */
-void
-move_dep_link (dep_link_t link, deps_list_t l)
+/* Remove link LINK from list LIST. */
+static void
+remove_from_deps_list (dep_link_t link, deps_list_t list)
{
detach_dep_link (link);
- add_to_deps_list (link, l);
-}
-/* Check L's and its successors' consistency.
- This is, potentially, an expensive check, hence it should be guarded by
- ENABLE_CHECKING at all times. */
-static bool
-dep_links_consistent_p (dep_link_t l)
-{
- while (l != NULL)
- {
- if (dep_link_consistent_p (l))
- l = DEP_LINK_NEXT (l);
- else
- return false;
- }
-
- return true;
+ --DEPS_LIST_N_LINKS (list);
}
-/* Dump dep_nodes starting from l. */
+/* Move link LINK from list FROM to list TO. */
static void
-dump_dep_links (FILE *dump, dep_link_t l)
+move_dep_link (dep_link_t link, deps_list_t from, deps_list_t to)
{
- while (l != NULL)
- {
- dep_t d = DEP_LINK_DEP (l);
-
- fprintf (dump, "%d%c>%d ", INSN_UID (DEP_PRO (d)),
- dep_link_consistent_p (l) ? '-' : '!', INSN_UID (DEP_CON (d)));
-
- l = DEP_LINK_NEXT (l);
- }
-
- fprintf (dump, "\n");
+ remove_from_deps_list (link, from);
+ add_to_deps_list (link, to);
}
-/* Dump dep_nodes starting from L to stderr. */
-void
-debug_dep_links (dep_link_t l)
+/* Return true if LINK is not attached to any list. */
+static bool
+dep_link_is_detached_p (dep_link_t link)
{
- dump_dep_links (stderr, l);
+ return DEP_LINK_PREV_NEXTP (link) == NULL;
}
-/* Obstack to allocate dep_nodes and deps_lists on. */
-static struct obstack deps_obstack;
+/* Pool to hold all dependency nodes (dep_node_t). */
+static alloc_pool dn_pool;
-/* Obstack to hold forward dependencies lists (deps_list_t). */
-static struct obstack *dl_obstack = &deps_obstack;
+/* Number of dep_nodes currently in use (allocated but not yet freed). */
+static int dn_pool_diff = 0;
-/* Obstack to hold all dependency nodes (dep_node_t). */
-static struct obstack *dn_obstack = &deps_obstack;
+/* Create a dep_node. */
+static dep_node_t
+create_dep_node (void)
+{
+ dep_node_t n = (dep_node_t) pool_alloc (dn_pool);
+ dep_link_t back = DEP_NODE_BACK (n);
+ dep_link_t forw = DEP_NODE_FORW (n);
-/* Functions to operate with dependences lists - deps_list_t. */
+ DEP_LINK_NODE (back) = n;
+ DEP_LINK_NEXT (back) = NULL;
+ DEP_LINK_PREV_NEXTP (back) = NULL;
-/* Allocate deps_list.
+ DEP_LINK_NODE (forw) = n;
+ DEP_LINK_NEXT (forw) = NULL;
+ DEP_LINK_PREV_NEXTP (forw) = NULL;
- If ON_OBSTACK_P is true, allocate the list on the obstack. This is done for
- INSN_FORW_DEPS lists because they should live till the end of scheduling.
+ ++dn_pool_diff;
- INSN_BACK_DEPS and INSN_RESOLVED_BACK_DEPS lists are allocated on the free
- store and are being freed in haifa-sched.c: schedule_insn (). */
-static deps_list_t
-alloc_deps_list (bool on_obstack_p)
-{
- if (on_obstack_p)
- return obstack_alloc (dl_obstack, sizeof (struct _deps_list));
- else
- return xmalloc (sizeof (struct _deps_list));
+ return n;
}
-/* Initialize deps_list L. */
+/* Delete dep_node N. N must not be connected to any deps_list. */
static void
-init_deps_list (deps_list_t l)
+delete_dep_node (dep_node_t n)
{
- DEPS_LIST_FIRST (l) = NULL;
-}
+ gcc_assert (dep_link_is_detached_p (DEP_NODE_BACK (n))
+ && dep_link_is_detached_p (DEP_NODE_FORW (n)));
-/* Create (allocate and init) deps_list.
- The meaning of ON_OBSTACK_P is the same as in alloc_deps_list (). */
-deps_list_t
-create_deps_list (bool on_obstack_p)
-{
- deps_list_t l = alloc_deps_list (on_obstack_p);
+ --dn_pool_diff;
- init_deps_list (l);
- return l;
+ pool_free (dn_pool, n);
}
-/* Free dep_data_nodes that present in L. */
-static void
-clear_deps_list (deps_list_t l)
-{
- /* All dep_nodes are allocated on the dn_obstack. They'll be freed with
- the obstack. */
-
- DEPS_LIST_FIRST (l) = NULL;
-}
+/* Pool to hold dependencies lists (deps_list_t). */
+static alloc_pool dl_pool;
-/* Free deps_list L. */
-void
-free_deps_list (deps_list_t l)
-{
- gcc_assert (deps_list_empty_p (l));
- free (l);
-}
+/* Number of deps_lists currently in use. */
+static int dl_pool_diff = 0;
-/* Delete (clear and free) deps_list L. */
-void
-delete_deps_list (deps_list_t l)
-{
- clear_deps_list (l);
- free_deps_list (l);
-}
+/* Functions to operate with dependences lists - deps_list_t. */
-/* Return true if L is empty. */
-bool
+/* Return true if list L is empty. */
+static bool
deps_list_empty_p (deps_list_t l)
{
- return DEPS_LIST_FIRST (l) == NULL;
+ return DEPS_LIST_N_LINKS (l) == 0;
}
-/* Check L's consistency.
- This is, potentially, an expensive check, hence it should be guarded by
- ENABLE_CHECKING at all times. */
-static bool
-deps_list_consistent_p (deps_list_t l)
+/* Create a new deps_list. */
+static deps_list_t
+create_deps_list (void)
{
- dep_link_t first = DEPS_LIST_FIRST (l);
+ deps_list_t l = (deps_list_t) pool_alloc (dl_pool);
- return (first == NULL
- || (&DEPS_LIST_FIRST (l) == DEP_LINK_PREV_NEXTP (first)
- && dep_links_consistent_p (first)));
-}
-
-/* Dump L to F. */
-static void
-dump_deps_list (FILE *f, deps_list_t l)
-{
- dump_dep_links (f, DEPS_LIST_FIRST (l));
-}
+ DEPS_LIST_FIRST (l) = NULL;
+ DEPS_LIST_N_LINKS (l) = 0;
-/* Dump L to STDERR. */
-void
-debug_deps_list (deps_list_t l)
-{
- dump_deps_list (stderr, l);
+ ++dl_pool_diff;
+ return l;
}
-/* Add a dependency described by DEP to the list L.
- L should be either INSN_BACK_DEPS or INSN_RESOLVED_BACK_DEPS. */
-void
-add_back_dep_to_deps_list (deps_list_t l, dep_t dep_from)
+/* Free deps_list L. */
+static void
+free_deps_list (deps_list_t l)
{
- dep_node_t n = (dep_node_t) obstack_alloc (dn_obstack,
- sizeof (*n));
- dep_t dep_to = DEP_NODE_DEP (n);
- dep_link_t back = DEP_NODE_BACK (n);
- dep_link_t forw = DEP_NODE_FORW (n);
-
- copy_dep (dep_to, dep_from);
-
- DEP_LINK_NODE (back) = n;
- DEP_LINK_NODE (forw) = n;
+ gcc_assert (deps_list_empty_p (l));
- /* There is no particular need to initialize these four fields except to make
- assert in attach_dep_link () happy. */
- DEP_LINK_NEXT (back) = NULL;
- DEP_LINK_PREV_NEXTP (back) = NULL;
- DEP_LINK_NEXT (forw) = NULL;
- DEP_LINK_PREV_NEXTP (forw) = NULL;
+ --dl_pool_diff;
- add_to_deps_list (back, l);
+ pool_free (dl_pool, l);
}
-/* Find the dep_link with producer PRO in deps_list L. */
-dep_link_t
-find_link_by_pro_in_deps_list (deps_list_t l, rtx pro)
+/* Return true if there are no dep_nodes and deps_lists out there.
+ After the region is scheduled all the dependency nodes and lists
+ should [generally] be returned to the pool. */
+bool
+deps_pools_are_empty_p (void)
{
- dep_link_t link;
-
- FOR_EACH_DEP_LINK (link, l)
- if (DEP_LINK_PRO (link) == pro)
- return link;
-
- return NULL;
+ return dn_pool_diff == 0 && dl_pool_diff == 0;
}
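The pool-balance check can be illustrated in isolation. A minimal sketch, with plain malloc/free standing in for alloc_pool and a 'node_diff' counter mirroring dn_pool_diff (all names hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Count creations minus deletions.  After a region is finished the
   counter must be zero, proving every node went back to the pool; this is
   what lets the pools be reused for the next region.  */

static int node_diff = 0;

static void *
create_node (size_t size)
{
  ++node_diff;
  return malloc (size);
}

static void
delete_node (void *n)
{
  --node_diff;
  free (n);
}

/* Analogue of deps_pools_are_empty_p for this single pool.  */
static bool
pool_is_balanced_p (void)
{
  return node_diff == 0;
}
```

The same idea, with two counters, is what deps_pools_are_empty_p asserts before sched-rgn.c moves on to the next region.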
-/* Find the dep_link with consumer CON in deps_list L. */
-dep_link_t
-find_link_by_con_in_deps_list (deps_list_t l, rtx con)
-{
- dep_link_t link;
-
- FOR_EACH_DEP_LINK (link, l)
- if (DEP_LINK_CON (link) == con)
- return link;
-
- return NULL;
-}
-
-/* Make a copy of FROM in TO with substituting consumer with CON.
- TO and FROM should be RESOLVED_BACK_DEPS lists. */
-void
-copy_deps_list_change_con (deps_list_t to, deps_list_t from, rtx con)
+/* Remove all elements from L. */
+static void
+clear_deps_list (deps_list_t l)
{
- dep_link_t l;
+ do
+ {
+ dep_link_t link = DEPS_LIST_FIRST (l);
- gcc_assert (deps_list_empty_p (to));
+ if (link == NULL)
+ break;
- FOR_EACH_DEP_LINK (l, from)
- {
- add_back_dep_to_deps_list (to, DEP_LINK_DEP (l));
- DEP_LINK_CON (DEPS_LIST_FIRST (to)) = con;
+ remove_from_deps_list (link, l);
}
+ while (1);
}
static regset reg_pending_sets;
@@ -437,14 +418,6 @@ static bitmap_head *anti_dependency_cach
static bitmap_head *spec_dependency_cache;
static int cache_size;
-/* To speed up checking consistency of formed forward insn
- dependencies we use the following cache. Another possible solution
- could be switching off checking duplication of insns in forward
- dependencies. */
-#ifdef ENABLE_CHECKING
-static bitmap_head *forward_dependency_cache;
-#endif
-
static int deps_may_trap_p (rtx);
static void add_dependence_list (rtx, rtx, int, enum reg_note);
static void add_dependence_list_and_free (rtx, rtx *, int, enum reg_note);
@@ -459,19 +432,14 @@ static void sched_analyze_insn (struct d
static rtx sched_get_condition (rtx);
static int conditions_mutex_p (rtx, rtx);
-static enum DEPS_ADJUST_RESULT maybe_add_or_update_back_dep_1 (rtx, rtx,
- enum reg_note, ds_t, rtx, rtx, dep_link_t **);
-static enum DEPS_ADJUST_RESULT add_or_update_back_dep_1 (rtx, rtx,
- enum reg_note, ds_t, rtx, rtx, dep_link_t **);
-static void add_back_dep (rtx, rtx, enum reg_note, ds_t);
-
-static void adjust_add_sorted_back_dep (rtx, dep_link_t, dep_link_t *);
-static void adjust_back_add_forw_dep (rtx, dep_link_t *);
-static void delete_forw_dep (dep_link_t);
+static enum DEPS_ADJUST_RESULT maybe_add_or_update_dep_1 (dep_t, bool,
+ rtx, rtx);
+static enum DEPS_ADJUST_RESULT add_or_update_dep_1 (dep_t, bool, rtx, rtx);
+
static dw_t estimate_dep_weak (rtx, rtx);
#ifdef INSN_SCHEDULING
#ifdef ENABLE_CHECKING
-static void check_dep_status (enum reg_note, ds_t, bool);
+static void check_dep (dep_t, bool);
#endif
#endif
\f
@@ -567,23 +535,218 @@ sched_insns_conditions_mutex_p (rtx insn
return false;
}
\f
-/* Add ELEM wrapped in an dep_link with reg note kind DEP_TYPE to the
- INSN_BACK_DEPS (INSN), if it is not already there. DEP_TYPE indicates the
- type of dependence that this link represents. DS, if nonzero,
- indicates speculations, through which this dependence can be overcome.
- MEM1 and MEM2, if non-null, corresponds to memory locations in case of
- data speculation. The function returns a value indicating if an old entry
- has been changed or a new entry has been added to insn's backward deps.
- In case of changed entry CHANGED_LINKPP sets to its address.
- See also the definition of enum DEPS_ADJUST_RESULT in sched-int.h.
- Actual manipulation of dependence data structures is performed in
- add_or_update_back_dep_1. */
+/* Initialize LIST_PTR to point to one of the lists present in TYPES_PTR,
+ initialize RESOLVED_P_PTR with true if that list consists of resolved deps,
+ and remove the type of the list returned [through LIST_PTR] from TYPES_PTR.
+ This function is used to switch sd_iterator to the next list.
+ !!! For internal use only. Might consider moving it to sched-int.h. */
+void
+sd_next_list (rtx insn, sd_list_types_def *types_ptr,
+ deps_list_t *list_ptr, bool *resolved_p_ptr)
+{
+ sd_list_types_def types = *types_ptr;
+
+ if (types & SD_LIST_HARD_BACK)
+ {
+ *list_ptr = INSN_HARD_BACK_DEPS (insn);
+ *resolved_p_ptr = false;
+ *types_ptr = types & ~SD_LIST_HARD_BACK;
+ }
+ else if (types & SD_LIST_SPEC_BACK)
+ {
+ *list_ptr = INSN_SPEC_BACK_DEPS (insn);
+ *resolved_p_ptr = false;
+ *types_ptr = types & ~SD_LIST_SPEC_BACK;
+ }
+ else if (types & SD_LIST_FORW)
+ {
+ *list_ptr = INSN_FORW_DEPS (insn);
+ *resolved_p_ptr = false;
+ *types_ptr = types & ~SD_LIST_FORW;
+ }
+ else if (types & SD_LIST_RES_BACK)
+ {
+ *list_ptr = INSN_RESOLVED_BACK_DEPS (insn);
+ *resolved_p_ptr = true;
+ *types_ptr = types & ~SD_LIST_RES_BACK;
+ }
+ else if (types & SD_LIST_RES_FORW)
+ {
+ *list_ptr = INSN_RESOLVED_FORW_DEPS (insn);
+ *resolved_p_ptr = true;
+ *types_ptr = types & ~SD_LIST_RES_FORW;
+ }
+ else
+ {
+ *list_ptr = NULL;
+ *resolved_p_ptr = false;
+ *types_ptr = SD_LIST_NONE;
+ }
+}
+
+/* Return the total size of INSN's lists defined by LIST_TYPES. */
+int
+sd_lists_size (rtx insn, sd_list_types_def list_types)
+{
+ int size = 0;
+
+ while (list_types != SD_LIST_NONE)
+ {
+ deps_list_t list;
+ bool resolved_p;
+
+ sd_next_list (insn, &list_types, &list, &resolved_p);
+ size += DEPS_LIST_N_LINKS (list);
+ }
+
+ return size;
+}
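sd_next_list and sd_lists_size together implement a bitmask-driven walk over several lists: each call hands back one list and clears its bit until none remain. A standalone sketch of that protocol (hypothetical flag names and list sizes):

```c
#include <assert.h>

/* Three lists selected by bits, as SD_LIST_HARD_BACK etc. select the
   five dependence lists in the patch.  */
#define LIST_A 1
#define LIST_B 2
#define LIST_C 4

static const int list_sizes[3] = { 2, 0, 5 };	/* Sizes of lists A, B, C.  */

/* Return the size of one list selected from *TYPES and remove its bit,
   mirroring sd_next_list's protocol.  */
static int
next_list_size (int *types)
{
  int i;

  for (i = 0; i < 3; i++)
    if (*types & (1 << i))
      {
	*types &= ~(1 << i);
	return list_sizes[i];
      }

  return 0;
}

/* Sum the sizes of all lists named in TYPES, as sd_lists_size does;
   the loop terminates because every iteration clears one bit.  */
static int
lists_size (int types)
{
  int size = 0;

  while (types != 0)
    size += next_list_size (&types);

  return size;
}
```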
+
+/* Return true if INSN's lists defined by LIST_TYPES are all empty. */
+bool
+sd_lists_empty_p (rtx insn, sd_list_types_def list_types)
+{
+ return sd_lists_size (insn, list_types) == 0;
+}
+
+/* Initialize data for INSN. */
+void
+sd_init_insn (rtx insn)
+{
+ INSN_HARD_BACK_DEPS (insn) = create_deps_list ();
+ INSN_SPEC_BACK_DEPS (insn) = create_deps_list ();
+ INSN_RESOLVED_BACK_DEPS (insn) = create_deps_list ();
+ INSN_FORW_DEPS (insn) = create_deps_list ();
+ INSN_RESOLVED_FORW_DEPS (insn) = create_deps_list ();
+
+ /* ??? It would be nice to allocate dependency caches here. */
+}
+
+/* Free data for INSN. */
+void
+sd_finish_insn (rtx insn)
+{
+ /* ??? It would be nice to deallocate dependency caches here. */
+
+ free_deps_list (INSN_HARD_BACK_DEPS (insn));
+ INSN_HARD_BACK_DEPS (insn) = NULL;
+
+ free_deps_list (INSN_SPEC_BACK_DEPS (insn));
+ INSN_SPEC_BACK_DEPS (insn) = NULL;
+
+ free_deps_list (INSN_RESOLVED_BACK_DEPS (insn));
+ INSN_RESOLVED_BACK_DEPS (insn) = NULL;
+
+ free_deps_list (INSN_FORW_DEPS (insn));
+ INSN_FORW_DEPS (insn) = NULL;
+
+ free_deps_list (INSN_RESOLVED_FORW_DEPS (insn));
+ INSN_RESOLVED_FORW_DEPS (insn) = NULL;
+}
+
+/* Find a dependency between producer PRO and consumer CON.
+ Search through resolved dependency lists if RESOLVED_P is true.
+ If no such dependency is found, return NULL;
+ otherwise return the dependency and initialize SD_IT_PTR [if it is nonnull]
+ with an iterator pointing to it. */
+static dep_t
+sd_find_dep_between_no_cache (rtx pro, rtx con, bool resolved_p,
+ sd_iterator_def *sd_it_ptr)
+{
+ sd_list_types_def pro_list_type;
+ sd_list_types_def con_list_type;
+ sd_iterator_def sd_it;
+ dep_t dep;
+ bool found_p = false;
+
+ if (resolved_p)
+ {
+ pro_list_type = SD_LIST_RES_FORW;
+ con_list_type = SD_LIST_RES_BACK;
+ }
+ else
+ {
+ pro_list_type = SD_LIST_FORW;
+ con_list_type = SD_LIST_BACK;
+ }
+
+ /* Walk through either the back list of CON or the forw list of PRO,
+ depending on which one is shorter. */
+ if (sd_lists_size (con, con_list_type) < sd_lists_size (pro, pro_list_type))
+ {
+ /* Find the dep_link with producer PRO in consumer's back_deps. */
+ FOR_EACH_DEP (con, con_list_type, sd_it, dep)
+ if (DEP_PRO (dep) == pro)
+ {
+ found_p = true;
+ break;
+ }
+ }
+ else
+ {
+ /* Find the dep_link with consumer CON in producer's forw_deps. */
+ FOR_EACH_DEP (pro, pro_list_type, sd_it, dep)
+ if (DEP_CON (dep) == con)
+ {
+ found_p = true;
+ break;
+ }
+ }
+
+ if (found_p)
+ {
+ if (sd_it_ptr != NULL)
+ *sd_it_ptr = sd_it;
+
+ return dep;
+ }
+
+ return NULL;
+}
+
+/* Find a dependency between producer PRO and consumer CON.
+ Use dependency caches [if available] to check whether a dependency is
+ present at all.
+ Search through resolved dependency lists if RESOLVED_P is true.
+ Return the dependency, or NULL if none is found. */
+dep_t
+sd_find_dep_between (rtx pro, rtx con, bool resolved_p)
+{
+ if (true_dependency_cache != NULL)
+ /* Avoiding the list walk below can cut compile times dramatically
+ for some code. */
+ {
+ int elem_luid = INSN_LUID (pro);
+ int insn_luid = INSN_LUID (con);
+
+ gcc_assert (output_dependency_cache != NULL
+ && anti_dependency_cache != NULL);
+
+ if (!bitmap_bit_p (&true_dependency_cache[insn_luid], elem_luid)
+ && !bitmap_bit_p (&output_dependency_cache[insn_luid], elem_luid)
+ && !bitmap_bit_p (&anti_dependency_cache[insn_luid], elem_luid))
+ return NULL;
+ }
+
+ return sd_find_dep_between_no_cache (pro, con, resolved_p, NULL);
+}
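The cache query in sd_find_dep_between boils down to per-luid bit tests: one bit per (consumer, producer) pair and per dependence kind, and if no kind's bit is set the list walk is skipped. A minimal sketch with fixed-size bit arrays standing in for GCC's bitmap_head caches (sizes and names are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_LUID 64

/* One row per consumer luid; bit N of a row marks a dep from producer
   luid N, as true_dependency_cache[insn_luid] does in the patch.  */
static unsigned long long true_cache[MAX_LUID];
static unsigned long long output_cache[MAX_LUID];
static unsigned long long anti_cache[MAX_LUID];

static bool
bit_p (const unsigned long long *cache, int con_luid, int pro_luid)
{
  return (cache[con_luid] >> pro_luid) & 1;
}

/* Record an output dependence PRO_LUID -> CON_LUID.  */
static void
record_output_dep (int pro_luid, int con_luid)
{
  output_cache[con_luid] |= 1ULL << pro_luid;
}

/* Return false when the caches prove no dependence PRO_LUID -> CON_LUID
   exists, so the caller may skip searching the dependence lists.  */
static bool
may_have_dep_p (int pro_luid, int con_luid)
{
  return (bit_p (true_cache, con_luid, pro_luid)
	  || bit_p (output_cache, con_luid, pro_luid)
	  || bit_p (anti_cache, con_luid, pro_luid));
}
```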
+
+/* Add or update a dependence described by DEP.
+ MEM1 and MEM2, if non-null, correspond to memory locations in case of
+ data speculation.
+
+ The function returns a value indicating if an old entry has been changed
+ or a new entry has been added to insn's backward deps.
+
+ This function merely checks whether producer and consumer are the same insn
+ and doesn't create a dep in this case. Actual manipulation of
+ dependence data structures is performed in add_or_update_dep_1. */
static enum DEPS_ADJUST_RESULT
-maybe_add_or_update_back_dep_1 (rtx insn, rtx elem, enum reg_note dep_type,
- ds_t ds, rtx mem1, rtx mem2,
- dep_link_t **changed_linkpp)
+maybe_add_or_update_dep_1 (dep_t dep, bool resolved_p, rtx mem1, rtx mem2)
{
+ rtx elem = DEP_PRO (dep);
+ rtx insn = DEP_CON (dep);
+
gcc_assert (INSN_P (insn) && INSN_P (elem));
/* Don't depend an insn on itself. */
@@ -594,319 +757,543 @@ maybe_add_or_update_back_dep_1 (rtx insn
/* INSN has an internal dependence, which we can't overcome. */
HAS_INTERNAL_DEP (insn) = 1;
#endif
- return 0;
+
+ return DEP_NODEP;
}
- return add_or_update_back_dep_1 (insn, elem, dep_type,
- ds, mem1, mem2, changed_linkpp);
+ return add_or_update_dep_1 (dep, resolved_p, mem1, mem2);
}
-/* This function has the same meaning of parameters and return values
- as maybe_add_or_update_back_dep_1. The only difference between these
- two functions is that INSN and ELEM are guaranteed not to be the same
- in this one. */
+#ifdef INSN_SCHEDULING
+/* Ask dependency caches what needs to be done for dependence DEP.
+ Return DEP_CREATED if new dependence should be created and there is no
+ need to try to find one searching the dependencies lists.
+ Return DEP_PRESENT if there already is a dependence described by DEP and
+ hence nothing is to be done.
+ Return DEP_CHANGED if there already is a dependence, but it should be
+ updated to incorporate additional information from DEP. */
static enum DEPS_ADJUST_RESULT
-add_or_update_back_dep_1 (rtx insn, rtx elem, enum reg_note dep_type,
- ds_t ds ATTRIBUTE_UNUSED,
- rtx mem1 ATTRIBUTE_UNUSED, rtx mem2 ATTRIBUTE_UNUSED,
- dep_link_t **changed_linkpp ATTRIBUTE_UNUSED)
+ask_dependency_caches (dep_t dep)
{
- bool maybe_present_p = true, present_p = false;
+ int elem_luid = INSN_LUID (DEP_PRO (dep));
+ int insn_luid = INSN_LUID (DEP_CON (dep));
- gcc_assert (INSN_P (insn) && INSN_P (elem) && insn != elem);
-
-#ifdef INSN_SCHEDULING
-
-#ifdef ENABLE_CHECKING
- check_dep_status (dep_type, ds, mem1 != NULL);
-#endif
+ gcc_assert (true_dependency_cache != NULL
+ && output_dependency_cache != NULL
+ && anti_dependency_cache != NULL);
- /* If we already have a dependency for ELEM, then we do not need to
- do anything. Avoiding the list walk below can cut compile times
- dramatically for some code. */
- if (true_dependency_cache != NULL)
- {
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
+ {
enum reg_note present_dep_type;
-
- gcc_assert (output_dependency_cache);
- gcc_assert (anti_dependency_cache);
- if (!(current_sched_info->flags & USE_DEPS_LIST))
- {
- if (bitmap_bit_p (&true_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_type = REG_DEP_TRUE;
- else if (bitmap_bit_p (&output_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_type = REG_DEP_OUTPUT;
- else if (bitmap_bit_p (&anti_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_type = REG_DEP_ANTI;
- else
- maybe_present_p = false;
- if (maybe_present_p)
- {
- if ((int) dep_type >= (int) present_dep_type)
- return DEP_PRESENT;
-
- present_p = true;
- }
- }
+ if (bitmap_bit_p (&true_dependency_cache[insn_luid], elem_luid))
+ present_dep_type = REG_DEP_TRUE;
+ else if (bitmap_bit_p (&output_dependency_cache[insn_luid], elem_luid))
+ present_dep_type = REG_DEP_OUTPUT;
+ else if (bitmap_bit_p (&anti_dependency_cache[insn_luid], elem_luid))
+ present_dep_type = REG_DEP_ANTI;
else
- {
- ds_t present_dep_types = 0;
+ /* There is no existing dep so it should be created. */
+ return DEP_CREATED;
+
+ if ((int) DEP_TYPE (dep) >= (int) present_dep_type)
+ /* DEP does not add anything to the existing dependence. */
+ return DEP_PRESENT;
+ }
+ else
+ {
+ ds_t present_dep_types = 0;
- if (bitmap_bit_p (&true_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_types |= DEP_TRUE;
- if (bitmap_bit_p (&output_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_types |= DEP_OUTPUT;
- if (bitmap_bit_p (&anti_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
- present_dep_types |= DEP_ANTI;
+ if (bitmap_bit_p (&true_dependency_cache[insn_luid], elem_luid))
+ present_dep_types |= DEP_TRUE;
+ if (bitmap_bit_p (&output_dependency_cache[insn_luid], elem_luid))
+ present_dep_types |= DEP_OUTPUT;
+ if (bitmap_bit_p (&anti_dependency_cache[insn_luid], elem_luid))
+ present_dep_types |= DEP_ANTI;
+
+ if (present_dep_types == 0)
+ /* There is no existing dep so it should be created. */
+ return DEP_CREATED;
+
+ if (!(current_sched_info->flags & DO_SPECULATION)
+ || !bitmap_bit_p (&spec_dependency_cache[insn_luid], elem_luid))
+ {
+ if ((present_dep_types | (DEP_STATUS (dep) & DEP_TYPES))
+ == present_dep_types)
+ /* DEP does not add anything to the existing dependence. */
+ return DEP_PRESENT;
+ }
+ else
+ {
+ /* Only true dependencies can be data speculative and
+ only anti dependencies can be control speculative. */
+ gcc_assert ((present_dep_types & (DEP_TRUE | DEP_ANTI))
+ == present_dep_types);
+
+ /* if (DEP is SPECULATIVE) then
+ ..we should update DEP_STATUS
+ else
+ ..we should reset existing dep to non-speculative. */
+ }
+ }
+
+ return DEP_CHANGED;
+}
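For the non-USE_DEPS_LIST case above, the decision reduces to comparing the new dependence type against the cached one. A sketch of just that decision (hypothetical integer encoding mirroring the order REG_DEP_TRUE < REG_DEP_OUTPUT < REG_DEP_ANTI, where a numerically smaller type is stronger):

```c
#include <assert.h>

enum adjust_result { RES_CREATED, RES_PRESENT, RES_CHANGED };

/* NEW_TYPE and CACHED: 0 = true, 1 = output, 2 = anti; CACHED == -1 means
   no dependence is recorded in the caches.  A new dep that is not
   stronger than the cached one adds nothing, matching the
   '(int) DEP_TYPE (dep) >= (int) present_dep_type' test above.  */
static enum adjust_result
ask_cache (int new_type, int cached)
{
  if (cached < 0)
    /* There is no existing dep, so it should be created.  */
    return RES_CREATED;

  if (new_type >= cached)
    /* The new dep adds nothing to the existing dependence.  */
    return RES_PRESENT;

  /* The existing dependence must be strengthened.  */
  return RES_CHANGED;
}
```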
+
+/* Set dependency caches according to DEP. */
+static void
+set_dependency_caches (dep_t dep)
+{
+ int elem_luid = INSN_LUID (DEP_PRO (dep));
+ int insn_luid = INSN_LUID (DEP_CON (dep));
+
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
+ {
+ switch (DEP_TYPE (dep))
+ {
+ case REG_DEP_TRUE:
+ bitmap_set_bit (&true_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ case REG_DEP_OUTPUT:
+ bitmap_set_bit (&output_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ case REG_DEP_ANTI:
+ bitmap_set_bit (&anti_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+ }
+ else
+ {
+ ds_t ds = DEP_STATUS (dep);
- if (present_dep_types)
+ if (ds & DEP_TRUE)
+ bitmap_set_bit (&true_dependency_cache[insn_luid], elem_luid);
+ if (ds & DEP_OUTPUT)
+ bitmap_set_bit (&output_dependency_cache[insn_luid], elem_luid);
+ if (ds & DEP_ANTI)
+ bitmap_set_bit (&anti_dependency_cache[insn_luid], elem_luid);
+
+ if (ds & SPECULATIVE)
+ {
+ gcc_assert (current_sched_info->flags & DO_SPECULATION);
+ bitmap_set_bit (&spec_dependency_cache[insn_luid], elem_luid);
+ }
+ }
+}
+
+/* The type of dependence DEP has changed from OLD_TYPE.  Update the
+   dependency caches accordingly.  */
+static void
+update_dependency_caches (dep_t dep, enum reg_note old_type)
+{
+ int elem_luid = INSN_LUID (DEP_PRO (dep));
+ int insn_luid = INSN_LUID (DEP_CON (dep));
+
+ /* Clear the corresponding cache entry because the type of the link
+ may have changed.  Keep the entries if we USE_DEPS_LIST.  */
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
+ {
+ switch (old_type)
+ {
+ case REG_DEP_OUTPUT:
+ bitmap_clear_bit (&output_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ case REG_DEP_ANTI:
+ bitmap_clear_bit (&anti_dependency_cache[insn_luid], elem_luid);
+ break;
+
+ default:
+ gcc_unreachable ();
+ }
+ }
+
+ set_dependency_caches (dep);
+}
+
+/* Convert a dependence pointed to by SD_IT to be non-speculative. */
+static void
+change_spec_dep_to_hard (sd_iterator_def sd_it)
+{
+ dep_node_t node = DEP_LINK_NODE (*sd_it.linkp);
+ dep_link_t link = DEP_NODE_BACK (node);
+ dep_t dep = DEP_NODE_DEP (node);
+ rtx elem = DEP_PRO (dep);
+ rtx insn = DEP_CON (dep);
+
+ move_dep_link (link, INSN_SPEC_BACK_DEPS (insn), INSN_HARD_BACK_DEPS (insn));
+
+ DEP_STATUS (dep) &= ~SPECULATIVE;
+
+ if (true_dependency_cache != NULL)
+ /* Clear the cache entry. */
+ bitmap_clear_bit (&spec_dependency_cache[INSN_LUID (insn)],
+ INSN_LUID (elem));
+}
+#endif
+
+/* Update DEP to incorporate information from NEW_DEP.
+ SD_IT points to DEP in case it should be moved to another list.
+ MEM1 and MEM2, if nonnull, correspond to memory locations in case a
+ data-speculative dependence should be updated.  */
+static enum DEPS_ADJUST_RESULT
+update_dep (dep_t dep, dep_t new_dep,
+ sd_iterator_def sd_it, rtx mem1, rtx mem2)
+{
+ enum DEPS_ADJUST_RESULT res = DEP_PRESENT;
+ enum reg_note old_type = DEP_TYPE (dep);
+
+ /* If this is a more restrictive type of dependence than the
+ existing one, then change the existing dependence to this
+ type. */
+ if ((int) DEP_TYPE (new_dep) < (int) old_type)
+ {
+ DEP_TYPE (dep) = DEP_TYPE (new_dep);
+ res = DEP_CHANGED;
+ }
+
+#ifdef INSN_SCHEDULING
+ if (current_sched_info->flags & USE_DEPS_LIST)
+ /* Update DEP_STATUS. */
+ {
+ ds_t dep_status = DEP_STATUS (dep);
+ ds_t ds = DEP_STATUS (new_dep);
+ ds_t new_status = ds | dep_status;
+
+ if (new_status & SPECULATIVE)
+ /* Either the existing dep, the dep we're adding, or both are
+ speculative.  */
+ {
+ if (!(ds & SPECULATIVE)
+ || !(dep_status & SPECULATIVE))
+ /* The merged dep can't be speculative.  */
+ {
+ new_status &= ~SPECULATIVE;
+
+ if (dep_status & SPECULATIVE)
+ /* The old dep was speculative, but now it
+ isn't. */
+ change_spec_dep_to_hard (sd_it);
+ }
+ else
{
- if (!(current_sched_info->flags & DO_SPECULATION)
- || !bitmap_bit_p (&spec_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem)))
+ /* Both are speculative. Merge probabilities. */
+ if (mem1 != NULL)
{
- if ((present_dep_types | (ds & DEP_TYPES))
- == present_dep_types)
- /* We already have all these bits. */
- return DEP_PRESENT;
- }
- else
- {
- /* Only true dependencies can be data speculative and
- only anti dependencies can be control speculative. */
- gcc_assert ((present_dep_types & (DEP_TRUE | DEP_ANTI))
- == present_dep_types);
-
- /* if (additional dep is SPECULATIVE) then
- we should update DEP_STATUS
- else
- we should reset existing dep to non-speculative. */
+ dw_t dw;
+
+ dw = estimate_dep_weak (mem1, mem2);
+ ds = set_dep_weak (ds, BEGIN_DATA, dw);
}
-
- present_p = true;
+
+ new_status = ds_merge (dep_status, ds);
}
- else
- maybe_present_p = false;
- }
+ }
+
+ ds = new_status;
+
+ if (dep_status != ds)
+ {
+ DEP_STATUS (dep) = ds;
+ res = DEP_CHANGED;
+ }
+ }
+
+ if (true_dependency_cache != NULL
+ && res == DEP_CHANGED)
+ update_dependency_caches (dep, old_type);
+#endif
+
+ return res;
+}
+
+/* Add or update a dependence described by DEP.
+ MEM1 and MEM2, if non-null, correspond to memory locations in case of
+ data speculation.
+
+ The function returns a value indicating whether an old entry was
+ changed, a new entry was added to the insn's backward deps, or
+ nothing was updated at all.  */
+static enum DEPS_ADJUST_RESULT
+add_or_update_dep_1 (dep_t new_dep, bool resolved_p,
+ rtx mem1 ATTRIBUTE_UNUSED, rtx mem2 ATTRIBUTE_UNUSED)
+{
+ bool maybe_present_p = true;
+ bool present_p = false;
+
+ gcc_assert (INSN_P (DEP_PRO (new_dep)) && INSN_P (DEP_CON (new_dep))
+ && DEP_PRO (new_dep) != DEP_CON (new_dep));
+
+#ifdef INSN_SCHEDULING
+
+#ifdef ENABLE_CHECKING
+ check_dep (new_dep, mem1 != NULL);
+#endif
+
+ if (true_dependency_cache != NULL)
+ {
+ switch (ask_dependency_caches (new_dep))
+ {
+ case DEP_PRESENT:
+ return DEP_PRESENT;
+
+ case DEP_CHANGED:
+ maybe_present_p = true;
+ present_p = true;
+ break;
+
+ case DEP_CREATED:
+ maybe_present_p = false;
+ present_p = false;
+ break;
+
+ default:
+ gcc_unreachable ();
+ break;
+ }
}
#endif
/* Check that we don't already have this dependence. */
if (maybe_present_p)
{
- dep_link_t *linkp;
-
- for (linkp = &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
- *linkp != NULL;
- linkp = &DEP_LINK_NEXT (*linkp))
- {
- dep_t link = DEP_LINK_DEP (*linkp);
-
- gcc_assert (true_dependency_cache == 0 || present_p);
-
- if (DEP_PRO (link) == elem)
- {
- enum DEPS_ADJUST_RESULT changed_p = DEP_PRESENT;
-
-#ifdef INSN_SCHEDULING
- if (current_sched_info->flags & USE_DEPS_LIST)
- {
- ds_t new_status = ds | DEP_STATUS (link);
-
- if (new_status & SPECULATIVE)
- {
- if (!(ds & SPECULATIVE)
- || !(DEP_STATUS (link) & SPECULATIVE))
- /* Then this dep can't be speculative. */
- {
- new_status &= ~SPECULATIVE;
- if (true_dependency_cache
- && (DEP_STATUS (link) & SPECULATIVE))
- bitmap_clear_bit (&spec_dependency_cache
- [INSN_LUID (insn)],
- INSN_LUID (elem));
- }
- else
- {
- /* Both are speculative. Merging probabilities. */
- if (mem1)
- {
- dw_t dw;
-
- dw = estimate_dep_weak (mem1, mem2);
- ds = set_dep_weak (ds, BEGIN_DATA, dw);
- }
-
- new_status = ds_merge (DEP_STATUS (link), ds);
- }
- }
-
- ds = new_status;
- }
-
- /* Clear corresponding cache entry because type of the link
- may have changed. Keep them if we use_deps_list. */
- if (true_dependency_cache != NULL
- && !(current_sched_info->flags & USE_DEPS_LIST))
- {
- enum reg_note kind = DEP_KIND (link);
-
- switch (kind)
- {
- case REG_DEP_OUTPUT:
- bitmap_clear_bit (&output_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- break;
- case REG_DEP_ANTI:
- bitmap_clear_bit (&anti_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- break;
- default:
- gcc_unreachable ();
- }
- }
-
- if ((current_sched_info->flags & USE_DEPS_LIST)
- && DEP_STATUS (link) != ds)
- {
- DEP_STATUS (link) = ds;
- changed_p = DEP_CHANGED;
- }
-#endif
+ dep_t present_dep;
+ sd_iterator_def sd_it;
- /* If this is a more restrictive type of dependence than the
- existing one, then change the existing dependence to this
- type. */
- if ((int) dep_type < (int) DEP_KIND (link))
- {
- DEP_KIND (link) = dep_type;
- changed_p = DEP_CHANGED;
- }
+ gcc_assert (true_dependency_cache == NULL || present_p);
-#ifdef INSN_SCHEDULING
- /* If we are adding a dependency to INSN's LOG_LINKs, then
- note that in the bitmap caches of dependency information. */
- if (true_dependency_cache != NULL)
- {
- if (!(current_sched_info->flags & USE_DEPS_LIST))
- {
- if (DEP_KIND (link) == REG_DEP_TRUE)
- bitmap_set_bit (&true_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- else if (DEP_KIND (link) == REG_DEP_OUTPUT)
- bitmap_set_bit (&output_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- else if (DEP_KIND (link) == REG_DEP_ANTI)
- bitmap_set_bit (&anti_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- }
- else
- {
- if (ds & DEP_TRUE)
- bitmap_set_bit (&true_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- if (ds & DEP_OUTPUT)
- bitmap_set_bit (&output_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- if (ds & DEP_ANTI)
- bitmap_set_bit (&anti_dependency_cache
- [INSN_LUID (insn)], INSN_LUID (elem));
- /* Note, that dep can become speculative only
- at the moment of creation. Thus, we don't need to
- check for it here. */
- }
- }
-
- if (changed_linkpp && changed_p == DEP_CHANGED)
- *changed_linkpp = linkp;
-#endif
- return changed_p;
- }
- }
- /* We didn't find a dep. It shouldn't be present in the cache. */
- gcc_assert (!present_p);
+ present_dep = sd_find_dep_between_no_cache (DEP_PRO (new_dep),
+ DEP_CON (new_dep),
+ resolved_p, &sd_it);
+
+ if (present_dep != NULL)
+ /* We found an existing dependency between ELEM and INSN. */
+ return update_dep (present_dep, new_dep, sd_it, mem1, mem2);
+ else
+ /* We didn't find a dep; it shouldn't be present in the cache.  */
+ gcc_assert (!present_p);
}
/* Might want to check one level of transitivity to save conses.
- This check should be done in maybe_add_or_update_back_dep_1.
- Since we made it to add_or_update_back_dep_1, we must create
+ This check should be done in maybe_add_or_update_dep_1.
+ Since we made it to add_or_update_dep_1, we must create
(or update) a link. */
- if (mem1)
+ if (mem1 != NULL_RTX)
{
gcc_assert (current_sched_info->flags & DO_SPECULATION);
- ds = set_dep_weak (ds, BEGIN_DATA, estimate_dep_weak (mem1, mem2));
+ DEP_STATUS (new_dep) = set_dep_weak (DEP_STATUS (new_dep), BEGIN_DATA,
+ estimate_dep_weak (mem1, mem2));
}
-
- add_back_dep (insn, elem, dep_type, ds);
+
+ sd_add_dep (new_dep, resolved_p);
return DEP_CREATED;
}
-/* This function creates a link between INSN and ELEM under any
- conditions. DS describes speculative status of the link. */
+/* Initialize BACK_LIST_PTR with the consumer's backward list and
+ FORW_LIST_PTR with the producer's forward list.  If RESOLVED_P is
+ true, initialize them with the lists that hold resolved deps.  */
static void
-add_back_dep (rtx insn, rtx elem, enum reg_note dep_type, ds_t ds)
+get_back_and_forw_lists (dep_t dep, bool resolved_p,
+ deps_list_t *back_list_ptr,
+ deps_list_t *forw_list_ptr)
{
- struct _dep _dep, *dep = &_dep;
+ rtx con = DEP_CON (dep);
- gcc_assert (INSN_P (insn) && INSN_P (elem) && insn != elem);
+ if (!resolved_p)
+ {
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && (DEP_STATUS (dep) & SPECULATIVE))
+ *back_list_ptr = INSN_SPEC_BACK_DEPS (con);
+ else
+ *back_list_ptr = INSN_HARD_BACK_DEPS (con);
- if (current_sched_info->flags & USE_DEPS_LIST)
- init_dep_1 (dep, elem, insn, dep_type, ds);
+ *forw_list_ptr = INSN_FORW_DEPS (DEP_PRO (dep));
+ }
else
- init_dep_1 (dep, elem, insn, dep_type, -1);
+ {
+ *back_list_ptr = INSN_RESOLVED_BACK_DEPS (con);
+ *forw_list_ptr = INSN_RESOLVED_FORW_DEPS (DEP_PRO (dep));
+ }
+}
+
+/* Add dependence described by DEP.
+ If RESOLVED_P is true treat the dependence as a resolved one. */
+void
+sd_add_dep (dep_t dep, bool resolved_p)
+{
+ dep_node_t n = create_dep_node ();
+ deps_list_t con_back_deps;
+ deps_list_t pro_forw_deps;
+ rtx elem = DEP_PRO (dep);
+ rtx insn = DEP_CON (dep);
- add_back_dep_to_deps_list (INSN_BACK_DEPS (insn), dep);
+ gcc_assert (INSN_P (insn) && INSN_P (elem) && insn != elem);
+
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && !sched_insn_is_legitimate_for_speculation_p (insn, DEP_STATUS (dep)))
+ DEP_STATUS (dep) &= ~SPECULATIVE;
+
+ copy_dep (DEP_NODE_DEP (n), dep);
+
+ get_back_and_forw_lists (dep, resolved_p, &con_back_deps, &pro_forw_deps);
+
+ add_to_deps_list (DEP_NODE_BACK (n), con_back_deps);
#ifdef INSN_SCHEDULING
#ifdef ENABLE_CHECKING
- check_dep_status (dep_type, ds, false);
+ check_dep (dep, false);
#endif
+ add_to_deps_list (DEP_NODE_FORW (n), pro_forw_deps);
+
/* If we are adding a dependency to INSN's LOG_LINKs, then note that
in the bitmap caches of dependency information. */
if (true_dependency_cache != NULL)
+ set_dependency_caches (dep);
+#endif
+}
+
+/* Add or update a dependence described by DEP.  If RESOLVED_P is true,
+ treat the dependence as a resolved one.
+ This function is a convenience wrapper.  */
+enum DEPS_ADJUST_RESULT
+sd_add_or_update_dep (dep_t dep, bool resolved_p)
+{
+ return add_or_update_dep_1 (dep, resolved_p, NULL_RTX, NULL_RTX);
+}
+
+/* Resolve the dependence pointed to by SD_IT.
+ SD_IT will advance to the next element. */
+void
+sd_resolve_dep (sd_iterator_def sd_it)
+{
+ dep_node_t node = DEP_LINK_NODE (*sd_it.linkp);
+ dep_t dep = DEP_NODE_DEP (node);
+ rtx pro = DEP_PRO (dep);
+ rtx con = DEP_CON (dep);
+
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && (DEP_STATUS (dep) & SPECULATIVE))
+ move_dep_link (DEP_NODE_BACK (node), INSN_SPEC_BACK_DEPS (con),
+ INSN_RESOLVED_BACK_DEPS (con));
+ else
+ move_dep_link (DEP_NODE_BACK (node), INSN_HARD_BACK_DEPS (con),
+ INSN_RESOLVED_BACK_DEPS (con));
+
+ move_dep_link (DEP_NODE_FORW (node), INSN_FORW_DEPS (pro),
+ INSN_RESOLVED_FORW_DEPS (pro));
+}
+
+/* Make TO depend on all of FROM's producers.
+ If RESOLVED_P is true add dependencies to the resolved lists. */
+void
+sd_copy_back_deps (rtx to, rtx from, bool resolved_p)
+{
+ sd_list_types_def list_type;
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ list_type = resolved_p ? SD_LIST_RES_BACK : SD_LIST_BACK;
+
+ FOR_EACH_DEP (from, list_type, sd_it, dep)
{
- if (!(current_sched_info->flags & USE_DEPS_LIST))
- {
- if (dep_type == REG_DEP_TRUE)
- bitmap_set_bit (&true_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- else if (dep_type == REG_DEP_OUTPUT)
- bitmap_set_bit (&output_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- else if (dep_type == REG_DEP_ANTI)
- bitmap_set_bit (&anti_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- }
- else
- {
- if (ds & DEP_TRUE)
- bitmap_set_bit (&true_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- if (ds & DEP_OUTPUT)
- bitmap_set_bit (&output_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- if (ds & DEP_ANTI)
- bitmap_set_bit (&anti_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- if (ds & SPECULATIVE)
- {
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
- bitmap_set_bit (&spec_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (elem));
- }
- }
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ copy_dep (new_dep, dep);
+ DEP_CON (new_dep) = to;
+ sd_add_dep (new_dep, resolved_p);
}
-#endif
+}
+
+/* Remove a dependency referred to by SD_IT.
+ SD_IT will point to the next dependence after removal. */
+void
+sd_delete_dep (sd_iterator_def sd_it)
+{
+ dep_node_t n = DEP_LINK_NODE (*sd_it.linkp);
+ dep_t dep = DEP_NODE_DEP (n);
+ rtx pro = DEP_PRO (dep);
+ rtx con = DEP_CON (dep);
+ deps_list_t con_back_deps;
+ deps_list_t pro_forw_deps;
+
+ if (true_dependency_cache != NULL)
+ {
+ int elem_luid = INSN_LUID (pro);
+ int insn_luid = INSN_LUID (con);
+
+ bitmap_clear_bit (&true_dependency_cache[insn_luid], elem_luid);
+ bitmap_clear_bit (&anti_dependency_cache[insn_luid], elem_luid);
+ bitmap_clear_bit (&output_dependency_cache[insn_luid], elem_luid);
+
+ if (current_sched_info->flags & DO_SPECULATION)
+ bitmap_clear_bit (&spec_dependency_cache[insn_luid], elem_luid);
+ }
+
+ get_back_and_forw_lists (dep, sd_it.resolved_p,
+ &con_back_deps, &pro_forw_deps);
+
+ remove_from_deps_list (DEP_NODE_BACK (n), con_back_deps);
+ remove_from_deps_list (DEP_NODE_FORW (n), pro_forw_deps);
+
+ delete_dep_node (n);
+}
+
+/* Dump size of the lists. */
+#define DUMP_LISTS_SIZE (2)
+
+/* Dump dependencies of the lists. */
+#define DUMP_LISTS_DEPS (4)
+
+/* Dump all information about the lists. */
+#define DUMP_LISTS_ALL (DUMP_LISTS_SIZE | DUMP_LISTS_DEPS)
+
+/* Dump deps_lists of INSN specified by TYPES to DUMP.
+ FLAGS is a bit mask specifying what information about the lists needs
+ to be printed.
+ If FLAGS has the very first bit set, then dump all information about
+ the lists and propagate this bit into the callee dump functions. */
+static void
+dump_lists (FILE *dump, rtx insn, sd_list_types_def types, int flags)
+{
+ sd_iterator_def sd_it;
+ dep_t dep;
+ int all;
+
+ all = (flags & 1);
+
+ if (all)
+ flags |= DUMP_LISTS_ALL;
+
+ fprintf (dump, "[");
+
+ if (flags & DUMP_LISTS_SIZE)
+ fprintf (dump, "%d; ", sd_lists_size (insn, types));
+
+ if (flags & DUMP_LISTS_DEPS)
+ {
+ FOR_EACH_DEP (insn, types, sd_it, dep)
+ {
+ dump_dep (dump, dep, dump_dep_flags | all);
+ fprintf (dump, " ");
+ }
+ }
+
+ fprintf (dump, "]");
+}
+
+/* Dump all information about deps_lists of INSN specified by TYPES
+ to STDERR. */
+void
+sd_debug_lists (rtx insn, sd_list_types_def types)
+{
+ dump_lists (stderr, insn, types, 1);
+ fprintf (stderr, "\n");
}
/* A convenience wrapper to operate on an entire list. */
@@ -938,26 +1325,19 @@ add_dependence_list_and_free (rtx insn,
}
/* Clear all dependencies for an insn. */
-
static void
delete_all_dependences (rtx insn)
{
- /* Clear caches, if they exist, as well as free the dependence. */
-
-#ifdef INSN_SCHEDULING
- if (true_dependency_cache != NULL)
- {
- bitmap_clear (&true_dependency_cache[INSN_LUID (insn)]);
- bitmap_clear (&output_dependency_cache[INSN_LUID (insn)]);
- bitmap_clear (&anti_dependency_cache[INSN_LUID (insn)]);
- /* We don't have to clear forward_dependency_cache here,
- because it is formed later. */
- if (current_sched_info->flags & DO_SPECULATION)
- bitmap_clear (&spec_dependency_cache[INSN_LUID (insn)]);
- }
-#endif
+ sd_iterator_def sd_it;
+ dep_t dep;
- clear_deps_list (INSN_BACK_DEPS (insn));
+ /* The loop below could be optimized to clear the caches and
+ back_deps in one pass, but that would duplicate code from
+ sd_delete_dep ().  */
+
+ for (sd_it = sd_iterator_start (insn, SD_LIST_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
+ sd_delete_dep (sd_it);
}
/* All insns in a scheduling group except the first should only have
@@ -969,13 +1349,13 @@ delete_all_dependences (rtx insn)
static void
fixup_sched_groups (rtx insn)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
rtx prev_nonnote;
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
{
rtx i = insn;
- dep_t dep = DEP_LINK_DEP (link);
rtx pro = DEP_PRO (dep);
do
@@ -987,7 +1367,7 @@ fixup_sched_groups (rtx insn)
} while (SCHED_GROUP_P (i));
if (! sched_insns_conditions_mutex_p (i, pro))
- add_dependence (i, pro, DEP_KIND (dep));
+ add_dependence (i, pro, DEP_TYPE (dep));
next_link:;
}
@@ -1373,11 +1753,19 @@ sched_analyze_2 (struct deps *deps, rtx
t, rtx_varies_p)
&& ! sched_insns_conditions_mutex_p (insn, XEXP (pending, 0)))
{
- if (current_sched_info->flags & DO_SPECULATION)
- maybe_add_or_update_back_dep_1 (insn, XEXP (pending, 0),
- REG_DEP_TRUE,
- BEGIN_DATA | DEP_TRUE,
- XEXP (pending_mem, 0), t, 0);
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && (spec_info->mask & BEGIN_DATA))
+ /* Create a data-speculative dependence between producer
+ and consumer. */
+ {
+ dep_def _dep, *dep = &_dep;
+
+ init_dep_1 (dep, XEXP (pending, 0), insn, REG_DEP_TRUE,
+ BEGIN_DATA | DEP_TRUE);
+
+ maybe_add_or_update_dep_1 (dep, false,
+ XEXP (pending_mem, 0), t);
+ }
else
add_dependence (insn, XEXP (pending, 0), REG_DEP_TRUE);
}
@@ -1810,6 +2198,19 @@ sched_analyze_insn (struct deps *deps, r
/* Fixup the dependencies in the sched group. */
if (SCHED_GROUP_P (insn))
fixup_sched_groups (insn);
+
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && !sched_insn_is_legitimate_for_speculation_p (insn, 0))
+ /* INSN has an internal dependency (e.g. r14 = [r14]) and thus cannot
+ be speculated. */
+ {
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
+ change_spec_dep_to_hard (sd_it);
+ }
}
/* Analyze every insn between HEAD and TAIL inclusive, creating backward
@@ -1838,13 +2239,8 @@ sched_analyze (struct deps *deps, rtx he
if (INSN_P (insn))
{
- /* These two lists will be freed in schedule_insn (). */
- INSN_BACK_DEPS (insn) = create_deps_list (false);
- INSN_RESOLVED_BACK_DEPS (insn) = create_deps_list (false);
-
- /* This one should be allocated on the obstack because it should live
- till the scheduling ends. */
- INSN_FORW_DEPS (insn) = create_deps_list (true);
+ /* Initialize INSN's deps_lists.  */
+ sd_init_insn (insn);
}
if (NONJUMP_INSN_P (insn) || JUMP_P (insn))
@@ -1986,94 +2382,58 @@ sched_analyze (struct deps *deps, rtx he
}
gcc_unreachable ();
}
-\f
-/* The following function adds forward dependence (FROM, TO) with
- given DEP_TYPE. The forward dependence should be not exist before. */
-
-void
-add_forw_dep (dep_link_t link)
+/* Helper for sched_free_deps ().
+ Delete INSN's (RESOLVED_P) backward dependencies. */
+static void
+delete_dep_nodes_in_back_deps (rtx insn, bool resolved_p)
{
- dep_t dep = DEP_LINK_DEP (link);
- rtx to = DEP_CON (dep);
- rtx from = DEP_PRO (dep);
-
-#ifdef ENABLE_CHECKING
- /* If add_dependence is working properly there should never
- be notes, deleted insns or duplicates in the backward
- links. Thus we need not check for them here.
-
- However, if we have enabled checking we might as well go
- ahead and verify that add_dependence worked properly. */
- gcc_assert (INSN_P (from));
- gcc_assert (!INSN_DELETED_P (from));
- if (true_dependency_cache)
- {
- gcc_assert (!bitmap_bit_p (&forward_dependency_cache[INSN_LUID (from)],
- INSN_LUID (to)));
- bitmap_set_bit (&forward_dependency_cache[INSN_LUID (from)],
- INSN_LUID (to));
- }
+ sd_iterator_def sd_it;
+ dep_t dep;
+ sd_list_types_def types;
- gcc_assert (find_link_by_con_in_deps_list (INSN_FORW_DEPS (from), to)
- == NULL);
-#endif
+ if (resolved_p)
+ types = SD_LIST_RES_BACK;
+ else
+ types = SD_LIST_BACK;
- add_to_deps_list (DEP_NODE_FORW (DEP_LINK_NODE (link)),
- INSN_FORW_DEPS (from));
+ for (sd_it = sd_iterator_start (insn, types);
+ sd_iterator_cond (&sd_it, &dep);)
+ {
+ dep_link_t link = *sd_it.linkp;
+ dep_node_t node = DEP_LINK_NODE (link);
+ deps_list_t back_list;
+ deps_list_t forw_list;
- INSN_DEP_COUNT (to) += 1;
+ get_back_and_forw_lists (dep, resolved_p, &back_list, &forw_list);
+ remove_from_deps_list (link, back_list);
+ delete_dep_node (node);
+ }
}
-/* Examine insns in the range [ HEAD, TAIL ] and Use the backward
- dependences from INSN_BACK_DEPS list to build forward dependences in
- INSN_FORW_DEPS. */
-
+/* Delete (RESOLVED_P) dependencies between HEAD and TAIL together with
+ deps_lists. */
void
-compute_forward_dependences (rtx head, rtx tail)
+sched_free_deps (rtx head, rtx tail, bool resolved_p)
{
rtx insn;
- rtx next_tail;
+ rtx next_tail = NEXT_INSN (tail);
- next_tail = NEXT_INSN (tail);
for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
- {
- dep_link_t link;
-
- if (! INSN_P (insn))
- continue;
-
- if (current_sched_info->flags & DO_SPECULATION)
- {
- /* We will add links, preserving order, from INSN_BACK_DEPS to
- NEW. */
- dep_link_t new = NULL;
-
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
-
- while (link != NULL)
- {
- dep_link_t next = DEP_LINK_NEXT (link);
-
- detach_dep_link (link);
- adjust_add_sorted_back_dep (insn, link, &new);
-
- link = next;
- }
-
- /* Attach NEW to be the list of backward dependencies. */
- if (new != NULL)
- {
- DEP_LINK_PREV_NEXTP (new)
- = &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
+ if (INSN_P (insn) && INSN_LUID (insn) > 0)
+ {
+ /* Clear the (RESOLVED_P) back deps together with their dep_nodes.  */
+ delete_dep_nodes_in_back_deps (insn, resolved_p);
- DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)) = new;
- }
- }
+ /* Clear forward deps; their dep_nodes are owned by the
+ corresponding back_deps lists.  */
+ if (resolved_p)
+ clear_deps_list (INSN_RESOLVED_FORW_DEPS (insn));
+ else
+ clear_deps_list (INSN_FORW_DEPS (insn));
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
- add_forw_dep (link);
- }
+ sd_finish_insn (insn);
+ }
}
\f
/* Initialize variables for region data dependence analysis.
@@ -2143,27 +2503,31 @@ free_deps (struct deps *deps)
void
init_dependency_caches (int luid)
{
+ /* Average number of insns in the basic block.
+ '+ 1' is used to make it nonzero. */
+ int insns_in_block = luid / n_basic_blocks + 1;
+
/* ?!? We could save some memory by computing a per-region luid mapping
which could reduce both the number of vectors in the cache and the size
of each vector. Instead we just avoid the cache entirely unless the
average number of instructions in a basic block is very high. See
the comment before the declaration of true_dependency_cache for
what we consider "very high". */
- if (luid / n_basic_blocks > 100 * 5)
+ if (insns_in_block > 100 * 5)
{
cache_size = 0;
extend_dependency_caches (luid, true);
}
- /* Lifetime of this obstack is whole function scheduling (not single region
- scheduling) because some dependencies can be manually generated for
- outside regions. See dont_calc_deps in sched-{rgn, ebb}.c .
-
- Possible solution would be to have two obstacks:
- * the big one for regular dependencies with region scheduling lifetime,
- * and the small one for manually generated dependencies with function
- scheduling lifetime. */
- gcc_obstack_init (&deps_obstack);
+ dl_pool = create_alloc_pool ("deps_list", sizeof (struct _deps_list),
+ /* Allocate lists for one block at a time. */
+ insns_in_block);
+
+ dn_pool = create_alloc_pool ("dep_node", sizeof (struct _dep_node),
+ /* Allocate nodes for one block at a time.
+ We assume that the average insn
+ has 5 producers.  */
+ 5 * insns_in_block);
}
/* Create or extend (depending on CREATE_P) dependency caches to
@@ -2181,10 +2545,7 @@ extend_dependency_caches (int n, bool cr
output_dependency_cache, luid);
anti_dependency_cache = XRESIZEVEC (bitmap_head, anti_dependency_cache,
luid);
-#ifdef ENABLE_CHECKING
- forward_dependency_cache = XRESIZEVEC (bitmap_head,
- forward_dependency_cache, luid);
-#endif
+
if (current_sched_info->flags & DO_SPECULATION)
spec_dependency_cache = XRESIZEVEC (bitmap_head, spec_dependency_cache,
luid);
@@ -2194,9 +2555,7 @@ extend_dependency_caches (int n, bool cr
bitmap_initialize (&true_dependency_cache[i], 0);
bitmap_initialize (&output_dependency_cache[i], 0);
bitmap_initialize (&anti_dependency_cache[i], 0);
-#ifdef ENABLE_CHECKING
- bitmap_initialize (&forward_dependency_cache[i], 0);
-#endif
+
if (current_sched_info->flags & DO_SPECULATION)
bitmap_initialize (&spec_dependency_cache[i], 0);
}
@@ -2209,7 +2568,10 @@ extend_dependency_caches (int n, bool cr
void
free_dependency_caches (void)
{
- obstack_free (&deps_obstack, NULL);
+ gcc_assert (deps_pools_are_empty_p ());
+ free_alloc_pool_if_empty (&dn_pool);
+ free_alloc_pool_if_empty (&dl_pool);
+ gcc_assert (dn_pool == NULL && dl_pool == NULL);
if (true_dependency_cache)
{
@@ -2220,9 +2582,7 @@ free_dependency_caches (void)
bitmap_clear (&true_dependency_cache[i]);
bitmap_clear (&output_dependency_cache[i]);
bitmap_clear (&anti_dependency_cache[i]);
-#ifdef ENABLE_CHECKING
- bitmap_clear (&forward_dependency_cache[i]);
-#endif
+
if (current_sched_info->flags & DO_SPECULATION)
bitmap_clear (&spec_dependency_cache[i]);
}
@@ -2232,10 +2592,7 @@ free_dependency_caches (void)
output_dependency_cache = NULL;
free (anti_dependency_cache);
anti_dependency_cache = NULL;
-#ifdef ENABLE_CHECKING
- free (forward_dependency_cache);
- forward_dependency_cache = NULL;
-#endif
+
if (current_sched_info->flags & DO_SPECULATION)
{
free (spec_dependency_cache);
@@ -2266,72 +2623,6 @@ finish_deps_global (void)
FREE_REG_SET (reg_pending_uses);
}
-/* Insert LINK into the dependence chain pointed to by LINKP and
- maintain the sort order. */
-static void
-adjust_add_sorted_back_dep (rtx insn, dep_link_t link, dep_link_t *linkp)
-{
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
-
- /* If the insn cannot move speculatively, but the link is speculative,
- make it hard dependence. */
- if (HAS_INTERNAL_DEP (insn)
- && (DEP_LINK_STATUS (link) & SPECULATIVE))
- {
- DEP_LINK_STATUS (link) &= ~SPECULATIVE;
-
- if (true_dependency_cache)
- bitmap_clear_bit (&spec_dependency_cache[INSN_LUID (insn)],
- INSN_LUID (DEP_LINK_PRO (link)));
- }
-
- /* Non-speculative links go at the head of deps_list, followed by
- speculative links. */
- if (DEP_LINK_STATUS (link) & SPECULATIVE)
- while (*linkp && !(DEP_LINK_STATUS (*linkp) & SPECULATIVE))
- linkp = &DEP_LINK_NEXT (*linkp);
-
- attach_dep_link (link, linkp);
-
- if (CHECK)
- gcc_assert (deps_list_consistent_p (INSN_BACK_DEPS (insn)));
-}
-
-/* Move the dependence pointed to by LINKP to the back dependencies
- of INSN, and also add this dependence to the forward ones. All dep_links,
- except one pointed to by LINKP, must be sorted. */
-static void
-adjust_back_add_forw_dep (rtx insn, dep_link_t *linkp)
-{
- dep_link_t link;
-
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
-
- link = *linkp;
- detach_dep_link (link);
-
- adjust_add_sorted_back_dep (insn, link,
- &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)));
- add_forw_dep (link);
-}
-
-/* Remove forward dependence described by L. */
-static void
-delete_forw_dep (dep_link_t l)
-{
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
-
-#ifdef ENABLE_CHECKING
- if (true_dependency_cache)
- bitmap_clear_bit (&forward_dependency_cache[INSN_LUID (DEP_LINK_PRO (l))],
- INSN_LUID (DEP_LINK_CON (l)));
-#endif
-
- detach_dep_link (l);
-
- INSN_DEP_COUNT (DEP_LINK_CON (l))--;
-}
-
/* Estimate the weakness of dependence between MEM1 and MEM2. */
static dw_t
estimate_dep_weak (rtx mem1, rtx mem2)
@@ -2366,90 +2657,15 @@ estimate_dep_weak (rtx mem1, rtx mem2)
void
add_dependence (rtx insn, rtx elem, enum reg_note dep_type)
{
- ds_t ds;
-
- if (dep_type == REG_DEP_TRUE)
- ds = DEP_TRUE;
- else if (dep_type == REG_DEP_OUTPUT)
- ds = DEP_OUTPUT;
- else if (dep_type == REG_DEP_ANTI)
- ds = DEP_ANTI;
- else
- gcc_unreachable ();
-
- maybe_add_or_update_back_dep_1 (insn, elem, dep_type, ds, 0, 0, 0);
-}
-
-/* Add or update backward dependence between INSN and ELEM
- with given type DEP_TYPE and dep_status DS.
- This function is a convenience wrapper. */
-enum DEPS_ADJUST_RESULT
-add_or_update_back_dep (rtx insn, rtx elem, enum reg_note dep_type, ds_t ds)
-{
- return add_or_update_back_dep_1 (insn, elem, dep_type, ds, 0, 0, 0);
-}
-
-/* Add or update both backward and forward dependencies between INSN and ELEM
- with given type DEP_TYPE and dep_status DS. */
-void
-add_or_update_back_forw_dep (rtx insn, rtx elem, enum reg_note dep_type,
- ds_t ds)
-{
- enum DEPS_ADJUST_RESULT res;
- dep_link_t *linkp;
-
- res = add_or_update_back_dep_1 (insn, elem, dep_type, ds, 0, 0, &linkp);
-
- if (res == DEP_CHANGED || res == DEP_CREATED)
- {
- if (res == DEP_CHANGED)
- delete_forw_dep (DEP_NODE_FORW (DEP_LINK_NODE (*linkp)));
- else if (res == DEP_CREATED)
- linkp = &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
-
- adjust_back_add_forw_dep (insn, linkp);
- }
-}
-
-/* Add both backward and forward dependencies between INSN and ELEM
- with given type DEP_TYPE and dep_status DS. */
-void
-add_back_forw_dep (rtx insn, rtx elem, enum reg_note dep_type, ds_t ds)
-{
- add_back_dep (insn, elem, dep_type, ds);
- adjust_back_add_forw_dep (insn, &DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)));
-
- if (CHECK)
- gcc_assert (deps_list_consistent_p (INSN_BACK_DEPS (insn)));
-}
-
-/* Remove a dependency referred to by L. */
-void
-delete_back_forw_dep (dep_link_t l)
-{
- dep_node_t n = DEP_LINK_NODE (l);
-
- gcc_assert (current_sched_info->flags & DO_SPECULATION);
-
- if (true_dependency_cache != NULL)
- {
- dep_t dep = DEP_NODE_DEP (n);
- int elem_luid = INSN_LUID (DEP_PRO (dep));
- int insn_luid = INSN_LUID (DEP_CON (dep));
-
- bitmap_clear_bit (&true_dependency_cache[insn_luid], elem_luid);
- bitmap_clear_bit (&anti_dependency_cache[insn_luid], elem_luid);
- bitmap_clear_bit (&output_dependency_cache[insn_luid], elem_luid);
- bitmap_clear_bit (&spec_dependency_cache[insn_luid], elem_luid);
- }
+ dep_def _dep, *dep = &_dep;
- delete_forw_dep (DEP_NODE_FORW (n));
- detach_dep_link (DEP_NODE_BACK (n));
+ init_dep (dep, elem, insn, dep_type);
+ maybe_add_or_update_dep_1 (dep, false, NULL_RTX, NULL_RTX);
}
/* Return weakness of speculative type TYPE in the dep_status DS. */
-dw_t
-get_dep_weak (ds_t ds, ds_t type)
+static dw_t
+get_dep_weak_1 (ds_t ds, ds_t type)
{
ds = ds & type;
switch (type)
@@ -2461,10 +2677,20 @@ get_dep_weak (ds_t ds, ds_t type)
default: gcc_unreachable ();
}
- gcc_assert (MIN_DEP_WEAK <= ds && ds <= MAX_DEP_WEAK);
return (dw_t) ds;
}
+/* Return weakness of speculative type TYPE in the dep_status DS. */
+dw_t
+get_dep_weak (ds_t ds, ds_t type)
+{
+ dw_t dw = get_dep_weak_1 (ds, type);
+
+ gcc_assert (MIN_DEP_WEAK <= dw && dw <= MAX_DEP_WEAK);
+
+ return dw;
+}
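[Editorial aside: the split above — an unchecked get_dep_weak_1 that dump routines can call on possibly malformed statuses, plus an asserting public get_dep_weak — can be sketched in isolation. The masks and range limits below are invented for illustration and do not match GCC's real ds_t layout:]

```c
#include <assert.h>

typedef unsigned int ds_t;
typedef unsigned char dw_t;

/* Hypothetical layout: 8 weakness bits per speculative type.  */
#define BEGIN_DATA_MASK 0x000000ffu
#define BE_IN_DATA_MASK 0x0000ff00u
#define MIN_DEP_WEAK 1
#define MAX_DEP_WEAK 250

/* Unchecked variant: extracts the weakness field selected by MASK
   without validating it, so dump code can use it safely.  */
static dw_t
get_dep_weak_1 (ds_t ds, ds_t mask)
{
  ds &= mask;
  while (!(mask & 1))
    {
      mask >>= 1;
      ds >>= 1;
    }
  return (dw_t) ds;
}

/* Public variant: additionally asserts the stored weakness is in
   the legal range.  */
static dw_t
get_dep_weak (ds_t ds, ds_t mask)
{
  dw_t dw = get_dep_weak_1 (ds, mask);

  assert (MIN_DEP_WEAK <= dw && dw <= MAX_DEP_WEAK);
  return dw;
}
```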
+
/* Return the dep_status, which has the same parameters as DS, except for
speculative type TYPE, that will have weakness DW. */
ds_t
@@ -2522,13 +2748,59 @@ ds_merge (ds_t ds1, ds_t ds2)
return ds;
}
+/* Dump information about the dependence status S. */
+static void
+dump_ds (FILE *f, ds_t s)
+{
+ fprintf (f, "{");
+
+ if (s & BEGIN_DATA)
+ fprintf (f, "BEGIN_DATA: %d; ", get_dep_weak_1 (s, BEGIN_DATA));
+ if (s & BE_IN_DATA)
+ fprintf (f, "BE_IN_DATA: %d; ", get_dep_weak_1 (s, BE_IN_DATA));
+ if (s & BEGIN_CONTROL)
+ fprintf (f, "BEGIN_CONTROL: %d; ", get_dep_weak_1 (s, BEGIN_CONTROL));
+ if (s & BE_IN_CONTROL)
+ fprintf (f, "BE_IN_CONTROL: %d; ", get_dep_weak_1 (s, BE_IN_CONTROL));
+
+ if (s & HARD_DEP)
+ fprintf (f, "HARD_DEP; ");
+
+ if (s & DEP_TRUE)
+ fprintf (f, "DEP_TRUE; ");
+ if (s & DEP_ANTI)
+ fprintf (f, "DEP_ANTI; ");
+ if (s & DEP_OUTPUT)
+ fprintf (f, "DEP_OUTPUT; ");
+
+ fprintf (f, "}");
+}
+
+void
+debug_ds (ds_t s)
+{
+ dump_ds (stderr, s);
+ fprintf (stderr, "\n");
+}
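[Editorial aside: dump_ds above prints each status flag that is set, in a fixed "{FLAG; FLAG; }" format. A testable sketch of the same pattern, rendering into a buffer instead of a FILE; the flag values are hypothetical and the weakness fields of the real ds_t are omitted:]

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

typedef unsigned int ds_t;

/* Hypothetical single-bit flags.  */
#define HARD_DEP   0x1u
#define DEP_TRUE   0x2u
#define DEP_ANTI   0x4u
#define DEP_OUTPUT 0x8u

/* Render status S into BUF, mirroring dump_ds's output format.  */
static void
dump_ds_to_buf (char *buf, size_t len, ds_t s)
{
  size_t n = 0;

  n += snprintf (buf + n, len - n, "{");
  if (s & HARD_DEP)
    n += snprintf (buf + n, len - n, "HARD_DEP; ");
  if (s & DEP_TRUE)
    n += snprintf (buf + n, len - n, "DEP_TRUE; ");
  if (s & DEP_ANTI)
    n += snprintf (buf + n, len - n, "DEP_ANTI; ");
  if (s & DEP_OUTPUT)
    n += snprintf (buf + n, len - n, "DEP_OUTPUT; ");
  snprintf (buf + n, len - n, "}");
}
```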
+
#ifdef INSN_SCHEDULING
#ifdef ENABLE_CHECKING
/* Verify that dependence type and status are consistent.
If RELAXED_P is true, then skip dep_weakness checks. */
static void
-check_dep_status (enum reg_note dt, ds_t ds, bool relaxed_p)
+check_dep (dep_t dep, bool relaxed_p)
{
+ enum reg_note dt = DEP_TYPE (dep);
+ ds_t ds = DEP_STATUS (dep);
+
+ gcc_assert (DEP_PRO (dep) != DEP_CON (dep));
+
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
+ {
+ gcc_assert (ds == -1);
+ return;
+ }
+
/* Check that dependence type contains the same bits as the status. */
if (dt == REG_DEP_TRUE)
gcc_assert (ds & DEP_TRUE);
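[Editorial aside: check_dep's core invariant is that the dependence type must agree with the bits recorded in the status word. A standalone sketch of that agreement test, with hypothetical flag values:]

```c
#include <assert.h>

typedef unsigned int ds_t;
enum reg_note { REG_DEP_TRUE, REG_DEP_OUTPUT, REG_DEP_ANTI };

/* Hypothetical status bits.  */
#define DEP_TRUE   0x2u
#define DEP_ANTI   0x4u
#define DEP_OUTPUT 0x8u

/* Return nonzero iff dependence type DT agrees with the bits in
   status DS -- the invariant check_dep enforces with gcc_assert.  */
static int
dep_type_consistent_p (enum reg_note dt, ds_t ds)
{
  switch (dt)
    {
    case REG_DEP_TRUE:   return (ds & DEP_TRUE) != 0;
    case REG_DEP_OUTPUT: return (ds & DEP_OUTPUT) != 0;
    case REG_DEP_ANTI:   return (ds & DEP_ANTI) != 0;
    }
  return 0;
}
```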
--- haifa-sched.c (/local/trunk/gcc) (revision 962)
+++ haifa-sched.c (/local/fdl-pool/gcc) (revision 962)
@@ -206,11 +206,16 @@ static rtx note_list;
static struct spec_info_def spec_info_var;
/* Description of the speculative part of the scheduling.
If NULL - no speculation. */
-static spec_info_t spec_info;
+spec_info_t spec_info;
/* True, if recovery block was added during scheduling of current block.
Used to determine, if we need to fix INSN_TICKs. */
-static bool added_recovery_block_p;
+static bool haifa_recovery_bb_recently_added_p;
+
+/* True, if a recovery block was added during this scheduling pass.
+ Used to determine whether the memory pools of dependencies should be
+ empty after finishing the current region. */
+bool haifa_recovery_bb_ever_added_p;
/* Counters of different types of speculative instructions. */
static int nr_begin_data, nr_be_in_data, nr_begin_control, nr_be_in_control;
@@ -553,7 +558,7 @@ static void extend_global (rtx);
static void extend_all (rtx);
static void init_h_i_d (rtx);
static void generate_recovery_code (rtx);
-static void process_insn_forw_deps_be_in_spec (deps_list_t, rtx, ds_t);
+static void process_insn_forw_deps_be_in_spec (rtx, rtx, ds_t);
static void begin_speculative_block (rtx);
static void add_to_speculative_block (rtx);
static dw_t dep_weak (ds_t);
@@ -654,7 +659,7 @@ dep_cost (dep_t link)
else
{
rtx insn = DEP_PRO (link);
- enum reg_note dep_type = DEP_KIND (link);
+ enum reg_note dep_type = DEP_TYPE (link);
cost = insn_cost (insn);
@@ -684,7 +689,7 @@ dep_cost (dep_t link)
XEXP (dep_cost_rtx_link, 1) = dep_cost_rtx_link;
/* Targets use only REG_NOTE_KIND of the link. */
- PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_KIND (link));
+ PUT_REG_NOTE_KIND (dep_cost_rtx_link, DEP_TYPE (link));
cost = targetm.sched.adjust_cost (used, dep_cost_rtx_link,
insn, cost);
@@ -726,8 +731,6 @@ contributes_to_priority_p (dep_t dep)
static int
priority (rtx insn)
{
- dep_link_t link;
-
if (! INSN_P (insn))
return 0;
@@ -738,7 +741,7 @@ priority (rtx insn)
{
int this_priority = 0;
- if (deps_list_empty_p (INSN_FORW_DEPS (insn)))
+ if (sd_lists_empty_p (insn, SD_LIST_FORW))
/* ??? We should set INSN_PRIORITY to insn_cost when and insn has
some forward deps but all of them are ignored by
contributes_to_priority hook. At the moment we set priority of
@@ -769,11 +772,13 @@ priority (rtx insn)
do
{
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (twin))
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ FOR_EACH_DEP (twin, SD_LIST_FORW, sd_it, dep)
{
rtx next;
int next_priority;
- dep_t dep = DEP_LINK_DEP (link);
next = DEP_CON (dep);
@@ -832,7 +837,6 @@ rank_for_schedule (const void *x, const
{
rtx tmp = *(const rtx *) y;
rtx tmp2 = *(const rtx *) x;
- dep_link_t link1, link2;
int tmp_class, tmp2_class;
int val, priority_val, weight_val, info_val;
@@ -885,31 +889,30 @@ rank_for_schedule (const void *x, const
/* Compare insns based on their relation to the last-scheduled-insn. */
if (INSN_P (last_scheduled_insn))
{
+ dep_t dep1;
+ dep_t dep2;
+
/* Classify the instructions into three classes:
1) Data dependent on last schedule insn.
2) Anti/Output dependent on last scheduled insn.
3) Independent of last scheduled insn, or has latency of one.
Choose the insn from the highest numbered class if different. */
- link1
- = find_link_by_con_in_deps_list (INSN_FORW_DEPS (last_scheduled_insn),
- tmp);
+ dep1 = sd_find_dep_between (last_scheduled_insn, tmp, true);
- if (link1 == NULL || dep_cost (DEP_LINK_DEP (link1)) == 1)
+ if (dep1 == NULL || dep_cost (dep1) == 1)
tmp_class = 3;
else if (/* Data dependence. */
- DEP_LINK_KIND (link1) == REG_DEP_TRUE)
+ DEP_TYPE (dep1) == REG_DEP_TRUE)
tmp_class = 1;
else
tmp_class = 2;
- link2
- = find_link_by_con_in_deps_list (INSN_FORW_DEPS (last_scheduled_insn),
- tmp2);
+ dep2 = sd_find_dep_between (last_scheduled_insn, tmp2, true);
- if (link2 == NULL || dep_cost (DEP_LINK_DEP (link2)) == 1)
+ if (dep2 == NULL || dep_cost (dep2) == 1)
tmp2_class = 3;
else if (/* Data dependence. */
- DEP_LINK_KIND (link2) == REG_DEP_TRUE)
+ DEP_TYPE (dep2) == REG_DEP_TRUE)
tmp2_class = 1;
else
tmp2_class = 2;
@@ -922,21 +925,11 @@ rank_for_schedule (const void *x, const
This gives the scheduler more freedom when scheduling later
instructions at the expense of added register pressure. */
- link1 = DEPS_LIST_FIRST (INSN_FORW_DEPS (tmp));
- link2 = DEPS_LIST_FIRST (INSN_FORW_DEPS (tmp2));
+ val = (sd_lists_size (tmp2, SD_LIST_FORW)
+ - sd_lists_size (tmp, SD_LIST_FORW));
- while (link1 != NULL && link2 != NULL)
- {
- link1 = DEP_LINK_NEXT (link1);
- link2 = DEP_LINK_NEXT (link2);
- }
-
- if (link1 != NULL && link2 == NULL)
- /* TMP (Y) has more insns that depend on it. */
- return -1;
- if (link1 == NULL && link2 != NULL)
- /* TMP2 (X) has more insns that depend on it. */
- return 1;
+ if (val != 0)
+ return val;
/* If insns are equally good, sort by INSN_LUID (original insn order),
so that we make the sort stable. This minimizes instruction movement,
@@ -1172,7 +1165,8 @@ static int last_clock_var;
static int
schedule_insn (rtx insn)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
int advance = 0;
if (sched_verbose >= 1)
@@ -1192,12 +1186,7 @@ schedule_insn (rtx insn)
/* Scheduling instruction should have all its dependencies resolved and
should have been removed from the ready list. */
- gcc_assert (INSN_DEP_COUNT (insn) == 0
- && deps_list_empty_p (INSN_BACK_DEPS (insn)));
- free_deps_list (INSN_BACK_DEPS (insn));
-
- /* Now we can free INSN_RESOLVED_BACK_DEPS list. */
- delete_deps_list (INSN_RESOLVED_BACK_DEPS (insn));
+ gcc_assert (sd_lists_empty_p (insn, SD_LIST_BACK));
gcc_assert (QUEUE_INDEX (insn) == QUEUE_NOWHERE);
QUEUE_INDEX (insn) = QUEUE_SCHEDULED;
@@ -1213,19 +1202,15 @@ schedule_insn (rtx insn)
INSN_TICK (insn) = clock_var;
/* Update dependent instructions. */
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
+ for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
+ sd_iterator_cond (&sd_it, &dep);)
{
- rtx next = DEP_LINK_CON (link);
-
- /* Resolve the dependence between INSN and NEXT. */
-
- INSN_DEP_COUNT (next)--;
+ rtx next = DEP_CON (dep);
- move_dep_link (DEP_NODE_BACK (DEP_LINK_NODE (link)),
- INSN_RESOLVED_BACK_DEPS (next));
-
- gcc_assert ((INSN_DEP_COUNT (next) == 0)
- == deps_list_empty_p (INSN_BACK_DEPS (next)));
+ /* Resolve the dependence between INSN and NEXT.
+ sd_resolve_dep () moves the current dep to another list, thus
+ advancing the iterator. */
+ sd_resolve_dep (sd_it);
if (!IS_SPECULATION_BRANCHY_CHECK_P (insn))
{
@@ -1242,11 +1227,23 @@ schedule_insn (rtx insn)
/* Check always has only one forward dependence (to the first insn in
the recovery block), therefore, this will be executed only once. */
{
- gcc_assert (DEP_LINK_NEXT (link) == NULL);
+ gcc_assert (sd_lists_empty_p (insn, SD_LIST_FORW));
fix_recovery_deps (RECOVERY_BLOCK (insn));
}
}
+ /* This is the place where the scheduler basically doesn't need backward
+ and forward dependencies for INSN anymore. Nevertheless they are used
+ in heuristics in rank_for_schedule (), early_queue_to_ready () and in
+ some targets (e.g. rs6000). Thus the earliest place where we *can*
+ remove dependencies is after the targetm.sched.md_finish () call in
+ schedule_block (). But, on the other hand, the safest place to remove
+ dependencies is when we are finishing scheduling the entire region.
+ As we don't generate [many] dependencies during scheduling itself,
+ we won't need memory until the beginning of the next region.
+ Bottom line: Dependencies are removed for all insns at the end of
+ scheduling the region. */
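[Editorial aside: the region-scoped freeing described in the comment above relies on the alloc_pool scheme from point 2 of the cover letter — dep nodes freed at end-of-region go on a free list and are reused by the next region, so peak memory tracks the largest region. A minimal, self-contained sketch of that idea (all names here are invented stand-ins, not GCC's actual alloc_pool API):]

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical dependency-node stub; the real dep_node carries the
   producer, consumer, type and status.  */
struct dep_node_stub { struct dep_node_stub *next_free; int payload; };

static struct dep_node_stub *free_list = NULL;
static int live_allocs = 0;   /* Number of nodes obtained from malloc.  */

/* Allocate a node, preferring the free list over fresh memory.  */
static struct dep_node_stub *
pool_alloc_dep (void)
{
  struct dep_node_stub *n;

  if (free_list)
    {
      n = free_list;
      free_list = n->next_free;
    }
  else
    {
      n = malloc (sizeof *n);
      live_allocs++;
    }
  return n;
}

/* Return a node to the pool at end-of-region; the next region
   reuses it instead of allocating again.  */
static void
pool_free_dep (struct dep_node_stub *n)
{
  n->next_free = free_list;
  free_list = n;
}
```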
+
/* Annotate the instruction with issue information -- TImode
indicates that the instruction is expected not to be able
to issue on the same cycle as the previous insn. A machine
@@ -1589,15 +1586,12 @@ ok_for_early_queue_removal (rtx insn)
if (!NOTE_P (prev_insn))
{
- dep_link_t dep_link;
+ dep_t dep;
- dep_link = (find_link_by_con_in_deps_list
- (INSN_FORW_DEPS (prev_insn), insn));
+ dep = sd_find_dep_between (prev_insn, insn, true);
- if (dep_link)
+ if (dep != NULL)
{
- dep_t dep = DEP_LINK_DEP (dep_link);
-
cost = dep_cost (dep);
if (targetm.sched.is_costly_dependence (dep, cost,
@@ -2148,7 +2142,7 @@ schedule_block (basic_block *target_bb,
gcc_assert (head != tail || INSN_P (head));
- added_recovery_block_p = false;
+ haifa_recovery_bb_recently_added_p = false;
/* Debug info. */
if (sched_verbose)
@@ -2539,7 +2533,7 @@ schedule_block (basic_block *target_bb,
}
if (!current_sched_info->queue_must_finish_empty
- || added_recovery_block_p)
+ || haifa_recovery_bb_recently_added_p)
{
/* INSN_TICK (minimum clock tick at which the insn becomes
ready) may be not correct for the insn in the subsequent
@@ -2551,7 +2545,15 @@ schedule_block (basic_block *target_bb,
}
if (targetm.sched.md_finish)
- targetm.sched.md_finish (sched_dump, sched_verbose);
+ {
+ targetm.sched.md_finish (sched_dump, sched_verbose);
+
+ /* The target might have added some instructions to the scheduled block
+ in its md_finish () hook. These new insns don't have any data
+ initialized, and to identify them we extend h_i_d so that they'll
+ get zero luids. */
+ extend_h_i_d ();
+ }
/* Update head/tail boundaries. */
head = NEXT_INSN (prev_head);
@@ -2759,6 +2761,8 @@ sched_init (void)
nr_begin_data = nr_begin_control = nr_be_in_data = nr_be_in_control = 0;
before_recovery = 0;
+ haifa_recovery_bb_ever_added_p = false;
+
#ifdef ENABLE_CHECKING
/* This is used preferably for finding bugs in check_cfg () itself. */
check_cfg (0, 0);
@@ -2817,6 +2821,10 @@ fix_inter_tick (rtx head, rtx tail)
{
/* Set of instructions with corrected INSN_TICK. */
bitmap_head processed;
+ /* ??? It is doubtful if we should assume that cycle advance happens on
+ basic block boundaries. Basically insns that are unconditionally ready
+ at the start of the block are preferable to those which have a one
+ cycle dependency on an insn from the previous block. */
int next_clock = clock_var + 1;
bitmap_initialize (&processed, 0);
@@ -2829,7 +2837,8 @@ fix_inter_tick (rtx head, rtx tail)
if (INSN_P (head))
{
int tick;
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
tick = INSN_TICK (head);
gcc_assert (tick >= MIN_TICK);
@@ -2846,11 +2855,11 @@ fix_inter_tick (rtx head, rtx tail)
INSN_TICK (head) = tick;
}
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (head))
+ FOR_EACH_DEP (head, SD_LIST_RES_FORW, sd_it, dep)
{
rtx next;
- next = DEP_LINK_CON (link);
+ next = DEP_CON (dep);
tick = INSN_TICK (next);
if (tick != INVALID_TICK
@@ -2869,7 +2878,7 @@ fix_inter_tick (rtx head, rtx tail)
INTER_TICK (next) = tick;
else
tick = INTER_TICK (next);
-
+
INSN_TICK (next) = tick;
}
}
@@ -2888,7 +2897,6 @@ int
try_ready (rtx next)
{
ds_t old_ts, *ts;
- dep_link_t link;
ts = &TODO_SPEC (next);
old_ts = *ts;
@@ -2897,44 +2905,52 @@ try_ready (rtx next)
&& ((old_ts & HARD_DEP)
|| (old_ts & SPECULATIVE)));
- if (!(current_sched_info->flags & DO_SPECULATION))
+ if (sd_lists_empty_p (next, SD_LIST_BACK))
+ /* NEXT has all its dependencies resolved. */
{
- if (deps_list_empty_p (INSN_BACK_DEPS (next)))
- *ts &= ~HARD_DEP;
+ /* Remove HARD_DEP bit from NEXT's status. */
+ *ts &= ~HARD_DEP;
+
+ if (current_sched_info->flags & DO_SPECULATION)
+ /* Remove all speculative bits from NEXT's status. */
+ *ts &= ~SPECULATIVE;
}
else
{
- *ts &= ~SPECULATIVE & ~HARD_DEP;
+ /* One of NEXT's dependencies has been resolved.
+ Recalculate NEXT's status. */
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (next));
+ *ts &= ~SPECULATIVE & ~HARD_DEP;
- if (link != NULL)
- {
- ds_t ds = DEP_LINK_STATUS (link) & SPECULATIVE;
+ if (sd_lists_empty_p (next, SD_LIST_HARD_BACK))
+ /* Now we've got NEXT with speculative deps only.
+ 1. Look at the deps to see what we have to do.
+ 2. Check if we can do 'todo'. */
+ {
+ sd_iterator_def sd_it;
+ dep_t dep;
+ bool first_p = true;
- /* Backward dependencies of the insn are maintained sorted.
- So if DEP_STATUS of the first dep is SPECULATIVE,
- than all other deps are speculative too. */
- if (ds != 0)
- {
- /* Now we've got NEXT with speculative deps only.
- 1. Look at the deps to see what we have to do.
- 2. Check if we can do 'todo'. */
- *ts = ds;
+ FOR_EACH_DEP (next, SD_LIST_BACK, sd_it, dep)
+ {
+ ds_t ds = DEP_STATUS (dep) & SPECULATIVE;
- while ((link = DEP_LINK_NEXT (link)) != NULL)
+ if (first_p)
{
- ds = DEP_LINK_STATUS (link) & SPECULATIVE;
- *ts = ds_merge (*ts, ds);
- }
+ first_p = false;
- if (dep_weak (*ts) < spec_info->weakness_cutoff)
- /* Too few points. */
- *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
+ *ts = ds;
+ }
+ else
+ *ts = ds_merge (*ts, ds);
}
- else
- *ts |= HARD_DEP;
- }
+
+ if (dep_weak (*ts) < spec_info->weakness_cutoff)
+ /* Too few points. */
+ *ts = (*ts & ~SPECULATIVE) | HARD_DEP;
+ }
+ else
+ *ts |= HARD_DEP;
}
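[Editorial aside: the recalculation loop above folds the speculative status of every remaining backward dep with ds_merge and falls back to HARD_DEP when the combined weakness drops below spec_info->weakness_cutoff. A standalone sketch of that fold — the bit layout and the merge rule here are invented for illustration and do not match GCC's real ds_t encoding or ds_merge:]

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int ds_t;

#define SPECULATIVE 0x00ffu   /* Hypothetical: low byte holds weakness.  */
#define HARD_DEP    0x0100u

/* Hypothetical merge: combined speculation is the product of the
   individual weaknesses, scaled to 0..100.  */
static ds_t
ds_merge (ds_t ds1, ds_t ds2)
{
  return ((ds1 & SPECULATIVE) * (ds2 & SPECULATIVE)) / 100;
}

/* Mirror of try_ready's loop: fold the speculative status of every
   backward dep, then fall back to HARD_DEP below the cutoff.  */
static ds_t
recompute_todo_spec (const ds_t *deps, size_t n, ds_t cutoff)
{
  ds_t ts = 0;
  size_t i;

  for (i = 0; i < n; i++)
    ts = (i == 0) ? (deps[i] & SPECULATIVE) : ds_merge (ts, deps[i]);

  if ((ts & SPECULATIVE) < cutoff)
    /* Too few points: treat the insn as hard-dependent.  */
    ts = (ts & ~SPECULATIVE) | HARD_DEP;

  return ts;
}
```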
if (*ts & HARD_DEP)
@@ -3053,10 +3069,11 @@ fix_tick_ready (rtx next)
{
int tick, delay;
- if (!deps_list_empty_p (INSN_RESOLVED_BACK_DEPS (next)))
+ if (!sd_lists_empty_p (next, SD_LIST_RES_BACK))
{
int full_p;
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
tick = INSN_TICK (next);
/* if tick is not equal to INVALID_TICK, then update
@@ -3064,9 +3081,8 @@ fix_tick_ready (rtx next)
cost. Otherwise, recalculate from scratch. */
full_p = (tick == INVALID_TICK);
- FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (next))
+ FOR_EACH_DEP (next, SD_LIST_RES_BACK, sd_it, dep)
{
- dep_t dep = DEP_LINK_DEP (link);
rtx pro = DEP_PRO (dep);
int tick1;
@@ -3180,11 +3196,18 @@ static void
extend_global (rtx insn)
{
gcc_assert (INSN_P (insn));
+
/* These structures have scheduler scope. */
+
+ /* Init h_i_d. */
extend_h_i_d ();
init_h_i_d (insn);
- extend_dependency_caches (1, 0);
+ /* Init data handled in sched-deps.c. */
+ sd_init_insn (insn);
+
+ /* Extend dependency caches by one element. */
+ extend_dependency_caches (1, false);
}
/* Extends global and local scheduler structures to include information
@@ -3212,14 +3235,6 @@ init_h_i_d (rtx insn)
INSN_TICK (insn) = INVALID_TICK;
INTER_TICK (insn) = INVALID_TICK;
find_insn_reg_weight1 (insn);
-
- /* These two lists will be freed in schedule_insn (). */
- INSN_BACK_DEPS (insn) = create_deps_list (false);
- INSN_RESOLVED_BACK_DEPS (insn) = create_deps_list (false);
-
- /* This one should be allocated on the obstack because it should live till
- the scheduling ends. */
- INSN_FORW_DEPS (insn) = create_deps_list (true);
}
/* Generates recovery code for INSN. */
@@ -3240,18 +3255,19 @@ generate_recovery_code (rtx insn)
Tries to add speculative dependencies of type FS between instructions
in deps_list L and TWIN. */
static void
-process_insn_forw_deps_be_in_spec (deps_list_t l, rtx twin, ds_t fs)
+process_insn_forw_deps_be_in_spec (rtx insn, rtx twin, ds_t fs)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
- FOR_EACH_DEP_LINK (link, l)
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
{
ds_t ds;
rtx consumer;
- consumer = DEP_LINK_CON (link);
+ consumer = DEP_CON (dep);
- ds = DEP_LINK_STATUS (link);
+ ds = DEP_STATUS (dep);
if (/* If we want to create speculative dep. */
fs
@@ -3278,7 +3294,12 @@ process_insn_forw_deps_be_in_spec (deps_
ds |= fs;
}
- add_back_forw_dep (consumer, twin, DEP_LINK_KIND (link), ds);
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ init_dep_1 (new_dep, twin, consumer, DEP_TYPE (dep), ds);
+ sd_add_dep (new_dep, false);
+ }
}
}
@@ -3301,7 +3322,8 @@ static void
add_to_speculative_block (rtx insn)
{
ds_t ts;
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
rtx twins = NULL;
rtx_vec_t priorities_roots;
@@ -3319,50 +3341,52 @@ add_to_speculative_block (rtx insn)
DONE_SPEC (insn) |= ts;
/* First we convert all simple checks to branchy. */
- for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
+ for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
{
- rtx check = DEP_LINK_PRO (link);
+ rtx check = DEP_PRO (dep);
if (IS_SPECULATION_SIMPLE_CHECK_P (check))
{
create_check_block_twin (check, true);
/* Restart search. */
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
+ sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
}
else
/* Continue search. */
- link = DEP_LINK_NEXT (link);
+ sd_iterator_next (&sd_it);
}
priorities_roots = NULL;
clear_priorities (insn, &priorities_roots);
-
- do
+
+ while (1)
{
- dep_link_t link;
rtx check, twin;
basic_block rec;
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
+ /* Get the first backward dependency of INSN. */
+ sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ if (!sd_iterator_cond (&sd_it, &dep))
+ /* INSN has no backward dependencies left. */
+ break;
- gcc_assert ((DEP_LINK_STATUS (link) & BEGIN_SPEC) == 0
- && (DEP_LINK_STATUS (link) & BE_IN_SPEC) != 0
- && (DEP_LINK_STATUS (link) & DEP_TYPES) == DEP_TRUE);
+ gcc_assert ((DEP_STATUS (dep) & BEGIN_SPEC) == 0
+ && (DEP_STATUS (dep) & BE_IN_SPEC) != 0
+ && (DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
- check = DEP_LINK_PRO (link);
+ check = DEP_PRO (dep);
gcc_assert (!IS_SPECULATION_CHECK_P (check) && !ORIG_PAT (check)
&& QUEUE_INDEX (check) == QUEUE_NOWHERE);
-
+
rec = BLOCK_FOR_INSN (check);
-
+
twin = emit_insn_before (copy_insn (PATTERN (insn)), BB_END (rec));
extend_global (twin);
- copy_deps_list_change_con (INSN_RESOLVED_BACK_DEPS (twin),
- INSN_RESOLVED_BACK_DEPS (insn),
- twin);
+ sd_copy_back_deps (twin, insn, true);
if (sched_verbose && spec_info->dump)
/* INSN_BB (insn) isn't determined for twin insns yet.
@@ -3374,47 +3398,38 @@ add_to_speculative_block (rtx insn)
/* Add dependences between TWIN and all appropriate
instructions from REC. */
- do
- {
- add_back_forw_dep (twin, check, REG_DEP_TRUE, DEP_TRUE);
-
- do
- {
- link = DEP_LINK_NEXT (link);
+ FOR_EACH_DEP (insn, SD_LIST_SPEC_BACK, sd_it, dep)
+ {
+ rtx pro = DEP_PRO (dep);
- if (link != NULL)
- {
- check = DEP_LINK_PRO (link);
- if (BLOCK_FOR_INSN (check) == rec)
- break;
- }
- else
- break;
+ gcc_assert (DEP_TYPE (dep) == REG_DEP_TRUE);
+
+ /* INSN might have dependencies on instructions from several
+ recovery blocks. In this iteration we process those producers
+ that reside in REC. */
+ if (BLOCK_FOR_INSN (pro) == rec)
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ init_dep (new_dep, pro, twin, REG_DEP_TRUE);
+ sd_add_dep (new_dep, false);
}
- while (1);
}
- while (link != NULL);
- process_insn_forw_deps_be_in_spec (INSN_FORW_DEPS (insn), twin, ts);
+ process_insn_forw_deps_be_in_spec (insn, twin, ts);
/* Remove all dependencies between INSN and insns in REC. */
- for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
+ for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
{
- check = DEP_LINK_PRO (link);
+ rtx pro = DEP_PRO (dep);
- if (BLOCK_FOR_INSN (check) == rec)
- {
- delete_back_forw_dep (link);
-
- /* Restart search. */
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
- }
+ if (BLOCK_FOR_INSN (pro) == rec)
+ sd_delete_dep (sd_it);
else
- /* Continue search. */
- link = DEP_LINK_NEXT (link);
+ sd_iterator_next (&sd_it);
}
}
- while (!deps_list_empty_p (INSN_BACK_DEPS (insn)));
/* We couldn't have added the dependencies between INSN and TWINS earlier
because that would make TWINS appear in the INSN_BACK_DEPS (INSN). */
@@ -3423,7 +3438,13 @@ add_to_speculative_block (rtx insn)
rtx twin;
twin = XEXP (twins, 0);
- add_back_forw_dep (twin, insn, REG_DEP_OUTPUT, DEP_OUTPUT);
+
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
+ sd_add_dep (new_dep, false);
+ }
twin = XEXP (twins, 1);
free_INSN_LIST_node (twins);
@@ -3579,7 +3600,8 @@ create_recovery_block (void)
rtx barrier;
basic_block rec;
- added_recovery_block_p = true;
+ haifa_recovery_bb_recently_added_p = true;
+ haifa_recovery_bb_ever_added_p = true;
if (!before_recovery)
init_before_recovery ();
@@ -3613,8 +3635,10 @@ create_check_block_twin (rtx insn, bool
{
basic_block rec;
rtx label, check, twin;
- dep_link_t link;
ds_t fs;
+ sd_iterator_def sd_it;
+ dep_t dep;
+ dep_def _new_dep, *new_dep = &_new_dep;
gcc_assert (ORIG_PAT (insn)
&& (!mutate_p
@@ -3663,14 +3687,17 @@ create_check_block_twin (rtx insn, bool
in the recovery block). */
if (rec != EXIT_BLOCK_PTR)
{
- FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (insn))
- if ((DEP_LINK_STATUS (link) & DEP_OUTPUT) != 0)
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ FOR_EACH_DEP (insn, SD_LIST_RES_BACK, sd_it, dep)
+ if ((DEP_STATUS (dep) & DEP_OUTPUT) != 0)
{
- struct _dep _dep, *dep = &_dep;
+ struct _dep _dep2, *dep2 = &_dep2;
- init_dep (dep, DEP_LINK_PRO (link), check, REG_DEP_TRUE);
+ init_dep (dep2, DEP_PRO (dep), check, REG_DEP_TRUE);
- add_back_dep_to_deps_list (INSN_RESOLVED_BACK_DEPS (check), dep);
+ sd_add_dep (dep2, true);
}
twin = emit_insn_after (ORIG_PAT (insn), BB_END (rec));
@@ -3691,9 +3718,9 @@ create_check_block_twin (rtx insn, bool
(TRUE | OUTPUT). */
}
- copy_deps_list_change_con (INSN_RESOLVED_BACK_DEPS (twin),
- INSN_RESOLVED_BACK_DEPS (insn),
- twin);
+ /* Copy all resolved back dependencies of INSN to TWIN. This will
+ provide the correct value for INSN_TICK (TWIN). */
+ sd_copy_back_deps (twin, insn, true);
if (rec != EXIT_BLOCK_PTR)
/* In case of branchy check, fix CFG. */
@@ -3756,10 +3783,11 @@ create_check_block_twin (rtx insn, bool
/* Move backward dependences from INSN to CHECK and
move forward dependences from INSN to TWIN. */
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
+
+ /* First, create dependencies between INSN's producers and CHECK & TWIN. */
+ FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
{
- rtx pro = DEP_LINK_PRO (link);
- enum reg_note dk = DEP_LINK_KIND (link);
+ rtx pro = DEP_PRO (dep);
ds_t ds;
/* If BEGIN_DATA: [insn ~~TRUE~~> producer]:
@@ -3777,7 +3805,7 @@ create_check_block_twin (rtx insn, bool
twin ~~TRUE~~> producer
twin --ANTI--> check */
- ds = DEP_LINK_STATUS (link);
+ ds = DEP_STATUS (dep);
if (ds & BEGIN_SPEC)
{
@@ -3785,30 +3813,30 @@ create_check_block_twin (rtx insn, bool
ds &= ~BEGIN_SPEC;
}
+ init_dep_1 (new_dep, pro, check, DEP_TYPE (dep), ds);
+ sd_add_dep (new_dep, false);
+
if (rec != EXIT_BLOCK_PTR)
{
- add_back_forw_dep (check, pro, dk, ds);
- add_back_forw_dep (twin, pro, dk, ds);
+ DEP_CON (new_dep) = twin;
+ sd_add_dep (new_dep, false);
}
- else
- add_back_forw_dep (check, pro, dk, ds);
}
- for (link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)); link != NULL;)
- if ((DEP_LINK_STATUS (link) & BEGIN_SPEC)
- || mutate_p)
- /* We can delete this dep only if we totally overcome it with
- BEGIN_SPECULATION. */
- {
- delete_back_forw_dep (link);
-
- /* Restart search. */
- link = DEPS_LIST_FIRST (INSN_BACK_DEPS (insn));
- }
- else
- /* Continue search. */
- link = DEP_LINK_NEXT (link);
+ /* Second, remove backward dependencies of INSN. */
+ for (sd_it = sd_iterator_start (insn, SD_LIST_SPEC_BACK);
+ sd_iterator_cond (&sd_it, &dep);)
+ {
+ if ((DEP_STATUS (dep) & BEGIN_SPEC)
+ || mutate_p)
+ /* We can delete this dep because we overcome it with
+ BEGIN_SPECULATION. */
+ sd_delete_dep (sd_it);
+ else
+ sd_iterator_next (&sd_it);
+ }
+ /* Future Speculations. Determine what BE_IN speculations will be like. */
fs = 0;
/* Fields (DONE_SPEC (x) & BEGIN_SPEC) and CHECK_SPEC (x) are set only
@@ -3823,16 +3851,19 @@ create_check_block_twin (rtx insn, bool
DONE_SPEC (insn) = ts & BEGIN_SPEC;
CHECK_SPEC (check) = ts & BEGIN_SPEC;
+ /* Luckiness of future speculations solely depends upon the initial
+ BEGIN speculation. */
if (ts & BEGIN_DATA)
fs = set_dep_weak (fs, BE_IN_DATA, get_dep_weak (ts, BEGIN_DATA));
if (ts & BEGIN_CONTROL)
- fs = set_dep_weak (fs, BE_IN_CONTROL, get_dep_weak (ts, BEGIN_CONTROL));
+ fs = set_dep_weak (fs, BE_IN_CONTROL,
+ get_dep_weak (ts, BEGIN_CONTROL));
}
else
CHECK_SPEC (check) = CHECK_SPEC (insn);
/* Future speculations: call the helper. */
- process_insn_forw_deps_be_in_spec (INSN_FORW_DEPS (insn), twin, fs);
+ process_insn_forw_deps_be_in_spec (insn, twin, fs);
if (rec != EXIT_BLOCK_PTR)
{
@@ -3842,35 +3873,45 @@ create_check_block_twin (rtx insn, bool
if (!mutate_p)
{
- add_back_forw_dep (check, insn, REG_DEP_TRUE, DEP_TRUE);
- add_back_forw_dep (twin, insn, REG_DEP_OUTPUT, DEP_OUTPUT);
+ init_dep (new_dep, insn, check, REG_DEP_TRUE);
+ sd_add_dep (new_dep, false);
+
+ init_dep (new_dep, insn, twin, REG_DEP_OUTPUT);
+ sd_add_dep (new_dep, false);
}
else
{
- dep_link_t link;
-
if (spec_info->dump)
fprintf (spec_info->dump, ";;\t\tRemoved simple check : %s\n",
(*current_sched_info->print_insn) (insn, 0));
- /* Remove all forward dependencies of the INSN. */
- link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
- while (link != NULL)
- {
- delete_back_forw_dep (link);
- link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
- }
+ /* Remove all dependencies of the INSN. */
+ {
+ sd_it = sd_iterator_start (insn, (SD_LIST_FORW
+ | SD_LIST_BACK
+ | SD_LIST_RES_BACK));
+ while (sd_iterator_cond (&sd_it, &dep))
+ sd_delete_dep (sd_it);
+ }
+ /* If the former check (INSN) was already moved to the ready (or queue)
+ list, add the new check (CHECK) there too. */
if (QUEUE_INDEX (insn) != QUEUE_NOWHERE)
try_ready (check);
+ /* Remove the old check from the instruction stream and free its
+ data. */
sched_remove_insn (insn);
}
- add_back_forw_dep (twin, check, REG_DEP_ANTI, DEP_ANTI);
+ init_dep (new_dep, check, twin, REG_DEP_ANTI);
+ sd_add_dep (new_dep, false);
}
else
- add_back_forw_dep (check, insn, REG_DEP_TRUE, DEP_TRUE | DEP_OUTPUT);
+ {
+ init_dep_1 (new_dep, insn, check, REG_DEP_TRUE, DEP_TRUE | DEP_OUTPUT);
+ sd_add_dep (new_dep, false);
+ }
if (!mutate_p)
/* Fix priorities. If MUTATE_P is nonzero, this is not necessary,
@@ -3890,10 +3931,9 @@ create_check_block_twin (rtx insn, bool
static void
fix_recovery_deps (basic_block rec)
{
- dep_link_t link;
rtx note, insn, jump, ready_list = 0;
bitmap_head in_ready;
- rtx link1;
+ rtx link;
bitmap_initialize (&in_ready, 0);
@@ -3905,32 +3945,30 @@ fix_recovery_deps (basic_block rec)
insn = PREV_INSN (insn);
do
- {
- for (link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn)); link != NULL;)
- {
- rtx consumer;
+ {
+ sd_iterator_def sd_it;
+ dep_t dep;
- consumer = DEP_LINK_CON (link);
+ for (sd_it = sd_iterator_start (insn, SD_LIST_FORW);
+ sd_iterator_cond (&sd_it, &dep);)
+ {
+ rtx consumer = DEP_CON (dep);
if (BLOCK_FOR_INSN (consumer) != rec)
{
- delete_back_forw_dep (link);
+ sd_delete_dep (sd_it);
if (!bitmap_bit_p (&in_ready, INSN_LUID (consumer)))
{
ready_list = alloc_INSN_LIST (consumer, ready_list);
bitmap_set_bit (&in_ready, INSN_LUID (consumer));
}
-
- /* Restart search. */
- link = DEPS_LIST_FIRST (INSN_FORW_DEPS (insn));
}
else
{
- gcc_assert ((DEP_LINK_STATUS (link) & DEP_TYPES) == DEP_TRUE);
+ gcc_assert ((DEP_STATUS (dep) & DEP_TYPES) == DEP_TRUE);
- /* Continue search. */
- link = DEP_LINK_NEXT (link);
+ sd_iterator_next (&sd_it);
}
}
@@ -3941,8 +3979,8 @@ fix_recovery_deps (basic_block rec)
bitmap_clear (&in_ready);
/* Try to add instructions to the ready or queue list. */
- for (link1 = ready_list; link1; link1 = XEXP (link1, 1))
- try_ready (XEXP (link1, 0));
+ for (link = ready_list; link; link = XEXP (link, 1))
+ try_ready (XEXP (link, 0));
free_INSN_LIST_list (&ready_list);
/* Fixing jump's dependences. */
@@ -3971,6 +4009,31 @@ change_pattern (rtx insn, rtx new_pat)
dfa_clear_single_insn_cache (insn);
}
+/* Return true if INSN can potentially be speculated with type DS. */
+bool
+sched_insn_is_legitimate_for_speculation_p (rtx insn, ds_t ds)
+{
+ if (HAS_INTERNAL_DEP (insn))
+ return false;
+
+ if (!NONJUMP_INSN_P (insn))
+ return false;
+
+ if (SCHED_GROUP_P (insn))
+ return false;
+
+ if (IS_SPECULATION_CHECK_P (insn))
+ return false;
+
+ if (side_effects_p (PATTERN (insn)))
+ return false;
+
+ if ((ds & BE_IN_SPEC)
+ && may_trap_p (PATTERN (insn)))
+ return false;
+
+ return true;
+}
/* -1 - can't speculate,
0 - for speculation with REQUEST mode it is OK to use
@@ -3980,25 +4043,15 @@ static int
speculate_insn (rtx insn, ds_t request, rtx *new_pat)
{
gcc_assert (current_sched_info->flags & DO_SPECULATION
- && (request & SPECULATIVE));
+ && (request & SPECULATIVE)
+ && sched_insn_is_legitimate_for_speculation_p (insn, request));
- if (!NONJUMP_INSN_P (insn)
- || HAS_INTERNAL_DEP (insn)
- || SCHED_GROUP_P (insn)
- || side_effects_p (PATTERN (insn))
- || (request & spec_info->mask) != request)
+ if ((request & spec_info->mask) != request)
return -1;
-
- gcc_assert (!IS_SPECULATION_CHECK_P (insn));
- if (request & BE_IN_SPEC)
- {
- if (may_trap_p (PATTERN (insn)))
- return -1;
-
- if (!(request & BEGIN_SPEC))
- return 0;
- }
+ if (request & BE_IN_SPEC
+ && !(request & BEGIN_SPEC))
+ return 0;
return targetm.sched.speculate_insn (insn, request & BEGIN_SPEC, new_pat);
}
@@ -4244,6 +4297,8 @@ move_succs (VEC(edge,gc) **succsp, basic
static void
sched_remove_insn (rtx insn)
{
+ sd_finish_insn (insn);
+
change_queue_index (insn, QUEUE_NOWHERE);
current_sched_info->add_remove_insn (insn, 1);
remove_insn (insn);
@@ -4255,14 +4310,14 @@ sched_remove_insn (rtx insn)
static void
clear_priorities (rtx insn, rtx_vec_t *roots_ptr)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
bool insn_is_root_p = true;
gcc_assert (QUEUE_INDEX (insn) != QUEUE_SCHEDULED);
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
{
- dep_t dep = DEP_LINK_DEP (link);
rtx pro = DEP_PRO (dep);
if (INSN_PRIORITY_STATUS (pro) >= 0
@@ -4307,12 +4362,17 @@ add_jump_dependencies (rtx insn, rtx jum
if (insn == jump)
break;
- if (deps_list_empty_p (INSN_FORW_DEPS (insn)))
- add_back_forw_dep (jump, insn, REG_DEP_ANTI, DEP_ANTI);
+ if (sd_lists_empty_p (insn, SD_LIST_FORW))
+ {
+ dep_def _new_dep, *new_dep = &_new_dep;
+
+ init_dep (new_dep, insn, jump, REG_DEP_ANTI);
+ sd_add_dep (new_dep, false);
+ }
}
while (1);
- gcc_assert (!deps_list_empty_p (INSN_BACK_DEPS (jump)));
+ gcc_assert (!sd_lists_empty_p (jump, SD_LIST_BACK));
}
/* Return the NOTE_INSN_BASIC_BLOCK of BB. */
@@ -4330,36 +4390,6 @@ bb_note (basic_block bb)
}
#ifdef ENABLE_CHECKING
-extern void debug_spec_status (ds_t);
-
-/* Dump information about the dependence status S. */
-void
-debug_spec_status (ds_t s)
-{
- FILE *f = stderr;
-
- if (s & BEGIN_DATA)
- fprintf (f, "BEGIN_DATA: %d; ", get_dep_weak (s, BEGIN_DATA));
- if (s & BE_IN_DATA)
- fprintf (f, "BE_IN_DATA: %d; ", get_dep_weak (s, BE_IN_DATA));
- if (s & BEGIN_CONTROL)
- fprintf (f, "BEGIN_CONTROL: %d; ", get_dep_weak (s, BEGIN_CONTROL));
- if (s & BE_IN_CONTROL)
- fprintf (f, "BE_IN_CONTROL: %d; ", get_dep_weak (s, BE_IN_CONTROL));
-
- if (s & HARD_DEP)
- fprintf (f, "HARD_DEP; ");
-
- if (s & DEP_TRUE)
- fprintf (f, "DEP_TRUE; ");
- if (s & DEP_ANTI)
- fprintf (f, "DEP_ANTI; ");
- if (s & DEP_OUTPUT)
- fprintf (f, "DEP_OUTPUT; ");
-
- fprintf (f, "\n");
-}
-
/* Helper function for check_cfg.
Return nonzero, if edge vector pointed to by EL has edge with TYPE in
its flags. */
--- sched-rgn.c (/local/trunk/gcc) (revision 962)
+++ sched-rgn.c (/local/fdl-pool/gcc) (revision 962)
@@ -277,7 +277,7 @@ static int is_exception_free (rtx, int,
static bool sets_likely_spilled (rtx);
static void sets_likely_spilled_1 (rtx, const_rtx, void *);
static void add_branch_dependences (rtx, rtx);
-static void compute_block_backward_dependences (int);
+static void compute_block_dependences (int);
static void init_regions (void);
static void schedule_region (int);
@@ -1697,11 +1697,12 @@ update_live (rtx insn, int src)
static void
set_spec_fed (rtx load_insn)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (load_insn))
- if (DEP_LINK_KIND (link) == REG_DEP_TRUE)
- FED_BY_SPEC_LOAD (DEP_LINK_CON (link)) = 1;
+ FOR_EACH_DEP (load_insn, SD_LIST_FORW, sd_it, dep)
+ if (DEP_TYPE (dep) == REG_DEP_TRUE)
+ FED_BY_SPEC_LOAD (DEP_CON (dep)) = 1;
}
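The conversion pattern in hunks like set_spec_fed above is repeated throughout the patch: dep_link_t walks over INSN_FORW_DEPS/INSN_BACK_DEPS become FOR_EACH_DEP loops over an sd_iterator. As a rough self-contained sketch of that iterator scheme (miniature hand-rolled types, not the actual sched-deps.c definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of the sd_iterator scheme: a dependency
   carries a producer, a consumer and a type; the iterator walks a
   flat list.  None of these are the real sched-deps.c types.  */

enum dep_type { REG_DEP_TRUE, REG_DEP_OUTPUT, REG_DEP_ANTI };

typedef struct dep { int pro, con; enum dep_type type; } dep_t;

typedef struct sd_iterator { const dep_t *deps; size_t n, i; } sd_iterator_def;

static sd_iterator_def
sd_iterator_start (const dep_t *deps, size_t n)
{
  sd_iterator_def it = { deps, n, 0 };
  return it;
}

static int
sd_iterator_cond (sd_iterator_def *it, const dep_t **dep)
{
  if (it->i < it->n)
    {
      *dep = &it->deps[it->i];
      return 1;
    }
  return 0;
}

static void
sd_iterator_next (sd_iterator_def *it)
{
  it->i++;
}

/* Analogue of FOR_EACH_DEP (insn, list, sd_it, dep) in the patch.  */
#define FOR_EACH_DEP(deps, n, it, dep) \
  for ((it) = sd_iterator_start ((deps), (n)); \
       sd_iterator_cond (&(it), &(dep)); \
       sd_iterator_next (&(it)))

/* Count DEF-USE (true) dependencies, mirroring loops such as
   set_spec_fed above.  */
int
count_true_deps (const dep_t *deps, size_t n)
{
  sd_iterator_def sd_it;
  const dep_t *dep;
  int count = 0;

  FOR_EACH_DEP (deps, n, sd_it, dep)
    if (dep->type == REG_DEP_TRUE)
      count++;
  return count;
}
```

The loop body stays structurally identical to the old FOR_EACH_DEP_LINK form, which is what keeps the mechanical conversion in these hunks low-risk.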
/* On the path from the insn to load_insn_bb, find a conditional
@@ -1710,18 +1711,19 @@ branch depending on insn, that guards th
static int
find_conditional_protection (rtx insn, int load_insn_bb)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
/* Iterate through DEF-USE forward dependences. */
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
{
- rtx next = DEP_LINK_CON (link);
+ rtx next = DEP_CON (dep);
if ((CONTAINING_RGN (BLOCK_NUM (next)) ==
CONTAINING_RGN (BB_TO_BLOCK (load_insn_bb)))
&& IS_REACHABLE (INSN_BB (next), load_insn_bb)
&& load_insn_bb != INSN_BB (next)
- && DEP_LINK_KIND (link) == REG_DEP_TRUE
+ && DEP_TYPE (dep) == REG_DEP_TRUE
&& (JUMP_P (next)
|| find_conditional_protection (next, load_insn_bb)))
return 1;
@@ -1746,14 +1748,15 @@ find_conditional_protection (rtx insn, i
static int
is_conditionally_protected (rtx load_insn, int bb_src, int bb_trg)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (load_insn))
+ FOR_EACH_DEP (load_insn, SD_LIST_BACK, sd_it, dep)
{
- rtx insn1 = DEP_LINK_PRO (link);
+ rtx insn1 = DEP_PRO (dep);
/* Must be a DEF-USE dependence upon non-branch. */
- if (DEP_LINK_KIND (link) != REG_DEP_TRUE
+ if (DEP_TYPE (dep) != REG_DEP_TRUE
|| JUMP_P (insn1))
continue;
@@ -1796,27 +1799,29 @@ is_conditionally_protected (rtx load_ins
static int
is_pfree (rtx load_insn, int bb_src, int bb_trg)
{
- dep_link_t back_link;
+ sd_iterator_def back_sd_it;
+ dep_t back_dep;
candidate *candp = candidate_table + bb_src;
if (candp->split_bbs.nr_members != 1)
/* Must have exactly one escape block. */
return 0;
- FOR_EACH_DEP_LINK (back_link, INSN_BACK_DEPS (load_insn))
+ FOR_EACH_DEP (load_insn, SD_LIST_BACK, back_sd_it, back_dep)
{
- rtx insn1 = DEP_LINK_PRO (back_link);
+ rtx insn1 = DEP_PRO (back_dep);
- if (DEP_LINK_KIND (back_link) == REG_DEP_TRUE)
+ if (DEP_TYPE (back_dep) == REG_DEP_TRUE)
+ /* Found a DEF-USE dependence (insn1, load_insn). */
{
- /* Found a DEF-USE dependence (insn1, load_insn). */
- dep_link_t fore_link;
+ sd_iterator_def fore_sd_it;
+ dep_t fore_dep;
- FOR_EACH_DEP_LINK (fore_link, INSN_FORW_DEPS (insn1))
+ FOR_EACH_DEP (insn1, SD_LIST_FORW, fore_sd_it, fore_dep)
{
- rtx insn2 = DEP_LINK_CON (fore_link);
+ rtx insn2 = DEP_CON (fore_dep);
- if (DEP_LINK_KIND (fore_link) == REG_DEP_TRUE)
+ if (DEP_TYPE (fore_dep) == REG_DEP_TRUE)
{
/* Found a DEF-USE dependence (insn1, insn2). */
if (haifa_classify_insn (insn2) != PFREE_CANDIDATE)
@@ -1849,7 +1854,7 @@ is_prisky (rtx load_insn, int bb_src, in
if (FED_BY_SPEC_LOAD (load_insn))
return 1;
- if (deps_list_empty_p (INSN_BACK_DEPS (load_insn)))
+ if (sd_lists_empty_p (load_insn, SD_LIST_BACK))
/* Dependence may 'hide' out of the region. */
return 1;
@@ -2081,7 +2086,8 @@ new_ready (rtx next, ds_t ts)
if (not_ex_free
/* We are here because is_exception_free () == false.
But we possibly can handle that with control speculation. */
- && current_sched_info->flags & DO_SPECULATION)
+ && (current_sched_info->flags & DO_SPECULATION)
+ && (spec_info->mask & BEGIN_CONTROL))
/* Here we got new control-speculative instruction. */
ts = set_dep_weak (ts, BEGIN_CONTROL, MAX_DEP_WEAK);
else
@@ -2263,8 +2269,7 @@ add_branch_dependences (rtx head, rtx ta
if (!NOTE_P (insn))
{
if (last != 0
- && (find_link_by_pro_in_deps_list (INSN_BACK_DEPS (last), insn)
- == NULL))
+ && sd_find_dep_between (insn, last, false) == NULL)
{
if (! sched_insns_conditions_mutex_p (last, insn))
add_dependence (last, insn, REG_DEP_ANTI);
@@ -2472,7 +2477,7 @@ propagate_deps (int bb, struct deps *pre
pred_deps->pending_write_mems = 0;
}
-/* Compute backward dependences inside bb. In a multiple blocks region:
+/* Compute dependences inside bb. In a multiple blocks region:
(1) a bb is analyzed after its predecessors, and (2) the lists in
effect at the end of bb (after analyzing for bb) are inherited by
bb's successors.
@@ -2490,7 +2495,7 @@ propagate_deps (int bb, struct deps *pre
similar, and the result is interblock dependences in the region. */
static void
-compute_block_backward_dependences (int bb)
+compute_block_dependences (int bb)
{
rtx head, tail;
struct deps tmp_deps;
@@ -2500,6 +2505,7 @@ compute_block_backward_dependences (int
/* Do the analysis for this block. */
gcc_assert (EBB_FIRST_BB (bb) == EBB_LAST_BB (bb));
get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
+
sched_analyze (&tmp_deps, head, tail);
add_branch_dependences (head, tail);
@@ -2508,6 +2514,21 @@ compute_block_backward_dependences (int
/* Free up the INSN_LISTs. */
free_deps (&tmp_deps);
+
+ if (targetm.sched.dependencies_evaluation_hook)
+ targetm.sched.dependencies_evaluation_hook (head, tail);
+}
+
+/* Free dependencies of instructions inside BB. */
+static void
+free_block_dependencies (int bb)
+{
+ rtx head;
+ rtx tail;
+
+ get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
+
+ sched_free_deps (head, tail, true);
}
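The new free_block_dependencies hook above is what enables the alloc_pool scheme described in the cover letter: dependency nodes freed after one region are reused by the next, so peak allocation tracks the largest region rather than the whole function. A hand-rolled free-list sketch of that idea (this is an illustration in the spirit of GCC's alloc pools, not the actual alloc-pool.c API):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct node { struct node *next; int payload; } node_t;

typedef struct pool
{
  node_t *free_list;
  int live;        /* nodes currently handed out */
  int allocated;   /* total malloc'ed, i.e. the peak demand */
} pool_t;

/* Take a node from the free list, falling back to malloc.  */
node_t *
pool_alloc (pool_t *p)
{
  node_t *n;
  if (p->free_list)
    {
      n = p->free_list;
      p->free_list = n->next;
    }
  else
    {
      n = malloc (sizeof *n);
      p->allocated++;
    }
  p->live++;
  return n;
}

/* Return a node to the free list for reuse.  */
void
pool_free (pool_t *p, node_t *n)
{
  n->next = p->free_list;
  p->free_list = n;
  p->live--;
}

/* "Schedule" a region with SIZE dependencies, then free them all,
   as free_block_dependencies does per block.  */
void
run_region (pool_t *p, int size)
{
  node_t **deps = malloc (size * sizeof *deps);
  for (int i = 0; i < size; i++)
    deps[i] = pool_alloc (p);
  for (int i = 0; i < size; i++)
    pool_free (p, deps[i]);
  free (deps);
}
```

After a sequence of regions, the pool is empty between regions and the total allocation equals the largest single region, which matches the memory behavior claimed for the patch and is what the deps_pools_are_empty_p assertion at the end of schedule_region checks.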
/* Remove all INSN_LISTs and EXPR_LISTs from the pending lists and add
@@ -2527,7 +2548,8 @@ free_pending_lists (void)
}
}
\f
-
+/* Print dependences for debugging starting from FROM_BB.
+ Callable from debugger. */
/* Print dependences for debugging starting from FROM_BB.
Callable from debugger. */
void
@@ -2567,8 +2589,6 @@ void debug_dependencies (rtx head, rtx t
for (insn = head; insn != next_tail; insn = NEXT_INSN (insn))
{
- dep_link_t link;
-
if (! INSN_P (insn))
{
int n;
@@ -2589,7 +2609,7 @@ void debug_dependencies (rtx head, rtx t
INSN_UID (insn),
INSN_CODE (insn),
BLOCK_NUM (insn),
- INSN_DEP_COUNT (insn),
+ sd_lists_size (insn, SD_LIST_BACK),
INSN_PRIORITY (insn),
insn_cost (insn));
@@ -2599,8 +2619,13 @@ void debug_dependencies (rtx head, rtx t
print_reservation (sched_dump, insn);
fprintf (sched_dump, "\t: ");
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
- fprintf (sched_dump, "%d ", INSN_UID (DEP_LINK_CON (link)));
+ {
+ sd_iterator_def sd_it;
+ dep_t dep;
+
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
+ fprintf (sched_dump, "%d ", INSN_UID (DEP_CON (dep)));
+ }
fprintf (sched_dump, "\n");
}
@@ -2658,23 +2683,9 @@ schedule_region (int rgn)
for (bb = 0; bb < current_nr_blocks; bb++)
init_deps (bb_deps + bb);
- /* Compute backward dependencies. */
+ /* Compute dependencies. */
for (bb = 0; bb < current_nr_blocks; bb++)
- compute_block_backward_dependences (bb);
-
- /* Compute forward dependencies. */
- for (bb = current_nr_blocks - 1; bb >= 0; bb--)
- {
- rtx head, tail;
-
- gcc_assert (EBB_FIRST_BB (bb) == EBB_LAST_BB (bb));
- get_ebb_head_tail (EBB_FIRST_BB (bb), EBB_LAST_BB (bb), &head, &tail);
-
- compute_forward_dependences (head, tail);
-
- if (targetm.sched.dependencies_evaluation_hook)
- targetm.sched.dependencies_evaluation_hook (head, tail);
- }
+ compute_block_dependences (bb);
free_pending_lists ();
@@ -2826,7 +2837,6 @@ schedule_region (int rgn)
/* Sanity check: verify that all region insns were scheduled. */
gcc_assert (sched_rgn_n_insns == rgn_n_insns);
-
/* Done with this region. */
if (current_nr_blocks > 1)
@@ -2837,6 +2847,13 @@ schedule_region (int rgn)
sbitmap_vector_free (ancestor_edges);
free (rgn_edges);
}
+
+ /* Free dependencies. */
+ for (bb = 0; bb < current_nr_blocks; ++bb)
+ free_block_dependencies (bb);
+
+ gcc_assert (haifa_recovery_bb_ever_added_p
+ || deps_pools_are_empty_p ());
}
/* Initialize data structures for region scheduling. */
--- sched-ebb.c (/local/trunk/gcc) (revision 962)
+++ sched-ebb.c (/local/fdl-pool/gcc) (revision 962)
@@ -303,24 +303,26 @@ static struct sched_info ebb_sched_info
static basic_block
earliest_block_with_similiar_load (basic_block last_block, rtx load_insn)
{
- dep_link_t back_link;
+ sd_iterator_def back_sd_it;
+ dep_t back_dep;
basic_block bb, earliest_block = NULL;
- FOR_EACH_DEP_LINK (back_link, INSN_BACK_DEPS (load_insn))
+ FOR_EACH_DEP (load_insn, SD_LIST_BACK, back_sd_it, back_dep)
{
- rtx insn1 = DEP_LINK_PRO (back_link);
+ rtx insn1 = DEP_PRO (back_dep);
- if (DEP_LINK_KIND (back_link) == REG_DEP_TRUE)
+ if (DEP_TYPE (back_dep) == REG_DEP_TRUE)
+ /* Found a DEF-USE dependence (insn1, load_insn). */
{
- /* Found a DEF-USE dependence (insn1, load_insn). */
- dep_link_t fore_link;
+ sd_iterator_def fore_sd_it;
+ dep_t fore_dep;
- FOR_EACH_DEP_LINK (fore_link, INSN_FORW_DEPS (insn1))
+ FOR_EACH_DEP (insn1, SD_LIST_FORW, fore_sd_it, fore_dep)
{
- rtx insn2 = DEP_LINK_CON (fore_link);
+ rtx insn2 = DEP_CON (fore_dep);
basic_block insn2_block = BLOCK_FOR_INSN (insn2);
- if (DEP_LINK_KIND (fore_link) == REG_DEP_TRUE)
+ if (DEP_TYPE (fore_dep) == REG_DEP_TRUE)
{
if (earliest_block != NULL
&& earliest_block->index < insn2_block->index)
@@ -395,23 +397,32 @@ add_deps_for_risky_insns (rtx head, rtx
rank. */
if (! sched_insns_conditions_mutex_p (insn, prev))
{
- if (!(current_sched_info->flags & DO_SPECULATION))
+ dep_def _dep, *dep = &_dep;
+
+ init_dep (dep, prev, insn, REG_DEP_ANTI);
+
+ if (!(current_sched_info->flags & USE_DEPS_LIST))
{
enum DEPS_ADJUST_RESULT res;
-
- res = add_or_update_back_dep (insn, prev,
- REG_DEP_ANTI, DEP_ANTI);
-
- if (res == DEP_CREATED)
- add_forw_dep (DEPS_LIST_FIRST (INSN_BACK_DEPS (insn)));
- else
- gcc_assert (res != DEP_CHANGED);
+
+ res = sd_add_or_update_dep (dep, false);
+
+ /* We can't change an existing dependency with
+ DEP_ANTI. */
+ gcc_assert (res != DEP_CHANGED);
}
else
- add_or_update_back_forw_dep (insn, prev, REG_DEP_ANTI,
- set_dep_weak (DEP_ANTI,
- BEGIN_CONTROL,
- MAX_DEP_WEAK));
+ {
+ if ((current_sched_info->flags & DO_SPECULATION)
+ && (spec_info->mask & BEGIN_CONTROL))
+ DEP_STATUS (dep) = set_dep_weak (DEP_ANTI, BEGIN_CONTROL,
+ MAX_DEP_WEAK);
+
+ sd_add_or_update_dep (dep, false);
+
+ /* Dep_status could have been changed.
+ No assertion here. */
+ }
}
break;
@@ -450,14 +461,11 @@ schedule_ebb (rtx head, rtx tail)
{
init_deps_global ();
- /* Compute backward dependencies. */
+ /* Compute dependencies. */
init_deps (&tmp_deps);
sched_analyze (&tmp_deps, head, tail);
free_deps (&tmp_deps);
- /* Compute forward dependencies. */
- compute_forward_dependences (head, tail);
-
add_deps_for_risky_insns (head, tail);
if (targetm.sched.dependencies_evaluation_hook)
@@ -510,8 +518,12 @@ schedule_ebb (rtx head, rtx tail)
/* Sanity check: verify that all region insns were scheduled. */
gcc_assert (sched_n_insns == n_insns);
- head = current_sched_info->head;
- tail = current_sched_info->tail;
+
+ /* Free dependencies. */
+ sched_free_deps (current_sched_info->head, current_sched_info->tail, true);
+
+ gcc_assert (haifa_recovery_bb_ever_added_p
+ || deps_pools_are_empty_p ());
if (EDGE_COUNT (last_bb->preds) == 0)
/* LAST_BB is unreachable. */
--- ddg.c (/local/trunk/gcc) (revision 962)
+++ ddg.c (/local/fdl-pool/gcc) (revision 962)
@@ -159,9 +159,9 @@ create_ddg_dep_from_intra_loop_link (ddg
gcc_assert (link);
/* Note: REG_DEP_ANTI applies to MEM ANTI_DEP as well!! */
- if (DEP_KIND (link) == REG_DEP_ANTI)
+ if (DEP_TYPE (link) == REG_DEP_ANTI)
t = ANTI_DEP;
- else if (DEP_KIND (link) == REG_DEP_OUTPUT)
+ else if (DEP_TYPE (link) == REG_DEP_OUTPUT)
t = OUTPUT_DEP;
/* We currently choose not to create certain anti-deps edges and
@@ -368,7 +368,6 @@ build_intra_loop_deps (ddg_ptr g)
/* Hold the dependency analysis state during dependency calculations. */
struct deps tmp_deps;
rtx head, tail;
- dep_link_t link;
/* Build the dependence information, using the sched_analyze function. */
init_deps_global ();
@@ -383,19 +382,19 @@ build_intra_loop_deps (ddg_ptr g)
for (i = 0; i < g->num_nodes; i++)
{
ddg_node_ptr dest_node = &g->nodes[i];
+ sd_iterator_def sd_it;
+ dep_t dep;
if (! INSN_P (dest_node->insn))
continue;
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (dest_node->insn))
+ FOR_EACH_DEP (dest_node->insn, SD_LIST_BACK, sd_it, dep)
{
- dep_t dep = DEP_LINK_DEP (link);
ddg_node_ptr src_node = get_node_of_insn (g, DEP_PRO (dep));
if (!src_node)
continue;
- add_forw_dep (link);
create_ddg_dep_from_intra_loop_link (g, src_node, dest_node, dep);
}
@@ -420,6 +419,9 @@ build_intra_loop_deps (ddg_ptr g)
/* Free the INSN_LISTs. */
finish_deps_global ();
free_deps (&tmp_deps);
+
+ /* Free dependencies. */
+ sched_free_deps (head, tail, false);
}
--- config/ia64/ia64.c (/local/trunk/gcc) (revision 962)
+++ config/ia64/ia64.c (/local/fdl-pool/gcc) (revision 962)
@@ -6341,28 +6341,37 @@ ia64_dependencies_evaluation_hook (rtx h
if (INSN_P (insn)
&& ia64_safe_itanium_class (insn) == ITANIUM_CLASS_IALU)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
+ bool has_mem_op_consumer_p = false;
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
{
enum attr_itanium_class c;
- if (DEP_LINK_KIND (link) != REG_DEP_TRUE)
+ if (DEP_TYPE (dep) != REG_DEP_TRUE)
continue;
- next = DEP_LINK_CON (link);
+ next = DEP_CON (dep);
c = ia64_safe_itanium_class (next);
if ((c == ITANIUM_CLASS_ST
|| c == ITANIUM_CLASS_STF)
&& ia64_st_address_bypass_p (insn, next))
- break;
+ {
+ has_mem_op_consumer_p = true;
+ break;
+ }
else if ((c == ITANIUM_CLASS_LD
|| c == ITANIUM_CLASS_FLD
|| c == ITANIUM_CLASS_FLDP)
&& ia64_ld_address_bypass_p (insn, next))
- break;
+ {
+ has_mem_op_consumer_p = true;
+ break;
+ }
}
- insn->call = link != 0;
+
+ insn->call = has_mem_op_consumer_p;
}
}
@@ -6639,14 +6648,15 @@ ia64_dfa_new_cycle (FILE *dump, int verb
if (c != ITANIUM_CLASS_MMMUL && c != ITANIUM_CLASS_MMSHF)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
int d = -1;
- FOR_EACH_DEP_LINK (link, INSN_BACK_DEPS (insn))
- if (DEP_LINK_KIND (link) == REG_DEP_TRUE)
+ FOR_EACH_DEP (insn, SD_LIST_BACK, sd_it, dep)
+ if (DEP_TYPE (dep) == REG_DEP_TRUE)
{
enum attr_itanium_class dep_class;
- rtx dep_insn = DEP_LINK_PRO (link);
+ rtx dep_insn = DEP_PRO (dep);
dep_class = ia64_safe_itanium_class (dep_insn);
if ((dep_class == ITANIUM_CLASS_MMMUL
@@ -7177,13 +7187,14 @@ ia64_gen_check (rtx insn, rtx label, boo
As long as patterns are unique for each instruction, this can be
accomplished by matching ORIG_PAT fields. */
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
int check_no = 0;
rtx orig_pat = ORIG_PAT (insn);
- FOR_EACH_DEP_LINK (link, INSN_RESOLVED_BACK_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_RES_BACK, sd_it, dep)
{
- rtx x = DEP_LINK_PRO (link);
+ rtx x = DEP_PRO (dep);
if (ORIG_PAT (x) == orig_pat)
check_no = spec_check_no[INSN_UID (x)];
--- config/rs6000/rs6000.c (/local/trunk/gcc) (revision 962)
+++ config/rs6000/rs6000.c (/local/fdl-pool/gcc) (revision 962)
@@ -17951,7 +17951,7 @@ rs6000_is_costly_dependence (dep_t dep,
if (rs6000_sched_costly_dep == true_store_to_load_dep_costly
&& is_load_insn (next)
&& is_store_insn (insn)
- && DEP_KIND (dep) == REG_DEP_TRUE)
+ && DEP_TYPE (dep) == REG_DEP_TRUE)
/* Prevent load after store in the same group if it is a true
dependence. */
return true;
@@ -18427,15 +18427,15 @@ is_costly_group (rtx *group_insns, rtx n
for (i = 0; i < issue_rate; i++)
{
- dep_link_t link;
+ sd_iterator_def sd_it;
+ dep_t dep;
rtx insn = group_insns[i];
if (!insn)
continue;
- FOR_EACH_DEP_LINK (link, INSN_FORW_DEPS (insn))
+ FOR_EACH_DEP (insn, SD_LIST_FORW, sd_it, dep)
{
- dep_t dep = DEP_LINK_DEP (link);
rtx next = DEP_CON (dep);
if (next == next_insn
^ permalink raw reply [flat|nested] 15+ messages in thread

* Re: [PING][PATCH]: Improvements to sched-deps.c interface
2007-08-14 6:18 ` Maxim Kuvyrkov
@ 2007-08-22 12:16 ` Roman Zippel
2007-08-22 14:04 ` Maxim Kuvyrkov
0 siblings, 1 reply; 15+ messages in thread
From: Roman Zippel @ 2007-08-22 12:16 UTC (permalink / raw)
To: Maxim Kuvyrkov
Cc: Vladimir Makarov, James E Wilson, Ayal Zaks, gcc-patches, David Edelsohn
Hi,
On Tue, 14 Aug 2007, Maxim Kuvyrkov wrote:
> I successfully tested (bootstrap + regtest) the patch on ia64-linux-gnu,
> ppc64-linux-gnu (normal and with '-fmodulo-sched
> -fmodulo-sched-allow-regmoves' set to 1 in common.opt) and x86_64-linux-gnu.
It breaks bootstrap on all ports where INSN_SCHEDULING isn't defined:
cc1: warnings being treated as errors
../../gcc/gcc/sched-deps.c: In function 'update_dep':
../../gcc/gcc/sched-deps.c:946: error: unused parameter 'sd_it'
../../gcc/gcc/sched-deps.c:946: error: unused parameter 'mem1'
../../gcc/gcc/sched-deps.c:946: error: unused parameter 'mem2'
../../gcc/gcc/sched-deps.c: In function 'sched_analyze_insn':
../../gcc/gcc/sched-deps.c:2212: error: implicit declaration of function 'change_spec_dep_to_hard'
make[3]: *** [sched-deps.o] Error 1
bye, Roman
* Re: [PING][PATCH]: Improvements to sched-deps.c interface
2007-08-22 12:16 ` Roman Zippel
@ 2007-08-22 14:04 ` Maxim Kuvyrkov
2007-08-25 15:00 ` Dorit Nuzman
2007-08-30 23:01 ` Vladimir Makarov
0 siblings, 2 replies; 15+ messages in thread
From: Maxim Kuvyrkov @ 2007-08-22 14:04 UTC (permalink / raw)
To: Roman Zippel
Cc: Vladimir Makarov, James E Wilson, Ayal Zaks, gcc-patches,
David Edelsohn, Dorit Nuzman
[-- Attachment #1: Type: text/plain, Size: 667 bytes --]
Roman Zippel wrote:
> Hi,
>
> On Tue, 14 Aug 2007, Maxim Kuvyrkov wrote:
>
>> I successfully tested (bootstrap + regtest) the patch on ia64-linux-gnu,
>> ppc64-linux-gnu (normal and with '-fmodulo-sched
>> -fmodulo-sched-allow-regmoves' set to 1 in common.opt) and x86_64-linux-gnu.
>
> It breaks bootstrap on all ports where INSN_SCHEDULING isn't defined:
Yes, sorry about that. I only cross-compiled to those targets.
The attached patch fixes the warnings. Jim, Vladimir, ok to commit?
Besides the warnings in sched-deps.c, I also spotted an 'uninitialized use'
warning in tree-vect-transform.c; the patch fixes that as well. Dorit, ok
to commit?
--
Maxim
[-- Attachment #2: fdl-pool-uninit.ChangeLog --]
[-- Type: text/plain, Size: 313 bytes --]
2007-08-22 Maxim Kuvyrkov <maxim@codesourcery.com>
* sched-deps.c (update_dep): Mark arguments with ATTRIBUTE_UNUSED.
(sched_analyze_insn): Don't postprocess speculative dependencies when
target has no scheduling.
* tree-vect-transform.c (vect_estimate_min_profitable_iters): Fix
'uninitialized' warning.
[-- Attachment #3: fdl-pool-uninit.patch --]
[-- Type: text/x-patch, Size: 1671 bytes --]
Index: gcc/sched-deps.c
===================================================================
--- gcc/sched-deps.c (revision 127704)
+++ gcc/sched-deps.c (working copy)
@@ -943,7 +943,9 @@ change_spec_dep_to_hard (sd_iterator_def
data-speculative dependence should be updated. */
static enum DEPS_ADJUST_RESULT
update_dep (dep_t dep, dep_t new_dep,
- sd_iterator_def sd_it, rtx mem1, rtx mem2)
+ sd_iterator_def sd_it ATTRIBUTE_UNUSED,
+ rtx mem1 ATTRIBUTE_UNUSED,
+ rtx mem2 ATTRIBUTE_UNUSED)
{
enum DEPS_ADJUST_RESULT res = DEP_PRESENT;
enum reg_note old_type = DEP_TYPE (dep);
@@ -2199,6 +2201,7 @@ sched_analyze_insn (struct deps *deps, r
if (SCHED_GROUP_P (insn))
fixup_sched_groups (insn);
+#ifdef INSN_SCHEDULING
if ((current_sched_info->flags & DO_SPECULATION)
&& !sched_insn_is_legitimate_for_speculation_p (insn, 0))
/* INSN has an internal dependency (e.g. r14 = [r14]) and thus cannot
@@ -2211,6 +2214,7 @@ sched_analyze_insn (struct deps *deps, r
sd_iterator_cond (&sd_it, &dep);)
change_spec_dep_to_hard (sd_it);
}
+#endif
}
/* Analyze every insn between HEAD and TAIL inclusive, creating backward
Index: gcc/tree-vect-transform.c
===================================================================
--- gcc/tree-vect-transform.c (revision 127704)
+++ gcc/tree-vect-transform.c (working copy)
@@ -124,7 +124,7 @@ vect_estimate_min_profitable_iters (loop
basic_block *bbs = LOOP_VINFO_BBS (loop_vinfo);
int nbbs = loop->num_nodes;
int byte_misalign;
- int innerloop_iters, factor;
+ int innerloop_iters = 0, factor;
/* Cost model disabled. */
if (!flag_vect_cost_model)
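The sched-deps.c portion of the fix above is the standard GCC idiom for parameters that are referenced only under some preprocessor configurations: tag them ATTRIBUTE_UNUSED so -Werror builds without INSN_SCHEDULING stay clean. A minimal compilable illustration (the fallback definition here is an assumption for non-GCC compilers; in GCC itself the macro comes from ansidecl.h, and the function below is a toy stand-in for update_dep, not the real one):

```c
#include <assert.h>

/* On GCC, expand to the unused attribute so -Wunused-parameter is
   silenced; expand to nothing elsewhere.  */
#ifdef __GNUC__
# define ATTRIBUTE_UNUSED __attribute__ ((__unused__))
#else
# define ATTRIBUTE_UNUSED
#endif

/* Toy stand-in: mem1/mem2 are consulted only when INSN_SCHEDULING
   is defined, so without the attribute they would trigger
   "error: unused parameter" under -Werror on such configurations.  */
static int
update_dep (int dep, int new_dep,
            int mem1 ATTRIBUTE_UNUSED,
            int mem2 ATTRIBUTE_UNUSED)
{
#ifdef INSN_SCHEDULING
  if (mem1 != mem2)
    return new_dep;
#endif
  return dep ? dep : new_dep;
}

int
toy_update (int dep, int new_dep)
{
  return update_dep (dep, new_dep, 0, 0);
}
```

The second hunk takes the complementary approach for code that cannot compile at all without the scheduler: wrapping the speculative-dependency postprocessing in #ifdef INSN_SCHEDULING, since change_spec_dep_to_hard is only declared there.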
* Re: [PING][PATCH]: Improvements to sched-deps.c interface
2007-08-22 14:04 ` Maxim Kuvyrkov
@ 2007-08-25 15:00 ` Dorit Nuzman
2007-08-30 23:01 ` Vladimir Makarov
1 sibling, 0 replies; 15+ messages in thread
From: Dorit Nuzman @ 2007-08-25 15:00 UTC (permalink / raw)
To: Maxim Kuvyrkov
Cc: Ayal Zaks, David Edelsohn, gcc-patches, Vladimir Makarov,
James E Wilson, Roman Zippel
Maxim Kuvyrkov <maxim@codesourcery.com> wrote on 22/08/2007 17:01:13:
> Roman Zippel wrote:
...
> Besides warnings in sched-deps.c I also spotted an 'uninitialized use'
> warning in tree-vect-transform.c, the patch also fixes that. Dorit, ok
> to commit?
>
this is ok
thanks,
dorit
> --
> Maxim
>
> 2007-08-22 Maxim Kuvyrkov <maxim@codesourcery.com>
>
> * sched-deps.c (update_dep): Mark arguments with ATTRIBUTE_UNUSED.
> (sched_analyze_insn): Don't postprocess speculative dependencies when
> target has no scheduling.
> * tree-vect-transform.c (vect_estimate_min_profitable_iters): Fix
> 'uninitialized' warning.
> Index: gcc/sched-deps.c
> ===================================================================
> --- gcc/sched-deps.c (revision 127704)
> +++ gcc/sched-deps.c (working copy)
> @@ -943,7 +943,9 @@ change_spec_dep_to_hard (sd_iterator_def
> data-speculative dependence should be updated. */
> static enum DEPS_ADJUST_RESULT
> update_dep (dep_t dep, dep_t new_dep,
> - sd_iterator_def sd_it, rtx mem1, rtx mem2)
> + sd_iterator_def sd_it ATTRIBUTE_UNUSED,
> + rtx mem1 ATTRIBUTE_UNUSED,
> + rtx mem2 ATTRIBUTE_UNUSED)
> {
> enum DEPS_ADJUST_RESULT res = DEP_PRESENT;
> enum reg_note old_type = DEP_TYPE (dep);
> @@ -2199,6 +2201,7 @@ sched_analyze_insn (struct deps *deps, r
> if (SCHED_GROUP_P (insn))
> fixup_sched_groups (insn);
>
> +#ifdef INSN_SCHEDULING
> if ((current_sched_info->flags & DO_SPECULATION)
> && !sched_insn_is_legitimate_for_speculation_p (insn, 0))
> /* INSN has an internal dependency (e.g. r14 = [r14]) and thus cannot
> @@ -2211,6 +2214,7 @@ sched_analyze_insn (struct deps *deps, r
> sd_iterator_cond (&sd_it, &dep);)
> change_spec_dep_to_hard (sd_it);
> }
> +#endif
> }
>
> /* Analyze every insn between HEAD and TAIL inclusive, creating backward
> Index: gcc/tree-vect-transform.c
> ===================================================================
> --- gcc/tree-vect-transform.c (revision 127704)
> +++ gcc/tree-vect-transform.c (working copy)
> @@ -124,7 +124,7 @@ vect_estimate_min_profitable_iters (loop
> basic_block *bbs = LOOP_VINFO_BBS (loop_vinfo);
> int nbbs = loop->num_nodes;
> int byte_misalign;
> - int innerloop_iters, factor;
> + int innerloop_iters = 0, factor;
>
> /* Cost model disabled. */
> if (!flag_vect_cost_model)
* Re: [PING][PATCH]: Improvements to sched-deps.c interface
2007-08-22 14:04 ` Maxim Kuvyrkov
2007-08-25 15:00 ` Dorit Nuzman
@ 2007-08-30 23:01 ` Vladimir Makarov
1 sibling, 0 replies; 15+ messages in thread
From: Vladimir Makarov @ 2007-08-30 23:01 UTC (permalink / raw)
To: Maxim Kuvyrkov; +Cc: Roman Zippel, James E Wilson, gcc-patches
Maxim Kuvyrkov wrote:
> The attached is patch that fixes warnings. Jim, Vladimir, ok to commit?
>
>
Sorry for the slow response (I was on vacation). Yes, the patch is ok
to commit.
> 2007-08-22 Maxim Kuvyrkov <maxim@codesourcery.com>
>
> * sched-deps.c (update_dep): Mark arguments with ATTRIBUTE_UNUSED.
> (sched_analyze_insn): Don't postprocess speculative dependencies when
> target has no scheduling.
> * tree-vect-transform.c (vect_estimate_min_profitable_iters): Fix
> 'uninitialized' warning.
>
>
Thread overview: 15+ messages
2007-04-13 7:00 [PATCH]: Improvements to sched-deps.c interface Maxim Kuvyrkov
2007-06-07 8:27 ` [PING][PATCH]: " Maxim Kuvyrkov
2007-06-14 21:43 ` Vladimir Makarov
2007-06-14 22:49 ` David Edelsohn
2007-06-15 11:51 ` Ayal Zaks
2007-06-15 13:10 ` Maxim Kuvyrkov
2007-06-15 22:08 ` Ayal Zaks
2007-06-17 20:02 ` Maxim Kuvyrkov
2007-07-01 18:20 ` Maxim Kuvyrkov
2007-07-02 22:00 ` Jim Wilson
2007-08-14 6:18 ` Maxim Kuvyrkov
2007-08-22 12:16 ` Roman Zippel
2007-08-22 14:04 ` Maxim Kuvyrkov
2007-08-25 15:00 ` Dorit Nuzman
2007-08-30 23:01 ` Vladimir Makarov