From: David Malcolm <dmalcolm@redhat.com>
To: gcc-patches@gcc.gnu.org
Cc: David Malcolm <dmalcolm@redhat.com>
Subject: [PATCH 085/236] ira: Use rtx_insn in various places
Date: Wed, 06 Aug 2014 17:40:00 -0000
Message-Id: <1407345815-14551-86-git-send-email-dmalcolm@redhat.com>
In-Reply-To: <1407345815-14551-1-git-send-email-dmalcolm@redhat.com>
References: <1407345815-14551-1-git-send-email-dmalcolm@redhat.com>
X-SW-Source: 2014-08/txt/msg00617.txt.bz2

gcc/
	* ira-int.h (struct ira_allocno_copy): Strengthen field "insn"
	from rtx to rtx_insn *insn.
	(ira_create_copy): Strengthen param "insn" from rtx to rtx_insn *.
	(ira_add_allocno_copy): Likewise.
	* ira-build.c (find_allocno_copy): Strengthen param "insn" from
	rtx to rtx_insn *.
	(ira_create_copy): Likewise.
	(ira_add_allocno_copy): Likewise.
	(create_bb_allocnos): Likewise for local "insn".
	* ira-conflicts.c (process_regs_for_copy): Likewise for param
	"insn".
	(process_reg_shuffles): Update NULL_RTX to NULL in invocation of
	process_regs_for_copy for rtx_insn * param.
	(add_insn_allocno_copies): Strengthen param "insn" from rtx to
	rtx_insn *insn.  Update NULL_RTX to NULL in invocation of
	process_regs_for_copy for rtx_insn * param.
	(add_copies): Strengthen local "insn" from rtx to rtx_insn *insn.
	* ira-costs.c (record_reg_classes): Likewise for param "insn".
	(record_operand_costs): Likewise.
	(scan_one_insn): Likewise for return type, and for param "insn".
	(process_bb_for_costs): Likewise for local "insn".
	(process_bb_node_for_hard_reg_moves): Likewise.
	* ira-emit.c (struct move): Likewise for field "insn".
	(create_move): Eliminate use of NULL_RTX when dealing with an
	rtx_insn *.
	(emit_move_list): Strengthen return type and locals "result",
	"insn" from rtx to rtx_insn *insn.
	(emit_moves): Likewise for locals "insns", "tmp".
	(ira_emit): Likewise for local "insn".
	* ira-lives.c (mark_hard_reg_early_clobbers): Likewise for param
	"insn".
	(find_call_crossed_cheap_reg): Likewise.
	(process_bb_node_lives): Likewise for local "insn".
	* ira.c (decrease_live_ranges_number): Likewise.
	(compute_regs_asm_clobbered): Likewise.
	(build_insn_chain): Likewise.
	(find_moveable_pseudos): Likewise, also locals "def_insn",
	"use_insn", "x".  Also strengthen local "closest_uses" from rtx *
	to rtx_insn **.  Add a checked cast when assigning from
	"closest_use" into closest_uses array in a region where we know
	it's a non-NULL insn.
	(interesting_dest_for_shprep): Strengthen param "insn" from rtx
	to rtx_insn *.
	(split_live_ranges_for_shrink_wrap): Likewise for locals "insn",
	"last_interesting_insn", "uin".
	(move_unallocated_pseudos): Likewise for locals "def_insn",
	"move_insn", "newinsn".
---
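Note (not part of the patch): the snippet below is a minimal, self-contained
sketch of the pattern this series applies.  The types and the as_a helper are
simplified stand-ins invented for illustration, not GCC's real rtl.h / is-a.h
definitions (in GCC itself the checked cast comes from is-a.h and rtx_insn is
declared in rtl.h).

  /* Illustrative sketch only: stand-in types, not GCC's real rtl.h or
     is-a.h.  Functions that only ever see insns take rtx_insn * instead
     of rtx, and a checked cast is used once at the boundary where a
     plain rtx is known to be an insn.  */

  #include <cassert>

  struct rtx_def { virtual ~rtx_def () {} };   /* stand-in for rtx_def  */
  struct rtx_insn : rtx_def { };               /* stand-in for rtx_insn */
  typedef rtx_def *rtx;                        /* stand-in for rtx      */

  /* Stand-in for GCC's as_a <T>: an assert-checked downcast.  */
  template <typename T>
  static T
  as_a (rtx x)
  {
    T result = dynamic_cast <T> (x);
    assert (result);
    return result;
  }

  /* "After" shape: the signature now guarantees INSN really is an insn.  */
  static void
  process_insn (rtx_insn *insn)
  {
    (void) insn;
  }

  int
  main ()
  {
    rtx x = new rtx_insn ();              /* a plain rtx that is in fact an insn */
    process_insn (as_a <rtx_insn *> (x)); /* convert once, at the boundary */
    delete x;
    return 0;
  }

The idea is that an "insn-only" requirement moves from an unchecked runtime
convention into the function signature, so the checked cast happens once at
the boundary instead of being implicit everywhere downstream.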
 gcc/ira-build.c     | 10 +++++-----
 gcc/ira-conflicts.c | 10 +++++-----
 gcc/ira-costs.c     | 13 +++++++------
 gcc/ira-emit.c      | 13 +++++++------
 gcc/ira-int.h       |  8 +++++---
 gcc/ira-lives.c     |  6 +++---
 gcc/ira.c           | 40 ++++++++++++++++++++++------------------
 7 files changed, 54 insertions(+), 46 deletions(-)

diff --git a/gcc/ira-build.c b/gcc/ira-build.c
index 000c25c..d2c8973 100644
--- a/gcc/ira-build.c
+++ b/gcc/ira-build.c
@@ -39,7 +39,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "ira-int.h"
 #include "emit-rtl.h"  /* FIXME: Can go away once crtl is moved to rtl.h.  */
 
-static ira_copy_t find_allocno_copy (ira_allocno_t, ira_allocno_t, rtx,
+static ira_copy_t find_allocno_copy (ira_allocno_t, ira_allocno_t, rtx_insn *,
                                      ira_loop_tree_node_t);
 
 /* The root of the loop tree corresponding to the all function.  */
@@ -1385,7 +1385,7 @@ initiate_copies (void)
 /* Return copy connecting A1 and A2 and originated from INSN of
    LOOP_TREE_NODE if any.  */
 static ira_copy_t
-find_allocno_copy (ira_allocno_t a1, ira_allocno_t a2, rtx insn,
+find_allocno_copy (ira_allocno_t a1, ira_allocno_t a2, rtx_insn *insn,
                    ira_loop_tree_node_t loop_tree_node)
 {
   ira_copy_t cp, next_cp;
@@ -1416,7 +1416,7 @@ find_allocno_copy (ira_allocno_t a1, ira_allocno_t a2, rtx insn,
    SECOND, FREQ, CONSTRAINT_P, and INSN.  */
 ira_copy_t
 ira_create_copy (ira_allocno_t first, ira_allocno_t second, int freq,
-                 bool constraint_p, rtx insn,
+                 bool constraint_p, rtx_insn *insn,
                  ira_loop_tree_node_t loop_tree_node)
 {
   ira_copy_t cp;
@@ -1493,7 +1493,7 @@ swap_allocno_copy_ends_if_necessary (ira_copy_t cp)
    LOOP_TREE_NODE.  */
 ira_copy_t
 ira_add_allocno_copy (ira_allocno_t first, ira_allocno_t second, int freq,
-                      bool constraint_p, rtx insn,
+                      bool constraint_p, rtx_insn *insn,
                       ira_loop_tree_node_t loop_tree_node)
 {
   ira_copy_t cp;
@@ -1927,7 +1927,7 @@ static void
 create_bb_allocnos (ira_loop_tree_node_t bb_node)
 {
   basic_block bb;
-  rtx insn;
+  rtx_insn *insn;
   unsigned int i;
   bitmap_iterator bi;
 
diff --git a/gcc/ira-conflicts.c b/gcc/ira-conflicts.c
index 8701622..011a865 100644
--- a/gcc/ira-conflicts.c
+++ b/gcc/ira-conflicts.c
@@ -244,7 +244,7 @@ go_through_subreg (rtx x, int *offset)
    FALSE.  */
 static bool
 process_regs_for_copy (rtx reg1, rtx reg2, bool constraint_p,
-                       rtx insn, int freq)
+                       rtx_insn *insn, int freq)
 {
   int allocno_preferenced_hard_regno, cost, index, offset1, offset2;
   bool only_regs_p;
@@ -345,7 +345,7 @@ process_reg_shuffles (rtx reg, int op_num, int freq, bool *bound_p)
           || bound_p[i])
         continue;
 
-      process_regs_for_copy (reg, another_reg, false, NULL_RTX, freq);
+      process_regs_for_copy (reg, another_reg, false, NULL, freq);
     }
 }
 
@@ -353,7 +353,7 @@ process_reg_shuffles (rtx reg, int op_num, int freq, bool *bound_p)
    it might be because INSN is a pseudo-register move or INSN is two
    operand insn.  */
 static void
-add_insn_allocno_copies (rtx insn)
+add_insn_allocno_copies (rtx_insn *insn)
 {
   rtx set, operand, dup;
   bool bound_p[MAX_RECOG_OPERANDS];
@@ -396,7 +396,7 @@ add_insn_allocno_copies (rtx insn)
                                          REG_P (operand)
                                          ? operand
                                          : SUBREG_REG (operand)) != NULL_RTX)
-              process_regs_for_copy (operand, dup, true, NULL_RTX,
+              process_regs_for_copy (operand, dup, true, NULL,
                                      freq);
           }
       }
@@ -421,7 +421,7 @@ static void
 add_copies (ira_loop_tree_node_t loop_tree_node)
 {
   basic_block bb;
-  rtx insn;
+  rtx_insn *insn;
 
   bb = loop_tree_node->bb;
   if (bb == NULL)
diff --git a/gcc/ira-costs.c b/gcc/ira-costs.c
index 4ecf75f..2709a23 100644
--- a/gcc/ira-costs.c
+++ b/gcc/ira-costs.c
@@ -402,7 +402,7 @@ copy_cost (rtx x, enum machine_mode mode, reg_class_t rclass, bool to_p,
 static void
 record_reg_classes (int n_alts, int n_ops, rtx *ops,
                     enum machine_mode *modes, const char **constraints,
-                    rtx insn, enum reg_class *pref)
+                    rtx_insn *insn, enum reg_class *pref)
 {
   int alt;
   int i, j, k;
@@ -1242,7 +1242,7 @@ record_address_regs (enum machine_mode mode, addr_space_t as, rtx x,
 
 /* Calculate the costs of insn operands.  */
 static void
-record_operand_costs (rtx insn, enum reg_class *pref)
+record_operand_costs (rtx_insn *insn, enum reg_class *pref)
 {
   const char *constraints[MAX_RECOG_OPERANDS];
   enum machine_mode modes[MAX_RECOG_OPERANDS];
@@ -1386,8 +1386,8 @@ record_operand_costs (rtx insn, enum reg_class *pref)
 /* Process one insn INSN.  Scan it and record each time it would save
    code to put a certain allocnos in a certain class.  Return the last
    insn processed, so that the scan can be continued from there.  */
-static rtx
-scan_one_insn (rtx insn)
+static rtx_insn *
+scan_one_insn (rtx_insn *insn)
 {
   enum rtx_code pat_code;
   rtx set, note;
@@ -1570,7 +1570,7 @@ print_pseudo_costs (FILE *f)
 static void
 process_bb_for_costs (basic_block bb)
 {
-  rtx insn;
+  rtx_insn *insn;
 
   frequency = REG_FREQ_FROM_BB (bb);
   if (frequency == 0)
@@ -1963,7 +1963,8 @@ process_bb_node_for_hard_reg_moves (ira_loop_tree_node_t loop_tree_node)
   ira_loop_tree_node_t curr_loop_tree_node;
   enum reg_class rclass;
   basic_block bb;
-  rtx insn, set, src, dst;
+  rtx_insn *insn;
+  rtx set, src, dst;
 
   bb = loop_tree_node->bb;
   if (bb == NULL)
diff --git a/gcc/ira-emit.c b/gcc/ira-emit.c
index 71dc6bc..445b386 100644
--- a/gcc/ira-emit.c
+++ b/gcc/ira-emit.c
@@ -172,7 +172,7 @@ struct move
      dependencies.  */
   move_t *deps;
   /* First insn generated for the move.  */
-  rtx insn;
+  rtx_insn *insn;
 };
 
 /* Array of moves (indexed by BB index) which should be put at the
@@ -196,7 +196,7 @@ create_move (ira_allocno_t to, ira_allocno_t from)
   move->to = to;
   move->from = from;
   move->next = NULL;
-  move->insn = NULL_RTX;
+  move->insn = NULL;
   move->visited_p = false;
   return move;
 }
@@ -893,12 +893,13 @@ modify_move_list (move_t list)
 
 /* Generate RTX move insns from the move list LIST.  This updates
    allocation cost using move execution frequency FREQ.  */
-static rtx
+static rtx_insn *
 emit_move_list (move_t list, int freq)
 {
   rtx to, from, dest;
   int to_regno, from_regno, cost, regno;
-  rtx result, insn, set;
+  rtx_insn *result, *insn;
+  rtx set;
   enum machine_mode mode;
   enum reg_class aclass;
@@ -984,7 +985,7 @@ emit_moves (void)
   basic_block bb;
   edge_iterator ei;
   edge e;
-  rtx insns, tmp;
+  rtx_insn *insns, *tmp;
 
   FOR_EACH_BB_FN (bb, cfun)
     {
@@ -1234,7 +1235,7 @@ void
 ira_emit (bool loops_p)
 {
   basic_block bb;
-  rtx insn;
+  rtx_insn *insn;
   edge_iterator ei;
   edge e;
   ira_allocno_t a;
diff --git a/gcc/ira-int.h b/gcc/ira-int.h
index 413c823..3d1a1d3 100644
--- a/gcc/ira-int.h
+++ b/gcc/ira-int.h
@@ -571,7 +571,7 @@ struct ira_allocno_copy
      for the copy created to remove register shuffle is NULL.  In
      last case the copy frequency is smaller than the corresponding
      insn execution frequency.  */
-  rtx insn;
+  rtx_insn *insn;
   /* All copies with the same allocno as FIRST are linked by the two
      following members.  */
   ira_copy_t prev_first_allocno_copy, next_first_allocno_copy;
@@ -1009,9 +1009,11 @@ extern void ira_add_allocno_pref (ira_allocno_t, int, int);
 extern void ira_remove_pref (ira_pref_t);
 extern void ira_remove_allocno_prefs (ira_allocno_t);
 extern ira_copy_t ira_create_copy (ira_allocno_t, ira_allocno_t,
-                                   int, bool, rtx, ira_loop_tree_node_t);
+                                   int, bool, rtx_insn *,
+                                   ira_loop_tree_node_t);
 extern ira_copy_t ira_add_allocno_copy (ira_allocno_t, ira_allocno_t, int,
-                                        bool, rtx, ira_loop_tree_node_t);
+                                        bool, rtx_insn *,
+                                        ira_loop_tree_node_t);
 
 extern int *ira_allocate_cost_vector (reg_class_t);
 extern void ira_free_cost_vector (int *, reg_class_t);
diff --git a/gcc/ira-lives.c b/gcc/ira-lives.c
index 0dccee3..4875399 100644
--- a/gcc/ira-lives.c
+++ b/gcc/ira-lives.c
@@ -709,7 +709,7 @@ make_early_clobber_and_input_conflicts (void)
 /* Mark early clobber hard registers of the current INSN as live (if
    LIVE_P) or dead.  Return true if there are such registers.  */
 static bool
-mark_hard_reg_early_clobbers (rtx insn, bool live_p)
+mark_hard_reg_early_clobbers (rtx_insn *insn, bool live_p)
 {
   df_ref *def_rec;
   bool set_p = false;
@@ -1049,7 +1049,7 @@ bb_has_abnormal_call_pred (basic_block bb)
    we find a SET rtx that we can use to deduce that a register can be cheaply
    caller-saved.  Return such a register, or NULL_RTX if none is found.  */
 static rtx
-find_call_crossed_cheap_reg (rtx insn)
+find_call_crossed_cheap_reg (rtx_insn *insn)
 {
   rtx cheap_reg = NULL_RTX;
   rtx exp = CALL_INSN_FUNCTION_USAGE (insn);
@@ -1116,7 +1116,7 @@ process_bb_node_lives (ira_loop_tree_node_t loop_tree_node)
   int i, freq;
   unsigned int j;
   basic_block bb;
-  rtx insn;
+  rtx_insn *insn;
   bitmap_iterator bi;
   bitmap reg_live_out;
   unsigned int px;
diff --git a/gcc/ira.c b/gcc/ira.c
index bad6759..71a8fc3 100644
--- a/gcc/ira.c
+++ b/gcc/ira.c
@@ -2097,7 +2097,8 @@ static void
 decrease_live_ranges_number (void)
 {
   basic_block bb;
-  rtx insn, set, src, dest, dest_death, p, q, note;
+  rtx_insn *insn;
+  rtx set, src, dest, dest_death, p, q, note;
   int sregno, dregno;
 
   if (! flag_expensive_optimizations)
@@ -2331,7 +2332,7 @@ compute_regs_asm_clobbered (void)
 
   FOR_EACH_BB_FN (bb, cfun)
     {
-      rtx insn;
+      rtx_insn *insn;
       FOR_BB_INSNS_REVERSE (bb, insn)
         {
           df_ref *def_rec;
@@ -4101,7 +4102,7 @@ build_insn_chain (void)
   FOR_EACH_BB_REVERSE_FN (bb, cfun)
     {
       bitmap_iterator bi;
-      rtx insn;
+      rtx_insn *insn;
 
       CLEAR_REG_SET (live_relevant_regs);
       bitmap_clear (live_subregs_used);
@@ -4476,7 +4477,7 @@ find_moveable_pseudos (void)
   int max_uid = get_max_uid ();
   basic_block bb;
   int *uid_luid = XNEWVEC (int, max_uid);
-  rtx *closest_uses = XNEWVEC (rtx, max_regs);
+  rtx_insn **closest_uses = XNEWVEC (rtx_insn *, max_regs);
   /* A set of registers which are live but not modified throughout a block.  */
   bitmap_head *bb_transp_live = XNEWVEC (bitmap_head,
                                          last_basic_block_for_fn (cfun));
@@ -4505,7 +4506,7 @@ find_moveable_pseudos (void)
   bitmap_initialize (&unusable_as_input, 0);
   FOR_EACH_BB_FN (bb, cfun)
     {
-      rtx insn;
+      rtx_insn *insn;
       bitmap transp = bb_transp_live + bb->index;
       bitmap moveable = bb_moveable_reg_sets + bb->index;
       bitmap local = bb_local + bb->index;
@@ -4569,12 +4570,13 @@ find_moveable_pseudos (void)
   FOR_EACH_BB_FN (bb, cfun)
     {
       bitmap local = bb_local + bb->index;
-      rtx insn;
+      rtx_insn *insn;
 
       FOR_BB_INSNS (bb, insn)
         if (NONDEBUG_INSN_P (insn))
           {
-            rtx def_insn, closest_use, note;
+            rtx_insn *def_insn;
+            rtx closest_use, note;
             df_ref *def_rec, def, use;
             unsigned regno;
             bool all_dominated, all_local;
@@ -4617,7 +4619,7 @@ find_moveable_pseudos (void)
             closest_use = NULL_RTX;
             for (; use; use = DF_REF_NEXT_REG (use))
               {
-                rtx insn;
+                rtx_insn *insn;
                 if (!DF_REF_INSN_INFO (use))
                   {
                     all_dominated = false;
@@ -4669,7 +4671,9 @@ find_moveable_pseudos (void)
               }
 #endif
             bitmap_set_bit (&interesting, regno);
-            closest_uses[regno] = closest_use;
+            /* If we get here, we know closest_use is a non-NULL insn
+               (as opposed to const_0_rtx).  */
+            closest_uses[regno] = as_a <rtx_insn *> (closest_use);
 
             if (dump_file && (all_local || all_dominated))
               {
@@ -4688,13 +4692,13 @@ find_moveable_pseudos (void)
   EXECUTE_IF_SET_IN_BITMAP (&interesting, 0, i, bi)
     {
       df_ref def = DF_REG_DEF_CHAIN (i);
-      rtx def_insn = DF_REF_INSN (def);
+      rtx_insn *def_insn = DF_REF_INSN (def);
       basic_block def_block = BLOCK_FOR_INSN (def_insn);
       bitmap def_bb_local = bb_local + def_block->index;
       bitmap def_bb_moveable = bb_moveable_reg_sets + def_block->index;
       bitmap def_bb_transp = bb_transp_live + def_block->index;
       bool local_to_bb_p = bitmap_bit_p (def_bb_local, i);
-      rtx use_insn = closest_uses[i];
+      rtx_insn *use_insn = closest_uses[i];
       df_ref *def_insn_use_rec = DF_INSN_USES (def_insn);
       bool all_ok = true;
       bool all_transp = true;
@@ -4748,7 +4752,7 @@ find_moveable_pseudos (void)
             {
               if (modified_between_p (DF_REF_REG (use), def_insn, use_insn))
                 {
-                  rtx x = NEXT_INSN (def_insn);
+                  rtx_insn *x = NEXT_INSN (def_insn);
                   while (!modified_in_p (DF_REF_REG (use), x))
                     {
                       gcc_assert (x != use_insn);
@@ -4843,7 +4847,7 @@ interesting_dest_for_shprep_1 (rtx set, basic_block call_dom)
    Otherwise return NULL.  */
 static rtx
-interesting_dest_for_shprep (rtx insn, basic_block call_dom)
+interesting_dest_for_shprep (rtx_insn *insn, basic_block call_dom)
 {
   if (!INSN_P (insn))
     return NULL;
@@ -4881,7 +4885,7 @@ split_live_ranges_for_shrink_wrap (void)
 {
   basic_block bb, call_dom = NULL;
   basic_block first = single_succ (ENTRY_BLOCK_PTR_FOR_FN (cfun));
-  rtx insn, last_interesting_insn = NULL;
+  rtx_insn *insn, *last_interesting_insn = NULL;
   bitmap_head need_new, reachable;
   vec<basic_block> queue;
 
@@ -4998,7 +5002,7 @@ split_live_ranges_for_shrink_wrap (void)
           df_ref use, next;
           for (use = DF_REG_USE_CHAIN (REGNO (dest)); use; use = next)
             {
-              rtx uin = DF_REF_INSN (use);
+              rtx_insn *uin = DF_REF_INSN (use);
               next = DF_REF_NEXT_REG (use);
 
               basic_block ubb = BLOCK_FOR_INSN (uin);
@@ -5044,12 +5048,12 @@ move_unallocated_pseudos (void)
       {
         int idx = i - first_moveable_pseudo;
         rtx other_reg = pseudo_replaced_reg[idx];
-        rtx def_insn = DF_REF_INSN (DF_REG_DEF_CHAIN (i));
+        rtx_insn *def_insn = DF_REF_INSN (DF_REG_DEF_CHAIN (i));
         /* The use must follow all definitions of OTHER_REG, so we can
            insert the new definition immediately after any of them.  */
         df_ref other_def = DF_REG_DEF_CHAIN (REGNO (other_reg));
-        rtx move_insn = DF_REF_INSN (other_def);
-        rtx newinsn = emit_insn_after (PATTERN (def_insn), move_insn);
+        rtx_insn *move_insn = DF_REF_INSN (other_def);
+        rtx_insn *newinsn = emit_insn_after (PATTERN (def_insn), move_insn);
         rtx set;
         int success;
 
-- 
1.8.5.3