From: Martin Jambor
To: GCC Patches
Cc: Jan Hubicka
Subject: [PR 87615] Limit AA walking in hopefully all of IPA summary generation
Date: Fri, 07 Dec 2018 14:21:00 -0000

Hi,

the patch below adds alias analysis (AA) walk limiting to all
walk_aliased_vdefs calls invoked as part of IPA-CP and ipa-fnsummary
summary generation.  Eventually the two should be unified into just one
phase, but that is something for stage1.  This patch gives them both
independent budgets, each initialized to
PARAM_VALUE (PARAM_IPA_MAX_AA_STEPS) - as a consequence, the real limit
is twice the given value.

I'm not sure whether the ipa-prop AA limiting was written before
walk_aliased_vdefs had its limit parameter added, but I changed the code
to use it and also modified the budget counter to go from the parameter
value down to zero, as opposed to incrementing it and checking manually
before each call into AA.  I always pass the current budget + 1 as the
walk_aliased_vdefs limit so that I do not have to check whether it is
zero, which would mean no limiting at all, and also so that loads from a
parameter right at the top of a function get detected as unmodified even
if some previous analysis already used up all the budget.

As far as the testcase for PR 87615 is concerned, this patch makes the
compile time go down from approx. 340 seconds to about 160 seconds on my
desktop, and IPA-CP gets zeros in -ftime-report, but we still do not
reach the 40 second compile time we get with -fno-ipa-cp -fno-inline.

Bootstrapped and tested on x86_64-linux.  OK for trunk?

Thanks,

Martin


2018-12-06  Martin Jambor

	PR ipa/87615
	* ipa-prop.h (struct ipa_func_body_info): Replaced field aa_walked
	with aa_walk_budget.
	* cgraph.h (ipa_polymorphic_call_context::get_dynamic_type): Add
	aa_walk_budget_p parameter.
	* ipa-fnsummary.c (unmodified_parm_1): New parameter fbi.  Limit AA
	walk.  Updated all callers.
	(unmodified_parm): New parameter fbi, pass it to unmodified_parm_1.
	(eliminated_by_inlining_prob): New parameter fbi, pass it on to
	unmodified_parm.
	(will_be_nonconstant_expr_predicate): New parameter fbi, removed
	parameter info.  Extract info from fbi.  Pass fbi to recursive calls
	and to unmodified_parm.
	(phi_result_unknown_predicate): New parameter fbi, removed parameter
	info, updated call to will_be_nonconstant_expr_predicate.
	(param_change_prob): New parameter fbi, limit AA walking.
	(analyze_function_body): Initialize aa_walk_budget in fbi.
	Update calls to various above functions.
	* ipa-polymorphic-call.c (get_dynamic_type): Add aa_walk_budget_p
	parameter.  Use it to limit AA walking.
	* ipa-prop.c (detect_type_change_from_memory_writes): New parameter
	fbi, limit AA walk.
	(detect_type_change): New parameter fbi, pass it on to
	detect_type_change_from_memory_writes.
	(detect_type_change_ssa): Likewise.
	(aa_overwalked): Removed.
	(parm_preserved_before_stmt_p): Assume fbi is never NULL, streamline
	accordingly, adjust to the new AA limiting scheme.
	(parm_ref_data_preserved_p): Likewise.
	(ipa_compute_jump_functions_for_edge): Adjust call to
	get_dynamic_type.
	(ipa_analyze_call_uses): Likewise.
	(ipa_analyze_virtual_call_uses): Pass fbi to detect_type_change_ssa.
	(ipa_analyze_node): Initialize aa_walk_budget.
	(ipcp_transform_function): Likewise.
	* tree-ssa-sccvn.c (eliminate_dom_walker::eliminate_stmt): Update
	call to get_dynamic_type.
---
 gcc/cgraph.h               |   2 +-
 gcc/ipa-fnsummary.c        | 111 +++++++++++++++++++-------------
 gcc/ipa-polymorphic-call.c |  28 ++++++--
 gcc/ipa-prop.c             | 128 +++++++++++++++++--------------------
 gcc/ipa-prop.h             |   5 +-
 gcc/tree-ssa-sccvn.c       |   2 +-
 6 files changed, 154 insertions(+), 122 deletions(-)

diff --git a/gcc/cgraph.h b/gcc/cgraph.h
index b8e23cc338a..028899e44b1 100644
--- a/gcc/cgraph.h
+++ b/gcc/cgraph.h
@@ -1548,7 +1548,7 @@ public:
   /* Look for vtable stores or constructor calls to work out dynamic type
      of memory location.  */
-  bool get_dynamic_type (tree, tree, tree, gimple *);
+  bool get_dynamic_type (tree, tree, tree, gimple *, unsigned *);
 
   /* Make context non-speculative.  */
   void clear_speculation ();
diff --git a/gcc/ipa-fnsummary.c b/gcc/ipa-fnsummary.c
index 23b7821dcc1..f4cc4df5cda 100644
--- a/gcc/ipa-fnsummary.c
+++ b/gcc/ipa-fnsummary.c
@@ -941,7 +941,8 @@ mark_modified (ao_ref *ao ATTRIBUTE_UNUSED, tree vdef ATTRIBUTE_UNUSED,
    PARM_DECL) will be stored to *SIZE_P in that case too.  */
 
 static tree
-unmodified_parm_1 (gimple *stmt, tree op, HOST_WIDE_INT *size_p)
+unmodified_parm_1 (ipa_func_body_info *fbi, gimple *stmt, tree op,
+		   HOST_WIDE_INT *size_p)
 {
   /* SSA_NAME referring to parm default def?  */
   if (TREE_CODE (op) == SSA_NAME
@@ -959,8 +960,14 @@ unmodified_parm_1 (gimple *stmt, tree op, HOST_WIDE_INT *size_p)
 
       ao_ref refd;
       ao_ref_init (&refd, op);
-      walk_aliased_vdefs (&refd, gimple_vuse (stmt), mark_modified, &modified,
-			  NULL);
+      int walked = walk_aliased_vdefs (&refd, gimple_vuse (stmt),
+				       mark_modified, &modified, NULL, NULL,
+				       fbi->aa_walk_budget + 1);
+      if (walked < 0)
+	{
+	  fbi->aa_walk_budget = 0;
+	  return NULL_TREE;
+	}
       if (!modified)
 	{
 	  if (size_p)
@@ -977,16 +984,17 @@ unmodified_parm_1 (gimple *stmt, tree op, HOST_WIDE_INT *size_p)
    stored to *SIZE_P in that case too.  */
 
 static tree
-unmodified_parm (gimple *stmt, tree op, HOST_WIDE_INT *size_p)
+unmodified_parm (ipa_func_body_info *fbi, gimple *stmt, tree op,
+		 HOST_WIDE_INT *size_p)
 {
-  tree res = unmodified_parm_1 (stmt, op, size_p);
+  tree res = unmodified_parm_1 (fbi, stmt, op, size_p);
   if (res)
     return res;
 
   if (TREE_CODE (op) == SSA_NAME
       && !SSA_NAME_IS_DEFAULT_DEF (op)
       && gimple_assign_single_p (SSA_NAME_DEF_STMT (op)))
-    return unmodified_parm (SSA_NAME_DEF_STMT (op),
+    return unmodified_parm (fbi, SSA_NAME_DEF_STMT (op),
			    gimple_assign_rhs1 (SSA_NAME_DEF_STMT (op)),
			    size_p);
   return NULL_TREE;
@@ -1005,7 +1013,7 @@ unmodified_parm_or_parm_agg_item (struct ipa_func_body_info *fbi,
				  HOST_WIDE_INT *size_p,
				  struct agg_position_info *aggpos)
 {
-  tree res = unmodified_parm_1 (stmt, op, size_p);
+  tree res = unmodified_parm_1 (fbi, stmt, op, size_p);
 
   gcc_checking_assert (aggpos);
   if (res)
@@ -1044,7 +1052,7 @@ unmodified_parm_or_parm_agg_item (struct ipa_func_body_info *fbi,
    penalty wrappers.  */
 
 static int
-eliminated_by_inlining_prob (gimple *stmt)
+eliminated_by_inlining_prob (ipa_func_body_info *fbi, gimple *stmt)
 {
   enum gimple_code code = gimple_code (stmt);
   enum tree_code rhs_code;
@@ -1084,7 +1092,7 @@ eliminated_by_inlining_prob (gimple *stmt)
	inner_lhs = lhs;
 
       /* Reads of parameter are expected to be free.  */
-      if (unmodified_parm (stmt, inner_rhs, NULL))
+      if (unmodified_parm (fbi, stmt, inner_rhs, NULL))
	rhs_free = true;
       /* Match expressions of form &this->field. Those will most likely
	  combine with something upstream after inlining.  */
@@ -1094,7 +1102,8 @@ eliminated_by_inlining_prob (gimple *stmt)
	  if (TREE_CODE (op) == PARM_DECL)
	    rhs_free = true;
	  else if (TREE_CODE (op) == MEM_REF
-		   && unmodified_parm (stmt, TREE_OPERAND (op, 0), NULL))
+		   && unmodified_parm (fbi, stmt, TREE_OPERAND (op, 0),
+				       NULL))
	    rhs_free = true;
	}
 
@@ -1107,7 +1116,7 @@ eliminated_by_inlining_prob (gimple *stmt)
       /* Reads of parameters passed by reference
	  expected to be free (i.e. optimized out after inlining).  */
       if (TREE_CODE (inner_rhs) == MEM_REF
-	  && unmodified_parm (stmt, TREE_OPERAND (inner_rhs, 0), NULL))
+	  && unmodified_parm (fbi, stmt, TREE_OPERAND (inner_rhs, 0), NULL))
	rhs_free = true;
 
       /* Copying parameter passed by reference into gimple register is
@@ -1148,7 +1157,8 @@ eliminated_by_inlining_prob (gimple *stmt)
       if (TREE_CODE (inner_lhs) == PARM_DECL
	  || TREE_CODE (inner_lhs) == RESULT_DECL
	  || (TREE_CODE (inner_lhs) == MEM_REF
-	      && (unmodified_parm (stmt, TREE_OPERAND (inner_lhs, 0), NULL)
+	      && (unmodified_parm (fbi, stmt, TREE_OPERAND (inner_lhs, 0),
+				   NULL)
		  || (TREE_CODE (TREE_OPERAND (inner_lhs, 0)) == SSA_NAME
		      && SSA_NAME_VAR (TREE_OPERAND (inner_lhs, 0))
		      && TREE_CODE (SSA_NAME_VAR (TREE_OPERAND
@@ -1396,7 +1406,7 @@ compute_bb_predicates (struct ipa_func_body_info *fbi,
    a compile time constant.  */
 
 static predicate
-will_be_nonconstant_expr_predicate (struct ipa_node_params *info,
+will_be_nonconstant_expr_predicate (ipa_func_body_info *fbi,
				    struct ipa_fn_summary *summary,
				    tree expr,
				    vec<predicate> nonconstant_names)
@@ -1408,8 +1418,8 @@ will_be_nonconstant_expr_predicate (struct ipa_node_params *info,
   while (UNARY_CLASS_P (expr))
     expr = TREE_OPERAND (expr, 0);
 
-  parm = unmodified_parm (NULL, expr, &size);
-  if (parm && (index = ipa_get_param_decl_index (info, parm)) >= 0)
+  parm = unmodified_parm (fbi, NULL, expr, &size);
+  if (parm && (index = ipa_get_param_decl_index (fbi->info, parm)) >= 0)
     return add_condition (summary, index, size, NULL, predicate::changed,
			   NULL_TREE);
   if (is_gimple_min_invariant (expr))
@@ -1418,34 +1428,36 @@ will_be_nonconstant_expr_predicate (struct ipa_node_params *info,
     return nonconstant_names[SSA_NAME_VERSION (expr)];
   if (BINARY_CLASS_P (expr) || COMPARISON_CLASS_P (expr))
     {
-      predicate p1 = will_be_nonconstant_expr_predicate
-	 (info, summary, TREE_OPERAND (expr, 0),
-	  nonconstant_names);
+      predicate p1
+	= will_be_nonconstant_expr_predicate (fbi, summary,
+					      TREE_OPERAND (expr, 0),
+					      nonconstant_names);
       if (p1 == true)
	return p1;
 
-      predicate p2;
-      p2 = will_be_nonconstant_expr_predicate (info, summary,
-					       TREE_OPERAND (expr, 1),
-					       nonconstant_names);
+      predicate p2
+	= will_be_nonconstant_expr_predicate (fbi, summary,
+					      TREE_OPERAND (expr, 1),
+					      nonconstant_names);
       return p1.or_with (summary->conds, p2);
     }
   else if (TREE_CODE (expr) == COND_EXPR)
     {
-      predicate p1 = will_be_nonconstant_expr_predicate
-	 (info, summary, TREE_OPERAND (expr, 0),
-	  nonconstant_names);
+      predicate p1
+	= will_be_nonconstant_expr_predicate (fbi, summary,
+					      TREE_OPERAND (expr, 0),
+					      nonconstant_names);
       if (p1 == true)
	return p1;
 
-      predicate p2;
-      p2 = will_be_nonconstant_expr_predicate (info, summary,
-					       TREE_OPERAND (expr, 1),
-					       nonconstant_names);
+      predicate p2
+	= will_be_nonconstant_expr_predicate (fbi, summary,
+					      TREE_OPERAND (expr, 1),
+					      nonconstant_names);
       if (p2 == true)
	return p2;
       p1 = p1.or_with (summary->conds, p2);
-      p2 = will_be_nonconstant_expr_predicate (info, summary,
+      p2 = will_be_nonconstant_expr_predicate (fbi, summary,
					       TREE_OPERAND (expr, 2),
					       nonconstant_names);
       return p2.or_with (summary->conds, p1);
@@ -1511,7 +1523,7 @@ will_be_nonconstant_predicate (struct ipa_func_body_info *fbi,
      adding conditionals.  */
   FOR_EACH_SSA_TREE_OPERAND (use, stmt, iter, SSA_OP_USE)
     {
-      tree parm = unmodified_parm (stmt, use, NULL);
+      tree parm = unmodified_parm (fbi, stmt, use, NULL);
       /* For arguments we can build a condition.  */
       if (parm && ipa_get_param_decl_index (fbi->info, parm) >= 0)
	continue;
@@ -1533,7 +1545,7 @@ will_be_nonconstant_predicate (struct ipa_func_body_info *fbi,
   FOR_EACH_SSA_TREE_OPERAND (use, stmt, iter, SSA_OP_USE)
     {
       HOST_WIDE_INT size;
-      tree parm = unmodified_parm (stmt, use, &size);
+      tree parm = unmodified_parm (fbi, stmt, use, &size);
       int index;
 
       if (parm && (index = ipa_get_param_decl_index (fbi->info, parm)) >= 0)
@@ -1620,7 +1632,7 @@ record_modified (ao_ref *ao ATTRIBUTE_UNUSED, tree vdef, void *data)
    ought to be REG_BR_PROB_BASE / estimated_iters.  */
 
 static int
-param_change_prob (gimple *stmt, int i)
+param_change_prob (ipa_func_body_info *fbi, gimple *stmt, int i)
 {
   tree op = gimple_call_arg (stmt, i);
   basic_block bb = gimple_bb (stmt);
@@ -1680,12 +1692,18 @@ param_change_prob (gimple *stmt, int i)
       info.op = op;
       info.stmt = stmt;
       info.bb_set = BITMAP_ALLOC (NULL);
-      walk_aliased_vdefs (&refd, gimple_vuse (stmt), record_modified, &info,
-			  NULL);
-      if (bitmap_bit_p (info.bb_set, bb->index))
+      int walked
	= walk_aliased_vdefs (&refd, gimple_vuse (stmt), record_modified, &info,
			      NULL, NULL, fbi->aa_walk_budget);
+      if (walked < 0 || bitmap_bit_p (info.bb_set, bb->index))
	{
	  if (dump_file)
-	    fprintf (dump_file, "     Set in same BB as used.\n");
+	    {
+	      if (walked < 0)
+		fprintf (dump_file, "     Ran out of AA walking budget.\n");
+	      else
+		fprintf (dump_file, "     Set in same BB as used.\n");
+	    }
	  BITMAP_FREE (info.bb_set);
	  return REG_BR_PROB_BASE;
	}
@@ -1719,7 +1737,7 @@ param_change_prob (gimple *stmt, int i)
    return true and store the pointer the predicate in *P.  */
 
 static bool
-phi_result_unknown_predicate (struct ipa_node_params *info,
+phi_result_unknown_predicate (ipa_func_body_info *fbi,
			      ipa_fn_summary *summary, basic_block bb,
			      predicate *p,
			      vec<predicate> nonconstant_names)
@@ -1764,7 +1782,7 @@ phi_result_unknown_predicate (struct ipa_node_params *info,
       || !is_gimple_ip_invariant (gimple_cond_rhs (stmt)))
     return false;
 
-  *p = will_be_nonconstant_expr_predicate (info, summary,
+  *p = will_be_nonconstant_expr_predicate (fbi, summary,
					    gimple_cond_lhs (stmt),
					    nonconstant_names);
   if (*p == true)
@@ -2016,7 +2034,9 @@ analyze_function_body (struct cgraph_node *node, bool early)
       fbi.info = IPA_NODE_REF (node);
       fbi.bb_infos = vNULL;
       fbi.bb_infos.safe_grow_cleared (last_basic_block_for_fn (cfun));
-      fbi.param_count = count_formal_params(node->decl);
+      fbi.param_count = count_formal_params (node->decl);
+      fbi.aa_walk_budget = PARAM_VALUE (PARAM_IPA_MAX_AA_STEPS);
+
       nonconstant_names.safe_grow_cleared
	(SSANAMES (my_function)->length ());
     }
@@ -2078,7 +2098,7 @@ analyze_function_body (struct cgraph_node *node, bool early)
	   gsi_next (&bsi))
	{
	  if (first_phi
-	      && !phi_result_unknown_predicate (fbi.info, info, bb,
+	      && !phi_result_unknown_predicate (&fbi, info, bb,
						&phi_predicate,
						nonconstant_names))
	    break;
@@ -2168,7 +2188,7 @@ analyze_function_body (struct cgraph_node *node, bool early)
	      es->param.safe_grow_cleared (count);
	      for (i = 0; i < count; i++)
		{
-		  int prob = param_change_prob (stmt, i);
+		  int prob = param_change_prob (&fbi, stmt, i);
		  gcc_assert (prob >= 0 && prob <= REG_BR_PROB_BASE);
		  es->param[i].change_prob = prob;
		}
@@ -2193,7 +2213,7 @@ analyze_function_body (struct cgraph_node *node, bool early)
	{
	  sreal final_time = (sreal)this_time * freq;
 
-	  prob = eliminated_by_inlining_prob (stmt);
+	  prob = eliminated_by_inlining_prob (&fbi, stmt);
	  if (prob == 1 && dump_file && (dump_flags & TDF_DETAILS))
	    fprintf (dump_file,
		     "\t\t50%% will be eliminated by inlining\n");
@@ -2270,7 +2290,7 @@ analyze_function_body (struct cgraph_node *node, bool early)
	  && !is_gimple_min_invariant (niter_desc.niter))
	{
	  predicate will_be_nonconstant
-	    = will_be_nonconstant_expr_predicate (fbi.info, info,
+	    = will_be_nonconstant_expr_predicate (&fbi, info,
						  niter_desc.niter,
						  nonconstant_names);
	  if (will_be_nonconstant != true)
@@ -2315,8 +2335,7 @@ analyze_function_body (struct cgraph_node *node, bool early)
	    continue;
 
	  predicate will_be_nonconstant
-	    = will_be_nonconstant_expr_predicate (fbi.info, info,
-						  iv.step,
+	    = will_be_nonconstant_expr_predicate (&fbi, info, iv.step,
						  nonconstant_names);
	  if (will_be_nonconstant != true)
	    will_be_nonconstant = bb_predicate & will_be_nonconstant;
diff --git a/gcc/ipa-polymorphic-call.c b/gcc/ipa-polymorphic-call.c
index 13aca94dd00..9ecc9bff9cc 100644
--- a/gcc/ipa-polymorphic-call.c
+++ b/gcc/ipa-polymorphic-call.c
@@ -1529,13 +1529,18 @@ check_stmt_for_type_change (ao_ref *ao ATTRIBUTE_UNUSED, tree vdef, void *data)
 
    We do not include this analysis in the context analysis itself, because
    it needs memory SSA to be fully built and the walk may be expensive.
-   So it is not suitable for use withing fold_stmt and similar uses.  */
+   So it is not suitable for use within fold_stmt and similar uses.
+
+   AA_WALK_BUDGET_P, if not NULL, is how many statements we should allow
+   walk_aliased_vdefs to examine.  The value should be decremented by the
+   number of statements we examined or set to zero if exhausted.  */
 
 bool
 ipa_polymorphic_call_context::get_dynamic_type (tree instance,
						tree otr_object,
						tree otr_type,
-						gimple *call)
+						gimple *call,
+						unsigned *aa_walk_budget_p)
 {
   struct type_change_info tci;
   ao_ref ao;
@@ -1695,8 +1700,13 @@ ipa_polymorphic_call_context::get_dynamic_type (tree instance,
   tci.speculative = 0;
   tci.seen_unanalyzed_store = false;
 
-  walk_aliased_vdefs (&ao, gimple_vuse (stmt), check_stmt_for_type_change,
-		      &tci, NULL, &function_entry_reached);
+  unsigned aa_walk_budget = 0;
+  if (aa_walk_budget_p)
+    aa_walk_budget = *aa_walk_budget_p + 1;
+
+  int walked
+   = walk_aliased_vdefs (&ao, gimple_vuse (stmt), check_stmt_for_type_change,
			  &tci, NULL, &function_entry_reached, aa_walk_budget);
 
   /* If we did not find any type changing statements, we may still drop
      maybe_in_construction flag if the context already have outer type.
@@ -1744,6 +1754,16 @@ ipa_polymorphic_call_context::get_dynamic_type (tree instance,
      only if there was dyanmic type store that may affect given variable
      (seen_unanalyzed_store)  */
 
+  if (walked < 0)
+    {
+      if (dump_file)
+	fprintf (dump_file, "  AA walk budget exhausted.\n");
+      *aa_walk_budget_p = 0;
+      return false;
+    }
+  else if (aa_walk_budget_p)
+    *aa_walk_budget_p -= walked;
+
   if (!tci.type_maybe_changed
       || (outer_type
	  && !dynamic
diff --git a/gcc/ipa-prop.c b/gcc/ipa-prop.c
index 74052350ac1..487e3498619 100644
--- a/gcc/ipa-prop.c
+++ b/gcc/ipa-prop.c
@@ -746,13 +746,13 @@ param_type_may_change_p (tree function, tree arg, gimple *call)
    that does the heavy work which is usually unnecesary.  */
 
 static bool
-detect_type_change_from_memory_writes (tree arg, tree base, tree comp_type,
-				       gcall *call, struct ipa_jump_func *jfunc,
+detect_type_change_from_memory_writes (ipa_func_body_info *fbi, tree arg,
+				       tree base, tree comp_type, gcall *call,
+				       struct ipa_jump_func *jfunc,
				       HOST_WIDE_INT offset)
 {
   struct prop_type_change_info tci;
   ao_ref ao;
-  bool entry_reached = false;
 
   gcc_checking_assert (DECL_P (arg)
		       || TREE_CODE (arg) == MEM_REF
@@ -780,9 +780,11 @@ detect_type_change_from_memory_writes (tree arg, tree base, tree comp_type,
   tci.object = get_base_address (arg);
   tci.type_maybe_changed = false;
 
-  walk_aliased_vdefs (&ao, gimple_vuse (call), check_stmt_for_type_change,
-		      &tci, NULL, &entry_reached);
-  if (!tci.type_maybe_changed)
+  int walked
+    = walk_aliased_vdefs (&ao, gimple_vuse (call), check_stmt_for_type_change,
			   &tci, NULL, NULL, fbi->aa_walk_budget + 1);
+
+  if (walked >= 0 && !tci.type_maybe_changed)
     return false;
 
   ipa_set_jf_unknown (jfunc);
@@ -796,8 +798,9 @@ detect_type_change_from_memory_writes (tree arg, tree base, tree comp_type,
    returned by get_ref_base_and_extent, as is the offset.  */
 
 static bool
-detect_type_change (tree arg, tree base, tree comp_type, gcall *call,
-		    struct ipa_jump_func *jfunc, HOST_WIDE_INT offset)
+detect_type_change (ipa_func_body_info *fbi, tree arg, tree base,
+		    tree comp_type, gcall *call, struct ipa_jump_func *jfunc,
+		    HOST_WIDE_INT offset)
 {
   if (!flag_devirtualize)
     return false;
@@ -807,7 +810,7 @@ detect_type_change (tree arg, tree base, tree comp_type, gcall *call,
				       TREE_OPERAND (base, 0), call))
     return false;
 
-  return detect_type_change_from_memory_writes (arg, base, comp_type,
+  return detect_type_change_from_memory_writes (fbi, arg, base, comp_type,
						 call, jfunc, offset);
 }
 
@@ -816,7 +819,7 @@ detect_type_change (tree arg, tree base, tree comp_type, gcall *call,
    be zero).  */
 
 static bool
-detect_type_change_ssa (tree arg, tree comp_type,
+detect_type_change_ssa (ipa_func_body_info *fbi, tree arg, tree comp_type,
			gcall *call, struct ipa_jump_func *jfunc)
 {
   gcc_checking_assert (TREE_CODE (arg) == SSA_NAME);
@@ -830,7 +833,7 @@ detect_type_change_ssa (tree arg, tree comp_type,
   arg = build2 (MEM_REF, ptr_type_node, arg,
		build_int_cst (ptr_type_node, 0));
 
-  return detect_type_change_from_memory_writes (arg, arg, comp_type,
+  return detect_type_change_from_memory_writes (fbi, arg, arg, comp_type,
						 call, jfunc, 0);
 }
 
@@ -846,16 +849,6 @@ mark_modified (ao_ref *ao ATTRIBUTE_UNUSED, tree vdef ATTRIBUTE_UNUSED,
   return true;
 }
 
-/* Return true if we have already walked so many statements in AA that we
-   should really just start giving up.  */
-
-static bool
-aa_overwalked (struct ipa_func_body_info *fbi)
-{
-  gcc_checking_assert (fbi);
-  return fbi->aa_walked > (unsigned) PARAM_VALUE (PARAM_IPA_MAX_AA_STEPS);
-}
-
 /* Find the nearest valid aa status for parameter specified by INDEX that
    dominates BB.  */
 
@@ -922,28 +915,24 @@ parm_preserved_before_stmt_p (struct ipa_func_body_info *fbi, int index,
   if (TREE_READONLY (base))
     return true;
 
-  /* FIXME: FBI can be NULL if we are being called from outside
-     ipa_node_analysis or ipcp_transform_function, which currently happens
-     during inlining analysis.  It would be great to extend fbi's lifetime and
-     always have it.  Currently, we are just not afraid of too much walking in
-     that case.  */
-  if (fbi)
-    {
-      if (aa_overwalked (fbi))
-	return false;
-      paa = parm_bb_aa_status_for_bb (fbi, gimple_bb (stmt), index);
-      if (paa->parm_modified)
-	return false;
-    }
-  else
-    paa = NULL;
+  gcc_checking_assert (fbi);
+  paa = parm_bb_aa_status_for_bb (fbi, gimple_bb (stmt), index);
+  if (paa->parm_modified)
+    return false;
 
   gcc_checking_assert (gimple_vuse (stmt) != NULL_TREE);
   ao_ref_init (&refd, parm_load);
   int walked = walk_aliased_vdefs (&refd, gimple_vuse (stmt), mark_modified,
-				   &modified, NULL);
-  if (fbi)
-    fbi->aa_walked += walked;
+				   &modified, NULL, NULL,
+				   fbi->aa_walk_budget + 1);
+  if (walked < 0)
+    {
+      modified = true;
+      if (fbi)
+	fbi->aa_walk_budget = 0;
+    }
+  else if (fbi)
+    fbi->aa_walk_budget -= walked;
   if (paa && modified)
     paa->parm_modified = true;
   return !modified;
@@ -988,29 +977,24 @@ parm_ref_data_preserved_p (struct ipa_func_body_info *fbi,
   bool modified = false;
   ao_ref refd;
 
-  /* FIXME: FBI can be NULL if we are being called from outside
-     ipa_node_analysis or ipcp_transform_function, which currently happens
-     during inlining analysis.  It would be great to extend fbi's lifetime and
-     always have it.  Currently, we are just not afraid of too much walking in
-     that case.  */
-  if (fbi)
-    {
-      if (aa_overwalked (fbi))
-	return false;
-      paa = parm_bb_aa_status_for_bb (fbi, gimple_bb (stmt), index);
-      if (paa->ref_modified)
-	return false;
-    }
-  else
-    paa = NULL;
+  gcc_checking_assert (fbi);
+  paa = parm_bb_aa_status_for_bb (fbi, gimple_bb (stmt), index);
+  if (paa->ref_modified)
+    return false;
 
   gcc_checking_assert (gimple_vuse (stmt));
   ao_ref_init (&refd, ref);
   int walked = walk_aliased_vdefs (&refd, gimple_vuse (stmt), mark_modified,
-				   &modified, NULL);
-  if (fbi)
-    fbi->aa_walked += walked;
-  if (paa && modified)
+				   &modified, NULL, NULL,
+				   fbi->aa_walk_budget + 1);
+  if (walked < 0)
+    {
+      modified = true;
+      fbi->aa_walk_budget = 0;
+    }
+  else
+    fbi->aa_walk_budget -= walked;
+  if (modified)
     paa->ref_modified = true;
   return !modified;
 }
@@ -1030,8 +1014,7 @@ parm_ref_data_pass_through_p (struct ipa_func_body_info *fbi, int index,
      function because it is not goin to use it.  But do not cache the result
      either.  Also, no such calculations for non-pointers.  */
   if (!gimple_vuse (call)
-      || !POINTER_TYPE_P (TREE_TYPE (parm))
-      || aa_overwalked (fbi))
+      || !POINTER_TYPE_P (TREE_TYPE (parm)))
     return false;
 
   struct ipa_param_aa_status *paa = parm_bb_aa_status_for_bb (fbi,
@@ -1042,8 +1025,15 @@ parm_ref_data_pass_through_p (struct ipa_func_body_info *fbi, int index,
   ao_ref_init_from_ptr_and_size (&refd, parm, NULL_TREE);
   int walked = walk_aliased_vdefs (&refd, gimple_vuse (call), mark_modified,
-				   &modified, NULL);
-  fbi->aa_walked += walked;
+				   &modified, NULL, NULL,
+				   fbi->aa_walk_budget + 1);
+  if (walked < 0)
+    {
+      fbi->aa_walk_budget = 0;
+      modified = true;
+    }
+  else
+    fbi->aa_walk_budget -= walked;
   if (modified)
     paa->pt_modified = true;
   return !modified;
@@ -1850,7 +1840,8 @@ ipa_compute_jump_functions_for_edge (struct ipa_func_body_info *fbi,
	      struct ipa_polymorphic_call_context context (cs->caller->decl,
							   arg, cs->call_stmt,
							   &instance);
-	      context.get_dynamic_type (instance, arg, NULL, cs->call_stmt);
+	      context.get_dynamic_type (instance, arg, NULL, cs->call_stmt,
					&fbi->aa_walk_budget);
	      *ipa_get_ith_polymorhic_call_context (args, n) = context;
	      if (!context.useless_p ())
		useful_context = true;
@@ -2323,7 +2314,7 @@ ipa_analyze_virtual_call_uses (struct ipa_func_body_info *fbi,
       anc_offset = 0;
       index = ipa_get_param_decl_index (info, SSA_NAME_VAR (obj));
       gcc_assert (index >= 0);
-      if (detect_type_change_ssa (obj, obj_type_ref_class (target),
+      if (detect_type_change_ssa (fbi, obj, obj_type_ref_class (target),
				  call, &jfunc))
	return;
     }
@@ -2339,7 +2330,7 @@ ipa_analyze_virtual_call_uses (struct ipa_func_body_info *fbi,
       index = ipa_get_param_decl_index (info,
					SSA_NAME_VAR (TREE_OPERAND (expr, 0)));
       gcc_assert (index >= 0);
-      if (detect_type_change (obj, expr, obj_type_ref_class (target),
+      if (detect_type_change (fbi, obj, expr, obj_type_ref_class (target),
			      call, &jfunc, anc_offset))
	return;
     }
@@ -2387,7 +2378,7 @@ ipa_analyze_call_uses (struct ipa_func_body_info *fbi, gcall *call)
	  cs->indirect_info->vptr_changed
	    = !context.get_dynamic_type (instance,
					 OBJ_TYPE_REF_OBJECT (target),
-					 obj_type_ref_class (target), call);
+					 obj_type_ref_class (target), call,
+					 &fbi->aa_walk_budget);
	  cs->indirect_info->context = context;
	}
 
@@ -2587,7 +2579,7 @@ ipa_analyze_node (struct cgraph_node *node)
   fbi.bb_infos = vNULL;
   fbi.bb_infos.safe_grow_cleared (last_basic_block_for_fn (cfun));
   fbi.param_count = ipa_get_param_count (info);
-  fbi.aa_walked = 0;
+  fbi.aa_walk_budget = PARAM_VALUE (PARAM_IPA_MAX_AA_STEPS);
 
   for (struct cgraph_edge *cs = node->callees; cs; cs = cs->next_callee)
     {
@@ -5106,7 +5098,7 @@ ipcp_transform_function (struct cgraph_node *node)
   fbi.bb_infos = vNULL;
   fbi.bb_infos.safe_grow_cleared (last_basic_block_for_fn (cfun));
   fbi.param_count = param_count;
-  fbi.aa_walked = 0;
+  fbi.aa_walk_budget = PARAM_VALUE (PARAM_IPA_MAX_AA_STEPS);
 
   vec_safe_grow_cleared (descriptors, param_count);
   ipa_populate_param_decls (node, *descriptors);
diff --git a/gcc/ipa-prop.h b/gcc/ipa-prop.h
index 5e826c5d3ba..c03109837a4 100644
--- a/gcc/ipa-prop.h
+++ b/gcc/ipa-prop.h
@@ -428,8 +428,9 @@ struct ipa_func_body_info
   /* Number of parameters.  */
   int param_count;
 
-  /* Number of statements already walked by when analyzing this function.  */
-  unsigned int aa_walked;
+  /* Number of statements we are still allowed to walk when analyzing this
+     function.  */
+  unsigned int aa_walk_budget;
 };
 
 /* ipa_node_params access functions.  Please use these to access fields that
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index 700aa1fadc5..5b65b463772 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -5226,7 +5226,7 @@ eliminate_dom_walker::eliminate_stmt (basic_block b, gimple_stmt_iterator *gsi)
	      ipa_polymorphic_call_context context (current_function_decl,
						    fn, stmt, &instance);
	      context.get_dynamic_type (instance, OBJ_TYPE_REF_OBJECT (fn),
-					otr_type, stmt);
+					otr_type, stmt, NULL);
	      bool final;
	      vec <cgraph_node *> targets
		= possible_polymorphic_call_targets (obj_type_ref_class (fn),
-- 
2.19.1
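
For readers who skim the cover letter rather than the patch, the following
is a minimal, self-contained toy model (plain C, not GCC code; toy_walk,
walk_with_budget and the numbers are made up for illustration) of the
"budget + 1" convention described above: the walker treats a limit of zero
as "unlimited", so the callers pass budget + 1, let a zero budget still
examine one statement, and decrement the shared budget by whatever the walk
actually consumed.

  #include <stdio.h>

  /* Toy stand-in for walk_aliased_vdefs: "examines" up to LIMIT statements
     and returns how many it looked at, or -1 once the limit is hit.  A
     limit of 0 means no limit at all, as with the real walker.  */
  static int
  toy_walk (int stmts_to_examine, unsigned limit)
  {
    if (limit == 0)
      return stmts_to_examine;
    if ((unsigned) stmts_to_examine >= limit)
      return -1;
    return stmts_to_examine;
  }

  /* The convention from the patch: pass budget + 1 as the limit so that a
     zero budget does not accidentally disable limiting, and charge the
     shared budget with what the walk consumed.  */
  static int
  walk_with_budget (unsigned *budget, int stmts_to_examine)
  {
    int walked = toy_walk (stmts_to_examine, *budget + 1);
    if (walked < 0)
      {
        *budget = 0;	/* Exhausted: give up on further walking.  */
        return -1;
      }
    *budget -= walked;
    return walked;
  }

  int
  main (void)
  {
    unsigned budget = 5;
    int w1 = walk_with_budget (&budget, 3);
    printf ("first walk: %d, budget now %u\n", w1, budget);
    int w2 = walk_with_budget (&budget, 4);
    printf ("second walk: %d, budget now %u\n", w2, budget);
    return 0;
  }

With an initial budget of 5, the first walk succeeds and leaves a budget of
2; the second walk would need 4 statements, hits the limit of 3, returns -1
and zeroes the budget, which is the behaviour the real callers map to
"assume the worst and stop walking".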