From: Maxim Kuvyrkov
To: Richard Guenther
Cc: GCC Patches, Matt
Subject: Re: [PATCH] Add capability to run several iterations of early optimizations
Date: Thu, 27 Oct 2011 23:29:00 -0000
Message-Id: <2047F9D7-5DE8-42C3-8E6E-B20A2752AB46@codesourcery.com>
References: <01F22181-5EA1-46B1-95F6-0F24B92E5FC9@codesourcery.com>

Richard,

Just as Matt posted his findings about the effect of iterating early
optimizations, I've got the new patch ready.  This patch is essentially a
complete rewrite and addresses the comments you made.

On 18/10/2011, at 9:56 PM, Richard Guenther wrote:

>>> If we'd want to iterate early optimizations we'd want to do it by iterating
>>> an IPA pass so that we benefit from more precise size estimates
>>> when trying to inline a function the second time.
>>
>> Could you elaborate on this a bit?  Early optimizations are gimple passes,
>> so I'm missing your point here.
>
> pass_early_local_passes is an IPA pass, you want to iterate
> fn1, fn2, fn1, fn2, ..., not fn1, fn1 ..., fn2, fn2 ... precisely for better
> inlining.  Thus you need to split pass_early_local_passes into pieces
> so you can iterate one of the IPA pieces.

pass_early_local_passes is now split into _main, _iter and _late parts.  To
avoid changing the default case, the _late part is merged into _main when no
iterative optimizations are requested.

>>> Also statically
>>> scheduling the passes will mess up dump files and you have no
>>> chance of, say, noticing that nothing changed for function f and its
>>> callees in iteration N and thus you can skip processing them in
>>> iteration N + 1.
>>
>> Yes, these are the shortcomings.  The dump file name changes can be fixed,
>> e.g., by adding a suffix to the passes on iterations after the first one.
>> The analysis to avoid unnecessary iterations is a more complex problem.

To avoid changing the dump file names, the patch appends an "_iter" suffix to
the dumps of the iterative passes.

> Sure.  I analyzed early passes by manually duplicating them and testing
> that they do nothing for tramp3d, which they pretty much all did
> at some point.
>>> So, at least you should split the pass_early_local_passes IPA pass
>>> into three, you'd iterate over the 2nd (definitely not over pass_split_functions
>>> though), the third would be pass_profile and pass_split_functions only.
>>> And you'd iterate from the place the 2nd IPA pass is executed, not
>>> by scheduling them N times.
>>
>> OK, I will look into this.

Done.

>>> Then you'd have to analyze the compile-time impact of the IPA
>>> splitting on its own when not iterating.

I decided to avoid this and keep the pass pipeline effectively the same when
not running iterative optimizations.  This is achieved by scheduling
pass_early_optimizations_late at different places in the pipeline depending
on whether iterative optimizations are enabled or not.

The patch bootstraps and passes regtest on i686-pc-linux-gnu {-m32/-m64} with
3 iterations enabled by default.  The only failures are 5 scan-dump tests,
which fail because more functions are inlined than the tests expect.  With
iterative optimizations disabled there is no change.

I've kicked off SPEC2000/SPEC2006 benchmark runs to measure the performance
effect of the patch; the results will be posted in the same Google Docs
spreadsheet in several days.

OK for trunk?

--
Maxim Kuvyrkov
CodeSourcery / Mentor Graphics


[Attachment: fsf-gcc-iter-eipa-2.ChangeLog]

2011-10-28  Maxim Kuvyrkov

	Add scheduling of several iterations of early IPA passes.

	* Makefile.in (tree-optimize.o): Add dependency on PARAMS_H.
	* cgraph.c (cgraph_add_new_function): Update.
	* cgraphunit.c (cgraph_process_new_functions): Update.
	* doc/invoke.texi (eipa-iterations): Document new parameter.
	* params.def (PARAM_EIPA_ITERATIONS): Define.
	* passes.c (make_pass_instance): Add new argument.  Handle sub-passes.
	(next_pass_1): Add new argument.
	(init_optimization_passes): Split pass_early_local_passes into
	pass_early_local_passes_main, pass_early_local_passes_iter and
	pass_early_local_passes_late.  Split pass_all_early_optimizations
	into pass_early_optimizations_main and pass_early_optimizations_late.
	Schedule iterative optimizations if requested.
	* toplev.c (general_init, toplev_main): Move init_optimization_passes
	after processing of arguments.
	* tree-optimize.c (params.h): New include.
	(pass_early_local_passes): Split into pass_early_local_passes_main,
	pass_early_local_passes_iter and pass_early_local_passes_late.
	(execute_early_local_passes_iter): New function.
	(execute_early_local_passes_for_current_function): New wrapper for
	running early_local_passes.
	(pass_all_early_optimizations): Split into pass_early_optimizations_main
	and pass_early_optimizations_late.
	(tree_lowering_passes): Update.
	* tree-pass.h: Update.
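To illustrate the kind of opportunity the extra iterations are meant to catch,
here is a small hypothetical example in plain C (it is not a test case from
the patch or the testsuite, just a sketch of the mechanism).  Within one round
of the early local passes the early inliner runs before the scalar cleanups
such as CCP, so an indirect call that only becomes direct during those
cleanups cannot be early-inlined in the same round; a second round can then
inline the now-direct call before IPA analysis computes its size estimates.

/* Hypothetical illustration only; not part of the patch.  */

static int
add1 (int x)
{
  return x + 1;
}

static int
apply (int (*fn) (int), int x)
{
  /* Small enough that the early inliner copies this body into foo
     on the first round.  */
  return fn (x);
}

int
foo (int x)
{
  /* Round 1: "apply" is early-inlined here, and constant propagation can
     then turn the indirect call through "fn" into a direct call to "add1",
     but the early inliner has already run for "foo" in this round.
     Round 2: the early inliner sees the direct call and can inline "add1"
     as well, so IPA analysis starts from cleaner code.  */
  return apply (add1, x);
}

With the patch, the number of rounds is controlled by the new parameter,
e.g. --param eipa-iterations=3 (the default in this version);
--param eipa-iterations=1 keeps the existing single-round pipeline.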
[Attachment: fsf-gcc-iter-eipa-2.patch]

commit 659f197e185606ae5d4dea904e18f4e392a52f68
Author: Maxim Kuvyrkov
Date:   Sat Oct 1 01:09:50 2011 -0700

    Add iterative optimization passes

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 3608904..ac1839d 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -2657,7 +2657,7 @@ tree-optimize.o : tree-optimize.c $(TREE_FLOW_H) $(CONFIG_H) $(SYSTEM_H) \
    coretypes.h $(TREE_DUMP_H) toplev.h $(DIAGNOSTIC_CORE_H) $(FUNCTION_H) langhooks.h \
    $(FLAGS_H) $(CGRAPH_H) $(PLUGIN_H) \
    $(TREE_INLINE_H) tree-mudflap.h $(GGC_H) graph.h $(CGRAPH_H) \
-   $(TREE_PASS_H) $(CFGLOOP_H) $(EXCEPT_H) $(REGSET_H)
+   $(TREE_PASS_H) $(CFGLOOP_H) $(EXCEPT_H) $(REGSET_H) $(PARAMS_H)
 
 gimplify.o : gimplify.c $(CONFIG_H) $(SYSTEM_H) $(TREE_H) $(GIMPLE_H) \
    $(DIAGNOSTIC_H) $(GIMPLE_H) $(TREE_INLINE_H) langhooks.h \
diff --git a/gcc/cgraph.c b/gcc/cgraph.c
index f056d3d..4738b28 100644
--- a/gcc/cgraph.c
+++ b/gcc/cgraph.c
@@ -2416,7 +2416,7 @@ cgraph_add_new_function (tree fndecl, bool lowered)
       tree_lowering_passes (fndecl);
       bitmap_obstack_initialize (NULL);
       if (!gimple_in_ssa_p (DECL_STRUCT_FUNCTION (fndecl)))
-        execute_pass_list (pass_early_local_passes.pass.sub);
+        execute_early_local_passes_for_current_function ();
       bitmap_obstack_release (NULL);
       pop_cfun ();
       current_function_decl = NULL;
@@ -2441,7 +2441,7 @@ cgraph_add_new_function (tree fndecl, bool lowered)
       gimple_register_cfg_hooks ();
       bitmap_obstack_initialize (NULL);
       if (!gimple_in_ssa_p (DECL_STRUCT_FUNCTION (fndecl)))
-        execute_pass_list (pass_early_local_passes.pass.sub);
+        execute_early_local_passes_for_current_function ();
       bitmap_obstack_release (NULL);
       tree_rest_of_compilation (fndecl);
       pop_cfun ();
diff --git a/gcc/cgraphunit.c b/gcc/cgraphunit.c
index 25d7561..9a0e06d 100644
--- a/gcc/cgraphunit.c
+++ b/gcc/cgraphunit.c
@@ -255,7 +255,7 @@ cgraph_process_new_functions (void)
              /* When not optimizing, be sure we run early local passes anyway
                 to expand OMP.  */
              || !optimize)
-           execute_pass_list (pass_early_local_passes.pass.sub);
+           execute_early_local_passes_for_current_function ();
          else
            compute_inline_parameters (node, true);
          free_dominance_info (CDI_POST_DOMINATORS);
diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
index 50e875a..d5cc035 100644
--- a/gcc/doc/invoke.texi
+++ b/gcc/doc/invoke.texi
@@ -9088,6 +9088,12 @@ the parameter is reserved exclusively for debug insns created by
 @option{-fvar-tracking-assignments}, but debug insns may get
 (non-overlapping) uids above it if the reserved range is exhausted.
 
+@item eipa-iterations
+The pass scheduler will execute @option{eipa-iterations} iterations of
+early optimization passes before running interprocedural analysis.
+Running several iterations of optimization passes allows the compiler
+to provide thoroughly optimized code to the interprocedural analysis.
+
 @item ipa-sra-ptr-growth-factor
 IPA-SRA will replace a pointer to an aggregate with one or more new
 parameters only when their cumulative size is less or equal to
diff --git a/gcc/params.def b/gcc/params.def
index b160530..2baad91 100644
--- a/gcc/params.def
+++ b/gcc/params.def
@@ -861,6 +861,11 @@ DEFPARAM (PARAM_MIN_NONDEBUG_INSN_UID,
          "The minimum UID to be used for a nondebug insn",
          0, 1, 0)
 
+DEFPARAM (PARAM_EIPA_ITERATIONS,
+         "eipa-iterations",
+         "Number of iterations of early optimizations before IPA analysis",
+         3, 1, 0)
+
 DEFPARAM (PARAM_IPA_SRA_PTR_GROWTH_FACTOR,
          "ipa-sra-ptr-growth-factor",
          "Maximum allowed growth of size of new parameters ipa-sra replaces "
diff --git a/gcc/passes.c b/gcc/passes.c
index 887007f..c3caf3f 100644
--- a/gcc/passes.c
+++ b/gcc/passes.c
@@ -898,10 +898,13 @@ is_pass_explicitly_enabled_or_disabled (struct opt_pass *pass,
 }
 
 /* Look at the static_pass_number and duplicate the pass
-   if it is already added to a list. */
+   if it is already added to a list.
+   If SUFFIX is non-NULL, append it to the name of the new pass instead
+   of an increasing number.  */
 
 static struct opt_pass *
-make_pass_instance (struct opt_pass *pass, bool track_duplicates)
+make_pass_instance (struct opt_pass *pass, bool track_duplicates,
+                    const char *suffix)
 {
   /* A nonzero static_pass_number indicates that the pass is
      already in the list.  */
@@ -924,15 +927,40 @@ make_pass_instance (struct opt_pass *pass, bool track_duplicates)
   else
     gcc_unreachable ();
 
+  if (pass->sub)
+    /* Duplicate sub-passes.  */
+    {
+      struct opt_pass *sub = pass->sub;
+      struct opt_pass **new_sub_ptr = &new_pass->sub;
+
+      do
+        {
+          *new_sub_ptr = make_pass_instance (sub, track_duplicates, suffix);
+          new_sub_ptr = &(*new_sub_ptr)->next;
+          sub = sub->next;
+        } while (sub);
+    }
+
   new_pass->next = NULL;
 
   new_pass->todo_flags_start &= ~TODO_mark_first_instance;
 
+  if (suffix)
+    /* Make pass name unique by appending SUFFIX to it.  */
+    {
+      /* Make sure we don't append suffix twice.  */
+      {
+        int name_len = strlen (pass->name), suffix_len = strlen (suffix);
+        gcc_assert (strcmp (suffix, &pass->name[name_len - suffix_len]));
+      }
+      new_pass->static_pass_number = -1;
+      new_pass->name = concat (pass->name, suffix, NULL);
+    }
   /* Indicate to register_dump_files that this pass has duplicates,
      and so it should rename the dump file.
      The first instance will be -1, and be number of duplicates = -static_pass_number - 1.
      Subsequent instances will be > 0 and just the duplicate number.  */
-  if ((pass->name && pass->name[0] != '*') || track_duplicates)
+  else if ((pass->name && pass->name[0] != '*') || track_duplicates)
     {
       pass->static_pass_number -= 1;
       new_pass->static_pass_number = -pass->static_pass_number;
@@ -953,12 +981,12 @@ make_pass_instance (struct opt_pass *pass, bool track_duplicates)
    in the list.  */
 
 static struct opt_pass **
-next_pass_1 (struct opt_pass **list, struct opt_pass *pass)
+next_pass_1 (struct opt_pass **list, struct opt_pass *pass, const char *suffix)
 {
   /* Every pass should have a name so that plugins can refer to them.  */
   gcc_assert (pass->name != NULL);
 
-  *list = make_pass_instance (pass, false);
+  *list = make_pass_instance (pass, false, suffix);
 
   return &(*list)->next;
 }
@@ -1008,7 +1036,7 @@ position_pass (struct register_pass_info *new_pass_info,
   struct opt_pass *new_pass;
   struct pass_list_node *new_pass_node;
 
-  new_pass = make_pass_instance (new_pass_info->pass, true);
+  new_pass = make_pass_instance (new_pass_info->pass, true, NULL);
 
   /* Insert the new pass instance based on the positioning op.  */
   switch (new_pass_info->pos_op)
@@ -1164,8 +1192,9 @@ void
 init_optimization_passes (void)
 {
   struct opt_pass **p;
+  const char *suffix = NULL;
 
-#define NEXT_PASS(PASS)  (p = next_pass_1 (p, &((PASS).pass)))
+#define NEXT_PASS(PASS)  (p = next_pass_1 (p, &((PASS).pass), suffix))
 
   /* All passes needed to lower the function into shape optimizers can
      operate on.  These passes are always run first on the function, but
@@ -1188,9 +1217,9 @@ init_optimization_passes (void)
   p = &all_small_ipa_passes;
   NEXT_PASS (pass_ipa_free_lang_data);
   NEXT_PASS (pass_ipa_function_and_variable_visibility);
-  NEXT_PASS (pass_early_local_passes);
+  NEXT_PASS (pass_early_local_passes_main);
     {
-      struct opt_pass **p = &pass_early_local_passes.pass.sub;
+      struct opt_pass **p = &pass_early_local_passes_main.pass.sub;
       NEXT_PASS (pass_fixup_cfg);
       NEXT_PASS (pass_init_datastructures);
       NEXT_PASS (pass_expand_omp);
@@ -1202,9 +1231,9 @@ init_optimization_passes (void)
       NEXT_PASS (pass_rebuild_cgraph_edges);
       NEXT_PASS (pass_inline_parameters);
       NEXT_PASS (pass_early_inline);
-      NEXT_PASS (pass_all_early_optimizations);
-        {
-          struct opt_pass **p = &pass_all_early_optimizations.pass.sub;
+      NEXT_PASS (pass_early_optimizations_main);
+        {
+          struct opt_pass **p = &pass_early_optimizations_main.pass.sub;
           NEXT_PASS (pass_remove_cgraph_callee_edges);
           NEXT_PASS (pass_rename_ssa_copies);
           NEXT_PASS (pass_ccp);
@@ -1222,18 +1251,61 @@ init_optimization_passes (void)
           NEXT_PASS (pass_early_ipa_sra);
           NEXT_PASS (pass_tail_recursion);
           NEXT_PASS (pass_convert_switch);
-          NEXT_PASS (pass_cleanup_eh);
-          NEXT_PASS (pass_profile);
-          NEXT_PASS (pass_local_pure_const);
+          NEXT_PASS (pass_cleanup_eh);
+          NEXT_PASS (pass_local_pure_const);
+        }
+      /* Define pass_early_optimizations_late, we run these optimizations
+         strictly once per function.
+         When not running iterative optimizations, schedule these
+         optimizations together with pass_early_local_pass_main.
+         Otherwise, run them after iterative optimizations.  */
+        {
+          struct opt_pass **p = &pass_early_optimizations_late.pass.sub;
+          NEXT_PASS (pass_profile);
           /* Split functions creates parts that are not run through
              early optimizations again.  It is thus good idea to do this
              late.  */
-          NEXT_PASS (pass_split_functions);
+          NEXT_PASS (pass_split_functions);
+          NEXT_PASS (pass_release_ssa_names);
         }
-      NEXT_PASS (pass_release_ssa_names);
+      if (PARAM_VALUE (PARAM_EIPA_ITERATIONS) == 1)
+        NEXT_PASS (pass_early_optimizations_late);
       NEXT_PASS (pass_rebuild_cgraph_edges);
       NEXT_PASS (pass_inline_parameters);
     }
+
+  if (PARAM_VALUE (PARAM_EIPA_ITERATIONS) > 1)
+    {
+      /* Prepare and run several iterations of early_optimizations.  */
+      NEXT_PASS (pass_early_local_passes_iter);
+        {
+          struct opt_pass **p = &pass_early_local_passes_iter.pass.sub;
+          const char *suffix = "_iter";
+          /* Fixup CFG as some of callees might have had their attributes
+             (nothrow, pure, etc.) changed by pass_local_pure_const.
+             Pass_fixup_cfg should always be followed by
+             rebuild_cgraph_edges to remove outdated call-graph edges.  */
+          NEXT_PASS (pass_fixup_cfg);
+          NEXT_PASS (pass_rebuild_cgraph_edges);
+          /* Run early inlining.  */
+          NEXT_PASS (pass_inline_parameters);
+          NEXT_PASS (pass_early_inline);
+          /* Run the bulk of early optimizations.  */
+          NEXT_PASS (pass_early_optimizations_main);
+          /* Clean up and prepare to be inlined to some other function.  */
+          NEXT_PASS (pass_rebuild_cgraph_edges);
+          NEXT_PASS (pass_inline_parameters);
+        }
+      NEXT_PASS (pass_early_local_passes_late);
+        {
+          struct opt_pass **p = &pass_early_local_passes_late.pass.sub;
+          NEXT_PASS (pass_fixup_cfg);
+          NEXT_PASS (pass_rebuild_cgraph_edges);
+          NEXT_PASS (pass_early_optimizations_late);
+          NEXT_PASS (pass_rebuild_cgraph_edges);
+          NEXT_PASS (pass_inline_parameters);
+        }
+    }
   NEXT_PASS (pass_ipa_tree_profile);
     {
       struct opt_pass **p = &pass_ipa_tree_profile.pass.sub;
diff --git a/gcc/toplev.c b/gcc/toplev.c
index 86eed5d..4186157 100644
--- a/gcc/toplev.c
+++ b/gcc/toplev.c
@@ -1228,7 +1228,6 @@ general_init (const char *argv0)
   /* This must be done after global_init_params but before argument
      processing.  */
   init_ggc_heuristics();
-  init_optimization_passes ();
   statistics_early_init ();
   finish_params ();
 }
@@ -1989,6 +1988,8 @@ toplev_main (int argc, char **argv)
                         save_decoded_options, save_decoded_options_count,
                         UNKNOWN_LOCATION, global_dc);
 
+  init_optimization_passes ();
+
   handle_common_deferred_options ();
 
   init_local_tick ();
diff --git a/gcc/tree-optimize.c b/gcc/tree-optimize.c
index d7978b9..8516070 100644
--- a/gcc/tree-optimize.c
+++ b/gcc/tree-optimize.c
@@ -47,6 +47,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "except.h"
 #include "plugin.h"
 #include "regset.h"     /* FIXME: For reg_obstack.  */
+#include "params.h"
 
 /* Gate: execute, or not, all of the non-trivial optimizations.  */
 
@@ -101,11 +102,11 @@ execute_all_early_local_passes (void)
   return 0;
 }
 
-struct simple_ipa_opt_pass pass_early_local_passes =
+struct simple_ipa_opt_pass pass_early_local_passes_main =
 {
  {
   SIMPLE_IPA_PASS,
-  "early_local_cleanups",               /* name */
+  "early_local_cleanups_main",          /* name */
   gate_all_early_local_passes,          /* gate */
   execute_all_early_local_passes,       /* execute */
   NULL,                                 /* sub */
@@ -120,6 +121,86 @@ struct simple_ipa_opt_pass pass_early_local_passes =
  }
 };
 
+static unsigned int
+execute_early_local_passes_iter (void)
+{
+  struct opt_pass *current = current_pass;
+  struct opt_pass *next = current_pass->next;
+  int i;
+
+  /* Don't recurse or wonder on to the next pass when running
+     execute_ipa_pass_list below.  */
+  current->execute = NULL;
+  current->next = NULL;
+
+  /* Run PARAM_EIPA_ITERATIONS-1 iterations of early optimizations.
+     We run these passes the first time as part of
+     pass_early_local_passes_main.  */
+  gcc_assert (PARAM_VALUE (PARAM_EIPA_ITERATIONS) > 1);
+  for (i = 1; i < PARAM_VALUE (PARAM_EIPA_ITERATIONS); ++i)
+    execute_ipa_pass_list (current);
+
+  /* Restore.  */
+  current->next = next;
+  current->execute = execute_early_local_passes_iter;
+  current_pass = current;
+
+  /* Tell execute_early_local_passes_for_current_function that it is OK to
+     run late optimizations now.  This also avoids running an extra iteration
+     of optimizations by execute_ipa_pass_list machinery.  */
+  current->sub = NULL;
+
+  return 0;
+}
+
+struct simple_ipa_opt_pass pass_early_local_passes_iter =
+{
+ {
+  SIMPLE_IPA_PASS,
+  "early_local_cleanups_iter",          /* name */
+  gate_all_early_local_passes,          /* gate */
+  execute_early_local_passes_iter,      /* execute */
+  NULL,                                 /* sub */
+  NULL,                                 /* next */
+  0,                                    /* static_pass_number */
+  TV_EARLY_LOCAL,                       /* tv_id */
+  0,                                    /* properties_required */
+  0,                                    /* properties_provided */
+  0,                                    /* properties_destroyed */
+  0,                                    /* todo_flags_start */
+  TODO_remove_functions                 /* todo_flags_finish */
+ }
+};
+
+struct simple_ipa_opt_pass pass_early_local_passes_late =
+{
+ {
+  SIMPLE_IPA_PASS,
+  "early_local_cleanups_late",          /* name */
+  gate_all_early_local_passes,          /* gate */
+  NULL,                                 /* execute */
+  NULL,                                 /* sub */
+  NULL,                                 /* next */
+  0,                                    /* static_pass_number */
+  TV_EARLY_LOCAL,                       /* tv_id */
+  0,                                    /* properties_required */
+  0,                                    /* properties_provided */
+  0,                                    /* properties_destroyed */
+  0,                                    /* todo_flags_start */
+  TODO_remove_functions                 /* todo_flags_finish */
+ }
+};
+
+void
+execute_early_local_passes_for_current_function (void)
+{
+  execute_pass_list (pass_early_local_passes_main.pass.sub);
+  /* Run early_local_passes_late if we are done with iterative optimizations
+     or not running them (iterative optimizations) at all.  */
+  if (pass_early_local_passes_iter.pass.sub == NULL)
+    execute_pass_list (pass_early_local_passes_late.pass.sub);
+}
+
 /* Gate: execute, or not, all of the non-trivial optimizations.  */
 
 static bool
@@ -130,11 +211,30 @@ gate_all_early_optimizations (void)
          && !seen_error ());
 }
 
-struct gimple_opt_pass pass_all_early_optimizations =
+struct gimple_opt_pass pass_early_optimizations_main =
+{
+ {
+  GIMPLE_PASS,
+  "early_optimizations_main",           /* name */
+  gate_all_early_optimizations,         /* gate */
+  NULL,                                 /* execute */
+  NULL,                                 /* sub */
+  NULL,                                 /* next */
+  0,                                    /* static_pass_number */
+  TV_NONE,                              /* tv_id */
+  0,                                    /* properties_required */
+  0,                                    /* properties_provided */
+  0,                                    /* properties_destroyed */
+  0,                                    /* todo_flags_start */
+  0                                     /* todo_flags_finish */
+ }
+};
+
+struct gimple_opt_pass pass_early_optimizations_late =
 {
  {
   GIMPLE_PASS,
-  "early_optimizations",                /* name */
+  "early_optimizations_late",           /* name */
   gate_all_early_optimizations,         /* gate */
   NULL,                                 /* execute */
   NULL,                                 /* sub */
@@ -384,7 +484,7 @@ tree_lowering_passes (tree fn)
   bitmap_obstack_initialize (NULL);
   execute_pass_list (all_lowering_passes);
   if (optimize && cgraph_global_info_ready)
-    execute_pass_list (pass_early_local_passes.pass.sub);
+    execute_early_local_passes_for_current_function ();
   free_dominance_info (CDI_POST_DOMINATORS);
   free_dominance_info (CDI_DOMINATORS);
   compact_blocks ();
diff --git a/gcc/tree-pass.h b/gcc/tree-pass.h
index df1e24c..c1f44b9 100644
--- a/gcc/tree-pass.h
+++ b/gcc/tree-pass.h
@@ -455,7 +455,9 @@ extern struct simple_ipa_opt_pass pass_ipa_lower_emutls;
 extern struct simple_ipa_opt_pass pass_ipa_function_and_variable_visibility;
 extern struct simple_ipa_opt_pass pass_ipa_tree_profile;
 
-extern struct simple_ipa_opt_pass pass_early_local_passes;
+extern struct simple_ipa_opt_pass pass_early_local_passes_main;
+extern struct simple_ipa_opt_pass pass_early_local_passes_iter;
+extern struct simple_ipa_opt_pass pass_early_local_passes_late;
 
 extern struct ipa_opt_pass_d pass_ipa_whole_program_visibility;
 extern struct ipa_opt_pass_d pass_ipa_lto_gimple_out;
@@ -575,7 +577,8 @@ extern struct rtl_opt_pass pass_rtl_seqabstr;
 extern struct gimple_opt_pass pass_release_ssa_names;
 extern struct gimple_opt_pass pass_early_inline;
 extern struct gimple_opt_pass pass_inline_parameters;
-extern struct gimple_opt_pass pass_all_early_optimizations;
+extern struct gimple_opt_pass pass_early_optimizations_main;
+extern struct gimple_opt_pass pass_early_optimizations_late;
 extern struct gimple_opt_pass pass_update_address_taken;
 extern struct gimple_opt_pass pass_convert_switch;
@@ -638,6 +641,8 @@ extern void register_pass (struct register_pass_info *);
    directly in jump threading, and avoid peeling them next time.  */
 extern bool first_pass_instance;
 
+extern void execute_early_local_passes_for_current_function (void);
+
 /* Declare for plugins.  */
 extern void do_per_function_toporder (void (*) (void *), void *);