Date: Thu, 09 Jun 2016 20:23:00 -0000
From: Jan Hubicka
To: Prathamesh Kulkarni
Cc: Jan Hubicka, Richard Biener, David Edelsohn, GCC Patches,
	"William J. Schmidt", Segher Boessenkool
Subject: Re: move increase_alignment from simple to regular ipa pass
Message-ID: <20160609202322.GB98613@kam.mff.cuni.cz>
References: <20160602150106.GA48112@kam.mff.cuni.cz>
	<20160603080543.GA78035@kam.mff.cuni.cz>
	<20160608150855.GB2550@atrey.karlin.mff.cuni.cz>

> On 8 June 2016 at 20:38, Jan Hubicka wrote:
> >> I think it would be nice to work towards transitioning
> >> flag_section_anchors to a flag on varpool nodes, thereby removing
> >> the Optimization flag from common.opt:fsection-anchors
> >>
> >> That would simplify the walk over varpool candidates.
> >
> > Makes sense to me, too.  There are more candidates for stuff that should be
> > variable specific in common.opt (such as variable alignment, -fdata-sections,
> > -fmerge-constants) and targets.  We may try to do it in an easy-to-extend way
> > so that incrementally we can get rid of those global flags, too.
> In this version I removed Optimization from the fsection-anchors entry in
> common.opt, and gated the increase_alignment pass on flag_section_anchors != 0.
> Cross tested on arm*-*-*, aarch64*-*-*.
> Does it look OK?

If you go this way, you will need to do something sane for LTO.  Here one can
compile some object files with -fsection-anchors and others without, and link
with a random setting (because in traditional compilation the link-time flags
do not matter).  For global flags we have magic in merge_and_complain that
determines the flags to pass to the LTO compiler.  It is not very robust,
though.

> >
> > One thing that needs to be done for LTO is sane merging; I guess in this case
> > it is clear that the variable should be anchored when its prevailing
> > definition is.
> Um, could we determine during WPA if a symbol is a section anchor, for merging?
> Seems to me SYMBOL_REF_ANCHOR_P is defined only on DECL_RTL and not at
> the tree level.
> Do we have DECL_RTL info available during WPA?

We don't have anchors computed at that point, but we can decide whether we
would want to anchor the variable if we can.  I would say all you need is a
section_anchor flag in the varpool node itself which controls RTL production.
At varpool_finalize_decl you will set it according to flag_section_anchors and
stream it to LTO objects.  At WPA, when doing the linking, the section_anchor
flag of the prevailing decl wins.
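Roughly what I have in mind, as a sketch only (the type and function names
below are made up for illustration; they are not the existing GCC interfaces):

/* Per-variable replacement for the global -fsection-anchors setting;
   RTL production would test this bit instead of flag_section_anchors.  */
struct varpool_node_sketch
{
  unsigned section_anchor : 1;
};

/* When a variable definition is finalized, record the -fsection-anchors
   setting that was in effect for its translation unit, so that it can be
   streamed into the LTO object file and consulted when RTL is produced.  */
void
finalize_variable (varpool_node_sketch *node, bool section_anchors_p)
{
  node->section_anchor = section_anchors_p;
}

Anchor output at RTL production would then test the per-node bit rather than
the global flag.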
Honza

> Thanks,
> Prathamesh
>
> diff --git a/gcc/common.opt b/gcc/common.opt
> index f0d7196..f93f26c 100644
> --- a/gcc/common.opt
> +++ b/gcc/common.opt
> @@ -2133,7 +2133,7 @@ Common Report Var(flag_sched_dep_count_heuristic) Init(1) Optimization
>  Enable the dependent count heuristic in the scheduler.
>  
>  fsection-anchors
> -Common Report Var(flag_section_anchors) Optimization
> +Common Report Var(flag_section_anchors)
>  Access data in the same section from shared anchor points.
>  
>  fsee
> diff --git a/gcc/passes.def b/gcc/passes.def
> index 3647e90..3a8063c 100644
> --- a/gcc/passes.def
> +++ b/gcc/passes.def
> @@ -138,12 +138,12 @@ along with GCC; see the file COPYING3.  If not see
>    PUSH_INSERT_PASSES_WITHIN (pass_ipa_tree_profile)
>        NEXT_PASS (pass_feedback_split_functions);
>    POP_INSERT_PASSES ()
> -  NEXT_PASS (pass_ipa_increase_alignment);
>    NEXT_PASS (pass_ipa_tm);
>    NEXT_PASS (pass_ipa_lower_emutls);
>    TERMINATE_PASS_LIST (all_small_ipa_passes)
>  
>    INSERT_PASSES_AFTER (all_regular_ipa_passes)
> +  NEXT_PASS (pass_ipa_increase_alignment);
>    NEXT_PASS (pass_ipa_whole_program_visibility);
>    NEXT_PASS (pass_ipa_profile);
>    NEXT_PASS (pass_ipa_icf);
> diff --git a/gcc/testsuite/gcc.dg/vect/aligned-section-anchors-vect-73.c b/gcc/testsuite/gcc.dg/vect/aligned-section-anchors-vect-73.c
> new file mode 100644
> index 0000000..74eaed8
> --- /dev/null
> +++ b/gcc/testsuite/gcc.dg/vect/aligned-section-anchors-vect-73.c
> @@ -0,0 +1,25 @@
> +/* { dg-do compile } */
> +/* { dg-require-effective-target section_anchors } */
> +/* { dg-require-effective-target vect_int } */
> +
> +#define N 32
> +
> +/* Clone of section-anchors-vect-70.c with foo() having -fno-tree-loop-vectorize. */
> +
> +static struct A {
> +  int p1, p2;
> +  int e[N];
> +} a, b, c;
> +
> +__attribute__((optimize("-fno-tree-loop-vectorize")))
> +int foo(void)
> +{
> +  for (int i = 0; i < N; i++)
> +    a.e[i] = b.e[i] + c.e[i];
> +
> +  return a.e[0];
> +}
> +
> +/* { dg-final { scan-ipa-dump-times "Increasing alignment of decl" 0 "increase_alignment" { target aarch64*-*-* } } } */
> +/* { dg-final { scan-ipa-dump-times "Increasing alignment of decl" 0 "increase_alignment" { target powerpc64*-*-* } } } */
> +/* { dg-final { scan-ipa-dump-times "Increasing alignment of decl" 0 "increase_alignment" { target arm*-*-* } } } */
> diff --git a/gcc/tree-pass.h b/gcc/tree-pass.h
> index 36299a6..d36aa1d 100644
> --- a/gcc/tree-pass.h
> +++ b/gcc/tree-pass.h
> @@ -483,7 +483,7 @@ extern simple_ipa_opt_pass *make_pass_local_optimization_passes (gcc::context *c
>  
>  extern ipa_opt_pass_d *make_pass_ipa_whole_program_visibility (gcc::context
>  							       *ctxt);
> -extern simple_ipa_opt_pass *make_pass_ipa_increase_alignment (gcc::context
> +extern ipa_opt_pass_d *make_pass_ipa_increase_alignment (gcc::context
>  							      *ctxt);
>  extern ipa_opt_pass_d *make_pass_ipa_inline (gcc::context *ctxt);
>  extern simple_ipa_opt_pass *make_pass_ipa_free_lang_data (gcc::context *ctxt);
> diff --git a/gcc/tree-vectorizer.c b/gcc/tree-vectorizer.c
> index 2669813..d34e560 100644
> --- a/gcc/tree-vectorizer.c
> +++ b/gcc/tree-vectorizer.c
> @@ -899,6 +899,34 @@ get_vec_alignment_for_type (tree type)
>    return (alignment > TYPE_ALIGN (type)) ? alignment : 0;
>  }
>  
> +/* Return true if alignment should be increased for this vnode.
> +   This is done if every function that references vnode, or that vnode
> +   refers to, has flag_tree_loop_vectorize and flag_section_anchors set.  */
> +
> +static bool
> +increase_alignment_p (varpool_node *vnode)
> +{
> +  ipa_ref *ref;
> +
> +  for (int i = 0; vnode->iterate_reference (i, ref); i++)
> +    if (cgraph_node *cnode = dyn_cast <cgraph_node *> (ref->referred))
> +      {
> +	struct cl_optimization *opts = opts_for_fn (cnode->decl);
> +	if (!opts->x_flag_tree_loop_vectorize)
> +	  return false;
> +      }
> +
> +  for (int i = 0; vnode->iterate_referring (i, ref); i++)
> +    if (cgraph_node *cnode = dyn_cast <cgraph_node *> (ref->referring))
> +      {
> +	struct cl_optimization *opts = opts_for_fn (cnode->decl);
> +	if (!opts->x_flag_tree_loop_vectorize)
> +	  return false;
> +      }
> +
> +  return true;
> +}
> +
>  /* Entry point to increase_alignment pass.  */
>  static unsigned int
>  increase_alignment (void)
> @@ -916,7 +944,8 @@ increase_alignment (void)
>  
>        if ((decl_in_symtab_p (decl)
>  	   && !symtab_node::get (decl)->can_increase_alignment_p ())
> -	  || DECL_USER_ALIGN (decl) || DECL_ARTIFICIAL (decl))
> +	  || DECL_USER_ALIGN (decl) || DECL_ARTIFICIAL (decl)
> +	  || !increase_alignment_p (vnode))
>  	continue;
>  
>        alignment = get_vec_alignment_for_type (TREE_TYPE (decl));
> @@ -938,7 +967,7 @@ namespace {
>  
>  const pass_data pass_data_ipa_increase_alignment =
>  {
> -  SIMPLE_IPA_PASS, /* type */
> +  IPA_PASS, /* type */
>    "increase_alignment", /* name */
>    OPTGROUP_LOOP | OPTGROUP_VEC, /* optinfo_flags */
>    TV_IPA_OPT, /* tv_id */
>    0, /* properties_required */
>    0, /* properties_provided */
>    0, /* properties_destroyed */
>    0, /* todo_flags_start */
>    0, /* todo_flags_finish */
>  };
>  
> @@ -949,17 +978,26 @@ const pass_data pass_data_ipa_increase_alignment =
> -class pass_ipa_increase_alignment : public simple_ipa_opt_pass
> +class pass_ipa_increase_alignment : public ipa_opt_pass_d
>  {
>  public:
>    pass_ipa_increase_alignment (gcc::context *ctxt)
> -    : simple_ipa_opt_pass (pass_data_ipa_increase_alignment, ctxt)
> +    : ipa_opt_pass_d (pass_data_ipa_increase_alignment, ctxt,
> +		      NULL, /* generate_summary */
> +		      NULL, /* write summary */
> +		      NULL, /* read summary */
> +		      NULL, /* write optimization summary */
> +		      NULL, /* read optimization summary */
> +		      NULL, /* stmt fixup */
> +		      0, /* function_transform_todo_flags_start */
> +		      NULL, /* transform function */
> +		      NULL) /* variable transform */
>    {}
>  
>    /* opt_pass methods: */
>    virtual bool gate (function *)
>    {
> -    return flag_section_anchors && flag_tree_loop_vectorize;
> +    return flag_section_anchors != 0;
>    }
>  
>    virtual unsigned int execute (function *) { return increase_alignment (); }
> @@ -968,7 +1006,7 @@ public:
>  
>  } // anon namespace
>  
> -simple_ipa_opt_pass *
> +ipa_opt_pass_d *
>  make_pass_ipa_increase_alignment (gcc::context *ctxt)
>  {
>    return new pass_ipa_increase_alignment (ctxt);
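To make the WPA side of this concrete, here is a sketch of the merging rule
(made-up types only; this is not part of the patch, and the real logic would
live in the LTO symbol merging path):

/* One copy of a given variable as seen in some LTO object file.  */
struct lto_var_copy
{
  bool prevailing;      /* True for the definition the linker kept.  */
  bool section_anchor;  /* -fsection-anchors setting streamed with it.  */
};

/* Resolve the per-variable section_anchor flag at WPA: the setting of the
   prevailing definition wins, settings of discarded duplicates are ignored,
   and we stay conservative when no copy prevails.  */
static bool
resolve_section_anchor (const lto_var_copy *copies, unsigned n_copies)
{
  for (unsigned i = 0; i < n_copies; i++)
    if (copies[i].prevailing)
      return copies[i].section_anchor;
  return false;
}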