From mboxrd@z Thu Jan  1 00:00:00 1970
In-Reply-To: <1331066954.17488.7.camel@gnopaine>
References: <1331066954.17488.7.camel@gnopaine>
Date: Wed, 28 Mar 2012 13:57:00 -0000
Subject: Re: [PATCH] Fix PR18589
From: Richard Guenther
To: "William J. Schmidt"
Cc: gcc-patches@gcc.gnu.org, bergner@vnet.ibm.com
Mailing-List: contact gcc-patches-help@gcc.gnu.org; run by ezmlm
X-SW-Source: 2012-03/txt/msg01784.txt.bz2

On Tue, Mar 6, 2012 at 9:49 PM, William J. Schmidt wrote:
> Hi,
>
> This is a re-post of the patch I posted for comments in January to
> address http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18589.  The patch
> modifies reassociation to expose repeated factors from __builtin_pow*
> calls, optimally reassociate repeated factors, and possibly reconstitute
> __builtin_powi calls from the results of reassociation.
>
> Bootstrapped and passes regression tests for powerpc64-linux-gnu.  I
> expect there may need to be some small changes, but I am targeting this
> for trunk approval.
>
> Thanks very much for the review,

Hmm.  How much work would it be to extend the reassoc 'IL' to allow a
repeat factor per op?  I realize what you do is all within what reassoc
already does, though ideally we would not require any GIMPLE IL changes
for building up / optimizing the reassoc IL, but only do so when we
commit changes.

Thanks,
Richard.

> Bill
>
>
> gcc:
>
> 2012-03-06  Bill Schmidt
>
>         * tree-pass.h: Replace pass_reassoc with pass_early_reassoc and
>         pass_late_reassoc.
>         * passes.c (init_optimization_passes): Change pass_reassoc calls to
>         pass_early_reassoc and pass_late_reassoc.
>         * tree-ssa-reassoc.c (reassociate_stats): Add two fields.
>         (early_reassoc): New static var.
>         (MAX_POW_EXPAND): New #define'd constant.
>         (linear_expand_pow_common): New function.
>         (linear_expand_powi): Likewise.
>         (linear_expand_pow): Likewise.
>         (break_up_subtract_bb): Attempt to expand __builtin_pow[i].
>         (repeat_factor_d): New struct and associated typedefs.
>         (repeat_factor_vec): New static vector variable.
>         (compare_repeat_factors): New function.
>         (get_reassoc_pow_ssa_name): Likewise.
>         (attempt_builtin_powi): Likewise.
>         (reassociate_bb): Attempt to reconstitute __builtin_powi calls, and
>         multiply their results by any leftover reassociated factors.
>         (fini_reassoc): Two new calls to statistics_counter_event.
>         (execute_early_reassoc): New function.
>         (execute_late_reassoc): Likewise.
>         (pass_early_reassoc): Replace pass_reassoc, renamed to reassoc1,
>         call execute_early_reassoc.
>         (pass_late_reassoc): New gimple_opt_pass named reassoc2 that calls
>         execute_late_reassoc.
>
> gcc/testsuite:
>
> 2012-03-06  Bill Schmidt
>
>         * gcc.dg/pr46309.c: Change -fdump-tree-reassoc-details to
>         -fdump-tree-reassoc[12]-details.
>         * gcc.dg/tree-ssa/pr18589-1.c: New test.
>         * gcc.dg/tree-ssa/pr18589-2.c: Likewise.
>         * gcc.dg/tree-ssa/pr18589-3.c: Likewise.
>         * gcc.dg/tree-ssa/pr18589-4.c: Likewise.
>         * gcc.dg/tree-ssa/pr18589-5.c: Likewise.
>         * gcc.dg/tree-ssa/pr18589-6.c: Likewise.
>         * gcc.dg/tree-ssa/pr18589-7.c: Likewise.
>         * gcc.dg/tree-ssa/pr18589-8.c: Likewise.
>
>
> Index: gcc/tree-pass.h
> ===================================================================
> --- gcc/tree-pass.h     (revision 184997)
> +++ gcc/tree-pass.h     (working copy)
> @@ -440,7 +440,8 @@ extern struct gimple_opt_pass pass_copy_prop;
>  extern struct gimple_opt_pass pass_vrp;
>  extern struct gimple_opt_pass pass_uncprop;
>  extern struct gimple_opt_pass pass_return_slot;
> -extern struct gimple_opt_pass pass_reassoc;
> +extern struct gimple_opt_pass pass_early_reassoc;
> +extern struct gimple_opt_pass pass_late_reassoc;
>  extern struct gimple_opt_pass pass_rebuild_cgraph_edges;
>  extern struct gimple_opt_pass pass_remove_cgraph_callee_edges;
>  extern struct gimple_opt_pass pass_build_cgraph_edges;
> Index: gcc/testsuite/gcc.dg/pr46309.c
> ===================================================================
> --- gcc/testsuite/gcc.dg/pr46309.c      (revision 184997)
> +++ gcc/testsuite/gcc.dg/pr46309.c      (working copy)
> @@ -1,6 +1,6 @@
>  /* PR tree-optimization/46309 */
>  /* { dg-do compile } */
> -/* { dg-options "-O2 -fdump-tree-reassoc-details" } */
> +/* { dg-options "-O2 -fdump-tree-reassoc1-details -fdump-tree-reassoc2-details" } */
>  /* The transformation depends on BRANCH_COST being greater than 1
>     (see the notes in the PR), so try to force that.  */
>  /* { dg-additional-options "-mtune=octeon2" { target mips*-*-* } } */
> Index: gcc/testsuite/gcc.dg/tree-ssa/pr18589-4.c
> ===================================================================
> --- gcc/testsuite/gcc.dg/tree-ssa/pr18589-4.c   (revision 0)
> +++ gcc/testsuite/gcc.dg/tree-ssa/pr18589-4.c   (revision 0)
> @@ -0,0 +1,10 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -ffast-math -fdump-tree-optimized" } */
> +
> +double baz (double x, double y, double z, double u)
> +{
> +  return x * x * y * y * y * z * z * z * z * u;
> +}
> +
> +/* { dg-final { scan-tree-dump-times " \\* " 7 "optimized" } } */
> +/* { dg-final { cleanup-tree-dump "optimized" } } */
> Index: gcc/testsuite/gcc.dg/tree-ssa/pr18589-5.c
> ===================================================================
> --- gcc/testsuite/gcc.dg/tree-ssa/pr18589-5.c   (revision 0)
> +++ gcc/testsuite/gcc.dg/tree-ssa/pr18589-5.c   (revision 0)
> @@ -0,0 +1,10 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -ffast-math -fdump-tree-optimized" } */
> +
> +double baz (double x, double y, double z, double u)
> +{
> +  return x * x * x * y * y * y * z * z * z * z * u * u * u * u;
> +}
> +
> +/* { dg-final { scan-tree-dump-times " \\* " 6 "optimized" } } */
> +/* { dg-final { cleanup-tree-dump "optimized" } } */
> Index: gcc/testsuite/gcc.dg/tree-ssa/pr18589-6.c
> ===================================================================
> --- gcc/testsuite/gcc.dg/tree-ssa/pr18589-6.c   (revision 0)
> +++ gcc/testsuite/gcc.dg/tree-ssa/pr18589-6.c   (revision 0)
> @@ -0,0 +1,11 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -ffast-math -fdump-tree-optimized" } */
> +
> +double baz (double x, double y)
> +{
> +  return __builtin_pow (x, -4.0) * __builtin_pow (y, -4.0);
> +}
> +
> +/* { dg-final { scan-tree-dump-times " \\* " 3 "optimized" } } */
> +/* { dg-final { scan-tree-dump-times " / " 1 "optimized" } } */
> +/* { dg-final { cleanup-tree-dump "optimized" } } */
> Index: gcc/testsuite/gcc.dg/tree-ssa/pr18589-7.c
> ===================================================================
> --- gcc/testsuite/gcc.dg/tree-ssa/pr18589-7.c   (revision 0)
> +++ gcc/testsuite/gcc.dg/tree-ssa/pr18589-7.c   (revision 0)
> @@ -0,0 +1,10 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -ffast-math -fdump-tree-optimized" } */
> +
> +float baz (float x, float y)
> +{
> +  return x * x * x * x * y * y * y * y;
> +}
> +
> +/* { dg-final { scan-tree-dump-times " \\* " 3 "optimized" } } */
> +/* { dg-final { cleanup-tree-dump "optimized" } } */
> Index: gcc/testsuite/gcc.dg/tree-ssa/pr18589-8.c
> ===================================================================
> --- gcc/testsuite/gcc.dg/tree-ssa/pr18589-8.c   (revision 0)
> +++ gcc/testsuite/gcc.dg/tree-ssa/pr18589-8.c   (revision 0)
> @@ -0,0 +1,10 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -ffast-math -fdump-tree-optimized" } */
> +
> +long double baz (long double x, long double y)
> +{
> +  return x * x * x * x * y * y * y * y;
> +}
> +
> +/* { dg-final { scan-tree-dump-times " \\* " 3 "optimized" } } */
> +/* { dg-final { cleanup-tree-dump "optimized" } } */
> Index: gcc/testsuite/gcc.dg/tree-ssa/pr18589-1.c
> ===================================================================
> --- gcc/testsuite/gcc.dg/tree-ssa/pr18589-1.c   (revision 0)
> +++ gcc/testsuite/gcc.dg/tree-ssa/pr18589-1.c   (revision 0)
> @@ -0,0 +1,10 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -ffast-math -fdump-tree-optimized" } */
> +
> +double baz (double x, double y)
> +{
> +  return x * x * x * x * y * y * y * y;
> +}
> +
> +/* { dg-final { scan-tree-dump-times " \\* " 3 "optimized" } } */
> +/* { dg-final { cleanup-tree-dump "optimized" } } */
> Index: gcc/testsuite/gcc.dg/tree-ssa/pr18589-2.c
> ===================================================================
> --- gcc/testsuite/gcc.dg/tree-ssa/pr18589-2.c   (revision 0)
> +++ gcc/testsuite/gcc.dg/tree-ssa/pr18589-2.c   (revision 0)
> @@ -0,0 +1,10 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -ffast-math -fdump-tree-optimized" } */
> +
> +double baz (double x, double y)
> +{
> +  return x * y * y * x * y * x * x * y;
> +}
> +
> +/* { dg-final { scan-tree-dump-times " \\* " 3 "optimized" } } */
> +/* { dg-final { cleanup-tree-dump "optimized" } } */
> Index: gcc/testsuite/gcc.dg/tree-ssa/pr18589-3.c
> ===================================================================
> --- gcc/testsuite/gcc.dg/tree-ssa/pr18589-3.c   (revision 0)
> +++ gcc/testsuite/gcc.dg/tree-ssa/pr18589-3.c   (revision 0)
> @@ -0,0 +1,10 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O3 -ffast-math -fdump-tree-optimized" } */
> +
> +double baz (double x, double y, double z)
> +{
> +  return x * x * y * y * y * z * z * z * z;
> +}
> +
> +/* { dg-final { scan-tree-dump-times " \\* " 6 "optimized" } } */
> +/* { dg-final { cleanup-tree-dump "optimized" } } */
> Index: gcc/passes.c
> ===================================================================
> --- gcc/passes.c        (revision 184997)
> +++ gcc/passes.c        (working copy)
> @@ -1325,7 +1325,7 @@ init_optimization_passes (void)
>         opportunities.  */
>        NEXT_PASS (pass_phi_only_cprop);
>        NEXT_PASS (pass_dse);
> -      NEXT_PASS (pass_reassoc);
> +      NEXT_PASS (pass_early_reassoc);
>        NEXT_PASS (pass_dce);
>        NEXT_PASS (pass_forwprop);
>        NEXT_PASS (pass_phiopt);
> @@ -1377,7 +1377,7 @@
>        }
>        NEXT_PASS (pass_lower_vector_ssa);
>        NEXT_PASS (pass_cse_reciprocals);
> -      NEXT_PASS (pass_reassoc);
> +      NEXT_PASS (pass_late_reassoc);
>        NEXT_PASS (pass_vrp);
>        NEXT_PASS (pass_dominator);
>        /* The only const/copy propagation opportunities left after
> Index: gcc/tree-ssa-reassoc.c
> ===================================================================
> --- gcc/tree-ssa-reassoc.c      (revision 184997)
> +++ gcc/tree-ssa-reassoc.c      (working copy)
> @@ -53,6 +53,9 @@ along with GCC; see the file COPYING3.  If not see
>     1. Breaking up subtract operations into addition + negate, where
>     it would promote the reassociation of adds.
>
> +    1a. During the same pass, expanding __builtin_pow variants with
> +    small integer exponents into linearized multiply trees.
> +
>     2. Left linearization of the expression trees, so that (A+B)+(C+D)
>     becomes (((A+B)+C)+D), which is easier for us to rewrite later.
>     During linearization, we place the operands of the binary
> @@ -61,6 +64,10 @@ along with GCC; see the file COPYING3.  If not see
>     3. Optimization of the operand lists, eliminating things like a +
>     -a, a & a, etc.
>
> +    3a. Combine repeated factors with the same occurrence counts
> +    into a __builtin_powi call that will later be optimized into
> +    an optimal number of multiplies.
> +
>     4. Rewrite the expression trees we linearized and optimized so
>     they are in proper rank order.
>
> @@ -169,6 +176,8 @@ static struct
>    int constants_eliminated;
>    int ops_eliminated;
>    int rewritten;
> +  int pows_expanded;
> +  int pows_created;
>  } reassociate_stats;
>
>  /* Operator, rank pair.  */
> @@ -190,6 +199,12 @@ static int next_operand_entry_id;
>     depth.  */
>  static long *bb_rank;
>
> +/* Distinguish between early and late reassociation passes.  Early
> +   reassociation expands and rebuilds __builtin_pow* calls.  This
> +   is not done in late reassociation to preserve the builtin
> +   optimizations performed in pass_cse_sincos.  */
> +static bool early_reassoc;
> +
>  /* Operand->rank hashtable.  */
>  static struct pointer_map_t *operand_rank;
>
> @@ -2759,6 +2774,164 @@ can_reassociate_p (tree op)
>    return false;
>  }
>
> +/* Define a limit on the exponent of a __builtin_pow or __builtin_powi
> +   that will be converted into a linear chain of multiplies prior to
> +   reassociation.  */
> +
> +#define MAX_POW_EXPAND 32
> +
> +/* Given STMT at *GSIP that is a __builtin_pow or __builtin_powi
> +   operation with base ARG0 and integer power EXPONENT, transform
> +   it into a linear chain of multiplies, provided that EXPONENT is
> +   of sufficiently small magnitude.  */
> +
> +static void
> +linear_expand_pow_common (gimple stmt, gimple_stmt_iterator *gsip,
> +                          tree arg0, HOST_WIDE_INT exponent)
> +{
> +  tree lhs = gimple_call_lhs (stmt);
> +  HOST_WIDE_INT abs_exp = abs (exponent);
> +  location_t loc = gimple_location (stmt);
> +  gimple mul_stmt = NULL;
> +  gimple div_stmt = NULL;
> +  tree result, type, target;
> +  gimple copy_stmt;
> +
> +  if (abs_exp > MAX_POW_EXPAND)
> +    return;
> +
> +  if (dump_file && (dump_flags & TDF_DETAILS))
> +    {
> +      fprintf (dump_file, "Expanding builtin pow/powi: ");
> +      print_gimple_stmt (dump_file, stmt, 0, 0);
> +    }
> +
> +  reassociate_stats.pows_expanded++;
> +
> +  type = TREE_TYPE (arg0);
> +
> +  if (exponent == 0)
> +    result = build_real (type, dconst1);
> +  else if (exponent == 1)
> +    result = arg0;
> +  else
> +    {
> +      tree target_ssa;
> +
> +      target = create_tmp_reg (type, "reassocpow");
> +      add_referenced_var (target);
> +
> +      if (exponent == -1)
> +        result = arg0;
> +      else
> +        {
> +          unsigned i;
> +
> +          target_ssa = make_ssa_name (target, NULL);
> +          mul_stmt = gimple_build_assign_with_ops (MULT_EXPR, target_ssa,
> +                                                   arg0, arg0);
> +          gimple_set_location (mul_stmt, loc);
> +          gsi_insert_before (gsip, mul_stmt, GSI_SAME_STMT);
> +
> +          for (i = abs_exp - 2; i > 0; i--)
> +            {
> +              tree prev_target_ssa = target_ssa;
> +              target_ssa = make_ssa_name (target, NULL);
> +              mul_stmt = gimple_build_assign_with_ops (MULT_EXPR, target_ssa,
> +                                                       prev_target_ssa, arg0);
> +              gimple_set_location (mul_stmt, loc);
> +              gsi_insert_before (gsip, mul_stmt, GSI_SAME_STMT);
> +            }
> +
> +          result = target_ssa;
> +        }
> +
> +      /* Possible TODO:  If we expand two __builtin_pow's with different
> +         negative exponents, the introduction of the RDIVs for inversion
> +         prevents combining their factors.  (If they have the same negative
> +         exponent, expression folding combines them as expected.)  I'm
> +         not worrying about this as it should be quite rare in practice.  */
> +      if (exponent < 0)
> +        {
> +          target_ssa = make_ssa_name (target, NULL);
> +          div_stmt = gimple_build_assign_with_ops (RDIV_EXPR, target_ssa,
> +                                                   build_real (type, dconst1),
> +                                                   result);
> +          gimple_set_location (div_stmt, loc);
> +          gsi_insert_before (gsip, div_stmt, GSI_SAME_STMT);
> +          result = target_ssa;
> +        }
> +    }
> +
> +  /* If we introduced multiplies but no inversion, avoid introducing a
> +     copy so that the copy doesn't truncate a larger reassociation chain.  */
> +  if (mul_stmt && !div_stmt)
> +    {
> +      unlink_stmt_vdef (stmt);
> +      gimple_set_lhs (mul_stmt, lhs);
> +      gsi_remove (gsip, true);
> +      update_stmt (mul_stmt);
> +      /* If we're at the end of the block, leave the iterator in a
> +         usable state.  */
> +      if (gsi_end_p (*gsip))
> +        *gsip = gsi_for_stmt (mul_stmt);
> +    }
> +  else
> +    {
> +      copy_stmt = gimple_build_assign (lhs, result);
> +      gimple_set_location (copy_stmt, loc);
> +      unlink_stmt_vdef (stmt);
> +      gsi_replace (gsip, copy_stmt, true);
> +    }
> +}
> +
> +/* Transform __builtin_powi into a linear chain of multiplies,
> +   if the exponent is of sufficiently small magnitude.  */
> +
> +static void
> +linear_expand_powi (gimple stmt, gimple_stmt_iterator *gsip)
> +{
> +  tree arg0 = gimple_call_arg (stmt, 0);
> +  tree arg1 = gimple_call_arg (stmt, 1);
> +  HOST_WIDE_INT exponent;
> +
> +  if (TREE_CODE (arg1) != INTEGER_CST)
> +    return;
> +
> +  if (host_integerp (arg1, 0))
> +    {
> +      exponent = TREE_INT_CST_LOW (arg1);
> +      linear_expand_pow_common (stmt, gsip, arg0, exponent);
> +    }
> +}
> +
> +/* Transform __builtin_pow into a linear chain of multiplies,
> +   if the exponent is constant and equivalent to an integer of
> +   sufficiently small magnitude.  */
> +
> +static void
> +linear_expand_pow (gimple stmt, gimple_stmt_iterator *gsip)
> +{
> +  tree arg0 = gimple_call_arg (stmt, 0);
> +  tree arg1 = gimple_call_arg (stmt, 1);
> +  REAL_VALUE_TYPE c, cint;
> +  HOST_WIDE_INT n;
> +
> +  /* The exponent must be a constant equivalent to an integer that
> +     fits in a HOST_WIDE_INT.  */
> +  if (TREE_CODE (arg1) != REAL_CST)
> +    return;
> +
> +  c = TREE_REAL_CST (arg1);
> +  if (REAL_EXP (&c) > HOST_BITS_PER_WIDE_INT)
> +    return;
> +
> +  n = real_to_integer (&c);
> +  real_from_integer (&cint, VOIDmode, n, n < 0 ? -1 : 0, 0);
> +
> +  if (real_identical (&c, &cint))
> +    linear_expand_pow_common (stmt, gsip, arg0, n);
> +}
> +
>  /* Break up subtract operations in block BB.
>
>     We do this top down because we don't know whether the subtract is
> @@ -2784,9 +2957,32 @@ break_up_subtract_bb (basic_block bb)
>
>    for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>      {
> +      tree fndecl;
>        gimple stmt = gsi_stmt (gsi);
>        gimple_set_visited (stmt, false);
>
> +      /* Look for __builtin_pow and __builtin_powi calls,
> +         and expand them to multiplies if possible.  */
> +      if (early_reassoc
> +          && flag_unsafe_math_optimizations
> +          && is_gimple_call (stmt)
> +          && (fndecl = gimple_call_fndecl (stmt))
> +          && DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_NORMAL)
> +        {
> +          switch (DECL_FUNCTION_CODE (fndecl))
> +            {
> +            CASE_FLT_FN (BUILT_IN_POW):
> +              linear_expand_pow (stmt, &gsi);
> +              break;
> +            CASE_FLT_FN (BUILT_IN_POWI):
> +              linear_expand_powi (stmt, &gsi);
> +              break;
> +            default:
> +              ;
> +            }
> +          continue;
> +        }
> +
>        if (!is_gimple_assign (stmt)
>            || !can_reassociate_p (gimple_assign_lhs (stmt)))
>          continue;
> @@ -2815,6 +3011,342 @@
>    break_up_subtract_bb (son);
>  }
>
> +/* Used for repeated factor analysis.  */
> +struct repeat_factor_d
> +{
> +  /* An SSA name that occurs in a multiply chain.  */
> +  tree factor;
> +
> +  /* Cached rank of the factor.  */
> +  unsigned rank;
> +
> +  /* Number of occurrences of the factor in the chain.  */
> +  HOST_WIDE_INT count;
> +
> +  /* An SSA name representing the product of this factor and
> +     all factors appearing later in the repeated factor vector.  */
> +  tree repr;
> +};
> +
> +typedef struct repeat_factor_d repeat_factor, *repeat_factor_t;
> +typedef const struct repeat_factor_d *const_repeat_factor_t;
> +
> +DEF_VEC_O (repeat_factor);
> +DEF_VEC_ALLOC_O (repeat_factor, heap);
> +
> +static VEC (repeat_factor, heap) *repeat_factor_vec;
> +
> +/* Used for sorting the repeat factor vector.  Sort primarily by
> +   ascending occurrence count, secondarily by descending rank.  */
> +
> +static int
> +compare_repeat_factors (const void *x1, const void *x2)
> +{
> +  const_repeat_factor_t rf1 = (const_repeat_factor_t) x1;
> +  const_repeat_factor_t rf2 = (const_repeat_factor_t) x2;
> +
> +  if (rf1->count != rf2->count)
> +    return rf1->count - rf2->count;
> +
> +  return rf2->rank - rf1->rank;
> +}
> +
> +/* Get a new SSA name for register variable *TARGET of type TYPE.
> +   If *TARGET is null or incompatible with TYPE, create the variable
> +   first.  */
> +
> +static tree
> +get_reassoc_pow_ssa_name (tree *target, tree type)
> +{
> +  if (!*target || !types_compatible_p (type, TREE_TYPE (*target)))
> +    {
> +      *target = create_tmp_reg (type, "reassocpow");
> +      add_referenced_var (*target);
> +    }
> +
> +  return make_ssa_name (*target, NULL);
> +}
> +
> +/* Look for repeated operands in OPS in the multiply tree rooted at
> +   STMT.  Replace them with an optimal sequence of multiplies and powi
> +   builtin calls, and remove the used operands from OPS.  Return an
> +   SSA name representing the value of the replacement sequence.  */
> +
> +static tree
> +attempt_builtin_powi (gimple stmt, VEC(operand_entry_t, heap) **ops,
> +                      tree *target)
> +{
> +  unsigned i, j, cached, vec_len;
> +  int ii;
> +  operand_entry_t oe;
> +  repeat_factor_t rf1, rf2;
> +  repeat_factor rfnew;
> +  tree target_ssa;
> +  tree result = NULL_TREE;
> +  tree type = TREE_TYPE (gimple_get_lhs (stmt));
> +  tree powi_fndecl = mathfn_built_in (type, BUILT_IN_POWI);
> +  gimple_stmt_iterator gsi = gsi_for_stmt (stmt);
> +  gimple mul_stmt, pow_stmt;
> +
> +  /* Nothing to do if BUILT_IN_POWI doesn't exist for this type and
> +     target.  */
> +  if (!powi_fndecl)
> +    return NULL_TREE;
> +
> +  /* Allocate the repeated factor vector.  */
> +  repeat_factor_vec = VEC_alloc (repeat_factor, heap, 10);
> +
> +  /* Scan the OPS vector for all SSA names in the product and build
> +     up a vector of occurrence counts for each factor.  */
> +  FOR_EACH_VEC_ELT (operand_entry_t, *ops, i, oe)
> +    {
> +      if (TREE_CODE (oe->op) == SSA_NAME)
> +        {
> +          FOR_EACH_VEC_ELT (repeat_factor, repeat_factor_vec, j, rf1)
> +            {
> +              if (rf1->factor == oe->op)
> +                {
> +                  rf1->count++;
> +                  break;
> +                }
> +            }
> +
> +          if (j >= VEC_length (repeat_factor, repeat_factor_vec))
> +            {
> +              rfnew.factor = oe->op;
> +              rfnew.rank = oe->rank;
> +              rfnew.count = 1;
> +              rfnew.repr = NULL_TREE;
> +              VEC_safe_push (repeat_factor, heap, repeat_factor_vec, &rfnew);
> +            }
> +        }
> +    }
> +
> +  vec_len = VEC_length (repeat_factor, repeat_factor_vec);
> +
> +  /* Sort the repeated factor vector by (a) increasing occurrence count,
> +     and (b) decreasing rank.  */
> +  VEC_qsort (repeat_factor, repeat_factor_vec, compare_repeat_factors);
> +
> +  /* There are a variety of ways we could combine factors with different
> +     occurrence counts.  It seems best in practice to try to combine as
> +     many base factors as possible into a product before applying
> +     builtin_powi to the result.  The sort order chosen for the repeated
> +     factor vector allows us to cache partial results for the product
> +     of the base factors for subsequent use.
> +
> +     As an example, consider x * x * y * y * y * y * z * z * z * z.
> +     We want to first compose the product x * y * z, raise it to the
> +     second power, and then multiply this by the product y * z raised
> +     to the second power.  This can be done in 5 multiplies provided
> +     we cache y * z for use in both expressions:
> +
> +        t1 = y * z
> +        t2 = t1 * x
> +        t3 = t2 * t2
> +        t4 = t1 * t1
> +        result = t3 * t4
> +
> +     Alternatively we could also do this in 5 multiplies by first
> +     calculating y * z, squaring it twice, and multiplying by x * x.
> +     However, if the occurrence counts were not powers of 2 as in
> +     this example, combining higher occurrence counts first would
> +     require more multiplies than combining lower ones first.  */
> +
> +  /* Repeatedly look for opportunities to create a builtin_powi call.  */
> +  while (true)
> +    {
> +      HOST_WIDE_INT power;
> +
> +      /* Find the first factor in the repeated factor vector whose
> +         occurrence count is at least 2.  If no such factor exists,
> +         there are no builtin_powi opportunities remaining.  */
> +      FOR_EACH_VEC_ELT (repeat_factor, repeat_factor_vec, j, rf1)
> +        {
> +          if (rf1->count >= 2)
> +            break;
> +        }
> +
> +      if (j >= vec_len)
> +        break;
> +
> +      power = rf1->count;
> +
> +      if (dump_file && (dump_flags & TDF_DETAILS))
> +        {
> +          unsigned elt;
> +          repeat_factor_t rf;
> +          fputs ("Building __builtin_pow call for (", dump_file);
> +          for (elt = j; elt < vec_len; elt++)
> +            {
> +              rf = VEC_index (repeat_factor, repeat_factor_vec, elt);
> +              print_generic_expr (dump_file, rf->factor, 0);
> +              if (elt < vec_len - 1)
> +                fputs (" * ", dump_file);
> +            }
> +          fprintf (dump_file, ")^%ld\n", power);
> +        }
> +
> +      reassociate_stats.pows_created++;
> +
> +      /* Visit each element of the vector in reverse order (so that
> +         high-occurrence elements are visited first, and within the
> +         same occurrence count, lower-ranked elements are visited
> +         first).  Form a linear product of all elements in this order
> +         whose occurrence count is at least that of element J.
> +         Record the SSA name representing the product of each element
> +         with all subsequent elements in the vector.  */
> +      if (j == vec_len - 1)
> +        rf1->repr = rf1->factor;
> +      else
> +        {
> +          for (ii = vec_len - 2; ii >= (int)j; ii--)
> +            {
> +              tree op1, op2;
> +
> +              rf1 = VEC_index (repeat_factor, repeat_factor_vec, ii);
> +              rf2 = VEC_index (repeat_factor, repeat_factor_vec, ii + 1);
> +
> +              /* Init the last factor's representative to be itself.  */
> +              if (!rf2->repr)
> +                rf2->repr = rf2->factor;
> +
> +              op1 = rf1->factor;
> +              op2 = rf2->repr;
> +
> +              target_ssa = get_reassoc_pow_ssa_name (target, type);
> +              mul_stmt = gimple_build_assign_with_ops (MULT_EXPR, target_ssa,
> +                                                       op1, op2);
> +              gimple_set_location (mul_stmt, gimple_location (stmt));
> +              gsi_insert_before (&gsi, mul_stmt, GSI_SAME_STMT);
> +              rf1->repr = target_ssa;
> +            }
> +        }
> +
> +      /* Form a call to __builtin_powi for the maximum product
> +         just formed, raised to the power obtained earlier.  */
> +      rf1 = VEC_index (repeat_factor, repeat_factor_vec, j);
> +      target_ssa = get_reassoc_pow_ssa_name (target, type);
> +      pow_stmt = gimple_build_call (powi_fndecl, 2, rf1->repr,
> +                                    build_int_cst (integer_type_node, power));
> +      gimple_call_set_lhs (pow_stmt, target_ssa);
> +      gimple_set_location (pow_stmt, gimple_location (stmt));
> +      gsi_insert_before (&gsi, pow_stmt, GSI_SAME_STMT);
> +
> +      /* Decrement the occurrence count of each element in the product
> +         by the count found above, and remove this many copies of each
> +         factor from OPS.  */
> +      for (i = j; i < vec_len; i++)
> +        {
> +          unsigned k = power;
> +          unsigned n;
> +
> +          rf1 = VEC_index (repeat_factor, repeat_factor_vec, i);
> +          rf1->count -= power;
> +
> +          FOR_EACH_VEC_ELT_REVERSE (operand_entry_t, *ops, n, oe)
> +            {
> +              if (oe->op == rf1->factor)
> +                {
> +                  VEC_ordered_remove (operand_entry_t, *ops, n);
> +                  if (--k == 0)
> +                    break;
> +                }
> +            }
> +        }
> +
> +      /* If we previously formed at least one other builtin_powi call,
> +         form the product of this one and those others.  */
> +      if (result)
> +        {
> +          tree new_target_ssa = get_reassoc_pow_ssa_name (target, type);
> +          mul_stmt = gimple_build_assign_with_ops (MULT_EXPR, new_target_ssa,
> +                                                   target_ssa, result);
> +          gimple_set_location (mul_stmt, gimple_location (stmt));
> +          gsi_insert_before (&gsi, mul_stmt, GSI_SAME_STMT);
> +          result = new_target_ssa;
> +        }
> +      else
> +        result = target_ssa;
> +    }
> +
> +  /* At this point all elements in the repeated factor vector have a
> +     remaining occurrence count of 0 or 1.  Scanning the vector in
> +     reverse order, find the last element with a 1 before a 0 is found.
> +     If this element has an SSA representative and is not the last
> +     element, then it represents a multiply we have already calculated.
> +     Multiply the result so far by this SSA name.  Remove one copy of
> +     each factor in the product from OPS.
=A0*/ > + =A0cached =3D vec_len; > + > + =A0for (ii =3D vec_len - 1; ii >=3D 0; ii--) > + =A0 =A0{ > + =A0 =A0 =A0rf1 =3D VEC_index (repeat_factor, repeat_factor_vec, ii); > + =A0 =A0 =A0if (rf1->count =3D=3D 0) > + =A0 =A0 =A0 { > + =A0 =A0 =A0 =A0 cached =3D ii + 1; > + =A0 =A0 =A0 =A0 break; > + =A0 =A0 =A0 } > + =A0 =A0} > + > + =A0if (cached < vec_len) > + =A0 =A0{ > + =A0 =A0 =A0rf1 =3D VEC_index (repeat_factor, repeat_factor_vec, cached); > + > + =A0 =A0 =A0if (dump_file && (dump_flags & TDF_DETAILS)) > + =A0 =A0 =A0 { > + =A0 =A0 =A0 =A0 unsigned elt; > + =A0 =A0 =A0 =A0 repeat_factor_t rf; > + =A0 =A0 =A0 =A0 fputs ("Multiplying by cached product ", dump_file); > + =A0 =A0 =A0 =A0 for (elt =3D cached; elt < vec_len; elt++) > + =A0 =A0 =A0 =A0 =A0 { > + =A0 =A0 =A0 =A0 =A0 =A0 rf =3D VEC_index (repeat_factor, repeat_factor_= vec, elt); > + =A0 =A0 =A0 =A0 =A0 =A0 print_generic_expr (dump_file, rf->factor, 0); > + =A0 =A0 =A0 =A0 =A0 =A0 if (elt < vec_len - 1) > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 fputs (" * ", dump_file); > + =A0 =A0 =A0 =A0 =A0 } > + =A0 =A0 =A0 =A0 fputs ("\n", dump_file); > + =A0 =A0 =A0 } > + > + =A0 =A0 =A0if (!result) > + =A0 =A0 =A0 result =3D rf1->repr; > + =A0 =A0 =A0else > + =A0 =A0 =A0 { > + =A0 =A0 =A0 =A0 tree new_target_ssa =3D get_reassoc_pow_ssa_name (targe= t, type); > + =A0 =A0 =A0 =A0 mul_stmt =3D gimple_build_assign_with_ops (MULT_EXPR, n= ew_target_ssa, > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0= =A0 =A0 =A0 =A0 =A0 =A0 =A0rf1->repr, result); > + =A0 =A0 =A0 =A0 gimple_set_location (mul_stmt, gimple_location (stmt)); > + =A0 =A0 =A0 =A0 gsi_insert_before (&gsi, mul_stmt, GSI_SAME_STMT); > + =A0 =A0 =A0 =A0 result =3D new_target_ssa; > + =A0 =A0 =A0 } > + =A0 =A0} > + > + =A0for (i =3D cached; i < vec_len; i++) > + =A0 =A0{ > + =A0 =A0 =A0unsigned n; > + > + =A0 =A0 =A0rf1 =3D VEC_index (repeat_factor, repeat_factor_vec, i); > + =A0 =A0 =A0rf1->count--; > + > + =A0 =A0 
=A0FOR_EACH_VEC_ELT_REVERSE (operand_entry_t, *ops, n, oe) > + =A0 =A0 =A0 { > + =A0 =A0 =A0 =A0 if (oe->op =3D=3D rf1->factor) > + =A0 =A0 =A0 =A0 =A0 { > + =A0 =A0 =A0 =A0 =A0 =A0 VEC_ordered_remove (operand_entry_t, *ops, n); > + =A0 =A0 =A0 =A0 =A0 =A0 break; > + =A0 =A0 =A0 =A0 =A0 } > + =A0 =A0 =A0 } > + =A0 =A0} > + > + =A0/* Clean up. =A0*/ > + =A0VEC_free (repeat_factor, heap, repeat_factor_vec); > + > + =A0/* Return the final product as computed herein. =A0Note that there > + =A0 =A0 may still be some elements with single occurrence count left > + =A0 =A0 in OPS; those will be handled by the normal reassociation logic= . =A0*/ > + =A0return result; > +} > + > =A0/* Reassociate expressions in basic block BB and its post-dominator as > =A0 =A0children. =A0*/ > > @@ -2823,6 +3355,7 @@ reassociate_bb (basic_block bb) > =A0{ > =A0 gimple_stmt_iterator gsi; > =A0 basic_block son; > + =A0tree target =3D NULL_TREE; > > =A0 for (gsi =3D gsi_last_bb (bb); !gsi_end_p (gsi); gsi_prev (&gsi)) > =A0 =A0 { > @@ -2883,6 +3416,8 @@ reassociate_bb (basic_block bb) > > =A0 =A0 =A0 =A0 =A0if (associative_tree_code (rhs_code)) > =A0 =A0 =A0 =A0 =A0 =A0{ > + =A0 =A0 =A0 =A0 =A0 =A0 tree powi_result =3D NULL_TREE; > + > =A0 =A0 =A0 =A0 =A0 =A0 =A0VEC(operand_entry_t, heap) *ops =3D NULL; > > =A0 =A0 =A0 =A0 =A0 =A0 =A0/* There may be no immediate uses left by the = time we > @@ -2904,7 +3439,14 @@ reassociate_bb (basic_block bb) > =A0 =A0 =A0 =A0 =A0 =A0 =A0if (rhs_code =3D=3D BIT_IOR_EXPR || rhs_code = =3D=3D BIT_AND_EXPR) > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0optimize_range_tests (rhs_code, &ops); > > - =A0 =A0 =A0 =A0 =A0 =A0 if (VEC_length (operand_entry_t, ops) =3D=3D 1) > + =A0 =A0 =A0 =A0 =A0 =A0 if (early_reassoc > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 && rhs_code =3D=3D MULT_EXPR > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 && flag_unsafe_math_optimizations) > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 powi_result =3D attempt_builtin_powi (stmt,= &ops, &target); > + > + =A0 =A0 =A0 =A0 =A0 =A0 /* If 
the operand vector is now empty, all oper= ands were > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0consumed by the __builtin_pow optimizati= on. =A0*/ > + =A0 =A0 =A0 =A0 =A0 =A0 if (VEC_length (operand_entry_t, ops) =3D=3D 0) > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0{ > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0if (dump_file && (dump_flags & TDF_DET= AILS)) > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0{ > @@ -2913,9 +3455,7 @@ reassociate_bb (basic_block bb) > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0} > > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0rhs1 =3D gimple_assign_rhs1 (stmt); > - =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 gimple_assign_set_rhs_from_tree (&gsi, > - =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0= =A0 =A0 =A0 =A0 =A0 =A0 =A0VEC_last (operand_entry_t, > - =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0= =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0ops)->op); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 gimple_assign_set_rhs_from_tree (&gsi, = powi_result); > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0update_stmt (stmt); > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0remove_visited_stmt_chain (rhs1); > > @@ -2925,6 +3465,39 @@ reassociate_bb (basic_block bb) > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0print_gimple_stmt (dump_file, = stmt, 0, 0); > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0} > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0} > + > + =A0 =A0 =A0 =A0 =A0 =A0 else if (VEC_length (operand_entry_t, ops) =3D= =3D 1) > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 { > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 tree last_op =3D VEC_last (operand_entr= y_t, ops)->op; > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 if (dump_file && (dump_flags & TDF_DETA= ILS)) > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 { > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 fprintf (dump_file, "Transformi= ng "); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 print_gimple_stmt (dump_file, s= tmt, 0, 0); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 } > + > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 if (powi_result) > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 { > + =A0 
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 rhs1 =3D gimple_assign_rhs1 (st= mt); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 gimple_assign_set_rhs_with_ops_= 1 (&gsi, MULT_EXPR, > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0= =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 powi_result, last_op, > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0= =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 NULL_TREE); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 update_stmt (gsi_stmt (gsi)); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 remove_visited_stmt_chain (rhs1= ); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 } > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 else > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 { > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 rhs1 =3D gimple_assign_rhs1 (st= mt); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 gimple_assign_set_rhs_from_tree= (&gsi, last_op); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 update_stmt (stmt); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 remove_visited_stmt_chain (rhs1= ); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 } > + > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 if (dump_file && (dump_flags & TDF_DETA= ILS)) > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 { > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 fprintf (dump_file, " into "); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 print_gimple_stmt (dump_file, s= tmt, 0, 0); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 } > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 } > =A0 =A0 =A0 =A0 =A0 =A0 =A0else > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0{ > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0enum machine_mode mode =3D TYPE_MODE (= TREE_TYPE (lhs)); > @@ -2940,6 +3513,24 @@ reassociate_bb (basic_block bb) > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0rewrite_expr_tree_parallel (stmt, = width, ops); > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0else > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0rewrite_expr_tree (stmt, 0, ops, f= alse); > + > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 /* If we combined some repeated factors= into a > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 
=A0 =A0 =A0__builtin_powi call, multiply th= at result by the > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0reassociated operands. =A0*/ > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 if (powi_result) > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 { > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 gimple mul_stmt; > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 tree type =3D TREE_TYPE (gimple= _get_lhs (stmt)); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 tree target_ssa =3D get_reassoc= _pow_ssa_name (&target, > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0= =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 type); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 gimple_set_lhs (stmt, target_ss= a); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 update_stmt (stmt); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 mul_stmt =3D gimple_build_assig= n_with_ops (MULT_EXPR, lhs, > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0= =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0powi_result, > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0= =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0target_ssa); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 gimple_set_location (mul_stmt, = gimple_location (stmt)); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 gsi_insert_after (&gsi, mul_stm= t, GSI_NEW_STMT); > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 } > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0} > > =A0 =A0 =A0 =A0 =A0 =A0 =A0VEC_free (operand_entry_t, heap, ops); > @@ -3054,6 +3645,10 @@ fini_reassoc (void) > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0reassociate_stats.= ops_eliminated); > =A0 statistics_counter_event (cfun, "Statements rewritten", > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0reassociate_stats.= rewritten); > + =A0statistics_counter_event (cfun, "Built-in pow[i] calls expanded", > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 reassociate_stats.p= ows_expanded); > + =A0statistics_counter_event (cfun, 
"Built-in powi calls created", > + =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 reassociate_stats.p= ows_created); > > =A0 pointer_map_destroy (operand_rank); > =A0 free_alloc_pool (operand_entry_pool); > @@ -3077,19 +3672,33 @@ execute_reassoc (void) > =A0 return 0; > =A0} > > +static unsigned int > +execute_early_reassoc (void) > +{ > + =A0early_reassoc =3D true; > + =A0return execute_reassoc (); > +} > + > +static unsigned int > +execute_late_reassoc (void) > +{ > + =A0early_reassoc =3D false; > + =A0return execute_reassoc (); > +} > + > =A0static bool > =A0gate_tree_ssa_reassoc (void) > =A0{ > =A0 return flag_tree_reassoc !=3D 0; > =A0} > > -struct gimple_opt_pass pass_reassoc =3D > +struct gimple_opt_pass pass_early_reassoc =3D > =A0{ > =A0{ > =A0 GIMPLE_PASS, > - =A0"reassoc", =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 /* na= me */ > + =A0"reassoc1", =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0/* na= me */ > =A0 gate_tree_ssa_reassoc, =A0 =A0 =A0 =A0 =A0 =A0 =A0 /* gate */ > - =A0execute_reassoc, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 /* execute = */ > + =A0execute_early_reassoc, =A0 =A0 =A0 =A0 =A0 =A0 =A0 /* execute */ > =A0 NULL, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0= =A0 =A0 =A0 =A0/* sub */ > =A0 NULL, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0= =A0 =A0 =A0 =A0/* next */ > =A0 0, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 = =A0 /* static_pass_number */ > @@ -3103,3 +3712,24 @@ gate_tree_ssa_reassoc (void) > =A0 =A0 | TODO_ggc_collect =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 /* todo_flags_= finish */ > =A0} > =A0}; > + > +struct gimple_opt_pass pass_late_reassoc =3D > +{ > + { > + =A0GIMPLE_PASS, > + =A0"reassoc2", =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0/* na= me */ > + =A0gate_tree_ssa_reassoc, =A0 =A0 =A0 =A0 =A0 =A0 =A0 /* gate */ > + =A0execute_late_reassoc, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 = =A0/* execute */ > + =A0NULL, =A0 =A0 =A0 =A0 =A0 =A0 
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 = =A0 =A0 =A0 =A0 =A0/* sub */ > + =A0NULL, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 = =A0 =A0 =A0 =A0 =A0/* next */ > + =A00, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 = =A0 /* static_pass_number */ > + =A0TV_TREE_REASSOC, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 /* tv_id */ > + =A0PROP_cfg | PROP_ssa, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 /* properties_r= equired */ > + =A00, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 = =A0 /* properties_provided */ > + =A00, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 = =A0 /* properties_destroyed */ > + =A00, =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 = =A0 /* todo_flags_start */ > + =A0TODO_verify_ssa > + =A0 =A0| TODO_verify_flow > + =A0 =A0| TODO_ggc_collect =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 /* todo_flags= _finish */ > + } > +}; > >