From: Richard Biener
Date: Wed, 10 May 2017 08:44:00 -0000
Subject: Re: [PATCH 13/13] make inverted_post_order_compute() operate on a vec
To: tbsaunde+gcc@tbsaunde.org
Cc: GCC Patches
In-Reply-To: <20170509205242.2237-14-tbsaunde+gcc@tbsaunde.org>
References: <20170509205242.2237-1-tbsaunde+gcc@tbsaunde.org> <20170509205242.2237-14-tbsaunde+gcc@tbsaunde.org>

On Tue, May 9, 2017 at 10:52 PM,  <tbsaunde+gcc@tbsaunde.org> wrote:
> From: Trevor Saunders
>
> gcc/ChangeLog:

Ok.

Richard.

> 2017-05-09  Trevor Saunders
>
>         * cfganal.c (inverted_post_order_compute): Change argument type
>         to vec<int> *.
>         * cfganal.h (inverted_post_order_compute): Adjust prototype.
>         * df-core.c (rest_of_handle_df_initialize): Adjust.
>         (rest_of_handle_df_finish): Likewise.
>         (df_analyze_1): Likewise.
>         (df_analyze): Likewise.
>         (loop_inverted_post_order_compute): Change argument to be a vec<int> *.
>         (df_analyze_loop): Adjust.
>         (df_get_n_blocks): Likewise.
>         (df_get_postorder): Likewise.
>         * df.h (struct df_d): Change field to be a vec<int>.
>         * lcm.c (compute_laterin): Adjust.
>         (compute_available): Likewise.
>         * lra-lives.c (lra_create_live_ranges_1): Likewise.
>         * tree-ssa-dce.c (remove_dead_stmt): Likewise.
>         * tree-ssa-pre.c (compute_antic): Likewise.
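
For anyone skimming the archive, the caller-side effect of the interface
change boils down to this idiom swap (a minimal sketch distilled from the
lcm.c hunks below, not part of the patch itself; process_block () is a
hypothetical stand-in for whatever a caller does with each block index):

  /* Before: allocate a worst-case buffer, get the element count back
     as the return value, and free the buffer by hand.  */
  int *postorder = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
  int postorder_num = inverted_post_order_compute (postorder);
  for (int i = 0; i < postorder_num; ++i)
    process_block (postorder[i]);
  free (postorder);

  /* After: the vec tracks its own length, and auto_vec releases its
     storage when it goes out of scope.  */
  auto_vec<int> postorder;
  inverted_post_order_compute (&postorder);
  for (unsigned int i = 0; i < postorder.length (); ++i)
    process_block (postorder[i]);
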
> ---
>  gcc/cfganal.c      | 14 ++++++--------
>  gcc/cfganal.h      |  2 +-
>  gcc/df-core.c      | 56 +++++++++++++++++++++++++-----------------------------
>  gcc/df.h           |  4 +---
>  gcc/lcm.c          | 14 ++++++--------
>  gcc/lra-lives.c    |  9 ++++-----
>  gcc/tree-ssa-dce.c | 10 ++++------
>  gcc/tree-ssa-pre.c |  9 ++++-----
>  8 files changed, 52 insertions(+), 66 deletions(-)
>
> diff --git a/gcc/cfganal.c b/gcc/cfganal.c
> index 27b453ca3f7..a3a6ea86994 100644
> --- a/gcc/cfganal.c
> +++ b/gcc/cfganal.c
> @@ -790,12 +790,12 @@ dfs_find_deadend (basic_block bb)
>     and start looking for a "dead end" from that block
>     and do another inverted traversal from that block.  */
>
> -int
> -inverted_post_order_compute (int *post_order,
> +void
> +inverted_post_order_compute (vec<int> *post_order,
>                              sbitmap *start_points)
>  {
>    basic_block bb;
> -  int post_order_num = 0;
> +  post_order->reserve_exact (n_basic_blocks_for_fn (cfun));
>
>    if (flag_checking)
>      verify_no_unreachable_blocks ();
> @@ -863,13 +863,13 @@ inverted_post_order_compute (int *post_order,
>                    time, check its predecessors.  */
>                 stack.quick_push (ei_start (pred->preds));
>               else
> -               post_order[post_order_num++] = pred->index;
> +               post_order->quick_push (pred->index);
>             }
>           else
>             {
>               if (bb != EXIT_BLOCK_PTR_FOR_FN (cfun)
>                   && ei_one_before_end_p (ei))
> -               post_order[post_order_num++] = bb->index;
> +               post_order->quick_push (bb->index);
>
>               if (!ei_one_before_end_p (ei))
>                 ei_next (&stack.last ());
> @@ -927,9 +927,7 @@ inverted_post_order_compute (int *post_order,
>    while (!stack.is_empty ());
>
>    /* EXIT_BLOCK is always included.  */
> -  post_order[post_order_num++] = EXIT_BLOCK;
> -
> -  return post_order_num;
> +  post_order->quick_push (EXIT_BLOCK);
>  }
>
>  /* Compute the depth first search order of FN and store in the array
> diff --git a/gcc/cfganal.h b/gcc/cfganal.h
> index 7df484b8441..39bb5e547a5 100644
> --- a/gcc/cfganal.h
> +++ b/gcc/cfganal.h
> @@ -63,7 +63,7 @@ extern void add_noreturn_fake_exit_edges (void);
>  extern void connect_infinite_loops_to_exit (void);
>  extern int post_order_compute (int *, bool, bool);
>  extern basic_block dfs_find_deadend (basic_block);
> -extern int inverted_post_order_compute (int *, sbitmap *start_points = 0);
> +extern void inverted_post_order_compute (vec<int> *postorder, sbitmap *start_points = 0);
>  extern int pre_and_rev_post_order_compute_fn (struct function *,
>                                                int *, int *, bool);
>  extern int pre_and_rev_post_order_compute (int *, int *, bool);
> diff --git a/gcc/df-core.c b/gcc/df-core.c
> index 1b270d417aa..1e84d4d948f 100644
> --- a/gcc/df-core.c
> +++ b/gcc/df-core.c
> @@ -702,10 +702,9 @@ rest_of_handle_df_initialize (void)
>      df_live_add_problem ();
>
>    df->postorder = XNEWVEC (int, last_basic_block_for_fn (cfun));
> -  df->postorder_inverted = XNEWVEC (int, last_basic_block_for_fn (cfun));
>    df->n_blocks = post_order_compute (df->postorder, true, true);
> -  df->n_blocks_inverted = inverted_post_order_compute (df->postorder_inverted);
> -  gcc_assert (df->n_blocks == df->n_blocks_inverted);
> +  inverted_post_order_compute (&df->postorder_inverted);
> +  gcc_assert ((unsigned) df->n_blocks == df->postorder_inverted.length ());
>
>    df->hard_regs_live_count = XCNEWVEC (unsigned int, FIRST_PSEUDO_REGISTER);
>
> @@ -816,7 +815,7 @@ rest_of_handle_df_finish (void)
>      }
>
>    free (df->postorder);
> -  free (df->postorder_inverted);
> +  df->postorder_inverted.release ();
>    free (df->hard_regs_live_count);
>    free (df);
>    df = NULL;
> @@ -1198,7 +1197,7 @@ df_analyze_1 (void)
>    int i;
>
>    /* These should be the same.  */
> -  gcc_assert (df->n_blocks == df->n_blocks_inverted);
> +  gcc_assert ((unsigned) df->n_blocks == df->postorder_inverted.length ());
>
>    /* We need to do this before the df_verify_all because this is
>       not kept incrementally up to date.  */
> @@ -1222,8 +1221,8 @@ df_analyze_1 (void)
>        if (dflow->problem->dir == DF_FORWARD)
>          df_analyze_problem (dflow,
>                              df->blocks_to_analyze,
> -                            df->postorder_inverted,
> -                            df->n_blocks_inverted);
> +                            df->postorder_inverted.address (),
> +                            df->postorder_inverted.length ());
>        else
>          df_analyze_problem (dflow,
>                              df->blocks_to_analyze,
> @@ -1249,23 +1248,21 @@ void
>  df_analyze (void)
>  {
>    bitmap current_all_blocks = BITMAP_ALLOC (&df_bitmap_obstack);
> -  int i;
>
>    free (df->postorder);
> -  free (df->postorder_inverted);
>    df->postorder = XNEWVEC (int, last_basic_block_for_fn (cfun));
> -  df->postorder_inverted = XNEWVEC (int, last_basic_block_for_fn (cfun));
>    df->n_blocks = post_order_compute (df->postorder, true, true);
> -  df->n_blocks_inverted = inverted_post_order_compute (df->postorder_inverted);
> +  df->postorder_inverted.truncate (0);
> +  inverted_post_order_compute (&df->postorder_inverted);
>
> -  for (i = 0; i < df->n_blocks; i++)
> +  for (int i = 0; i < df->n_blocks; i++)
>      bitmap_set_bit (current_all_blocks, df->postorder[i]);
>
>    if (flag_checking)
>      {
>        /* Verify that POSTORDER_INVERTED only contains blocks reachable from
>           the ENTRY block.  */
> -      for (i = 0; i < df->n_blocks_inverted; i++)
> +      for (unsigned int i = 0; i < df->postorder_inverted.length (); i++)
>         gcc_assert (bitmap_bit_p (current_all_blocks,
>                                   df->postorder_inverted[i]));
>      }
> @@ -1277,9 +1274,10 @@ df_analyze (void)
>        bitmap_and_into (df->blocks_to_analyze, current_all_blocks);
>        df->n_blocks = df_prune_to_subcfg (df->postorder,
>                                           df->n_blocks, df->blocks_to_analyze);
> -      df->n_blocks_inverted = df_prune_to_subcfg (df->postorder_inverted,
> -                                                  df->n_blocks_inverted,
> +      unsigned int newlen = df_prune_to_subcfg (df->postorder_inverted.address (),
> +                                                df->postorder_inverted.length (),
>                                                  df->blocks_to_analyze);
> +      df->postorder_inverted.truncate (newlen);
>        BITMAP_FREE (current_all_blocks);
>      }
>    else
> @@ -1355,13 +1353,14 @@ loop_post_order_compute (int *post_order, struct loop *loop)
>  /* Compute the reverse top sort order of the inverted sub-CFG specified
>     by LOOP.  Returns the number of blocks which is always loop->num_nodes.  */
>
> -static int
> -loop_inverted_post_order_compute (int *post_order, struct loop *loop)
> +static void
> +loop_inverted_post_order_compute (vec<int> *post_order, struct loop *loop)
>  {
>    basic_block bb;
>    edge_iterator *stack;
>    int sp;
> -  int post_order_num = 0;
> +
> +  post_order->reserve_exact (loop->num_nodes);
>
>    /* Allocate stack for back-tracking up CFG.  */
>    stack = XNEWVEC (edge_iterator, loop->num_nodes + 1);
> @@ -1398,13 +1397,13 @@ loop_inverted_post_order_compute (int *post_order, struct loop *loop)
>                    time, check its predecessors.  */
>                 stack[sp++] = ei_start (pred->preds);
>               else
> -               post_order[post_order_num++] = pred->index;
> +               post_order->quick_push (pred->index);
>             }
>           else
>             {
>               if (flow_bb_inside_loop_p (loop, bb)
>                   && ei_one_before_end_p (ei))
> -               post_order[post_order_num++] = bb->index;
> +               post_order->quick_push (bb->index);
>
>               if (!ei_one_before_end_p (ei))
>                 ei_next (&stack[sp - 1]);
> @@ -1414,7 +1413,6 @@ loop_inverted_post_order_compute (int *post_order, struct loop *loop)
>      }
>
>    free (stack);
> -  return post_order_num;
>  }
>
>
> @@ -1424,15 +1422,13 @@ void
>  df_analyze_loop (struct loop *loop)
>  {
>    free (df->postorder);
> -  free (df->postorder_inverted);
>
>    df->postorder = XNEWVEC (int, loop->num_nodes);
> -  df->postorder_inverted = XNEWVEC (int, loop->num_nodes);
> +  df->postorder_inverted.truncate (0);
>    df->n_blocks = loop_post_order_compute (df->postorder, loop);
> -  df->n_blocks_inverted
> -    = loop_inverted_post_order_compute (df->postorder_inverted, loop);
> +  loop_inverted_post_order_compute (&df->postorder_inverted, loop);
>    gcc_assert ((unsigned) df->n_blocks == loop->num_nodes);
> -  gcc_assert ((unsigned) df->n_blocks_inverted == loop->num_nodes);
> +  gcc_assert (df->postorder_inverted.length () == loop->num_nodes);
>
>    bitmap blocks = BITMAP_ALLOC (&df_bitmap_obstack);
>    for (int i = 0; i < df->n_blocks; ++i)
> @@ -1453,8 +1449,8 @@ df_get_n_blocks (enum df_flow_dir dir)
>
>    if (dir == DF_FORWARD)
>      {
> -      gcc_assert (df->postorder_inverted);
> -      return df->n_blocks_inverted;
> +      gcc_assert (df->postorder_inverted.length ());
> +      return df->postorder_inverted.length ();
>      }
>
>    gcc_assert (df->postorder);
> @@ -1473,8 +1469,8 @@ df_get_postorder (enum df_flow_dir dir)
>
>    if (dir == DF_FORWARD)
>      {
> -      gcc_assert (df->postorder_inverted);
> -      return df->postorder_inverted;
> +      gcc_assert (df->postorder_inverted.length ());
> +      return df->postorder_inverted.address ();
>      }
>    gcc_assert (df->postorder);
>    return df->postorder;
> diff --git a/gcc/df.h b/gcc/df.h
> index 681ff32098e..07fd3345d9d 100644
> --- a/gcc/df.h
> +++ b/gcc/df.h
> @@ -582,11 +582,9 @@ struct df_d
>    bitmap_head insns_to_notes_rescan;
>    int *postorder;               /* The current set of basic blocks
>                                     in reverse postorder.  */
> -  int *postorder_inverted;      /* The current set of basic blocks
> +  vec<int> postorder_inverted;  /* The current set of basic blocks
>                                     in reverse postorder of inverted CFG.  */
>    int n_blocks;                 /* The number of blocks in reverse postorder.  */
> -  int n_blocks_inverted;        /* The number of blocks
> -                                   in reverse postorder of inverted CFG.  */
>
>    /* An array [FIRST_PSEUDO_REGISTER], indexed by regno, of the number
>       of refs that qualify as being real hard regs uses.  Artificial
> diff --git a/gcc/lcm.c b/gcc/lcm.c
> index edc86b57009..e8666274211 100644
> --- a/gcc/lcm.c
> +++ b/gcc/lcm.c
> @@ -270,9 +270,9 @@ compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
>
>    /* Add all the blocks to the worklist.  This prevents an early exit from
>       the loop given our optimistic initialization of LATER above.  */
> -  int *postorder = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
> -  int postorder_num = inverted_post_order_compute (postorder);
> -  for (int i = 0; i < postorder_num; ++i)
> +  auto_vec<int> postorder;
> +  inverted_post_order_compute (&postorder);
> +  for (unsigned int i = 0; i < postorder.length (); ++i)
>      {
>        bb = BASIC_BLOCK_FOR_FN (cfun, postorder[i]);
>        if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
> @@ -281,7 +281,6 @@ compute_laterin (struct edge_list *edge_list, sbitmap *earliest,
>        *qin++ = bb;
>        bb->aux = bb;
>      }
> -  free (postorder);
>
>    /* Note that we do not use the last allocated element for our queue,
>       as EXIT_BLOCK is never inserted into it.  */
> @@ -512,9 +511,9 @@ compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
>    /* Put every block on the worklist; this is necessary because of the
>       optimistic initialization of AVOUT above.  Use inverted postorder
>       to make the dataflow problem require less iterations.  */
> -  int *postorder = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
> -  int postorder_num = inverted_post_order_compute (postorder);
> -  for (int i = 0; i < postorder_num; ++i)
> +  auto_vec<int> postorder;
> +  inverted_post_order_compute (&postorder);
> +  for (unsigned int i = 0; i < postorder.length (); ++i)
>      {
>        bb = BASIC_BLOCK_FOR_FN (cfun, postorder[i]);
>        if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
> @@ -523,7 +522,6 @@ compute_available (sbitmap *avloc, sbitmap *kill, sbitmap *avout,
>        *qin++ = bb;
>        bb->aux = bb;
>      }
> -  free (postorder);
>
>    qin = worklist;
>    qend = &worklist[n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS];
> diff --git a/gcc/lra-lives.c b/gcc/lra-lives.c
> index 5d4015b5ab9..e728e348215 100644
> --- a/gcc/lra-lives.c
> +++ b/gcc/lra-lives.c
> @@ -1287,11 +1287,11 @@ lra_create_live_ranges_1 (bool all_p, bool dead_insn_p)
>    point_freq_vec.truncate (0);
>    point_freq_vec.reserve_exact (new_length);
>    lra_point_freq = point_freq_vec.address ();
> -  int *post_order_rev_cfg = XNEWVEC (int, last_basic_block_for_fn (cfun));
> -  int n_blocks_inverted = inverted_post_order_compute (post_order_rev_cfg);
> -  lra_assert (n_blocks_inverted == n_basic_blocks_for_fn (cfun));
> +  auto_vec<int> post_order_rev_cfg;
> +  inverted_post_order_compute (&post_order_rev_cfg);
> +  lra_assert (post_order_rev_cfg.length () == (unsigned) n_basic_blocks_for_fn (cfun));
>    bb_live_change_p = false;
> -  for (i = n_blocks_inverted - 1; i >= 0; --i)
> +  for (i = post_order_rev_cfg.length () - 1; i >= 0; --i)
>      {
>        bb = BASIC_BLOCK_FOR_FN (cfun, post_order_rev_cfg[i]);
>        if (bb == EXIT_BLOCK_PTR_FOR_FN (cfun) || bb
> @@ -1338,7 +1338,6 @@ lra_create_live_ranges_1 (bool all_p, bool dead_insn_p)
>             }
>         }
>      }
> -  free (post_order_rev_cfg);
>    lra_live_max_point = curr_point;
>    if (lra_dump_file != NULL)
>      print_live_ranges (lra_dump_file);
> diff --git a/gcc/tree-ssa-dce.c b/gcc/tree-ssa-dce.c
> index e17659df91f..150e4f73185 100644
> --- a/gcc/tree-ssa-dce.c
> +++ b/gcc/tree-ssa-dce.c
> @@ -1042,14 +1042,12 @@ remove_dead_stmt (gimple_stmt_iterator *i, basic_block bb)
>      {
>        if (!bb_postorder)
>         {
> -         int *postorder = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
> -         int postorder_num
> -            = inverted_post_order_compute (postorder,
> -                                           &bb_contains_live_stmts);
> +         auto_vec<int> postorder;
> +         inverted_post_order_compute (&postorder,
> +                                      &bb_contains_live_stmts);
>           bb_postorder = XNEWVEC (int, last_basic_block_for_fn (cfun));
> -         for (int i = 0; i < postorder_num; ++i)
> +         for (unsigned int i = 0; i < postorder.length (); ++i)
>             bb_postorder[postorder[i]] = i;
> -         free (postorder);
>         }
>        FOR_EACH_EDGE (e2, ei, bb->succs)
>         if (!e || e2->dest == EXIT_BLOCK_PTR_FOR_FN (cfun)
> diff --git a/gcc/tree-ssa-pre.c b/gcc/tree-ssa-pre.c
> index ca212daee62..6ffcd7b8eb4 100644
> --- a/gcc/tree-ssa-pre.c
> +++ b/gcc/tree-ssa-pre.c
> @@ -2388,8 +2388,8 @@ compute_antic (void)
>    /* For ANTIC computation we need a postorder that also guarantees that
>       a block with a single successor is visited after its successor.
>       RPO on the inverted CFG has this property.  */
> -  int *postorder = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
> -  int postorder_num = inverted_post_order_compute (postorder);
> +  auto_vec<int> postorder;
> +  inverted_post_order_compute (&postorder);
>
>    auto_sbitmap worklist (last_basic_block_for_fn (cfun) + 1);
>    bitmap_ones (worklist);
> @@ -2403,7 +2403,7 @@ compute_antic (void)
>          for PA ANTIC computation.  */
>        num_iterations++;
>        changed = false;
> -      for (i = postorder_num - 1; i >= 0; i--)
> +      for (i = postorder.length () - 1; i >= 0; i--)
>         {
>           if (bitmap_bit_p (worklist, postorder[i]))
>             {
> @@ -2430,7 +2430,7 @@ compute_antic (void)
>      {
>        /* For partial antic we ignore backedges and thus we do not need
>           to perform any iteration when we process blocks in postorder.  */
> -      postorder_num = pre_and_rev_post_order_compute (NULL, postorder, false);
> +      int postorder_num = pre_and_rev_post_order_compute (NULL, postorder.address (), false);
>        for (i = postorder_num - 1 ; i >= 0; i--)
>         {
>           basic_block block = BASIC_BLOCK_FOR_FN (cfun, postorder[i]);
> @@ -2441,7 +2441,6 @@ compute_antic (void)
>      }
>
>    sbitmap_free (has_abnormal_preds);
> -  free (postorder);
>  }
>
>
> --
> 2.11.0
>
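
P.S. One detail worth calling out for readers who don't live in gcc/vec.h:
the reserve_exact () calls the patch adds up front are what make the
quick_push () calls legal, since quick_push stores without growing the vec
(capacity is only verified by an assert in checking builds), while
safe_push would reallocate on demand.  A tiny sketch of that contract,
assuming I have the vec.h semantics right:

  auto_vec<int> v;
  v.reserve_exact (2);  /* room for exactly two more elements */
  v.quick_push (1);     /* plain store, no growth check */
  v.quick_push (2);
  /* A third quick_push here would overrun the reservation (and trip the
     assert in a checking build); v.safe_push (3) would grow instead.  */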