From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug c++/53958] set_slot_part and canon_value_cmp using 90% of compile time
Date: Mon, 20 Jan 2014 10:51:00 -0000

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=53958

Richard Biener changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |rguenth at gcc dot gnu.org

--- Comment #6 from Richard Biener ---
As usual, the iteration order imposed by pre_and_rev_post_order_compute is
not the best one for a forward dataflow problem.  Using
inverted_post_order_compute improves compile time somewhat.  But I can still
confirm

 variable tracking      :   0.39 ( 1%) usr   0.00 ( 0%) sys   0.41 ( 1%) wall    5504 kB ( 6%) ggc
 var-tracking dataflow  :  34.96 (84%) usr   0.16 (53%) sys  35.10 (84%) wall      47 kB ( 0%) ggc

As a general observation, I wonder why the dataflow problem computes the
_union_ rather than the intersection of the info on the predecessors in the
!MAY_HAVE_DEBUG_INSNS case.  Also, if the MAY_HAVE_DEBUG_INSNS case really
computes an intersection (as documented), then it can avoid repeatedly
clearing the in set and only needs to dataflow_set_merge from changed edges.
Now the discrepancy wrt the !MAY_HAVE_DEBUG_INSNS case makes me not trust
that comment blindly ...

That said, handling only changed edges can be done by doing the intersection
in the if (changed) FOR_EACH_EDGE loop and dropping the initial ->in set
computation (just retaining the post-merge adjustment).

From a quick look, var-tracking does not seem to take the opportunity to
prune its sets based on variable scopes (it would need to compute scope
liveness in vt_initialize).
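To make the "merge only from changed edges" idea concrete, here is a small
self-contained toy (explicitly not var-tracking or GCC code -- the CFG, the
bitset "sets" and all names below are made up): a forward intersection
("must") problem where a block's IN set is never cleared; instead a
predecessor's OUT set is intersected into IN only when that OUT actually
changed since the last visit, tracked with a per-edge pending flag.

#include <stdio.h>

#define NBLOCKS 4
#define MAXPREDS 2
#define UNIVERSE 0xffu          /* eight tracked bits */

/* Toy CFG: 0 -> 1, 1 -> 2, 2 -> 1 (back edge), 2 -> 3 (exit).  */
static const int n_preds[NBLOCKS]        = { 0, 2, 1, 1 };
static const int pred[NBLOCKS][MAXPREDS] = { { 0, 0 }, { 0, 2 }, { 1, 0 }, { 2, 0 } };

/* Made-up per-block GEN/KILL sets.  */
static const unsigned gen_set[NBLOCKS]  = { 0x03, 0x04, 0x08, 0x00 };
static const unsigned kill_set[NBLOCKS] = { 0x00, 0x01, 0x02, 0x00 };

int
main (void)
{
  unsigned in[NBLOCKS], out[NBLOCKS];
  int pending[NBLOCKS][MAXPREDS];       /* edge carries a not-yet-merged OUT */
  int b, i, s, changed = 1;

  /* Optimistic start: everything available; every edge pending once.  */
  for (b = 0; b < NBLOCKS; b++)
    {
      in[b] = (b == 0 ? 0 : UNIVERSE);
      out[b] = UNIVERSE;
      for (i = 0; i < n_preds[b]; i++)
        pending[b][i] = 1;
    }

  while (changed)
    {
      changed = 0;
      for (b = 0; b < NBLOCKS; b++)
        {
          /* Intersect only OUT sets that changed since the last visit,
             instead of clearing IN and re-merging every predecessor.  */
          for (i = 0; i < n_preds[b]; i++)
            if (pending[b][i])
              {
                in[b] &= out[pred[b][i]];
                pending[b][i] = 0;
              }

          unsigned new_out = gen_set[b] | (in[b] & ~kill_set[b]);
          if (new_out != out[b])
            {
              out[b] = new_out;
              changed = 1;
              /* Re-arm every outgoing edge of B.  */
              for (s = 0; s < NBLOCKS; s++)
                for (i = 0; i < n_preds[s]; i++)
                  if (pred[s][i] == b)
                    pending[s][i] = 1;
            }
        }
    }

  for (b = 0; b < NBLOCKS; b++)
    printf ("bb %d: in %#04x out %#04x\n", b, in[b], out[b]);
  return 0;
}

Because the meet is an intersection and the sets only shrink, intersecting
the changed predecessors into the existing IN reaches the same fixpoint as
recomputing IN from scratch on every visit, which is what makes dropping the
repeated clearing safe.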
Anyway, here's a patch improving compile time for this testcase by ~6%:

Index: gcc/var-tracking.c
===================================================================
--- gcc/var-tracking.c  (revision 206599)
+++ gcc/var-tracking.c  (working copy)
@@ -6934,12 +6934,12 @@ vt_find_locations (void)
   bool success = true;

   timevar_push (TV_VAR_TRACKING_DATAFLOW);
-  /* Compute reverse completion order of depth first search of the CFG
+  /* Compute reverse topological sort order of the inverted CFG
      so that the data-flow runs faster.  */
-  rc_order = XNEWVEC (int, n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS);
+  rc_order = XNEWVEC (int, n_basic_blocks_for_fn (cfun));
   bb_order = XNEWVEC (int, last_basic_block_for_fn (cfun));
-  pre_and_rev_post_order_compute (NULL, rc_order, false);
-  for (i = 0; i < n_basic_blocks_for_fn (cfun) - NUM_FIXED_BLOCKS; i++)
+  int num = inverted_post_order_compute (rc_order);
+  for (i = 0; i < num; i++)
     bb_order[rc_order[i]] = i;
   free (rc_order);
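For reference, here is a similarly made-up sketch of the kind of visit order
the patch switches to: a DFS post-order over the inverted (edge-reversed)
CFG, computed for the same toy CFG as in the sketch above.  This is only an
illustration; GCC's inverted_post_order_compute additionally has to cope
with blocks that cannot reach EXIT (infinite loops, noreturn calls).

#include <stdio.h>

#define NBLOCKS 4
#define MAXSUCCS 2

/* Same toy CFG, this time as successor lists: 0 -> 1, 1 -> 2,
   2 -> 1, 2 -> 3; block 3 is the exit.  */
static const int n_succs[NBLOCKS]        = { 1, 1, 2, 0 };
static const int succ[NBLOCKS][MAXSUCCS] = { { 1, 0 }, { 2, 0 }, { 1, 3 }, { 0, 0 } };

static int post_order[NBLOCKS], n_post;
static int visited[NBLOCKS];

/* DFS that follows edges backwards, i.e. walks the inverted CFG.  */
static void
dfs_inverted (int b)
{
  visited[b] = 1;
  /* The predecessors of B are the blocks that list B as a successor.  */
  for (int p = 0; p < NBLOCKS; p++)
    for (int i = 0; i < n_succs[p]; i++)
      if (succ[p][i] == b && !visited[p])
        dfs_inverted (p);
  post_order[n_post++] = b;
}

int
main (void)
{
  dfs_inverted (3);     /* start at the exit block  */

  /* This post-order of the inverted CFG visits a block after its
     predecessors in the original CFG (back edges aside), which is the
     order a forward dataflow problem wants; here it prints 0 1 2 3.  */
  printf ("inverted post-order:");
  for (int i = 0; i < n_post; i++)
    printf (" %d", post_order[i]);
  printf ("\n");
  return 0;
}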