From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug middle-end/39326] Segmentation fault with -O1, out of memory with -O2
Date: Tue, 12 Mar 2013 14:05:00 -0000

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=39326

--- Comment #47 from Richard Biener 2013-03-12 14:05:10 UTC ---

With caching of the affine-combination compute and the ao_ref compute I have it
down to

 tree loop invariant motion:  596.91 (79%) usr  0.73 (29%) sys  599.77 (78%) wall  31135 kB ( 3%) ggc

without limiting anything.  I tried a more intelligent ref_indep_loop_p
(avoiding another source of quadratic behavior in the loop depth ...) but it
gets slower ...

*************** ref_indep_loop_p_1 (struct loop *loop, m
*** 2230,2242 ****
    bitmap refs_to_check;
    unsigned i;
    bitmap_iterator bi;
!   bool ret = true, stored = bitmap_bit_p (ref->stored, loop->num);
    mem_ref_p aref;

!   if (stored)
!     refs_to_check = memory_accesses.all_refs_in_loop[loop->num];
    else
!     refs_to_check = memory_accesses.all_refs_stored_in_loop[loop->num];

    EXECUTE_IF_SET_IN_BITMAP (refs_to_check, 0, i, bi)
      {
--- 2233,2245 ----
    bitmap refs_to_check;
    unsigned i;
    bitmap_iterator bi;
!   bool ret = true;
    mem_ref_p aref;

!   if (bitmap_bit_p (ref->stored, loop->num))
!     refs_to_check = memory_accesses.refs_in_loop[loop->num];
    else
!     refs_to_check = memory_accesses.refs_stored_in_loop[loop->num];

    EXECUTE_IF_SET_IN_BITMAP (refs_to_check, 0, i, bi)
      {
*************** ref_indep_loop_p_1 (struct loop *loop, m
*** 2259,2272 ****
  static bool
  ref_indep_loop_p (struct loop *loop, mem_ref_p ref)
  {
-   bool ret;
-
    if (bitmap_bit_p (ref->indep_loop, loop->num))
      return true;
    if (bitmap_bit_p (ref->dep_loop, loop->num))
      return false;

!   ret = ref_indep_loop_p_1 (loop, ref);

    if (dump_file && (dump_flags & TDF_DETAILS))
      fprintf (dump_file, "Querying dependencies of ref %u in loop %d: %s\n",
--- 2262,2286 ----
  static bool
  ref_indep_loop_p (struct loop *loop, mem_ref_p ref)
  {
    if (bitmap_bit_p (ref->indep_loop, loop->num))
      return true;
    if (bitmap_bit_p (ref->dep_loop, loop->num))
      return false;

!   bool ret = true;
!   struct loop *inner = loop->inner;
!   while (inner)
!     {
!       if (!ref_indep_loop_p (inner, ref))
!         {
!           ret = false;
!           break;
!         }
!       inner = inner->next;
!     }
!
!   if (ret)
!     ret = ref_indep_loop_p_1 (loop, ref);

    if (dump_file && (dump_flags & TDF_DETAILS))
      fprintf (dump_file, "Querying dependencies of ref %u in loop %d: %s\n",

With it we can also get rid of the all_refs_in_loop bitmap (only
all_refs_stored_in_loop is then used, by store-motion).
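As an aside, here is a minimal standalone sketch of the caching idea mentioned
at the top of the comment: compute the affine-combination / ao_ref data once
per memory reference, cache it on the reference, and let every dependence query
reuse the cached value.  All type and function names below are made up for
illustration; this is neither the actual patch nor the real GCC internals.

#include <stdbool.h>
#include <stdio.h>

struct ref_sketch
{
  int id;           /* stands in for the underlying tree reference  */
  bool aff_valid;   /* has the expensive analysis run yet?          */
  long aff;         /* cached result (stand-in for aff_tree/ao_ref) */
};

/* Placeholder for the expensive per-reference analysis.  */
static long
analyze_ref (const struct ref_sketch *r)
{
  return (long) r->id * 8;
}

/* Accessor used by every query: the analysis runs at most once per ref.  */
static long
cached_aff (struct ref_sketch *r)
{
  if (!r->aff_valid)
    {
      r->aff = analyze_ref (r);
      r->aff_valid = true;
    }
  return r->aff;
}

/* A dependence test called O(#refs) times per ref now pays for the
   analysis once per ref instead of once per query.  */
static bool
refs_may_conflict (struct ref_sketch *a, struct ref_sketch *b)
{
  return cached_aff (a) == cached_aff (b);
}

int
main (void)
{
  struct ref_sketch a = { 1, false, 0 }, b = { 2, false, 0 };
  printf ("%d\n", refs_may_conflict (&a, &b));  /* analysis runs once per ref */
  printf ("%d\n", refs_may_conflict (&a, &b));  /* cached values reused       */
  return 0;
}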
I don't see why the ref_indep_loop_p change above should result in slower
operation :/  It should save us the time of re-querying references in
sub-loops (yes, those results should already be cached ... but walking the
bitmap is still not free).

To mimic parts of this idea, the dep_loop/indep_loop checking/setting can be
improved to cover outer loops as well, as in

@@ -2296,7 +2218,12 @@ record_indep_loop (struct loop *loop, me
   if (indep)
     bitmap_set_bit (ref->indep_loop, loop->num);
   else
-    bitmap_set_bit (ref->dep_loop, loop->num);
+    do
+      {
+        bitmap_set_bit (ref->dep_loop, loop->num);
+        loop = loop_outer (loop);
+      }
+    while (loop != current_loops->tree_root);
 }

and

@@ -2339,8 +2266,13 @@ ref_indep_loop_p (struct loop *loop, mem
 {
   bool ret;

-  if (bitmap_bit_p (ref->indep_loop, loop->num))
-    return true;
+  struct loop *oloop = loop;
+  do
+    {
+      if (bitmap_bit_p (ref->indep_loop, oloop->num))
+        return true;
+      oloop = loop_outer (oloop);
+    }
+  while (oloop != current_loops->tree_root);

   if (bitmap_bit_p (ref->dep_loop, loop->num))
     return false;

(no difference in compile-time).

Anyway, another obvious improvement is to merge the two bitmaps (indep_loop and
dep_loop) into a single tri-state set per loop: dependent, independent or
unknown.  That is more cache-friendly (now, where's that tri-state "bitmap"
implementation that also saves the half bit we waste ... ;))  It helps a bit.
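A minimal sketch of what such a merged tri-state set could look like, assuming
two adjacent bits per loop number in a single bit vector.  The plain array
standing in for the bitmap and all names here are invented for illustration;
this is not the GCC bitmap API.

#include <stdbool.h>
#include <stdint.h>

/* Tri-state dependence cache: for each loop number keep two adjacent bits
   in one bit vector -- both clear means "unknown", otherwise exactly one
   bit records "independent" or "dependent".  */

enum dep_state { DEP_UNKNOWN, DEP_INDEPENDENT, DEP_DEPENDENT };

#define MAX_LOOPS 1024
static uint64_t dep_bits[(2 * MAX_LOOPS + 63) / 64];

static bool
test_bit (unsigned n)
{
  return (dep_bits[n / 64] >> (n % 64)) & 1;
}

static void
set_bit (unsigned n)
{
  dep_bits[n / 64] |= (uint64_t) 1 << (n % 64);
}

/* Both answers for a loop live next to each other, so one lookup touches a
   single word instead of two separate bitmaps.  */
static enum dep_state
get_dep_state (unsigned loop_num)
{
  if (test_bit (2 * loop_num))
    return DEP_INDEPENDENT;
  if (test_bit (2 * loop_num + 1))
    return DEP_DEPENDENT;
  return DEP_UNKNOWN;
}

static void
record_dep_state (unsigned loop_num, bool indep)
{
  set_bit (2 * loop_num + (indep ? 0 : 1));
}

int
main (void)
{
  record_dep_state (5, true);
  record_dep_state (7, false);
  return (get_dep_state (5) == DEP_INDEPENDENT
          && get_dep_state (7) == DEP_DEPENDENT
          && get_dep_state (9) == DEP_UNKNOWN) ? 0 : 1;
}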