Date: Sat, 04 Sep 2004 09:59:00 -0000
From: Jan Hubicka
To: gcc@gcc.gnu.org, law@redhat.com, amacleod@redhat.com
Subject: Varray memory consumption strikes back
Message-ID: <20040904095915.GV1947@kam.mff.cuni.cz>

Jeff,

In January I was concerned about memory consumption:

> I am somewhat concerned about the use of varrays in GGC memory, which
> produce a relatively large amount of garbage (much of it without good
> reason, since we could use a malloc scheme instead, we just don't), so
> I've implemented statistics routines.  Quick checking found that Gerald's
> testcase does about 13MB of varrays, while tree-SSA does 63MB.  The
> average overhead per varray is over 10KB, so operands are not to blame
> here IMO.

The proposed patch to make varrays allocatable either in GGC or in heap
memory, selected by a flag, was rejected, and you mentioned that you had a
better solution at the prototype stage (moving away from varrays to sane
data structures and improving them at the same time).

Today the memory consumption on the same testcase is 122MB (up from the
63MB I reported; there is a realistic chance that the 43MB of the
immediate_uses varray will go away before the freeze, but those are freed
via ggc_free anyway, so they do not show up in the garbage statistics).
This counts only the GGC garbage, not the overall memory allocation as I
did in January before ggc_free was implemented, as can be seen in the
-fmem-report output:

source location              Garbage          Freed            Leak          Overhead        Times
varray.c:138 (varray_init)   59901920: 8.5%   23203392: 6.6%   11280: 0.0%   12794096: 7.5%  419188
varray.c:171 (varray_grow)   68242924: 9.7%   224774832:63.6%  271448: 0.3%  83130484:49.0%  244896

Is the better solution anywhere near ready (a huge portion of the arrays is
still attributed to the tree-ssa-dom code, as can be seen in the report
attached)?  If not, I would suggest falling back to the allocation patch,
at least for the 3.5 release; it was able to cut a significant portion of
the memory usage.  Alternatively we could more consistently use ggc_free to
explicitly deallocate the varrays, but I don't like the overall increased
overhead of that allocation method either.
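To make the idea concrete, here is a minimal, self-contained sketch of what
"allocate a varray either in GC memory or on the heap, selected by a flag"
could look like.  This is only an illustration, not the rejected patch and
not GCC's actual varray API; gc_alloc/gc_free stand in for
ggc_alloc/ggc_free and are stubbed with malloc/free so the fragment
compiles on its own:

/* Hypothetical sketch: a growable int array whose element buffer lives
   either in GC-managed memory or on the ordinary heap, chosen by a flag
   when the array is created.  */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Stand-ins for the garbage-collected allocator (ggc_alloc/ggc_free in
   GCC); stubbed with malloc/free so this example is self-contained.  */
static void *gc_alloc (size_t n) { return malloc (n); }
static void gc_free (void *p) { free (p); }

enum va_storage { VA_HEAP, VA_GC };

struct va_int
{
  enum va_storage storage;   /* where the element buffer lives */
  size_t num_elements;       /* allocated capacity */
  size_t elements_used;      /* current length */
  int *data;                 /* element buffer */
};

static struct va_int *
va_int_init (size_t num_elements, enum va_storage storage)
{
  struct va_int *va = malloc (sizeof *va);
  if (num_elements == 0)
    num_elements = 1;
  va->storage = storage;
  va->num_elements = num_elements;
  va->elements_used = 0;
  va->data = (storage == VA_GC
              ? gc_alloc (num_elements * sizeof (int))
              : malloc (num_elements * sizeof (int)));
  return va;
}

static void
va_int_push (struct va_int *va, int x)
{
  if (va->elements_used == va->num_elements)
    {
      /* Double the buffer, honoring the allocation scheme chosen at
         init time.  Growing a heap-backed array produces no GC garbage.  */
      size_t new_size = 2 * va->num_elements * sizeof (int);
      int *new_data = (va->storage == VA_GC
                       ? gc_alloc (new_size) : malloc (new_size));
      memcpy (new_data, va->data, va->elements_used * sizeof (int));
      if (va->storage == VA_GC)
        gc_free (va->data);
      else
        free (va->data);
      va->data = new_data;
      va->num_elements *= 2;
    }
  va->data[va->elements_used++] = x;
}

static void
va_int_free (struct va_int *va)
{
  if (va->storage == VA_GC)
    gc_free (va->data);
  else
    free (va->data);
  free (va);
}

int
main (void)
{
  /* A pass-local, short-lived array has no business becoming GC garbage,
     so allocate it on the heap and free it explicitly when done.  */
  struct va_int *va = va_int_init (16, VA_HEAP);
  for (int i = 0; i < 1000; i++)
    va_int_push (va, i);
  printf ("%zu elements, capacity %zu\n", va->elements_used, va->num_elements);
  va_int_free (va);
  return 0;
}

An array grown this way never contributes to the varray_grow garbage shown
above, at the price of having to free it explicitly when the pass is done.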
Honza

VARRAY Kind  Count  Bytes  Resized  copied
-------------------------------------------------------
bbs_to_duplicate  208  24656  15  15
reg_base_value  217  432680  717  23
Elimination Constant Copies  334  64128  0  0
immediate_uses  254956  43235680  144954  144954
vrp_variables  1775  85200  0  0
reg_equiv_memory_loc  1  440904  668  4
local_classes  1  96  0  0
processed_ptrs  1002  1358064  949  949
num_references  1002  1444588  523  92
used_temp_slots  334  68768  580  580
aliases  26707  1772144  10223  10223
ttype_data  126  20160  0  0
mangling substitutions  1  544  6  5
ehspec_data  126  12096  0  0
saved_tokens  2402  127064  107  107
ssdf_decls  1  288  0  0
referenced_vars  334  1580128  1177  1177
block_locators_locs  334  85184  147  147
reg_n_info  1  7816  14  9
build uses  334  37408  0  0
ssa_names table  334  6489088  1074  1074
work list  1336  1155840  517  517
basic blocks  334  100440  0  0
block_locators_blocks  334  159680  147  147
inlined_fns  312  137216  141  141
line_locators_locs  334  53440  0  0
prologue  1  5580  505  2
Elimination Edge List  334  37408  0  0
file_locators_locs  334  53440  0  0
ib_boundaries_block  334  1381888  489  489
Elimination Stack  334  50768  0  0
classes of registers early clobbered in an insn  334  37408  0  0
sibcall_epilogue  1  428  37  4
const_and_copies  1002  10140376  34  2
dest_array  374  41888  0  0
stack  28523  3214692  0  0
build defs  334  24048  0  0
work_stack  2076  783448  0  0
epilogue  1  6768  609  2
build v_may_defs  334  103328  357  357
build vuses  334  48368  99  99
vrp records  3213  154224  0  0
build v_must_defs  334  37408  0  0
current_lang_base  10355  1159760  0  0
first_partition  932  470516  583  577
interesting_ssa_edges  334  64608  3  3
basic blocks for the next iter.  334  100440  0  0
varying_ssa_edges  334  80768  91  91
redirection data  191  9168  0  0
cfg_blocks  334  66208  12  12
RTTI decls  1  1312  4  4
VARS worklist  1002  117504  52  52
COND worklist  1002  223984  645  645
freelist  3746  540320  8594  8594
block_data  3746  540320  8594  8594
strings  2402  691776  0  0
block_defs  28179  19576768  30032  30032
trees  932  911208  583  580
vrp_data  1002  10140376  34  2
alias sets  1  10656  1319  9
tpa nodes  269  22240  0  0
basic_block_info  668  382864  2041  785
tpa to clear  269  62408  0  0
label to block map  334  362008  1007  566
inline_parm_levels  1  48  0  0
block_nonzero_vars  6413  401232  3637  3637
block_avail_exprs  8800  1815520  719  719
part_link  269  46200  0  0
Static declarations  4  5088  1  1
line_locators_lines  334  53440  0  0
block_const_and_copies  7947  5335568  28111  28111
case_labels  21  2528  6  6
local_names  1  96  0  0
action_record_data  126  12096  0  0
file_locators_files  334  96192  0  0
stmts_to_rescan  5213  1094816  524  524
succ  12409  601680  12653  244
insn_addresses  334  638272  0  0
deferred_fns  1  65568  8  8
fns  19441  882688  0  0
pending_statics  1  8224  5  5
Elimination Node List  334  90848  0  0
-------------------------------------------------------
Total  449727  121700084
-------------------------------------------------------