From mboxrd@z Thu Jan 1 00:00:00 1970
From: "rguenth at gcc dot gnu.org"
To: gcc-bugs@gcc.gnu.org
Subject: [Bug tree-optimization/108500] [11/12 Regression] -O -finline-small-functions results in "internal compiler error: Segmentation fault" on a very large program (700k function calls)
Date: Mon, 13 Feb 2023 07:42:10 +0000
X-Bugzilla-Component: tree-optimization
X-Bugzilla-Version: 12.2.0
X-Bugzilla-Keywords: compile-time-hog, ice-on-valid-code, memory-hog
X-Bugzilla-Status: ASSIGNED
X-Bugzilla-Assigned-To: rguenth at gcc dot gnu.org
X-Bugzilla-Target-Milestone: 11.4

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108500

--- Comment #22 from Richard Biener ---
(In reply to Vladimir Makarov from comment #20)
> (In reply to Richard Biener from comment #14)
> > Thanks for the new testcase.
> > With -O0 (and a --enable-checking=release built compiler) this builds
> > in ~11 minutes (on a Ryzen 9 7900X) with
> >
> >  integrated RA            :  38.96 (  6%)   1.94 ( 20%)   42.00 (  6%)  3392M ( 23%)
> >  LRA non-specific         :  18.93 (  3%)   1.24 ( 13%)   23.78 (  4%)   450M (  3%)
> >  LRA virtuals elimination :   5.67 (  1%)   0.05 (  1%)    5.75 (  1%)   457M (  3%)
> >  LRA reload inheritance   : 318.25 ( 49%)   0.24 (  2%)  318.51 ( 48%)      0 (  0%)
> >  LRA create live ranges   : 199.24 ( 31%)   0.12 (  1%)  199.38 ( 30%)   228M (  2%)
> >
> > 645.67user 10.29system 11:04.42elapsed 98%CPU (0avgtext+0avgdata 30577844maxresident)k
> > 3936200inputs+1091808outputs (122053major+10664929minor)pagefaults 0swaps
> >
>
> I've tried test-1M.i with -O0 for clang-14. It took about 12 hours on an
> E5-2697 v3 vs. about 30 minutes for GCC. Most of clang's time (99%) is
> spent in the "fast register allocator":
>
>   Total Execution Time: 42103.9395 seconds (42243.9819 wall clock)
>
>   ---User Time---       --System Time--     --User+System--       ---Wall Time---       --- Name ---
>   41533.7657 ( 99.5%)   269.5347 ( 78.6%)   41803.3005 ( 99.3%)   41942.4177 ( 99.3%)   Fast Register Allocator
>     139.1669 (  0.3%)    16.4785 (  4.8%)     155.6454 (  0.4%)     156.3196 (  0.4%)   X86 DAG->DAG Instruction Selection
>
> I've tried the same with -O1. Again GCC took about 30 minutes, and I
> stopped clang (using its other RA algorithm) after 120 hours.
>
> So the situation with RA is not so bad for GCC. But in any case I'll try
> to improve the speed for this case.

I bet the LLVM folks do not focus on making -O{0,1} usable for this kind of
testcase, which has practical application for auto-generated code. Of
course that's not a reason to not improve GCC even more! ;)

> > so register allocation is taking all of the time. There's maybe the
> > possibility to gate some of its features on the number of BBs or insns
> > (or whatever the actual "bad" thing is - I didn't look closer yet).
> >
> > It also seems to use 30GB of peak memory at -O0 ...
> >
>
> I see only 3GB. Improving this is a hard task. The IRA for -O0 uses a
> very simple algorithm with very few resources. We could use an even
> simpler method (assigning memory to all pseudos), but I think that is not
> worth doing, as the generated code would be much bigger and probably
> 1.5-2 times slower.

For some RTL passes, simply splitting large blocks tends to help. Some
also gate on the number of BBs only, but their algorithms are quadratic in
the number of insns instead ... Of course we cannot simply gate RA ...
maybe there's a way to have a "simpler" algorithm that works on smaller
regions of a function and only promotes allocnos live across region
boundaries to memory? Ideally you'd have something with linear time
complexity - for LRA that should be possible, since we have done global RA
already?

Anyway - thanks for improving things here!
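[Editorial note: the region idea sketched in the comment above can be made concrete with a small toy model. This is not GCC/IRA/LRA code; the function name, the fixed-size region heuristic, and the spill policy are all illustrative assumptions. It only shows the shape of the approach: pseudos whose live range crosses a region boundary are promoted to memory up front, and each region then runs a cheap local linear scan, so total work stays linear in the number of insns.]

```python
# Toy sketch of region-based allocation (illustrative only, not GCC code):
# spill every pseudo live across a region boundary, then do a local
# linear scan per region for the rest.
from collections import defaultdict

def allocate_by_region(insns, region_size, num_hard_regs):
    """insns: list of sets, each the pseudos referenced by one insn.
    Returns (assignment, spilled): hard-reg numbers for region-local
    pseudos, and the set of pseudos promoted to stack slots."""
    first, last = {}, {}
    for i, refs in enumerate(insns):          # one linear pass
        for p in refs:
            first.setdefault(p, i)
            last[p] = i

    # Promote anything live across a region boundary to memory.
    spilled = {p for p in first
               if first[p] // region_size != last[p] // region_size}

    # Group the remaining pseudos by their (single) region.
    by_region = defaultdict(list)
    for p in first:
        if p not in spilled:
            by_region[first[p] // region_size].append(p)

    # Local linear scan inside each region: no live range leaves its
    # region, so the total work is linear in the number of insns.
    assignment = {}
    for pseudos in by_region.values():
        pseudos.sort(key=lambda p: first[p])
        active, free = [], list(range(num_hard_regs))
        for p in pseudos:
            still_live = []
            for end, reg in active:
                if end < first[p]:            # interval ended: free reg
                    free.append(reg)
                else:
                    still_live.append((end, reg))
            active = still_live
            if free:
                assignment[p] = free.pop()
                active.append((last[p], assignment[p]))
            else:
                spilled.add(p)                # local pressure too high
    return assignment, spilled
```

For example, with region_size=4 and two hard registers, a pseudo referenced in insns 3 and 4 straddles the region boundary and is spilled, while pseudos whose ranges stay inside one region get registers. The trade-off is exactly the one discussed above: cross-region values pay a memory cost, in exchange for allocation time that no longer blows up on million-insn functions.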