From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tim Josling
To: gcc@gcc.gnu.org
Subject: Re: Faster compilation speed
Date: Wed, 21 Aug 2002 15:35:00 -0000
Message-id: <3D641556.2105A3A3@melbpc.org.au>
X-SW-Source: 2002-08/msg01275.html

"Tim Josling wrote:
>This is consistent with my tests; I found that a simplistic allocation which
>put everything on the same page, but which never freed anything, actually
>bootstrapped GCC faster than the standard GC.
>
Not too surprising actually; GCC's own sources aren't the hard cases for GC.

>The GC was never supposed to make GCC faster, it was supposed to reduce
>workload by getting rid of memory problems. But I doubt it achieves that
>objective. Certainly, keeping track of all the attempts to 'fix' GC has burned
>a lot of my time.
>
The original rationale that I remember was to deal with hairy C++ code where
the compiler would literally exhaust available VM when doing function-at-a-time
compilation. If that's still the case, then memory reclamation is a correctness
issue.

But it's worth tinkering with the heuristics; we got a little improvement on
Darwin by bumping GGC_MIN_EXPAND_FOR_GC from 1.3 to 2.0 (it was a while back,
don't have the comparative numbers).

Stan"

Much of the overhead of GC is not the collection as such, but the allocation
process and its side-effects. In fact, if you allocate using the GC code, the
build runs faster if you do the GC, though tweaking the threshold can help.
However, for many programs you are better off allocating very simply and not
doing GC at all.

The GC changes have, in my opinion, made a small number of programs compile
better at the expense of making most compiles slower. We should not be using
GC for most compiles at all. This - an optimisation that actually makes things
worse overall - is unfortunately a common situation with 'improvements' to GCC.

Tim Josling