From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <400C1394.5090904@gnat.com>
Date: Mon, 19 Jan 2004 17:27:00 -0000
From: Robert Dewar
To: Gabriel Dos Reis
Cc: Nick Burrett, Marc Espie, geoffk@apple.com, gcc@gcc.gnu.org
Subject: Re: gcc 3.5 integration branch proposal
References: <90200277-4301-11D8-BDBD-000A95B1F520@apple.com> <20040110002526.GA13568@disaster.jaj.com> <82D6F34E-4306-11D8-BDBD-000A95B1F520@apple.com> <20040110154129.GA28152@disaster.jaj.com> <1073935323.3458.42.camel@minax.codesourcery.com> <1073951351.3458.162.camel@minax.codesourcery.com> <20040119013113.044D74895@quatramaran.ens.fr> <400BB40B.4070101@dsvr.net> <400BE1D3.7010105@gnat.com>
X-SW-Source: 2004-01/txt/msg01305.txt.bz2

Gabriel Dos Reis wrote:

> I suggest you spend some time in the bugzilla database, triaging bugs
> and explaining to people who say that the compiler segfaulted -- when
> compiling their programs, and you have determined that GCC was
> consuming huge memory -- that they are marginal.
> Until then, I guess we're just going through an empty discussion.

You are confusing apples and oranges. What I was talking about here was
increases in memory requirements that mean gcc does not operate well on
outdated "small RAM" machines, so that programs which would once have
compiled OK on such machines no longer do.

It is of course a totally different matter if algorithms are introduced
which use absurd amounts of memory, so that programs cannot even be
compiled on machines with a gigabyte of memory. There are most definitely
such cases, and to me such cases are plain bugs. There is really no excuse
for a compiler requiring huge amounts of memory. In fact I don't really
like the style of relying on virtual memory to guarantee that huge data
structures can be held in memory, but I am afraid we are pretty much stuck
with that.

It is somewhat the style in the C and C++ world to assume that each
individual compiled file should be small, and that it is not so terrible
if a compiler cannot handle really large source files. I find this quite
unacceptable. For one thing, you can legitimately get huge files when they
are the output of code generators, and compilers should be able to handle
such cases fine. We have certainly bumped up against several cases in
which gcc algorithms blew up horribly in space and time (the two usually
go together in these cases of serious performance bugs).

To put it concretely, it would not worry me if gcc could compile all sorts
of giant programs comfortably in 256 megabytes but was hopeless on smaller
machines. A machine with 256 megs is not what I would call a "small RAM"
machine.
Part of the trouble with GCC, it seems to me, is that there have never
been any concrete performance requirements for compile speed and space.
That's a matter of emphasis. In the case of Realia COBOL, for instance,
compile speed was a primary functional requirement, and space was
constrained to 640 kilobytes by the hardware. We could still compile
million-line files, and did so routinely to check that this kept working
(on a PC1 at 5MHz, a million-line file could take a couple of hours to
compile, but we made sure it did not suddenly get worse than that).
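
To make the million-line point concrete, here is a throwaway sketch of
mine (not anything from the Realia or GCC sources; the file name big.c
and the shape of the generated functions are invented) of the sort of
code generator I mean. Feeding its output to gcc is a quick way to see
whether compile time and peak memory stay roughly linear in input size:

    #include <stdio.h>

    /* Emit a C source file of roughly a million lines, made of many
       small independent functions, as a compiler stress test.  */
    int main(void)
    {
        FILE *f = fopen("big.c", "w");
        if (f == NULL)
            return 1;
        /* 250,000 functions at 4 lines each ~ 1,000,000 lines.  */
        for (int i = 0; i < 250000; i++)
            fprintf(f, "int f%d(int x)\n{\n  return x + %d;\n}\n", i, i);
        fclose(f);
        return 0;
    }

Then something along the lines of

    gcc -std=c99 -o gen gen.c && ./gen
    /usr/bin/time -v gcc -c big.c

makes the space behaviour visible: GNU time reports maximum resident set
size, and if that grows much faster than the input does, you have found
exactly the kind of blowup I am complaining about.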