Message-ID: <479080DD.8070608@redhat.com>
Date: Sat, 19 Jan 2008 03:49:00 -0000
From: Andrew Haley
To: Alejandro Pulver
CC: Tony Wetmore, gcc-help@gcc.gnu.org
Subject: Re: Reducing compilation memory usage
References: <20080117140156.201451e8@deimos.mars.bsd> <20080117170952.7d569374@deimos.mars.bsd> <478FAAF9.9000301@solipsys.com> <20080117180809.2f5db118@deimos.mars.bsd>
In-Reply-To: <20080117180809.2f5db118@deimos.mars.bsd>

Alejandro Pulver wrote:
> On Thu, 17 Jan 2008 14:22:33 -0500
> Tony Wetmore wrote:
>
>> Alejandro Pulver wrote:
>> > As it fails with -O1 and all -fno-*, I tried parameters like
>> > max-pending-list-length=1 without success.
>> >
>> > Do you know about an option or something that could help in this case?
>>
>> Alejandro,
>
> Hello.
>
> Thank you for your reply.
>
>> I may have missed this earlier, but are the source files causing GCC to
>> crash particularly large? Perhaps the problem could be avoided by
>> splitting the code into more (smaller) files, so that GCC is compiling
>> less of the code at once.
>
> Yes, the source is 4MB, consisting of one function with a jump table
> and ~8000 labels (with little code in each) that simulate each machine
> instruction.
>
> GCC doesn't crash; it just outputs something like this and exits:
>
> cc1: out of memory allocating 4072 bytes after a total of 1073158016 bytes
>
>> If I understand what you are doing, this code is program-generated, so
>> you would have to modify the generator program to create multiple files.
>> But you might be able to test this by manually splitting one file
>> yourself.
>>
>> Good luck!
>
> I really don't know if I can split it, as everything is inside the same
> function (the ~8000 labels, each with very short code), and it uses
> computed gotos (like the ones generated by a switch statement, with a
> "jump table" so there is no comparison of the cases), so this may not
> work across different files (in that case at least 2 jump tables are
> needed instead of 1, in a dictionary-like setup; see below).
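A dispatcher of the shape described here, built on GCC's computed-goto
("labels as values") extension, might look roughly like the minimal
sketch below. The names (run_cpu, jump_table, the op_* labels) are
invented and this is not the actual generated source; the real thing
would have ~8000 labels rather than a handful.

/* cpu_sim.c -- invented names; a minimal sketch, not the real generated code */
#include <stddef.h>
#include <stdint.h>

void run_cpu(const uint8_t *code, size_t len)
{
    /* Jump table of label addresses (GCC "labels as values" extension).
       The real generator would emit ~8000 labels and table entries. */
    static void *const jump_table[256] = {
        [0 ... 255] = &&op_illegal,     /* GCC range-designator extension */
        [0x00]      = &&op_nop,
        [0x01]      = &&op_load,
        /* ... */
    };
    size_t pc = 0;

    /* No chain of comparisons: fetch the next opcode and jump straight
       through the table to its label. */
#define DISPATCH()                        \
    do {                                  \
        if (pc >= len)                    \
            return;                       \
        goto *jump_table[code[pc++]];     \
    } while (0)

    DISPATCH();

op_nop:
    DISPATCH();

op_load:
    /* a little work per instruction, then straight on to the next one */
    DISPATCH();

op_illegal:
    return;
#undef DISPATCH
}

Because label addresses are only meaningful inside the function that
defines them, every handler has to live in the same function as the
jump table, which is why splitting the generated code across files is
awkward.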
> For example, I've seen a Motorola 68K emulator (included in Generator,
> a SEGA Genesis emulator) that generates 16 C files, each one containing
> the code for the instructions starting with 0 to F (in hexadecimal).
> It uses switch and case statements for the jump table (not computed
> gotos), which is more or less the same in practice. I haven't fully
> read the source, but it seems it has 2 jump tables to avoid this
> problem.
>
> I'll see if they can be split, but I was just surprised that when I
> tried GCC 4.x it actually compiled (without optimizations). So I
> thought there could be a way to optimize it by telling GCC specific
> information about how to do it (as optimizing each block independently
> should work fine).

Yeah, but that's not what gcc does -- we hold an entire function as
trees and a data-flow graph, and then we optimize the whole thing.  In
your case the behaviour of gcc is not at all surprising, and the
obvious way to solve your problem is to go out and buy more RAM!

Of course we could make gcc more economical, and we could somewhat
reduce memory usage, but you're asking for something really hard.

Andrew.
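For reference, the two-jump-table, split-into-16-files arrangement
described in the quoted message could be organised roughly as follows.
This is only a sketch with invented names (cpu_t, exec_one, exec_group0
and so on), not Generator's actual source; the point is that each
second-level table lives in its own translation unit, so no single file
contains the whole ~8000-case dispatcher.

/* Invented names throughout; a sketch of the two-level scheme. */

/* cpu.h -- shared declarations */
#include <stdint.h>
typedef struct cpu cpu_t;
void exec_group0(cpu_t *cpu, uint16_t opcode);   /* defined in group0.c */
void exec_group1(cpu_t *cpu, uint16_t opcode);   /* defined in group1.c */
/* ... one function per leading hex digit, up to exec_groupF ... */

/* dispatch.c -- first jump table, keyed on the leading hex digit */
typedef void (*group_fn)(cpu_t *, uint16_t);

static const group_fn groups[16] = {
    [0x0] = exec_group0,
    [0x1] = exec_group1,
    /* ... [0xF] = exec_groupF ... */
};

void exec_one(cpu_t *cpu, uint16_t opcode)
{
    group_fn fn = groups[opcode >> 12];
    if (fn)                            /* unimplemented groups fall through */
        fn(cpu, opcode);
}

/* group0.c -- second jump table, local to this translation unit */
void exec_group0(cpu_t *cpu, uint16_t opcode)
{
    (void)cpu;
    switch (opcode & 0x0FFF) {         /* dense cases: compiled to a jump table */
    case 0x000:
        /* simulate one instruction */
        break;
    /* ... the rest of this group's instructions ... */
    default:
        break;
    }
}

Whether the first level is a switch or an array of function pointers
matters little; what helps memory use is that each per-group function
is small enough for GCC to hold its trees and data-flow graph
comfortably.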