From: "Boehm, Hans"
To: Justin Santa Barbara, java@gcc.gnu.org
Cc: Andrew Haley, David Daney
Date: Tue, 19 Jan 2010 22:52:00 -0000
Subject: RE: How to minimize (unshareable) memory usage?
Message-ID: <238A96A773B3934685A7269CC8A8D042578321E2E0@GVW0436EXB.americas.hpqcorp.net>

I think the current code fails the heap expansion if it would have resulted in a heap larger than specified; it doesn't reduce the size of the heap expansion so that it would barely still fit.
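The distinction matters, so here is a toy model of the two policies (this is not libgcj's actual code; `GC_expand_hp_inner` is more involved, and the names and numbers below are illustrative only):

```python
MAX_HEAP = 8_000_000  # e.g. the value given via GC_MAXIMUM_HEAP_SIZE

def try_expand(current_heap, requested_expansion, max_heap=MAX_HEAP):
    """Fail-fast policy: refuse the whole expansion if it would overshoot."""
    if current_heap + requested_expansion > max_heap:
        return current_heap, False   # no partial expansion is attempted
    return current_heap + requested_expansion, True

def try_expand_clamped(current_heap, requested_expansion, max_heap=MAX_HEAP):
    """Alternative policy: shrink the expansion so it still barely fits."""
    granted = min(requested_expansion, max_heap - current_heap)
    if granted <= 0:
        return current_heap, False
    return current_heap + granted, True

# With 7.5 MB already in use, a 1 MB expansion request fails outright
# under the fail-fast policy even though 500 KB of headroom remains.
heap, ok = try_expand(7_500_000, 1_000_000)
print(ok)         # False: the collector gives up early
heap, ok = try_expand_clamped(7_500_000, 1_000_000)
print(heap, ok)   # 8000000 True: clamping would still have fit
```

Under the fail-fast policy an allocation can fail while the heap is still well under the configured maximum, which is consistent with the dump figures quoted below.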
This means it might be failing sooner than you intended. Also setting GC_INITIAL_HEAP_SIZE may help.

I would put a breakpoint in the failing GC_expand_hp_inner call and then call GC_dump(), which will tell you more about the collector's view of the heap. Since this collector doesn't move objects, it's quite possible that there just wasn't a hole large enough for the requested object, particularly if you allocate some really large objects. If that is indeed the case, you may be able to circumvent the issue by allocating multiple smaller objects instead.

The other, sometimes significant, source of space overhead is the fact that the collector splits the heap into chunks whose size is the smallest possible multiple of 4K or 8K, depending on the platform, and then uses each chunk for a fixed object size. Thus objects of size 2049 or 4097 bytes (including the Java header) tend to waste a lot of space.

Hans

> -----Original Message-----
> From: Justin Santa Barbara [mailto:justin@fathomdb.com]
> Sent: Tuesday, January 19, 2010 2:34 AM
> To: java@gcc.gnu.org
> Cc: Boehm, Hans; Andrew Haley; David Daney
> Subject: Re: How to minimize (unshareable) memory usage?
>
> Thanks to everyone for the great suggestions. I've tracked down the
> mysterious 8MB segments: they are the stacks for my threads (found by
> putting a breakpoint on mmap). Though reserved in virtual memory
> space, pages shouldn't be allocated unless they're actually touched,
> so this is good news for physical memory. I verified this by dumping
> /proc/<pid>/smaps; the output of one of my 8MB segments shows that
> only 16KB or so is really used. (Output attached at the end.)
>
> So the next biggest culprit is my heap memory usage ... I tried using
> GCInfo.setOOMDump, and did get a dump.
> But all is not entirely well:
> my GC_MAXIMUM_HEAP_SIZE was set to 8,000,000 bytes, which results in
> an out of memory dump, but gc-analyze reports only 5.5MB usage, and of
> that half is listed under the 'free' column in the "Used Blocks"
> section of the dump.
>
> *** Used Blocks ***
>
>    Size  Kind           Blocks       Used       Free   Wasted
> -------  -------------  -------  ---------  ---------  -------
> <...snipped...>
> -------  -------------  -------  ---------  ---------  -------
>                           1,352  2,700,672  2,758,624   78,496
>
> Total bytes = 5,537,792
>
> The percentages elsewhere also support the idea that about 3MB of data
> is actually in use, so I'm not sure exactly why we're failing memory
> requests when we don't seem to be close to the limit.
>
> As for the 6MB of writeable data that shows against libgcj, I believe
> this is the 'normal' data segment of the shared library. Cutting this
> down would probably be difficult. The easiest approach for me would be
> modularizing gcj so I don't have to pay for swing or awt. Again,
> suggestions are very welcome, but I think that this particular segment
> will be much more complicated than figuring out why the heap is not
> being used as efficiently as I hope it could be.
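[The large 'free' column is consistent with the chunking Hans describes above. A toy model, under the assumption (from his description, not from the libgcj sources) that each chunk is the smallest multiple of the 4K granularity that holds the object and serves exactly one object size:]

```python
# Toy model of per-size-class chunking; the 4K granularity and helper
# names are illustrative (some platforms use 8K).
PAGE = 4096

def chunk_for(obj_size, page=PAGE):
    """Smallest multiple of the page granularity that holds one object."""
    return -(-obj_size // page) * page  # ceiling division

def wasted_fraction(obj_size, page=PAGE):
    """Fraction of each chunk unusable by objects of this size class."""
    chunk = chunk_for(obj_size, page)
    per_chunk = chunk // obj_size
    return (chunk - per_chunk * obj_size) / chunk

# 2048-byte objects (Java header included) pack perfectly; one byte more
# and each 4K chunk holds a single object, wasting nearly half the chunk.
for size in (2048, 2049, 4096, 4097):
    print(size, chunk_for(size), f"{wasted_fraction(size):.1%}")
```

This would also mean that 'free' bytes in the dump are not freely usable: a chunk dedicated to one size class can only satisfy requests of that size, so the heap can refuse an allocation while the free column still looks large.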
>
> Thanks
> Justin
>
> ---
>
> The smaps output:
>
> 8MB of stack, but only 16KB actually used:
>
> b4b05000-b5305000 rw-p 00000000 00:00 0
> Size:              8192 kB
> Rss:                 16 kB
> Pss:                 16 kB
> Shared_Clean:         0 kB
> Shared_Dirty:         0 kB
> Private_Clean:        0 kB
> Private_Dirty:       16 kB
> Referenced:          16 kB
> Swap:                 0 kB
> KernelPageSize:       4 kB
> MMUPageSize:          4 kB
>
> The heap segment (with a 10,000,000 byte limit to avoid crashing, and
> captured on a different machine from the other two):
>
> 7f43a12b0000-7f43a1bc0000 rw-p 7f43a12b0000 00:00 0
> Size:              9280 kB
> Rss:               9280 kB
> Shared_Clean:         0 kB
> Shared_Dirty:         0 kB
> Private_Clean:        0 kB
> Private_Dirty:     9280 kB
> Referenced:        9280 kB
>
> The data segment of libgcj (?), mostly dirty:
>
> b7122000-b7739000 rw-p 01ae0000 08:01 28534016   /usr/local/lib/libgcj.so.10.0.0
> Size:              6236 kB
> Rss:               6236 kB
> Pss:               6236 kB
> Shared_Clean:         0 kB
> Shared_Dirty:         0 kB
> Private_Clean:     1644 kB
> Private_Dirty:     4592 kB
> Referenced:        6236 kB
> Swap:                 0 kB
> KernelPageSize:       4 kB
> MMUPageSize:          4 kB
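[Dumps like the ones above can be summarized mechanically rather than read by hand. A small sketch of my own, keyed to the field layout shown in these dumps (the regexes and helper name are mine; adjust for kernels whose smaps fields differ):]

```python
import re

# Sample taken verbatim from the stack-segment dump above.
SAMPLE = """\
b4b05000-b5305000 rw-p 00000000 00:00 0
Size:              8192 kB
Rss:                 16 kB
Pss:                 16 kB
Shared_Clean:         0 kB
Shared_Dirty:         0 kB
Private_Clean:        0 kB
Private_Dirty:       16 kB
Referenced:          16 kB
Swap:                 0 kB
"""

def parse_smaps(text):
    """Return a list of (address_range, {field: kB}) per mapping."""
    mappings = []
    for line in text.splitlines():
        header = re.match(r'^([0-9a-f]+-[0-9a-f]+)\s', line)
        if header:
            mappings.append((header.group(1), {}))
            continue
        field = re.match(r'^(\w+):\s+(\d+) kB', line)
        if field and mappings:
            mappings[-1][1][field.group(1)] = int(field.group(2))
    return mappings

for addr, fields in parse_smaps(SAMPLE):
    # Rss (resident pages) is what costs physical memory, not Size.
    print(addr, fields['Size'], fields['Rss'])
```

Feeding a whole /proc/<pid>/smaps dump through this makes it easy to sort mappings by Rss or Private_Dirty and confirm that the 8MB stacks are mostly untouched virtual address space.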