public inbox for java@gcc.gnu.org
* GC leaks debugging
@ 2011-04-01  8:39 Erik Groeneveld
  2011-04-01  8:45 ` Andrew Haley
  2011-04-01 17:41 ` David Daney
  0 siblings, 2 replies; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-01  8:39 UTC (permalink / raw)
  To: java

L.S.,

I am debugging memory leaks in the GC of libgcj.  Generally, libgcj
performs well, but there are some cases in which the heap literally
explodes over time. I wish to solve this. The OpenJDK vm runs the same
tests without any leaking.

I compiled GCC 4.6, built with --enable-libgcj-debug=yes, and started
testing with GC_DUMP_REGULARLY=1.  From the results I have a few
questions to understand it better.  I am still in the phase of
pinpointing a minimal program to demonstrate the problem, and I would
like to get hints on where to search further.

1. The number of finalization table entries keeps increasing, up to
39344 by the time the process is killed, but the number of objects
eligible for immediate finalization remains 0.  It seems that no
finalization takes place.
Could you give any hints as to what could be the reason for this?  It
logs:

***Finalization statistics:
39344 finalization table entries; 48 disappearing links
0 objects are eligible for immediate finalization

***Static roots:
From 0x804b284 to 0x804bf1c  (temporary)
From 0xb716d000 to 0xb7854cdc  (temporary)
From 0xb55ba92c to 0xb562ddbc  (temporary)
Total size: 7716356


2. There is a fair amount of black-listing.  From reading about the
GC, I understand that the GC knows what is a pointer and what is not,
because there is type information associated with the Java objects.
So I'd expect no black-listing at all.  Is that a correct observation?
The GC prints this just before it is killed:

***Heap sections:
Total heap size: 355028992
Section 0 from 0x30000 to 0x40000 15/16 blacklisted
Section 1 from 0x40000 to 0x50000 16/16 blacklisted
Section 2 from 0x50000 to 0x60000 16/16 blacklisted
Section 3 from 0x60000 to 0x71000 17/17 blacklisted
Section 4 from 0x71000 to 0x87000 22/22 blacklisted
Section 5 from 0x87000 to 0xa5000 30/30 blacklisted
Section 6 from 0xa5000 to 0xcd000 40/40 blacklisted
Section 7 from 0xcd000 to 0x102000 53/53 blacklisted
Section 8 from 0x112000 to 0x159000 71/71 blacklisted
Section 9 from 0x159000 to 0x1b7000 94/94 blacklisted
Section 10 from 0x1b7000 to 0x235000 83/126 blacklisted
Section 11 from 0x245000 to 0x2ed000 35/168 blacklisted
Section 12 from 0x30d000 to 0x3ed000 43/224 blacklisted
Section 13 from 0x3ed000 to 0x51b000 56/302 blacklisted
Section 14 from 0x53b000 to 0x6ca000 75/399 blacklisted
Section 15 from 0x6da000 to 0x8ee000 75/532 blacklisted
Section 16 from 0x8fe000 to 0xbcb000 61/717 blacklisted
Section 17 from 0xbdb000 to 0xf95000 64/954 blacklisted
Section 18 from 0xfb5000 to 0x14c7000 140/1298 blacklisted
Section 19 from 0x14f7000 to 0x1ba5000 201/1710 blacklisted
Section 20 from 0x1be5000 to 0x23e5000 153/2048 blacklisted
Section 21 from 0x2435000 to 0x2c35000 124/2048 blacklisted
Section 22 from 0x2c75000 to 0x3475000 120/2048 blacklisted
Section 23 from 0x34b5000 to 0x3cb5000 141/2048 blacklisted
Section 24 from 0x3d05000 to 0x4505000 128/2048 blacklisted
Section 25 from 0x4545000 to 0x4d45000 108/2048 blacklisted
Section 26 from 0x4d85000 to 0x5585000 615/2048 blacklisted
Section 27 from 0x55c5000 to 0x5dc5000 214/2048 blacklisted
Section 28 from 0x5e05000 to 0x6605000 1129/2048 blacklisted
Section 29 from 0x6655000 to 0x6e55000 1597/2048 blacklisted
Section 30 from 0x6e95000 to 0x7695000 992/2048 blacklisted
Section 31 from 0x76e5000 to 0x7ee5000 427/2048 blacklisted
Section 32 from 0xb3dea000 to 0xb45ea000 138/2048 blacklisted
Section 33 from 0xb359a000 to 0xb3d9a000 127/2048 blacklisted
Section 34 from 0xb2d4a000 to 0xb354a000 131/2048 blacklisted
Section 35 from 0xb250a000 to 0xb2d0a000 120/2048 blacklisted
Section 36 from 0xb1caa000 to 0xb24aa000 149/2048 blacklisted
Section 37 from 0xb146a000 to 0xb1c6a000 241/2048 blacklisted
Section 38 from 0xb0c1a000 to 0xb141a000 177/2048 blacklisted
Section 39 from 0xb03ca000 to 0xb0bca000 283/2048 blacklisted
Section 40 from 0xafb7a000 to 0xb037a000 609/2048 blacklisted
Section 41 from 0xaf32a000 to 0xafb2a000 152/2048 blacklisted
Section 42 from 0xaeada000 to 0xaf2da000 107/2048 blacklisted
Section 43 from 0xae28a000 to 0xaea8a000 174/2048 blacklisted
Section 44 from 0xada3a000 to 0xae23a000 89/2048 blacklisted
Section 45 from 0xad1ea000 to 0xad9ea000 125/2048 blacklisted
Section 46 from 0xac9aa000 to 0xad1aa000 80/2048 blacklisted
Section 47 from 0xac14a000 to 0xac94a000 117/2048 blacklisted
Section 48 from 0xab90a000 to 0xac10a000 97/2048 blacklisted
Section 49 from 0xab0ba000 to 0xab8ba000 131/2048 blacklisted
Section 50 from 0xaa86a000 to 0xab06a000 108/2048 blacklisted
Section 51 from 0xaa01a000 to 0xaa81a000 201/2048 blacklisted
Section 52 from 0xa97ca000 to 0xa9fca000 81/2048 blacklisted
Section 53 from 0xa8779000 to 0xa8f79000 97/2048 blacklisted
Section 54 from 0xa7f29000 to 0xa8729000 152/2048 blacklisted
Section 55 from 0xa76e9000 to 0xa7ee9000 423/2048 blacklisted
Section 56 from 0xa6e99000 to 0xa7699000 996/2048 blacklisted
Section 57 from 0xa6699000 to 0xa6e99000 1531/2048 blacklisted
Section 58 from 0xa5e09000 to 0xa6609000 1132/2048 blacklisted

***Free blocks:
Free list 60 (Total size 4194304):
	0x7399000 size 1470464 start black listed
	0xa76eb000 size 1159168 start black listed
	0x7501000 size 1564672 start black listed
Total of 4194304 bytes on free list

Could you help me with this?

Best regards,
Erik


* Re: GC leaks debugging
  2011-04-01  8:39 GC leaks debugging Erik Groeneveld
@ 2011-04-01  8:45 ` Andrew Haley
  2011-04-01  9:03   ` Erik Groeneveld
  2011-04-01 17:41 ` David Daney
  1 sibling, 1 reply; 42+ messages in thread
From: Andrew Haley @ 2011-04-01  8:45 UTC (permalink / raw)
  To: java

On 01/04/11 09:39, Erik Groeneveld wrote:
> L.S.,
> 
> I am debugging memory leaks in the GC of libgcj.  Generally, libgcj
> performs well, but there are some cases in which the heap literally
> explodes over time. I wish to solve this. The OpenJDK vm runs the same
> tests without any leaking.
> 
> I compiled GCC 4.6, built with --enable-libgcj-debug=yes, and started
> testing with GC_DUMP_REGULARLY=1.  From the results I have a few
> questions to understand it better.  I am still in the phase of
> pinpointing a minimal program to demonstrate the problem, and I would
> like to get hints on where to search further.
> 
> 1. The number of finalization table entries keeps increasing, up to
> 39344 by the time the process is killed, but the number of objects
> eligible for immediate finalization remains 0.  It seems that no
> finalization takes place.
> Could you give any hints as to what could be the reason for this?  It
> logs:
> 
> ***Finalization statistics:
> 39344 finalization table entries; 48 disappearing links
> 0 objects are eligible for immediate finalization
> 
> ***Static roots:
> From 0x804b284 to 0x804bf1c  (temporary)
> From 0xb716d000 to 0xb7854cdc  (temporary)
> From 0xb55ba92c to 0xb562ddbc  (temporary)
> Total size: 7716356
> 
> 
> 2. There is a fair amount of black-listing.  From reading about the
> GC, I understand that the GC knows what is a pointer and what is not,
> because there is type information associated with the Java objects.
> So I'd expect no black-listing at all.  Is that a correct observation?

No.  Objects are scanned precisely, but the stack is not.  Also, depending
on your compilation options, the data segments of your program may be
scanned conservatively.

Andrew.


* Re: GC leaks debugging
  2011-04-01  8:45 ` Andrew Haley
@ 2011-04-01  9:03   ` Erik Groeneveld
  2011-04-01  9:34     ` Andrew Haley
  0 siblings, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-01  9:03 UTC (permalink / raw)
  To: Andrew Haley; +Cc: java

On Fri, Apr 1, 2011 at 10:45, Andrew Haley <aph@redhat.com> wrote:
> On 01/04/11 09:39, Erik Groeneveld wrote:
>> L.S.,
>>
>> I am debugging memory leaks in the GC of libgcj.  Generally, libgcj
>> performs well, but there are some cases in which the heap literally
>> explodes over time. I wish to solve this. The OpenJDK vm runs the same
>> tests without any leaking.
>>
>> I compiled GCC 4.6, built with --enable-libgcj-debug=yes, and started
>> testing with GC_DUMP_REGULARLY=1.  From the results I have a few
>> questions to understand it better.  I am still in the phase of
>> pinpointing a minimal program to demonstrate the problem, and I would
>> like to get hints on where to search further.
>>
>> 1. The number of finalization table entries keeps increasing, up to
>> 39344 by the time the process is killed, but the number of objects
>> eligible for immediate finalization remains 0.  It seems that no
>> finalization takes place.
>> Could you give any hints as to what could be the reason for this?  It
>> logs:
>>
>> ***Finalization statistics:
>> 39344 finalization table entries; 48 disappearing links
>> 0 objects are eligible for immediate finalization
>>
>> ***Static roots:
>> From 0x804b284 to 0x804bf1c  (temporary)
>> From 0xb716d000 to 0xb7854cdc  (temporary)
>> From 0xb55ba92c to 0xb562ddbc  (temporary)
>> Total size: 7716356
>>
>>
>> 2. There is a fair amount of black-listing.  From reading about the
>> GC, I understand that the GC knows what is a pointer and what is not,
>> because there is type information associated with the Java objects.
>> So I'd expect no black-listing at all.  Is that a correct observation?
>
> No.  Objects are scanned precisely, but the stack is not.

Thanks.  I think I can rule the stack out by reviewing/adapting my
test program.  I'll do that first.

> Also, depending
> on your compilation options, the data segments of your program may be
> scanned conservatively.

I have to think about this one.  Which options are you thinking of?

Erik


* Re: GC leaks debugging
  2011-04-01  9:03   ` Erik Groeneveld
@ 2011-04-01  9:34     ` Andrew Haley
  2011-04-02  0:27       ` Boehm, Hans
  0 siblings, 1 reply; 42+ messages in thread
From: Andrew Haley @ 2011-04-01  9:34 UTC (permalink / raw)
  To: java

On 04/01/2011 10:02 AM, Erik Groeneveld wrote:
> On Fri, Apr 1, 2011 at 10:45, Andrew Haley <aph@redhat.com> wrote:
>> On 01/04/11 09:39, Erik Groeneveld wrote:
>>> From reading about the
>>> GC, I understand that the GC knows what is a pointer and what is not,
>>> because there is type information associated with the Java objects.
>>> So I'd expect no black-listing at all.  Is that a correct observation?
>>
>> No.  Objects are scanned precisely, but the stack is not.
> 
> Thanks.  I think I can rule the stack out by reviewing/adapting my
> test program.  I'll do that first.
> 
>> Also, depending
>> on your compilation options, the data segments of your program may be
>> scanned conservatively.
> 
> I have to think about this one.  Which options are you thinking of?

-findirect-dispatch

With that option, everything except the stack is scanned precisely.  However,
there is some runtime overhead.
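
For example, a sketch of building natively compiled Java code that way
(file names hypothetical):

gcj -findirect-dispatch --main=Main -o main Main.java

As noted above, code compiled like this can then have everything
except the stack scanned precisely, at some runtime cost.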

Andrew.


* Re: GC leaks debugging
  2011-04-01  8:39 GC leaks debugging Erik Groeneveld
  2011-04-01  8:45 ` Andrew Haley
@ 2011-04-01 17:41 ` David Daney
  2011-04-02 16:21   ` Erik Groeneveld
  1 sibling, 1 reply; 42+ messages in thread
From: David Daney @ 2011-04-01 17:41 UTC (permalink / raw)
  To: Erik Groeneveld; +Cc: java

On 04/01/2011 01:39 AM, Erik Groeneveld wrote:
> L.S.,
>
> I am debugging memory leaks in the GC of libgcj.  Generally, libgcj
> performs well, but there are some cases in which the heap literally
> explodes over time. I wish to solve this. The OpenJDK vm runs the same
> tests without any leaking.
>
> I compiled GCC 4.6, built with --enable-libgcj-debug=yes, and started
> testing with GC_DUMP_REGULARLY=1.  From the results I have a few
> questions to understand it better.  I am still in the phase of
> pinpointing a minimal program to demonstrate the problem, and I would
> like to get hints on where to search further.
>
> 1. The number of finalization table entries keeps increasing, up to
> 39344 by the time the process is killed, but the number of objects
> eligible for immediate finalization remains 0.  It seems that no
> finalization takes place.
> Could you give any hints as to what could be the reason for this?  It
> logs:
>

You should also look at:

http://gcc.gnu.org/onlinedocs/gcc-4.6.0/gcj/Invoking-gc_002danalyze.html#Invoking-gc_002danalyze
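
For instance, a sketch of producing and analyzing a dump (the dump
prefix is arbitrary, and the generated CNI header is assumed to be on
the include path):

// In the test program, after #include "gnu/gcj/util/GCInfo.h":
// writes heap dumps named something like TestDump001, TestDump002, ...
gnu::gcj::util::GCInfo::enumerate(JvNewStringUTF("TestDump"));

and then, from the shell:

gc-analyze TestDump001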

David Daney

[...]



* RE: GC leaks debugging
  2011-04-01  9:34     ` Andrew Haley
@ 2011-04-02  0:27       ` Boehm, Hans
  2011-04-02  9:39         ` Erik Groeneveld
  0 siblings, 1 reply; 42+ messages in thread
From: Boehm, Hans @ 2011-04-02  0:27 UTC (permalink / raw)
  To: Andrew Haley, java

> -----Original Message-----
> From: java-owner@gcc.gnu.org Andrew Haley
> Sent: Friday, April 01, 2011 2:34 AM
> To: java@gcc.gnu.org
> Subject: Re: GC leaks debugging
> 
> On 04/01/2011 10:02 AM, Erik Groeneveld wrote:
> > On Fri, Apr 1, 2011 at 10:45, Andrew Haley <aph@redhat.com> wrote:
> >> On 01/04/11 09:39, Erik Groeneveld wrote:
> >>> From reading about the
> >>> GC, I understand that the GC knows what is a pointer and what not
> >>> because there is type information associated with the Java objects.
> >>> So I'd expect no black-listing at all.  It that a right
> observation?
> >>
> >> No.  Objects are scanned precisely, but the stack is not.
> >
> > Thanks.  I think I can rule the stack out by reviewing/adapting my
> > test program.  I'll do that first.
> >
> >> Also, depending
> >> on your compilation options, the data segments of your program may
> be
> >> scanned conservatively.
> >
> > I have to think about this one.  Which options are you thinking of?
> 
> -findirect-dispatch
> 
> With that option, everything except the stack is scanned precisely.
> However,
> there is some runtime overhead.
> 
> Andrew.
Note that in the information you posted, the GC was scanning around 7.5MB of roots conservatively.  It might be worth checking what those regions are.

The number of black-listed pages seems really high to me.  Is the collector configured for too small a heap?

What's the platform?
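
(As a diagnostic sketch, assuming this libgcj build honors the stock
boehm-gc environment variables: starting with a larger initial heap
should reduce early blacklisting pressure.  The size below is an
arbitrary example, in bytes.)

export GC_INITIAL_HEAP_SIZE=268435456   # 256 MB
./test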


Hans



* Re: GC leaks debugging
  2011-04-02  0:27       ` Boehm, Hans
@ 2011-04-02  9:39         ` Erik Groeneveld
  2011-04-03 17:15           ` Erik Groeneveld
  0 siblings, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-02  9:39 UTC (permalink / raw)
  To: Boehm, Hans; +Cc: Andrew Haley, java

[-- Attachment #1: Type: text/plain, Size: 2042 bytes --]

[...]

> Note that in the information you posted, the GC was scanning around 7.5MB of roots conservatively.  It might be worth checking what those regions are.
It seems to come from libgcj.  This is now my minimal program:

#include <gcj/cni.h>
void _Jv_RunGC(void);
int main(int argc, char *argv[]) {
    JvCreateJavaVM(NULL);
    _Jv_RunGC();
    //JvAttachCurrentThread(NULL, NULL);
}

I compile the test with:

g++ -O0 -fPIC -g -o test test.cpp -L../gccinstall/lib -lgcj

And run with:

export GC_DUMP_REGULARLY=1
export GC_BACKTRACES=10
./test

Only linking and initializing GCJ gives the log as attached.  The 7.5
MB root is already present.  And it gives nearly 2000 blacklisting
messages, resulting in a heap with 10 of 12 sections 100% blacklisted
and the other 2 already significantly polluted.

Since the program has yet to begin, it seems that the race is about to
start with one party already carrying a severe handicap. ;-)

> The number of black-listed pages seems really high to me.  Is the collector configured for too small a heap?
I use GCC 4.6 (svn://gcc.gnu.org/svn/gcc/branches/gcc-4_6-branch) and
the GC is built with LARGE_CONFIG and PRINT_BLACK_LIST.

> What's the platform?
Linux 2.6.32-5-686-bigmem with 4 GB memory.  I understand that the
chances of misidentifying pointers are smaller on a 64-bit machine,
so I also ran (lots of) tests on an AMD64 machine; however, I got the
same exploding-heap problems consistently on both architectures.
I attached a typical explosion graph.

Lastly, I have read this thread about "leak with finalizers"
(http://thread.gmane.org/gmane.comp.programming.garbage-collection.boehmgc/748)
and it seems that something quite similar happens with GCJ. I also
tried to explicitly call GC_gcollect as suggested, which delayed but
did not contain the explosion. However, I now think that it might be a
good idea to explore this 7.5 MB root that GCJ is starting out with.

Do you have any hints on how to find the sources of these roots?

Erik

[-- Attachment #2: gc-log.tgz --]
[-- Type: application/x-gzip, Size: 42082 bytes --]


* Re: GC leaks debugging
  2011-04-01 17:41 ` David Daney
@ 2011-04-02 16:21   ` Erik Groeneveld
  0 siblings, 0 replies; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-02 16:21 UTC (permalink / raw)
  To: David Daney; +Cc: java

On Fri, Apr 1, 2011 at 19:41, David Daney <ddaney@caviumnetworks.com> wrote:
> On 04/01/2011 01:39 AM, Erik Groeneveld wrote:
> You should also look at:
>
> http://gcc.gnu.org/onlinedocs/gcc-4.6.0/gcj/Invoking-gc_002danalyze.html#Invoking-gc_002danalyze

I am running it now.

But I am using the debug version, and since it uses GCJ itself, it
logs so incredibly many BLACKLIST messages that it hardly proceeds...
I have over 50 MB of error log, and it is still only reading the
symbols from the shared libraries.
I'll recompile the latest non-debug version and give it another try.

Erik

>
> David Daney
>
> [...]
>
>
>


* Re: GC leaks debugging
  2011-04-02  9:39         ` Erik Groeneveld
@ 2011-04-03 17:15           ` Erik Groeneveld
  2011-04-03 18:00             ` Erik Groeneveld
  0 siblings, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-03 17:15 UTC (permalink / raw)
  To: Boehm, Hans; +Cc: Andrew Haley, java

On Sat, Apr 2, 2011 at 11:38 AM, Erik Groeneveld <erik@cq2.nl> wrote:
>
>> Note that in the information you posted, the GC was scanning around 7.5MB of roots conservatively.  It might be worth checking what those regions are.
> It seems to come from libgcj.  This is now my minimal program:
>
> #include <gcj/cni.h>
> void _Jv_RunGC(void);
> int main(int argc, char *argv[]) {
>    JvCreateJavaVM(NULL);
>    _Jv_RunGC();
>    //JvAttachCurrentThread(NULL, NULL);
> }

With GC_DUMP_REGULARLY it logs an empty heap just before
JvCreateJavaVM and right after it:

***Static roots:
From 0x8049bdc to 0x8049d34  (temporary)
From 0xb714c000 to 0xb787dcdc  (temporary)
Total size: 7544372

Using gc-analyze's memory map, I find the first root to be in this
block of 4 kB. Probably my own data segment or stack:
8049000-804a000 -> .../minimal/test offset 0

The second root spans the next block of 6792 kB and most of the
following block of 584 kB:

b714c000-b77ee000 -> .../gccinstall-def/lib/libgcj.so.12.0.0 offset 1b3a000
b77ee000-b7880000 -> TestDump001.bytes offset b0009c

The heap itself is only 368 kB and contains lots of static strings,
classes etc. Nothing special AFAICS.

So I am now off into JvCreateJavaVM; if you have any thoughts, please
let me know.
Erik


* Re: GC leaks debugging
  2011-04-03 17:15           ` Erik Groeneveld
@ 2011-04-03 18:00             ` Erik Groeneveld
  2011-04-04  8:13               ` Andrew Haley
  0 siblings, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-03 18:00 UTC (permalink / raw)
  To: Boehm, Hans; +Cc: Andrew Haley, java

On Sun, Apr 3, 2011 at 7:14 PM, Erik Groeneveld <erik@cq2.nl> wrote:
> On Sat, Apr 2, 2011 at 11:38 AM, Erik Groeneveld <erik@cq2.nl> wrote:
>>
>>> Note that in the information you posted, the GC was scanning around 7.5MB of roots conservatively.  It might be worth checking what those regions are.
[...]
> So I am now off into JvCreateJavaVM,

and I found that the 7.5 MB roots are the static data area of libgcj
itself.  The GC calls back -- the last arg being the size:

_Jv_GC_has_static_roots(../gccinstall/lib/libgcj.so.12, 0xb704f000, 7544028)

and since libgcj is in 'the store' (_Jv_print_gc_store() prints
"../gccinstall/lib/libgcj.so.12"), it tells the GC to scan its static
data area conservatively.

As of yet I don't understand why this static area is so big and what
could be in it, but when I lay myself to rest, the little gray cells
will sing to me (loosely after Hercule Poirot ;-).
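
For reference, a minimal sketch of how such a callback is wired up
with the collector (the signature is as declared in boehm-gc's gc.h;
the filtering policy here is hypothetical):

#include <gc.h>
#include <string.h>

// Return nonzero if the named object's static data may contain roots
// and should therefore be scanned conservatively.
static int has_static_roots(const char *name, void *start, size_t size) {
    return strstr(name, "libgcj") != NULL;  // hypothetical policy
}

int main(void) {
    GC_register_has_static_roots_callback(has_static_roots);
    GC_INIT();
    return 0;
}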

Erik


* Re: GC leaks debugging
  2011-04-03 18:00             ` Erik Groeneveld
@ 2011-04-04  8:13               ` Andrew Haley
  2011-04-04  8:53                 ` Erik Groeneveld
  0 siblings, 1 reply; 42+ messages in thread
From: Andrew Haley @ 2011-04-04  8:13 UTC (permalink / raw)
  To: java

On 03/04/11 18:59, Erik Groeneveld wrote:
> On Sun, Apr 3, 2011 at 7:14 PM, Erik Groeneveld <erik@cq2.nl> wrote:
>> On Sat, Apr 2, 2011 at 11:38 AM, Erik Groeneveld <erik@cq2.nl> wrote:
>>>
>>>> Note that in the information you posted, the GC was scanning around 7.5MB of roots conservatively.  It might be worth checking what those regions are.
> [...]
>> So I am now off into JvCreateJavaVM,
> 
> and I found that the 7.5 MB roots are the static data area of libgcj
> itself.  The GC calls back -- the last arg being the size:
> 
> _Jv_GC_has_static_roots(../gccinstall/lib/libgcj.so.12, 0xb704f000, 7544028)
> 
> and since libgcj is in 'the store' (_Jv_print_gc_store() prints
> "../gccinstall/lib/libgcj.so.12"), it tells the GC to scan its static
> data area conservatively.
> 
> As of yet I don't understand why this static area is so big and what
> could be in it, but when I lay myself to rest, the little gray cells
> will sing to me (loosely after Hercule Poirot ;-).

It'll mostly be introspection data.  Every class and every method has
this, and it can get to be quite large.  I doubt it's the cause of
your memory leak unless there's a bug elsewhere.

Andrew.


* Re: GC leaks debugging
  2011-04-04  8:13               ` Andrew Haley
@ 2011-04-04  8:53                 ` Erik Groeneveld
  2011-04-04  9:48                   ` Andrew Haley
  0 siblings, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-04  8:53 UTC (permalink / raw)
  To: Andrew Haley; +Cc: java

On Mon, Apr 4, 2011 at 10:13 AM, Andrew Haley <aph@redhat.com> wrote:
> On 03/04/11 18:59, Erik Groeneveld wrote:
>> On Sun, Apr 3, 2011 at 7:14 PM, Erik Groeneveld <erik@cq2.nl> wrote:
>>> On Sat, Apr 2, 2011 at 11:38 AM, Erik Groeneveld <erik@cq2.nl> wrote:
>>>>
>>>>> Note that in the information you posted, the GC was scanning around 7.5MB of roots conservatively.  It might be worth checking what those regions are.
>> [...]
>>> So I am now off into JvCreateJavaVM,
>>
>> and I found that the 7.5 MB roots are the static data area of libgcj
>> itself.  The GC calls back -- the last arg being the size:
>>
>> _Jv_GC_has_static_roots(../gccinstall/lib/libgcj.so.12, 0xb704f000, 7544028)
>>
>> and since libgcj is in 'the store' (_Jv_print_gc_store() prints
>> "../gccinstall/lib/libgcj.so.12"), it tells the GC to scan its static
>> data area conservatively.
>>
>> As of yet I don't understand why this static area is so big and what
>> could be in it, but when I lay myself to rest, the little gray cells
>> will sing to me (loosely after Hercule Poirot ;-).
>
> It'll mostly be introspection data.  Every class and every method has
> this, and it can get to be quite large.

I saw the (old) patch of yours that moves static Java objects onto the
heap so that they need not be scanned conservatively, so I expected
nothing to be left in the static data area of libgcj other than Java
pointers into the object heap.  Now I see that there is still a lot
more data that must be scanned conservatively, so couldn't there be
problems similar to back then?  Would it be an idea to try to move
this introspection data to the heap as well?

> I doubt it's the cause of
> your memory leak unless there's a bug elsewhere.

Probably there is no clear bug, or clear leak; perhaps it is just a
matter of pushing the GC to its limits?  Some code runs quite well for
long periods, other code doesn't.  In all cases, the heap grows very
fast, with lots of blacklisting messages, and sometimes the GC just
seems to manage, sometimes it doesn't and things explode while issuing
the famous "need to allocate large block" message repeatedly.  From
what Hans suggested and from what I see in the logs, the GC is under
very heavy stress, right from the beginning.  It doesn't get a fair
chance, so to say.

My minimal program is now this:

int main(int argc, char *argv[]) {
    _Jv_InitGC();
}

It starts out with:

roots: 7,072 kB
heap: 64 kB
free: 64 kB
blacklisted: 15/16
blacklist messages: 991

Any real program produces so many blacklist messages that it hardly
runs.  I'd like to investigate this, or am I on the wrong track
completely?

Erik

>
> Andrew.
>


* Re: GC leaks debugging
  2011-04-04  8:53                 ` Erik Groeneveld
@ 2011-04-04  9:48                   ` Andrew Haley
  2011-04-05  4:44                     ` Boehm, Hans
  2011-04-05  6:50                     ` Erik Groeneveld
  0 siblings, 2 replies; 42+ messages in thread
From: Andrew Haley @ 2011-04-04  9:48 UTC (permalink / raw)
  To: java

On 04/04/2011 09:52 AM, Erik Groeneveld wrote:
> On Mon, Apr 4, 2011 at 10:13 AM, Andrew Haley <aph@redhat.com> wrote:
>> On 03/04/11 18:59, Erik Groeneveld wrote:
>>> On Sun, Apr 3, 2011 at 7:14 PM, Erik Groeneveld <erik@cq2.nl> wrote:
>>>> On Sat, Apr 2, 2011 at 11:38 AM, Erik Groeneveld <erik@cq2.nl> wrote:
>>>>>
>>>>>> Note that in the information you posted, the GC was scanning around 7.5MB of roots conservatively.  It might be worth checking what those regions are.
>>> [...]
>>>> So I am now off into JvCreateJavaVM,
>>>
>>> and I found that the 7.5 MB roots are the static data area of libgcj
>>> itself.  The GC calls back -- the last arg being the size:
>>>
>>> _Jv_GC_has_static_roots(../gccinstall/lib/libgcj.so.12, 0xb704f000, 7544028)
>>>
>>> and since libgcj is in 'the store' (_Jv_print_gc_store() prints
>>> "../gccinstall/lib/libgcj.so.12"), it tells the GC to scan its static
>>> data area conservatively.
>>>
>>> As of yet I don't understand why this static area is so big and what
>>> could be in it, but when I lay myself to rest, the little gray cells
>>> will sing to me (loosely after Hercule Poirot ;-).
>>
>> It'll mostly be introspection data.  Every class and every method has
>> this, and it can get to be quite large.
> 
> I saw the (old) patch of yours that moves static Java objects onto the
> heap so that they need not be scanned conservatively, so I expected
> nothing to be left in the static data area of libgcj other than Java
> pointers into the object heap.  Now I see that there is still a lot
> more data that must be scanned conservatively, so couldn't there be
> problems similar to back then?  Would it be an idea to try to move
> this introspection data to the heap as well?

It's certainly possible, but you can't move all of it with our current
design because it doesn't play nicely with CNI.  Therefore, a lot of
libgcj is compiled with static introspection data that must be
conservatively scanned.

>> I doubt it's the cause of
>> your memory leak unless there's a bug elsewhere.
> 
> Probably there is no clear bug, or clear leak; perhaps it is just a
> matter of pushing the GC to its limits?

I doubt that very much.  This has come up several times in the past,
and the problem has never been the garbage collector recognizing false
positives.  It's almost certainly a real memory leak caused by a
pointer somewhere not being nulled.

> Some code runs quite well for long periods, other code doesn't.  In
> all cases, the heap grows very fast, with lots of blacklisting
> messages, and sometimes the GC just seems to manage, sometimes it
> doesn't and things explode while issuing the famous "need to
> allocate large block" message repeatedly.  From what Hans suggested
> and from what I see in the logs, the GC is under very heavy stress,
> right from the beginning.  It doesn't get a fair chance, so to say.

I don't think so.

> My minimal program is now this:
> 
> int main(int argc, char *argv[]) {
>     _Jv_InitGC();
> }
> 
> It starts out with:
> 
> roots: 7,072 kB
> heap: 64 kB
> free: 64 kB
> blacklisted: 15/16
> blacklist messages: 991
> 
> Any real program produces so many blacklist messages that it hardly
> runs.  I'd like to investigate this, or am I on the wrong track
> completely?

I think you are.  The heap is small in this simple test case, so there
are no real problems.

You need to find out what the real problem is.  Find just one of those
"need to allocate large block" messages, and find out why it is being
called.  I suspect that there is an actual bug that is causing the
explosion and it can be found.  Forget about 991 blacklist messages:
not useful.
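
One way to do that (a sketch, assuming an unstripped build of the
collector; GC_collect_or_expand is the boehm-gc routine that decides
between collecting and growing the heap) is to break there and look at
the allocation that triggered it:

gdb ./test
(gdb) break GC_collect_or_expand
(gdb) run
(gdb) backtrace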

I'd have a look myself, but there is no way to duplicate your problem.

BTW, is this on a 32-bit or 64-bit platform?

Andrew.


* RE: GC leaks debugging
  2011-04-04  9:48                   ` Andrew Haley
@ 2011-04-05  4:44                     ` Boehm, Hans
  2011-04-05  8:58                       ` Andrew Haley
  2011-04-05  6:50                     ` Erik Groeneveld
  1 sibling, 1 reply; 42+ messages in thread
From: Boehm, Hans @ 2011-04-05  4:44 UTC (permalink / raw)
  To: Andrew Haley, java

I'm still concerned about the amount of blacklisting here.  Can you track down some of those messages or calls to GC_add_to_black_list_normal, and find out where those bogus pointer-like bit patterns are coming from?
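
(A sketch of one way to do that, assuming the collector is built with
debug symbols: break on the blacklisting routine and see who is
scanning the bogus value.)

gdb ./test
(gdb) break GC_add_to_black_list_normal
(gdb) run
(gdb) backtrace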

Is there any way to get the reflection information (and exception information? or was that fixed?) into read-only segments, so that the collector can know not to scan them?

Hans

> -----Original Message-----
> From: java-owner@gcc.gnu.org [mailto:java-owner@gcc.gnu.org] On Behalf
> Of Andrew Haley
> Sent: Monday, April 04, 2011 2:48 AM
> To: java@gcc.gnu.org
> Subject: Re: GC leaks debugging
> 
> On 04/04/2011 09:52 AM, Erik Groeneveld wrote:
> > On Mon, Apr 4, 2011 at 10:13 AM, Andrew Haley <aph@redhat.com> wrote:
> >> On 03/04/11 18:59, Erik Groeneveld wrote:
> >>> On Sun, Apr 3, 2011 at 7:14 PM, Erik Groeneveld <erik@cq2.nl> wrote:
> >>>> On Sat, Apr 2, 2011 at 11:38 AM, Erik Groeneveld <erik@cq2.nl> wrote:
> >>>>>
> >>>>>> Note that in the information you posted, the GC was scanning
> >>>>>> around 7.5MB of roots conservatively.  It might be worth
> >>>>>> checking what those regions are.
> >>> [...]
> >>>> So I am now off into JvCreateJavaVM,
> >>>
> >>> and I found that the 7.5 MB roots are the static data area of
> >>> libgcj itself.  The GC calls back -- the last arg being the size:
> >>>
> >>> _Jv_GC_has_static_roots(../gccinstall/lib/libgcj.so.12, 0xb704f000,
> >>> 7544028)
> >>>
> >>> and since libgcj is in 'the store' (_Jv_print_gc_store() prints
> >>> "../gccinstall/lib/libgcj.so.12"), it tells the GC to scan its
> >>> static data area conservatively.
> >>>
> >>> As of yet I don't understand why this static area is so big and
> >>> what could be in it, but when I lay myself to rest, the little
> >>> gray cells will sing to me (loosely after Hercule Poirot ;-).
> >>
> >> It'll mostly be introspection data.  Every class and every method
> >> has this, and it can get to be quite large.
> >
> > I saw the (old) patch of yours that moves static Java objects onto
> > the heap so that they need not be scanned conservatively, so I
> > expected nothing to be left in the static data area of libgcj other
> > than Java pointers into the object heap.  Now I see that there is
> > still a lot more data that must be scanned conservatively, so
> > couldn't there be problems similar to back then?  Would it be an
> > idea to try to move this introspection data to the heap as well?
> 
> It's certainly possible, but you can't move all of it with our current
> design because it doesn't play nicely with CNI.  Therefore, a lot of
> libgcj is compiled with static introspection data that must be
> conservatively scanned.
> 
> >> I doubt it's the cause of
> >> your memory leak unless there's a bug elsewhere.
> >
> > Probably there is no clear bug, or clear leak; perhaps it is just a
> > matter of pushing the GC to its limits?
> 
> I doubt that very much.  This has come up several times in the past,
> and the problem has never been the garbage collector recognizing false
> positives.  It's almost certainly a real memory leak caused by a
> pointer somewhere not being nulled.
> 
> > Some code runs quite well for long periods, other code doesn't.  In
> > all cases, the heap grows very fast, with lots of blacklisting
> > messages, and sometimes the GC just seems to manage, sometimes it
> > doesn't and things explode while issuing the famous "need to
> > allocate large block" message repeatedly.  From what Hans suggested
> > and from what I see in the logs, the GC is under very heavy stress,
> > right from the beginning.  It doesn't get a fair chance, so to say.
> 
> I don't think so.
> 
> > My minimal program is now this:
> >
> > int main(int argc, char *argv[]) {
> >     _Jv_InitGC();
> > }
> >
> > It starts out with:
> >
> > roots: 7,072 kB
> > heap: 64 kB
> > free: 64 kB
> > blacklisted: 15/16
> > blacklist messages: 991
> >
> > Any real program produces so many blacklist messages that it hardly
> > runs.  I'd like to investigate this, or am I on the wrong track
> > completely?
> 
> I think you are.  The heap is small in this simple test case, so there
> are no real problems.
> 
> You need to find out what the real problem is.  Find just one of those
> "need to allocate large block" messages, and find out why it is being
> called.  I suspect that there is an actual bug that is causing the
> explosion and it can be found.  Forget about 991 blacklist messages:
> not useful.
> 
> I'd have a look myself, but there is no way to duplicate your problem.
> 
> BTW, is this on a 32-bit or 64-bit platform?
> 
> Andrew.


* Re: GC leaks debugging
  2011-04-04  9:48                   ` Andrew Haley
  2011-04-05  4:44                     ` Boehm, Hans
@ 2011-04-05  6:50                     ` Erik Groeneveld
  2011-04-05  9:02                       ` Andrew Haley
  1 sibling, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-05  6:50 UTC (permalink / raw)
  To: Andrew Haley; +Cc: java

[...]
>> Any real program produces so many blacklist messages that it hardly
>> runs.  I'd like to investigate this, or am I on the wrong track
>> completely?
>
> I think you are.  The heap is small in this simple test case, so there
> are no real problems.

I will ignore this for now then.

> You need to find out what the real problem is.  Find just one of those
> "need to allocate large block" messages, and find out why it is being
> called.  I suspect that there is an actual bug that is causing the
> explosion and it can be found.  Forget about 991 blacklist messages:
> not useful.

I have done many tests, with different programs, which all run
flawlessly on OpenJDK, but explode on GCJ.  I ran some tests last
night, and I see from the logs that the heap is 1 GB, while about
700 MB of it is free.  Also, it seems that the finalization table keeps
growing.  I am running again now, but later today I'll post the log.
(And I'll search the mail archives with a new keyword: finalization ;-)

> I'd have a look myself, but there is no way to duplicate your problem.
> BTW, is this on a 32-bit or 64-bit platform?

It is on 32-bit.  On 64-bit, the blacklisting is not happening.  But
the heap keeps exploding, so you are right, the problem probably lies
elsewhere.
(Although I still feel sorry for the poor GC on 32-bit systems ;-)

Thanks a lot.
Erik


* Re: GC leaks debugging
  2011-04-05  4:44                     ` Boehm, Hans
@ 2011-04-05  8:58                       ` Andrew Haley
  0 siblings, 0 replies; 42+ messages in thread
From: Andrew Haley @ 2011-04-05  8:58 UTC (permalink / raw)
  To: java

On 05/04/11 05:41, Boehm, Hans wrote:

> I'm still concerned about the amount of blacklisting here.  Can you
> track down some of those messages or calls to
> GC_add_to_black_list_normal, and find out where those bogus
> pointer-like bit patterns are coming from?
> 
> Is there any way to get the reflection information (and exception
> information? or was that fixed?) into read-only segments, so that
> the collector can know not to scan them?

It's not easy.  That's already been done with indirect dispatch, but
it doesn't work with the core library.  The easiest way to get it
working for all classes may be to get CNI working with indirect
dispatch.

Andrew.


* Re: GC leaks debugging
  2011-04-05  6:50                     ` Erik Groeneveld
@ 2011-04-05  9:02                       ` Andrew Haley
  2011-04-05 12:02                         ` Erik Groeneveld
  0 siblings, 1 reply; 42+ messages in thread
From: Andrew Haley @ 2011-04-05  9:02 UTC (permalink / raw)
  To: Erik Groeneveld; +Cc: java

On 05/04/11 07:49, Erik Groeneveld wrote:
> [...]
>>> Any real program produces so many blacklist messages that it hardly
>>> runs.  I'd like to investigate this, or am I on the wrong track
>>> completely?
>>
>> I think you are.  The heap is small in this simple test case, so there
>> are no real problems.
> 
> I will ignore this for now then.
> 
>> You need to find out what the real problem is.  Find just one of those
>> "need to allocate large block" messages, and find out why it is being
>> called.  I suspect that there is an actual bug that is causing the
>> explosion and it can be found.  Forget about 991 blacklist messages:
>> not useful.
> 
> I have done many tests, with different programs, which all run
> flawlessly on OpenJDK, but explode on GCJ.  I ran some tests last
> night, and I see from the logs that the heap is 1 GB, while about
> 700 MB of it is free.

That sounds like it's working perfectly, then.  What is the problem?

> Also, it seems that the finalization table keeps growing.  I am
> running again now, but later today I'll post the log. (And I'll
> search the mail archives with a new keyword: finalization ;-)

There is a known problem with finalization and weak references, but I
don't know the details.  Maybe someone else can remember.

>> I'd have a look myself, but there is no way to duplicate your problem.
>> BTW, is this on a 32-bit or 64-bit platform?
> 
> It is on 32-bit.  On 64-bit, the blacklisting is not happening.  But
> the heap keeps exploding, so you are right, the problem probably lies
> elsewhere.

OK, so you can concentrate on the 64-bit system, and forget about all
the blacklisting noise.

Andrew.


* Re: GC leaks debugging
  2011-04-05  9:02                       ` Andrew Haley
@ 2011-04-05 12:02                         ` Erik Groeneveld
  2011-04-05 12:55                           ` Andrew Haley
  0 siblings, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-05 12:02 UTC (permalink / raw)
  To: Andrew Haley; +Cc: java

>>
>> I have done many tests, with different programs, which all run
>> flawlessly on OpenJDK, but explode on GCJ.  I ran some tests last
>> night, and I see from the logs that the heap is 1 GB, while about
>> 700 MB of it is free.
>
> That sounds like it's working perfectly, then.  What is the problem?

The problem is that the tests don't use much memory at all, but the
heap keeps expanding, and the collection cycles become less frequent
and take longer and longer.  Even on a system with 64 GB, it just
fills it up.

This particular test repeats the same code 13 million times before it
gets killed by the OOM killer.  The log says, just before the kill
(sizes in MB added by me):

-----------------------begin log-----------------------
 ***Finalization statistics:
 55286 finalization table entries; 55 disappearing links
 0 objects are eligible for immediate finalization

 ***Static roots:
 From 0x603000 to 0x603480  (temporary)
 From 0x7f81d5a00000 to 0x7f81d6349d48  (temporary)
 From 0x7f81d3069000 to 0x7f81d31b3c80  (temporary)
 Total size: 11095624 (10 MB)

 ***Heap sections:
 Total heap size: 959184896 (914 MB)
 Section 0 from 0x7f81d1503000 to 0x7f81d1513000 0/16 blacklisted
 Section 1 from 0x7f81d14e3000 to 0x7f81d14f3000 0/16 blacklisted
 // more, only few blacklisted
 Section 130 from 0x7f81957ce000 to 0x7f8195fce000 0/2048 blacklisted
 Section 131 from 0x7f8194f8e000 to 0x7f819578e000 0/2048 blacklisted

 ***Free blocks:
 Free list 2 (Total size 8192):
     0x7f81a9f94000 size 8192 not black listed
 Free list 5 (Total size 10444800):
     0x7f81a98c8000 size 20480 not black listed
     0x7f81a9aee000 size 20480 not black listed
     // many, many more
 Free list 60 (Total size 5775360):
     0x7f8195358000 size 4415488 not black listed
     0x7f81a306b000 size 1359872 partially black listed
 Total of 672296960 bytes on free list  (641 MB)

 ***Blocks in use:
     // cut
 blocks = 67177, bytes = 286887936  (273 MB)

 ***Finalization statistics:
 38070 finalization table entries; 55 disappearing links
 0 objects are eligible for immediate finalization
-----------------------end log-----------------------

The tests work fine on OpenJDK.  What could cause GCJ to grow the heap
infinitely?
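
One experiment I can still run (a sketch, assuming this build honors
the stock boehm-gc environment variables) is to cap the heap, forcing
the collector to collect rather than expand; the size below is an
arbitrary example, in bytes:

export GC_MAXIMUM_HEAP_SIZE=536870912   # 512 MB
./test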

Erik

PS Even compressed, the complete log is over the mailing list's
allowed attachment size, so I gave an extract.


* Re: GC leaks debugging
  2011-04-05 12:02                         ` Erik Groeneveld
@ 2011-04-05 12:55                           ` Andrew Haley
  2011-04-06 14:30                             ` Erik Groeneveld
  0 siblings, 1 reply; 42+ messages in thread
From: Andrew Haley @ 2011-04-05 12:55 UTC (permalink / raw)
  To: Erik Groeneveld; +Cc: java

On 04/05/2011 01:02 PM, Erik Groeneveld wrote:
>>>
>>> I have done many tests, with different programs, which all run
>>> flawlessly on OpenJDK, but explode on GCJ.  I ran some tests last
>>> night, and I see from the logs that the heap is 1 GB, while about
>>> 700 MB of it is free.
>>
>> That sounds like it's working perfectly, then.  What is the problem?
> 
> The problem is that the tests don't use much memory at all, but the
> heap keeps expanding, and the collection cycles become less frequent
> and take longer and longer.  Even on a system with 64 GB, it just
> fills it up.
> 
> The tests work fine on OpenJDK.  What could cause GCJ to grow the heap
> infinitely?

I don't know because you won't show me the tests.

Andrew.


* Re: GC leaks debugging
  2011-04-05 12:55                           ` Andrew Haley
@ 2011-04-06 14:30                             ` Erik Groeneveld
  2011-04-06 18:33                               ` Andrew Haley
  0 siblings, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-06 14:30 UTC (permalink / raw)
  To: Andrew Haley; +Cc: java

[-- Attachment #1: Type: text/plain, Size: 2230 bytes --]

>> The tests work fine on OpenJDK.  What could cause GCJ to grow the heap
>> infinitely?
>
> I don't know because you won't show me the tests.

Yeah, of course, sorry.  I forgot to tell a few things.

This problem has been bothering me for quite some time now, and I
decided to solve it once and for all.  I have written many different
tests, but I am still not able to pinpoint a simple program that
demonstrates the results.  The problem occurred initially while using
Lucene, but later on also with Owlim.  Having written all kinds of
test programs with Lucene, from doing almost nothing to fully fledged
indexing, I can draw no definite conclusions yet.

I am now circling around the problem, trying to close in on it from
different sides, and I seek your help in giving me hints on what to
look for.  It is not lightly that I decided to bring it to this
mailing list, knowing that it would claim many people's time.

Now the test I am running is attached.  It indexes very simple
documents, each with a unique id, first ensuring the document is
deleted.  On each loop, it reopens the index reader and searcher.
This test starts to get into trouble above 10,000,000 loops
(documents).  The problem is that when I remove code (I tested
systematically), it only takes longer for the heap to explode.  The
only test that ran properly was when I only created Documents and did
not index them.  So perhaps it has something to do with I/O.

I built the lucene dso with (full script attached):

gcj -shared -fPIC $JARFILE -o liblucene-core.so -Lgccinstall/lib -lgcj
-findirect-dispatch -fno-indirect-classes

and the test with (full script attached):

g++ -O0 -fPIC -g -o test test.cpp \
    -I../include/lucene \
    -L../gccinstall/lib64 \
    -lgcj \
    -L.. \
    -llucene-core \
    -findirect-dispatch \

GCC being the 4.6 branch from SVN, built with:

../gcc-4_6-branch/configure --prefix=$installdir --disable-multilib

I understand that this is a high-level approach, and you'd like a
smaller test that demonstrates the problem.  But I don't have that
yet.  Any suggestions you can come up with at this level are more than
welcome.

Meanwhile, I dive deeper into analysis of the heap.

Erik

[-- Attachment #2: test.cpp --]
[-- Type: text/x-c++src, Size: 4187 bytes --]


#include <gcj/cni.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fstream>
#include <iostream>

#include "java/lang/Throwable.h"
#include "java/lang/Integer.h"
#include "java/lang/Boolean.h"
#include "java/util/Collection.h"
#include "java/util/ArrayList.h"
#include "java/io/File.h"

#include "org/apache/lucene/index/Term.h"
#include "org/apache/lucene/index/IndexReader.h"
#include "org/apache/lucene/index/IndexReader$FieldOption.h"
#include "org/apache/lucene/index/IndexWriter.h"
#include "org/apache/lucene/index/IndexWriter$MaxFieldLength.h"
#include "org/apache/lucene/search/IndexSearcher.h"
#include "org/apache/lucene/document/Document.h" 
#include "org/apache/lucene/document/Field.h" 
#include "org/apache/lucene/document/Field$Index.h" 
#include "org/apache/lucene/document/Field$Store.h" 
#include "org/apache/lucene/document/Fieldable.h" 
#include "org/apache/lucene/analysis/standard/StandardAnalyzer.h" 
#include "org/apache/lucene/analysis/Analyzer.h" 
#include "org/apache/lucene/util/Version.h"
#include "org/apache/lucene/store/Directory.h"
#include "org/apache/lucene/store/LockFactory.h"
#include "org/apache/lucene/store/SimpleFSDirectory.h"
#include "org/apache/lucene/store/SimpleFSLockFactory.h"

using namespace org::apache::lucene;
using namespace org::apache::lucene::util;
using namespace org::apache::lucene::index;
using namespace org::apache::lucene::document;
using namespace org::apache::lucene::search;
using namespace org::apache::lucene::store;
using namespace java::lang;
using namespace java::util;

Directory* makeDirectory(String* path) {
    LockFactory* lockFactory = new SimpleFSLockFactory();
    return (Directory*) new SimpleFSDirectory(new java::io::File(path), lockFactory);
}

void _Jv_RunGC(void);
long _Jv_GCTotalMemory (void);

int main(int argc, char *argv[]) {
    JvCreateJavaVM(NULL);
    JvAttachCurrentThread(NULL, NULL);
    
    JvInitClass(&Integer::class$);
    JvInitClass(&IndexReader::class$);
    JvInitClass(&IndexWriter::class$);
    JvInitClass(&IndexWriter$MaxFieldLength::class$);
    JvInitClass(&IndexReader$FieldOption::class$);
    JvInitClass(&Field::class$);
    JvInitClass(&Field$Store::class$);
    JvInitClass(&Field$Index::class$);
    JvInitClass(&Version::class$);

    store::Directory* indexDirectory = makeDirectory(JvNewStringUTF("index2"));
    String* fieldName = JvNewStringUTF("field");
    String* term = JvNewStringUTF("term");
    String* idFieldName = JvNewStringUTF("id");

    index::IndexWriter* writer = NULL;
    try { 
        analysis::Analyzer* analyzer = new analysis::standard::StandardAnalyzer(util::Version::LUCENE_30);
        writer = new IndexWriter(indexDirectory, analyzer, true, IndexWriter$MaxFieldLength::UNLIMITED);
        writer->close();
        writer = new IndexWriter(indexDirectory, analyzer, true, IndexWriter$MaxFieldLength::UNLIMITED);
        IndexReader* reader = IndexReader::open(indexDirectory);
        IndexSearcher* searcher = new IndexSearcher(reader);
        while ( true ) {
            static int i = 0;
            if (i++ % 1000 == 0) {
                printf("%d %d\n", i, _Jv_GCTotalMemory());
                fflush(stdout);
            }
            Document* doc = new Document();
            Fieldable* field = (Fieldable*) new Field(fieldName, term, Field$Store::YES, Field$Index::ANALYZED);
            doc->add(field);
            String* id = Integer::toString(i);
            Fieldable* idField = (Fieldable*) new Field(idFieldName, id, Field$Store::YES, Field$Index::ANALYZED);
            doc->add(idField);
            Term* term = new Term(idFieldName, id);
            writer->deleteDocuments(term);
            writer->addDocument(doc);
            searcher->close();
            reader->close();
            reader = IndexReader::open(indexDirectory);
            Collection* fields = reader->getFieldNames(IndexReader$FieldOption::ALL);
            searcher = new IndexSearcher(reader);
        }
    }
    catch (Throwable* e) {
        e->printStackTrace();
        if (writer != NULL) {
            writer->close();
        }
        return 1;
    }
    writer->close();
    
    return 0;
}


[-- Attachment #3: buildlucenelib.sh --]
[-- Type: application/x-sh, Size: 2022 bytes --]

[-- Attachment #4: buildAndRun.sh --]
[-- Type: application/x-sh, Size: 350 bytes --]


* Re: GC leaks debugging
  2011-04-06 14:30                             ` Erik Groeneveld
@ 2011-04-06 18:33                               ` Andrew Haley
  2011-04-06 18:39                                 ` David Daney
  2011-04-07 17:43                                 ` Erik Groeneveld
  0 siblings, 2 replies; 42+ messages in thread
From: Andrew Haley @ 2011-04-06 18:33 UTC (permalink / raw)
  To: Erik Groeneveld; +Cc: java

On 04/06/2011 03:29 PM, Erik Groeneveld wrote:
>>> The tests work fine on OpenJDK.  What could cause GCJ to grow the heap
>>> infinitely?
>>
>> I don't know because you won't show me the tests.
> 
> Yeah, of course, sorry.  I forgot to tell a few things.
> 
> This problem has been bothering me for quite some time now, and I
> decided to solve it once and for all.  I have written many different
> tests, but I am still not able to pinpoint a simple program that
> demonstrates the results.  The problem occurred initially while using
> Lucene, but later on also with Owlim.  Having written all kinds of
> test programs with Lucene, from doing almost nothing to fully fledged
> indexing, I can draw no definite conclusions yet.
> 
> I am now circling around the problem, trying to close in on it from
> different sides, and I seek your help in giving me hints on what to
> look for.  It is not lightly that I decided to bring it to this
> mailing list, knowing that it would claim many people's time.
> 
> Now the test I am running is attached.  It indexes very simple
> documents, each with a unique id, first ensuring the document is
> deleted.  On each loop, it reopens the index reader and searcher.
> This test starts to get into trouble above 10,000,000 loops
> (documents).  The problem is that when I remove code (I tested
> systematically), it only takes longer for the heap to explode.  The
> only test that ran properly was when I only created Documents and did
> not index them.  So perhaps it has something to do with I/O.

Just as a clue: there are thousands of unclosed FileInputStreams and
FileDescriptors.  At a mad guess, someone is not closing their files but
hoping that finalization will do it instead.

I was using:

	gnu::gcj::util::GCInfo::enumerate(JvNewStringUTF("LuceneDump"));

and

 /usr/local/bin/gc-analyze  TestDump001

Eventually, I get a

Exception in thread "main" java.lang.IndexOutOfBoundsException
   at java.nio.Buffer.checkIndex(Buffer.java:331)

from gc-analyze.  I may be able to find out what is causing that.

Andrew.


* Re: GC leaks debugging
  2011-04-06 18:33                               ` Andrew Haley
@ 2011-04-06 18:39                                 ` David Daney
  2011-04-07 17:43                                 ` Erik Groeneveld
  1 sibling, 0 replies; 42+ messages in thread
From: David Daney @ 2011-04-06 18:39 UTC (permalink / raw)
  To: Andrew Haley; +Cc: Erik Groeneveld, java

On 04/06/2011 11:33 AM, Andrew Haley wrote:
> On 04/06/2011 03:29 PM, Erik Groeneveld wrote:
>>>> The tests work fine on OpenJDK.  What could cause GCJ to grow the heap
>>>> infinitely?
>>>
>>> I don't know because you won't show me the tests.
>>
>> Yeah, of course, sorry.  I forgot to tell a few things.
>>
>> This problem has been bothering me for quite some time now, and I
>> decided to solve it once and for all.  I have written many different
>> tests, but I am still not able to pinpoint a simple program that
>> demonstrates the results.  The problem occurred initially while using
>> Lucene, but later on also with Owlim.  Having written all kinds of
>> test programs with Lucene, from doing almost nothing to fully fledged
>> indexing, I can draw no definite conclusions yet.
>>
>> I am now circling around the problem, trying to close in on it from
>> different sides, and I seek your help in giving me hints on what to
>> look for.  It is not lightly that I decided to bring it to this
>> mailing list, knowing that it would claim many people's time.
>>
>> Now the test I am running is attached.  It indexes very simple
>> documents, each with a unique id, first ensuring the document is
>> deleted.  On each loop, it reopens the index reader and searcher.
>> This test starts to get into trouble above 10,000,000 loops
>> (documents).  The problem is that when I remove code (I tested
>> systematically), it only takes longer for the heap to explode.  The
>> only test that ran properly was when I only created Documents and did
>> not index them.  So perhaps it has something to do with I/O.
>
> Just as a clue: there are thousands of unclosed FileInputStreams and
> FileDescriptors.  At a mad guess, someone is not closing their files but
> hoping that finalization will do it instead.
>
> I was using:
>
> 	gnu::gcj::util::GCInfo::enumerate(JvNewStringUTF("LuceneDump"));
>
> and
>
>   /usr/local/bin/gc-analyze  TestDump001
>
> Eventually, I get a
>
> Exception in thread "main" java.lang.IndexOutOfBoundsException
>     at java.nio.Buffer.checkIndex(Buffer.java:331)
>
> from gc-analyze.  I may be able to find out what is causing that.
>

:-(

Sorry about that.  It shouldn't crash, but unfortunately I don't have 
the time to fix it right now.

David Daney.


* Re: GC leaks debugging
  2011-04-06 18:33                               ` Andrew Haley
  2011-04-06 18:39                                 ` David Daney
@ 2011-04-07 17:43                                 ` Erik Groeneveld
  2011-04-08  8:12                                   ` Erik Groeneveld
  2011-04-08 13:56                                   ` Andrew Haley
  1 sibling, 2 replies; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-07 17:43 UTC (permalink / raw)
  To: Andrew Haley; +Cc: java

[-- Attachment #1: Type: text/plain, Size: 3095 bytes --]

>> Now the test I am running is attached.  It indexes very simple
>> documents, each with a unique id, first ensuring the document is
>> deleted.  On each loop, it reopens the index reader and searcher.
>> This test starts to get into trouble above 10,000,000 loops
>> (documents).  The problem is that when I remove code (I tested
>> systematically), it only takes longer for the heap to explode.  The
>> only test that ran properly was when I only created Documents and did
>> not index them.  So perhaps it has something to do with I/O.
>
> Just as a clue: there are thousands of unclosed FileInputStreams and
> FileDescriptors.

Thanks for trying.

The last good dump I have from the test, after 12 million cycles (it
then got killed), has nothing like File objects in it at all.  I also
saw other suspicious objects, but they all disappeared later on.  The
collector really works well!

See dump extract below (full dump attached).

What can you suggest from this?
What does (Java) mean?

*** Memory Usage Sorted by Total Size ***

  Total Size       Count       Size    Description
--------------     -----    --------   -----------------------------------
 17% 3,958,024 =  70,679 *        56 - (Java)
 15% 3,426,048 =  71,376 *        48 - GC_PTRFREE
  9% 2,097,152 =       1 * 2,097,152 - GC_NORMAL
  9% 2,085,160 =       7 *   297,880 - [I
  8% 1,908,240 =  79,510 *        24 - (Java)
  6% 1,376,928 =      42 *    32,784 - [C
  5% 1,279,104 =  79,944 *        16 - (Java)
  4% 1,048,592 =       1 * 1,048,592 - [I
  4%   954,480 =  19,885 *        48 - GC_NORMAL
  4%   917,952 =      28 *    32,784 - [B
  2%   642,896 =       2 *   321,448 - [I
  2%   622,896 =      19 *    32,784 - [I
  1%   355,840 =   8,896 *        40 - (Java)

> At a mad guess, someone is not closing their files but
> hoping that finalization will do it instead.

It crossed my mind also, but I see no traces of that.

Next hypothesis:
From analyzing graphs from the logs and comparing them to those of the
OpenJDK, I get the feeling that the collector loses control by not
collecting often enough.

The heap is quite unused/free, and remains so during the process.  It
seems that at some point, the heap fills up very quickly, and then the
collector decides to expand the heap instead of collecting (the
algorithm for deciding this seems rather complicated).  However, a
larger heap also causes the collector to collect less frequently.  So
the next time the heap fills up rapidly, it again decides to expand
the heap, again causing less frequent collections.  And so on.  I'll
post the graph data in a separate post if you want it.

And the next hypothesis:
Perhaps the program allocates many different (possibly large) sizes,
which remain on the free list, but cannot be used because the next
objects requested are slightly bigger.  I have to study this somewhat
more.
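
To illustrate what I mean, here is a toy model (invented sizes and a
simplified first-fit free list, not the GC's actual data structures):

#include <cstdio>
#include <list>

// Toy model of the hypothesis: each freed block is slightly smaller
// than the next request, so nothing on the free list is ever reused
// and the heap has to keep growing.
int main() {
    std::list<long> free_list;
    long heap_growth = 0;
    long request = 40000;
    for (int i = 0; i < 1000; i++) {
        bool served = false;
        for (std::list<long>::iterator it = free_list.begin();
             it != free_list.end(); ++it) {
            if (*it >= request) {  // first fit
                free_list.erase(it);
                served = true;
                break;
            }
        }
        if (!served)
            heap_growth += request;           // no fit: the heap expands
        free_list.push_back(request - 4096);  // freed block is a bit smaller
        request += 4096;                      // next request is a bit bigger
    }
    printf("heap grew by %ld bytes; %lu blocks stranded on the free list\n",
           heap_growth, (unsigned long) free_list.size());
    return 0;
}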

Just two questions:
1. What is a reasonable number of heap sections?  I have 131 here.
2. What is a reasonable number of free lists?  I have 60, which have
13,000+ entries.

Erik

[-- Attachment #2: TestDump012.analyze.tgz --]
[-- Type: application/x-gzip, Size: 83623 bytes --]


* Re: GC leaks debugging
  2011-04-07 17:43                                 ` Erik Groeneveld
@ 2011-04-08  8:12                                   ` Erik Groeneveld
  2011-04-08 13:56                                   ` Andrew Haley
  1 sibling, 0 replies; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-08  8:12 UTC (permalink / raw)
  To: Andrew Haley; +Cc: java

> Perhaps the program allocates many different (possibly large) sizes,
> which remain on the free list, but cannot be used because the next
> objects requested are slightly bigger.  I have to study this somewhat
> more.

This program has similar behavior, but without Lucene.  Although it
never allocates a block bigger than 10 MB, the heap keeps growing. It
was 70 MB before the program terminated (normally).

#include <stdio.h>
#include <cstdlib>

extern "C" void* _Jv_AllocBytes(int);
extern void _Jv_InitGC(void);
extern long _Jv_GCTotalMemory(void);
extern long _Jv_GCFreeMemory(void);

int main(int argc, char *argv[]) {
    _Jv_InitGC();
    for (int n = 4096; n < 10000000; n++) {
        // Allocate many different sizes: multiples of 4 kB, ranging up
        // to roughly n bytes (about 10 MB by the end).  The pointer is
        // dropped each iteration, so every block is garbage again by
        // the next collection.
        int size = ((rand() % n) / 4096 + 1) * 4096;
        void* p = _Jv_AllocBytes(size);
        (void) p;
        if (n % 1000 == 0) {
            // These two return long, so print them with %ld.
            printf("%d %d %ld %ld\n", n, size, _Jv_GCTotalMemory(),
                   _Jv_GCFreeMemory());
            fflush(stdout);
        }
    }
    return 0;
}

When I graph the printed data, its shape looks just like what I see
with Lucene.  While the heap is mostly empty, the collector keeps
growing it.
The question is, will it grow forever, or is there an upper bound?
What will it do when the free lists become really large?

Erik


* Re: GC leaks debugging
  2011-04-07 17:43                                 ` Erik Groeneveld
  2011-04-08  8:12                                   ` Erik Groeneveld
@ 2011-04-08 13:56                                   ` Andrew Haley
  2011-04-08 15:35                                     ` David Daney
                                                       ` (2 more replies)
  1 sibling, 3 replies; 42+ messages in thread
From: Andrew Haley @ 2011-04-08 13:56 UTC (permalink / raw)
  To: Erik Groeneveld; +Cc: java, Boehm, Hans

On 04/07/2011 06:42 PM, Erik Groeneveld wrote:

> Thanks for trying.
> 
> The last good dump I have from the test after 12 million cycles (it
> then got killed) has nothing File-related in it at all.  I also saw
> other suspicious objects, but they all disappeared later on.  The
> collector really works well!
> 
> See dump extract below (full dump attached).
> 
> What can you suggest from this?
> What does (Java) mean?

I'm not exactly sure.  This will take a bit of digging.

> *** Memory Usage Sorted by Total Size ***
> 
>   Total Size       Count       Size    Description
> --------------     -----    --------   -----------------------------------
>  17% 3,958,024 =  70,679 *        56 - (Java)
>  15% 3,426,048 =  71,376 *        48 - GC_PTRFREE
>   9% 2,097,152 =       1 * 2,097,152 - GC_NORMAL
>   9% 2,085,160 =       7 *   297,880 - [I
>   8% 1,908,240 =  79,510 *        24 - (Java)
>   6% 1,376,928 =      42 *    32,784 - [C
>   5% 1,279,104 =  79,944 *        16 - (Java)
>   4% 1,048,592 =       1 * 1,048,592 - [I
>   4%   954,480 =  19,885 *        48 - GC_NORMAL
>   4%   917,952 =      28 *    32,784 - [B
>   2%   642,896 =       2 *   321,448 - [I
>   2%   622,896 =      19 *    32,784 - [I
>   1%   355,840 =   8,896 *        40 - (Java)
> 
>> At a mad guess, someone is not closing their files but
>> hoping that finalization will do it instead.
> 
> It crossed my mind also, but I see no traces of that.
> 
> Next hypothesis:
> From analyzing graphs from the logs and comparing them to those of the
> OpenJDK, I get the feeling that the collector loses control by not
> collecting often enough.
> 
> The heap is quite unused/free, and remains so during the process.  It
> seems that at some point, the heap fills up very quickly, and then the
> collector decides to expand the heap instead of collecting (the
> algorithm for deciding this seems rather complicated).  However, a
> larger heap also causes the collector to collect less frequently.  So
> the next time the heap fills up rapidly, it again decides to expand
> the heap, again causing less frequent collections.  And so on.  I'll
> post the graph data in a separate post if you want it.

That makes sense as an explanation.

It looks, then, as though there isn't a leak at all.  The collector
does what it's supposed to do.  There is always the risk of this with
any non-compacting dynamic memory allocator.

> And the next hypothesis:
> Perhaps the program allocates many different (possibly large) sizes,
> which remain on the free list, but cannot be used because the next
> objects requested are slightly bigger.  I have to study this somewhat
> more.

I wonder.

> Just two questions:
> 1. What is a reasonable number of heap sections?  I have 131 here.
> 2. What is a reasonable number of free lists?  I have 60, which have
> 13,000+ entries.

Paging Hans Boehm.  Can you suggest ways to get the system to GC more
frequently?  Would doing so avoid this scenario?

Andrew.


* Re: GC leaks debugging
  2011-04-08 13:56                                   ` Andrew Haley
@ 2011-04-08 15:35                                     ` David Daney
  2011-04-08 15:53                                       ` Erik Groeneveld
  2011-04-08 15:48                                     ` Erik Groeneveld
  2011-04-09  1:17                                     ` Boehm, Hans
  2 siblings, 1 reply; 42+ messages in thread
From: David Daney @ 2011-04-08 15:35 UTC (permalink / raw)
  To: Andrew Haley; +Cc: Erik Groeneveld, java, Boehm, Hans

On 04/08/2011 06:55 AM, Andrew Haley wrote:
> On 04/07/2011 06:42 PM, Erik Groeneveld wrote:
> > I wonder.
>
>> Just two questions:
>> 1. What is a reasonable number of heap sections?  I have 131 here.
>> 2. What is a reasonable number of free lists?  I have 60, which have
>> 13,000+ entries.
>
> Paging Hans Boehm.  Can you suggest ways to get the system to GC more
> frequently?  Would doing so avoid this scenario?
>

Call _Jv_SetGCFreeSpaceDivisor().  I forget what we used to do, but 
bigger numbers cause it to collect more often.  The default may be 5, 
but if you make it 20 or 25 you really reduce heap growth.
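
For example (a sketch only; I haven't checked the exact declaration,
so treat the signature as an assumption):

// Hypothetical sketch: raise the free-space divisor before the
// allocation-heavy phase, so the collector collects more often
// instead of growing the heap.
extern long _Jv_SetGCFreeSpaceDivisor(long);  // assumed signature

void tune_gc() {
    _Jv_SetGCFreeSpaceDivisor(25);  // bigger divisor => more frequent GCs
}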



David Daney


* Re: GC leaks debugging
  2011-04-08 13:56                                   ` Andrew Haley
  2011-04-08 15:35                                     ` David Daney
@ 2011-04-08 15:48                                     ` Erik Groeneveld
  2011-04-09  1:17                                     ` Boehm, Hans
  2 siblings, 0 replies; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-08 15:48 UTC (permalink / raw)
  To: Andrew Haley; +Cc: java, Boehm, Hans

>> The last good dump I have from the test after 12 million cycles (it
>> then got killed) has nothing File-related in it at all.  I also saw
>> other suspicious objects, but they all disappeared later on.  The
>> collector really works well!
>>
>> Next hypothesis:
>> From analyzing graphs from the logs and comparing them to those of the
>> OpenJDK, I get the feeling that the collector loses control by not
>> collecting often enough.
>>
>> The heap is quite unused/free, and remains so during the process.  It
>> seems that at some point, the heap fills up very quickly, and then the
>> collector decides to expand the heap instead of collecting (the
>> algorithm for deciding this seems rather complicated).  However, a
>> larger heap also causes the collector to collect less frequently.  So
>> the next time the heap fills up rapidly, it again decides to expand
>> the heap, again causing less frequent collections.  And so on.  I'll
>> post the graph data in a separate post if you want it.
>
> That makes sense as an explanation.
>
> It looks, then, as though there isn't a leak at all.  The collector
> does what it's supposed to do.  There is always the risk of this with
> any non-compacting dynamic memory allocator.

Yes, a non-moving collector has its limits, but I still have hope for
improvement.

>> And the next hypothesis:
>> Perhaps the program allocates many different (possibly large) sizes,
>> which remain on the free list, but cannot be used because the next
>> objects requested are slightly bigger.  I have to study this somewhat
>> more.
>
> I wonder.
>
>> Just two questions:
>> 1. What is a reasonable number of heap sections?  I have 131 here.
>> 2. What is a reasonable number of free lists?  I have 60, which have
>> 13,000+ entries.

I found in the code that 60 is the upper bound.  And 13,000 seems like
a lot to me.  Altogether, the free space is about 700 MB of a 900 MB
heap.

Hans, is there a limit on the length of the free lists?
Adjacent blocks of the same size are coalesced; is there also
coalescing of blocks of different sizes at some point?

I tried a build with USE_MUNMAP, hoping that would help the collector
clean up the free lists, but I am still testing.

> Paging Hans Boehm.  Can you suggest ways to get the system to GC more
> frequently?  Would doing so avoid this scenario?
>
> Andrew.
>


* Re: GC leaks debugging
  2011-04-08 15:35                                     ` David Daney
@ 2011-04-08 15:53                                       ` Erik Groeneveld
  2011-04-08 15:57                                         ` Andrew Haley
  0 siblings, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-08 15:53 UTC (permalink / raw)
  To: David Daney; +Cc: Andrew Haley, java, Boehm, Hans

>> Paging Hans Boehm.  Can you suggest ways to get the system to GC more
>> frequently?  Would doing so avoid this scenario?
>>
>
> Call _Jv_SetGCFreeSpaceDivisor().  I forget what we used to do, but bigger
> numbers cause it to collect more often.  The default may be 5, but if you
> make it 20 or 25 you really reduce heap growth.

Yes, I saw this in the code and I already tried it with 30.  That did
not help. ;-(  I also tried an explicit GC_gcollect; that did not help
either.

I'll sleep another night; new clues will come... ;-)

Erik


* Re: GC leaks debugging
  2011-04-08 15:53                                       ` Erik Groeneveld
@ 2011-04-08 15:57                                         ` Andrew Haley
  0 siblings, 0 replies; 42+ messages in thread
From: Andrew Haley @ 2011-04-08 15:57 UTC (permalink / raw)
  To: java

On 04/08/2011 04:52 PM, Erik Groeneveld wrote:
>>> Paging Hans Boehm.  Can you suggest ways to get the system to GC more
>>> frequently?  Would doing so avoid this scenario?
>>>
>>
>> Call _Jv_SetGCFreeSpaceDivisor().  I forget what we used to do, but bigger
>> numbers cause it to collect more often.  The default may be 5, but if you
>> make it 20 or 25 you really reduce heap growth.
> 
> Yes, I saw this in the code and I already tried it with 30.  That did
> not help. ;-(  I also tried an explicit GC_gcollect; that did not help
> either.
> 
> I'll sleep another night; new clues will come... ;-)

I think you should add some code to detect a high watermark and
then trap to a breakpoint.
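
Something like this, say (a rough sketch reusing _Jv_GCTotalMemory
from your C++ test; the watermark value is just an example):

#include <csignal>

extern long _Jv_GCTotalMemory(void);

// Call this from the allocation path; once the heap crosses the
// watermark, trap so the heap state can be inspected in gdb.
static void check_watermark(long limit_bytes) {
    if (_Jv_GCTotalMemory() > limit_bytes)
        raise(SIGTRAP);  // breaks into the debugger if one is attached
}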

Andrew.


* RE: GC leaks debugging
  2011-04-08 13:56                                   ` Andrew Haley
  2011-04-08 15:35                                     ` David Daney
  2011-04-08 15:48                                     ` Erik Groeneveld
@ 2011-04-09  1:17                                     ` Boehm, Hans
  2011-04-09  8:47                                       ` Andrew Haley
  2011-04-09 10:56                                       ` Erik Groeneveld
  2 siblings, 2 replies; 42+ messages in thread
From: Boehm, Hans @ 2011-04-09  1:17 UTC (permalink / raw)
  To: Andrew Haley, Erik Groeneveld; +Cc: java

Finally getting around to looking at this thread:

It still looks to me like the 32-bit version had serious blacklisting issues, possibly in addition to other problems.  It looks to me like the root sets it's tracing are too large, and they contain too much essentially random data that ends up looking like pointers.  That may not be the only, or even dominant, problem.

But let's continue with the 64-bit version, which doesn't share that problem and should be fine.

What does the GC log look like for the C++ test program you posted?  How much live data is the GC finding?  That test program is pushing the envelope in a couple of different ways:

1. It potentially causes lots of fragmentation if some of those objects are not immediately reclaimed.  This garbage collector doesn't like large objects very much.  I would expect that even some copying collectors would run into trouble with this, in that copying such large objects is expensive, and they may try to avoid compacting them.  Others would probably do OK.

2. The heap expansion heuristic in the GC versions < 7 is not very robust in cases like this.  It tries to allocate a fixed fraction of the heap between collections, and grows the heap if that fails.  With large fragmentation, it may never succeed.  Version 7+ fixes this by trying to allocate a fixed fraction of live data, not the overall heap size.  That involves tracking live data, at least approximately.  Setting a hard limit on the heap size (GC_MAXIMUM_HEAP_SIZE environment variable) might be a workaround.
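
Schematically, the difference is this (a toy model with invented names, not the collector's actual code):

// Toy model of the two heap-expansion heuristics; all names invented.
struct HeapModel {
    long heap_size;           // total heap bytes
    long live_bytes;          // bytes found live by the last collection
    long allocd_since_gc;     // bytes allocated since the last collection
    long free_space_divisor;  // tuning knob

    // GC < 7: collect once a fixed fraction of the *heap* has been
    // allocated.  If fragmentation makes a large allocation fail first,
    // the heap is grown instead, which raises the target next time.
    bool should_collect_pre_v7() const {
        return allocd_since_gc >= heap_size / free_space_divisor;
    }

    // GC 7+: the target is a fraction of *live data*, which stays
    // bounded even after the heap itself has grown.
    bool should_collect_v7() const {
        return allocd_since_gc >= live_bytes / free_space_divisor;
    }
};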

> From: Andrew Haley [mailto:aph@redhat.com]
> 
> > Just two questions:
> > 1. What is a reasonable number of heap sections?  I have 131 here.
> > 2. What is a reasonable number of free lists?  I have 60, which have
> > 13,000+ entries.
> 
> Paging Hans Boehm.  Can you suggest ways to get the system to GC more
> frequently?  Would doing so avoid this scenario?
> 
> Andrew.
There is a static, configuration-dependent limit on the number of heap sections.  So long as you're not running out, I wouldn't worry about it.

What kind of free lists are we talking about?  If these are the large block lists printed by GC_dump(), then 13000+ entries sounds really high.  If so, could you post an appropriate excerpt of what's in these free lists?  For small object free lists, that's probably OK.

Hans


* Re: GC leaks debugging
  2011-04-09  1:17                                     ` Boehm, Hans
@ 2011-04-09  8:47                                       ` Andrew Haley
  2011-04-09 10:56                                       ` Erik Groeneveld
  1 sibling, 0 replies; 42+ messages in thread
From: Andrew Haley @ 2011-04-09  8:47 UTC (permalink / raw)
  To: Boehm, Hans; +Cc: Erik Groeneveld, java

On 09/04/11 02:14, Boehm, Hans wrote:
> Finally getting around to looking at this thread:
>
>
> 2. The heap expansion heuristic in the GC versions < 7 is not very
> robust in cases like this.  It tries to allocate a fixed fraction of
> the heap between collections, and grows the heap if that fails.

What would cause it to fail?

Andrew.


* Re: GC leaks debugging
  2011-04-09  1:17                                     ` Boehm, Hans
  2011-04-09  8:47                                       ` Andrew Haley
@ 2011-04-09 10:56                                       ` Erik Groeneveld
  2011-04-10 11:03                                         ` Erik Groeneveld
  1 sibling, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-09 10:56 UTC (permalink / raw)
  To: Boehm, Hans; +Cc: Andrew Haley, java

[-- Attachment #1: Type: text/plain, Size: 3555 bytes --]

On Sat, Apr 9, 2011 at 3:14 AM, Boehm, Hans <hans.boehm@hp.com> wrote:
> Finally getting around to looking at this thread:

It is very much appreciated.

> It still looks to me like the 32-bit version had serious blacklisting issues, possibly in addition to other problems.  It looks to me like the root sets it's tracing are too large, and they contain too much essentially random data that ends up looking like pointers.  That may not be the only, or even dominant, problem.

Problems seldom come alone...

> But let's continue with the 64-bit version that doesn't share the problem, and should be fine.
>
> What does the GC log look like for the C++ test program you posted?  How much live data is the GC finding?

The log I attached earlier, TestDump012.analyze.tgz (from gc-analyze),
contains the live blocks.  (The TreeMap nodes are not a problem; they
disappear completely later on.)

I only kept the last dump from GC_DUMP_REGULARLY because the dumps
became gigabytes big.  It is attached.

> That test program is pushing the envelope in a couple of different ways:
>
> 1. It potentially causes lots of fragmentation if some of those objects are not immediately reclaimed.  This garbage collector doesn't like large objects very much.  I would expect that even some copying collectors would run into trouble with this, in that copying such large objects is expensive, and they may try to avoid compacting them.  Others would probably do OK.

I am beginning to understand this.

The OpenJDK collector manages it.  The heap stays around 60 MB.

> 2. The heap expansion heuristic in the GC versions < 7 is not very robust in cases like this.  It tries to allocate a fixed fraction of the heap between collections, and grows the heap if that fails.  With large fragmentation, it may never succeed.  Version 7+ fixes this by trying to allocate a fixed fraction of live data, not the overall heap size.  That involves tracking live data, at least approximately.  Setting a hard limit on the heap size (GC_MAXIMUM_HEAP_SIZE environment variable) might be a workaround.

I tried different values for GC_MAXIMUM_HEAP_SIZE, but it just runs
until it gives OOM.

I also tried compiling the GC with USE_MUNMAP, as I understand that
that would be an ideal solution if fragmentation is the problem.
However, the program crashes after some time with:

java.lang.ClassCastException: java.util.TreeMap$Node cannot be cast to
java.lang.ref.WeakReference
   at org.apache.lucene.util.CloseableThreadLocal.get(liblucene-core.so)
   at org.apache.lucene.analysis.Analyzer.getPreviousTokenStream(liblucene-core.so)


>> From: Andrew Haley [mailto:aph@redhat.com]
>>
>> > Just two questions:
>> > 1. What is a reasonable number of heap sections?  I have 131 here.
>> > 2. What is a reasonable number of free lists?  I have 60, which have
>> > 13,000+ entries.
>>
>> Paging Hans Boehm.  Can you suggest ways to get the system to GC more
>> frequently?  Would doing so avoid this scenario?
>>
>> Andrew.
> There is a static, configuration-dependent limit on the number of heap sections.  So long as you're not running out, I wouldn't worry about it.

Ok.

> What kind of free lists are we talking about?  If these are the large block lists printed by GC_dump(), then 13000+ entries sounds really high.  If so, could you post an appropriate excerpt of what's in these free lists?  For small object free lists, that's probably OK.

Yes, I refer to the free lists from GC_dump(), see attachment.

Erik

[-- Attachment #2: run.lastdump.tar.bz2 --]
[-- Type: application/x-bzip2, Size: 102207 bytes --]


* Re: GC leaks debugging
  2011-04-09 10:56                                       ` Erik Groeneveld
@ 2011-04-10 11:03                                         ` Erik Groeneveld
  2011-04-12 18:43                                           ` Erik Groeneveld
  0 siblings, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-10 11:03 UTC (permalink / raw)
  To: Boehm, Hans; +Cc: Andrew Haley, java

On Sat, Apr 9, 2011 at 12:55 PM, Erik Groeneveld <erik@cq2.nl> wrote:
> On Sat, Apr 9, 2011 at 3:14 AM, Boehm, Hans <hans.boehm@hp.com> wrote:
>
> I only kept the last dump of GC_DUMP_REGULARLY because they became GBs
> big.  It is attached.
>
>> That test program is pushing the envelope in a couple of different ways:
>>
>> 1. It potentially causes lots of fragmentation if some of those objects are not immediately reclaimed.

I wrote a little Python script to analyze the heap dump from GC_dump
(attached earlier).  It finds all large blocks like:

 0x7f8198277000 size 180224 not black listed

then sorts them by address and scans them all to see whether any are adjacent.

None of the 13,000+ blocks are.

It also calculates the gaps between the blocks and adds them up.  It
leaves out the 13 largest gaps, as they are likely to represent
different heap sections rather than GC-allocated space.  The total
comes to 296 MB, roughly the amount of space in use by the program
according to GC_dump (273 MB).
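
In outline the script does this (the original is Python; here is the
same logic as a C++ sketch, assuming dump lines of the form shown
above, and omitting the removal of the 13 largest gaps):

#include <algorithm>
#include <cstdio>
#include <vector>

struct Block { unsigned long addr; unsigned long size; };

static bool by_addr(const Block &a, const Block &b) { return a.addr < b.addr; }

int main() {
    std::vector<Block> blocks;
    char line[256];
    Block b;
    // Collect all "0x... size N ..." lines from the GC_dump output.
    while (fgets(line, sizeof line, stdin))
        if (sscanf(line, "%lx size %lu", &b.addr, &b.size) == 2)
            blocks.push_back(b);
    std::sort(blocks.begin(), blocks.end(), by_addr);
    unsigned long gap_total = 0;
    int adjacent = 0;
    for (size_t i = 1; i < blocks.size(); i++) {
        unsigned long end = blocks[i-1].addr + blocks[i-1].size;
        if (blocks[i].addr == end)
            adjacent++;                        // two free blocks touch
        else if (blocks[i].addr > end)
            gap_total += blocks[i].addr - end; // space in use in between
    }
    printf("%d adjacent blocks, %lu bytes in gaps\n", adjacent, gap_total);
    return 0;
}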

So indeed there is a lot of fragmentation...

Any ideas about how to fight this?

I believe I am now able to revisit my earlier suspicion about the GC
losing control by not collecting often enough.  If it collected more
often, it could avoid fragmentation.  (Is that true?)  But now, when
fragmentation grows, it grows the heap and collects even less
frequently, thus allowing for more fragmentation.  And so on.  Is this
a possible scenario?

Erik


* Re: GC leaks debugging
  2011-04-10 11:03                                         ` Erik Groeneveld
@ 2011-04-12 18:43                                           ` Erik Groeneveld
  2011-04-13  8:11                                             ` Andrew Haley
  0 siblings, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-12 18:43 UTC (permalink / raw)
  To: Boehm, Hans; +Cc: Andrew Haley, java

Hans, Andrew,

Having concluded for myself that fragmentation causes the heap to grow
indefinitely, I tried to find a workaround.  Because changing various
(environment, build, runtime) variables didn't help, I started looking
at the code itself.

I found that all memory allocation calls from GCJ eventually come down
to GC_allochblk(), so I started gathering some statistics about it.
It turned out that it wasn't called that often at all, so I just added
a forced collect to see if my assumptions were right, risking much
slower runtime of course.  I tried:

@@ -50,6 +52,13 @@
     /* Do our share of marking work */
         if(GC_incremental && !GC_dont_gc)
            GC_collect_a_little_inner((int)n_blocks);
+
+    if (n_blocks >= 8) { // 32 kB and bigger often occur in fragmented heaps
+           GC_gcollect_inner();
+           printf(">>> forced collect <<<\n");
+    }
+
     h = GC_allochblk(lw, k, flags);
 #   ifdef USE_MUNMAP
        if (0 == h) {

I ran my test and ignored its slowness (only noticing that it was not
much slower).  But it works:

Before: 29,000,000 docs, 820 MB heap, OOM.
After: 67,000,000 docs, 490 MB heap.  Disk full ;-(

So frequent collection can certainly avoid fragmentation in this case.

Now the most curious thing of all: it is even faster than before:

Before: 1306 docs/second
After: 1582 docs/second

Apparently, it is better to collect a small heap more often than a
large heap less often.

Now this hack helped me to verify my assumptions, but it also works
well enough that I am going to try it to relieve some of the stress
that has been plaguing some production systems for quite some time
now.

Meanwhile, I'd like to pursue a better solution - less of a hack.  Any
interest in helping out?

Erik


* Re: GC leaks debugging
  2011-04-12 18:43                                           ` Erik Groeneveld
@ 2011-04-13  8:11                                             ` Andrew Haley
  2011-04-13 12:11                                               ` Bryce McKinlay
  2011-04-14  8:36                                               ` Erik Groeneveld
  0 siblings, 2 replies; 42+ messages in thread
From: Andrew Haley @ 2011-04-13  8:11 UTC (permalink / raw)
  To: Erik Groeneveld; +Cc: Boehm, Hans, java

On 12/04/11 19:42, Erik Groeneveld wrote:
> Hans, Andrew,
>
> Having concluded for myself that fragmentation causes the heap to grow
> indefinitely, I tried to find a workaround.  Because changing various
> (environment, build, runtime) variables didn't help, I started looking
> at the code itself.
>
> I found that all memory allocation calls from GCJ eventually come down
> to GC_allochblk(), so I started gathering some statistics about it.
> It turned out that it wasn't called that often at all, so I just added
> a forced collect to see if my assumptions were right, risking much
> slower runtime of course.  I tried:
>
> @@ -50,6 +52,13 @@
>      /* Do our share of marking work */
>          if(GC_incremental && !GC_dont_gc)
>             GC_collect_a_little_inner((int)n_blocks);
> +
> +    if (n_blocks >= 8) { // 32 kB and bigger often occur in fragmented heaps
> +           GC_gcollect_inner();
> +           printf(">>> forced collect <<<\n");
> +    }
> +
>      h = GC_allochblk(lw, k, flags);
>  #   ifdef USE_MUNMAP
>         if (0 == h) {
>
> I ran my test and ignored its slowness (only noticing that it was not
> much slower).  But it works:
>
> Before: 29,000,000 docs, 820 MB heap, OOM.
> After: 67,000,000 docs, 490 MB heap.  Disk full ;-(
>
> So frequent collection can certainly avoid fragmentation in this case.
>
> Now the most curious thing of all: it is even faster than before:
>
> Before: 1306 docs/second
> After: 1582 docs/second
>
> Apparently, it is better to collect a small heap more often than a
> large heap less often.

This is very interesting.

> Now this hack helped me to verify my assumptions, but it also works
> well enough that I am going to try it to relieve some of the stress
> that has been plaguing some production systems for quite some time
> now.
>
> Meanwhile, I'd like to pursue a better solution - less of a hack.  Any
> interest in helping out?

Before you go any further, it's worth remembering that you're using an
old version of the GC.  I've been told that "Of course, [the new gc]
will require modification of boehm.cc in GCJ" but not why.

It looks like you're making progress, but I urge you to move to the
new gc or your time may be wasted on the old one.

Andrew.


* Re: GC leaks debugging
  2011-04-13  8:11                                             ` Andrew Haley
@ 2011-04-13 12:11                                               ` Bryce McKinlay
  2011-04-13 14:27                                                 ` Andrew Haley
  2011-04-14  8:36                                               ` Erik Groeneveld
  1 sibling, 1 reply; 42+ messages in thread
From: Bryce McKinlay @ 2011-04-13 12:11 UTC (permalink / raw)
  To: Andrew Haley; +Cc: Erik Groeneveld, Boehm, Hans, java

On Wed, Apr 13, 2011 at 9:11 AM, Andrew Haley <aph@redhat.com> wrote:
> On 12/04/11 19:42, Erik Groeneveld wrote:

>> Meanwhile, I'd like to pursue a better solution - less of a hack.  Any
>> interest in helping out?
>
> Before you go any further, it's worth remembering that you're using an
> old version of the GC.  I've been told that "Of course, [the new gc]
> will require modification of boehm.cc in GCJ" but not why.
>
> It looks like you're making progress, but I urge you to move to the
> new gc or your time may be wasted on the old one.

Now would certainly be a good time to work on a boehm-gc import, with
GCC in stage 1.

Note that Kai Tietz has recently expressed interest in working on this, too:
http://gcc.gnu.org/ml/gcc/2011-04/msg00006.html

"svn import" and "svn merge" are your friends here. The unmodified
upstream sources from the last time we merged are tagged as GC_6_6.
IIRC, the local modifications at that time were fairly minimal, mostly
just configure changes. If you import the current sources with a
similar tag, it'll make it easy to get a diff of the GCC tree's
current divergences, which can then be submitted upstream if
appropriate.

Bryce


* Re: GC leaks debugging
  2011-04-13 12:11                                               ` Bryce McKinlay
@ 2011-04-13 14:27                                                 ` Andrew Haley
  0 siblings, 0 replies; 42+ messages in thread
From: Andrew Haley @ 2011-04-13 14:27 UTC (permalink / raw)
  To: java

On 04/13/2011 01:11 PM, Bryce McKinlay wrote:
> On Wed, Apr 13, 2011 at 9:11 AM, Andrew Haley <aph@redhat.com> wrote:
>> On 12/04/11 19:42, Erik Groeneveld wrote:
> 
>>> Meanwhile, I'd like to pursue a better solution - less of a hack.  Any
>>> interest in helping out?
>>
>> Before you go any further, it's worth remembering that you're using an
>> old version of the GC.  I've been told that "Of course, [the new gc]
>> will require modification of boehm.cc in GCJ" but not why.
>>
>> It looks like you're making progress, but I urge you to move to the
>> new gc or your time may be wasted on the old one.
> 
> Now would certainly be a good time to work on a boehm-gc import, with
> GCC in stage 1.
> 
> Note that Kai Tietz has recently expressed interest in working on this, too:
> http://gcc.gnu.org/ml/gcc/2011-04/msg00006.html
> 
> "svn import" and "svn merge" are your friends here. The unmodified
> upstream sources from the last time we merged are tagged as GC_6_6.
> IIRC, the local modifications at that time were fairly minimal, mostly
> just configure changes. If you import the current sources with a
> similar tag, it'll make it easy to get a diff of the GCC tree's
> current divergences, which can then be submitted upstream if
> appropriate.

I think we're a bit further down the road than that, judging by the
traffic on the gc list.  Ivan Madanski said

> I've reviewed the patches from gcc/boehm-gc. The following ones I'm
> unable to process (and some at GCC side should have a look at, may
> be the original patch authors could adopt their ones for GC v7+):
>
> bgc-167681 - darwin-specific (probably this one no longer needed for v7+);
> bgc-171516 - testsuite-specific;
> bgc-144045 bgc-150269 bgc-151013 bgc-151627 bgc-166028 - scripts-specific ones;
> bgc-114869, bgc-124081 - specific to thread suspension.

So these are the only patches that require special handling by someone
on the gcc side.

Andrew.


* Re: GC leaks debugging
  2011-04-13  8:11                                             ` Andrew Haley
  2011-04-13 12:11                                               ` Bryce McKinlay
@ 2011-04-14  8:36                                               ` Erik Groeneveld
  2011-04-14  8:43                                                 ` Andrew Haley
  1 sibling, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-14  8:36 UTC (permalink / raw)
  To: Andrew Haley; +Cc: Boehm, Hans, java

On Wed, Apr 13, 2011 at 10:11 AM, Andrew Haley <aph@redhat.com> wrote:
> Before you go any further, it's worth remembering that you're using an
> old version of the GC.  I've been told that "Of course, [the new gc]
> will require modification of boehm.cc in GCJ" but not why.

Yes, that is true, but most production systems run even older
versions, and I want a patch that works on those versions.  I am now
building packages for Debian lenny (gcc 4.3.2) and Redhat 5 (even
older).  I see this patch as a workaround that gives me time for the
next step.

> It looks like you're making progress, but I urge you to move to the
> new gc or your time may be wasted on the old one.

The next step is to upgrade to the latest GCC, the latest GC, and the
latest Lucene, and have that running on Debian Testing.

I will be looking into getting USE_MUNMAP to work, as I think that
would be a more definitive solution.  However, if I enable it, it
segfaults.

Are there any specific obstacles for USE_MUNMAP when used in GCJ, or
should it just work?

Erik


* Re: GC leaks debugging
  2011-04-14  8:36                                               ` Erik Groeneveld
@ 2011-04-14  8:43                                                 ` Andrew Haley
  2011-04-14 10:02                                                   ` Erik Groeneveld
  0 siblings, 1 reply; 42+ messages in thread
From: Andrew Haley @ 2011-04-14  8:43 UTC (permalink / raw)
  To: Erik Groeneveld; +Cc: Boehm, Hans, java

On 14/04/11 09:35, Erik Groeneveld wrote:
> On Wed, Apr 13, 2011 at 10:11 AM, Andrew Haley <aph@redhat.com> wrote:
>> It looks like you're making progress, but I urge you to move to the
>> new gc or your time may be wasted on the old one.
>
> The next step is to upgrade to the latest GCC, the latest GC, and the
> latest Lucene, and have that running on Debian Testing.
>
> I will be looking into getting USE_MUNMAP to work, as I think that
> would be a more definitive solution.  However, if I enable it, it
> segfaults.
>
> Are there any specific obstacles for USE_MUNMAP when used in GCJ, or
> should it just work?

MUNMAP, hmm.  I don't know; it should work but there's no experience.
Why do you want to munmap, anyway?  Are you running out of swap space?

Andrew.


* Re: GC leaks debugging
  2011-04-14  8:43                                                 ` Andrew Haley
@ 2011-04-14 10:02                                                   ` Erik Groeneveld
  2011-04-14 10:50                                                     ` Andrew Haley
  0 siblings, 1 reply; 42+ messages in thread
From: Erik Groeneveld @ 2011-04-14 10:02 UTC (permalink / raw)
  To: Andrew Haley; +Cc: Boehm, Hans, java

On Thu, Apr 14, 2011 at 10:43 AM, Andrew Haley <aph@redhat.com> wrote:
>> I will be looking into getting USE_MUNMAP to work, as I think that
>> would be a more definitive solution.  However, if I enable it, it
>> segfaults.
>>
>> Are there any specific obstacles for USE_MUNMAP when used in GCJ, or
>> should it just work?
>
> MUNMAP, hmm.  I don't know; it should work but there's no experience.

Ok, that is important information if I start debugging: it is perhaps
only a bug, not a fundamental problem per se.

> Why do you want to munmap, anyway?  Are you running out of swap space?

Well, I assume that if the GC unmaps a page (hblk), it can always be
mapped at any other location when a new block is needed, effectively
circumventing fragmentation completely.  However, I did not dive into
the exact usage of mmap, and I assumed a straightforward use of mmap
by the GC.  I might be wrong....


* Re: GC leaks debugging
  2011-04-14 10:02                                                   ` Erik Groeneveld
@ 2011-04-14 10:50                                                     ` Andrew Haley
  2011-04-15  7:32                                                       ` Erik J Groeneveld
  0 siblings, 1 reply; 42+ messages in thread
From: Andrew Haley @ 2011-04-14 10:50 UTC (permalink / raw)
  To: java

On 04/14/2011 11:01 AM, Erik Groeneveld wrote:
> On Thu, Apr 14, 2011 at 10:43 AM, Andrew Haley <aph@redhat.com> wrote:
>>> I will be looking into getting USE_MUNMAP to work, as I think that
>>> would be a more definitive solution.  However, if I enable it, it
>>> segfaults.
>>>
>>> Are there any specific obstacles for USE_MUNMAP when used in GCJ, or
>>> should it just work?
>>
>> MUNMAP, hmm.  I don't know; it should work but there's no experience.
> 
> Ok, that is important information if I start debugging: it is perhaps
> only a bug, not a fundamental problem per se.

Maybe.

>> Why do you want to munmap, anyway?  Are you running out of swap space?
> 
> Well, I assume that if the GC unmaps a page (hblk), it can always be
> mapped at any other location when a new block is needed, effectively
> circumventing fragmentation completely.

AFAIK it just returns the memory to the OS; I don't think it affects
anything else.

Andrew.


* Re: GC leaks debugging
  2011-04-14 10:50                                                     ` Andrew Haley
@ 2011-04-15  7:32                                                       ` Erik J Groeneveld
  0 siblings, 0 replies; 42+ messages in thread
From: Erik J Groeneveld @ 2011-04-15  7:32 UTC (permalink / raw)
  To: Andrew Haley; +Cc: java



On 14 Apr 2011, at 12:49, Andrew Haley <aph@redhat.com> wrote:

>>> Why do you want to munmap, anyway?  Are you running out of swap space?
>> 
>> Well, I assume that if the GC unmaps a page (hblk), it can always be
>> mapped at any other location when a new block is needed, effectively
>> circumventing fragmentation completely.
> 
> AFAIK it just returns the memory to the OS;

It turns out it doesn't.  It remaps the block as not accessible, and
only does so after the block hasn't been used for a while.
There is also some complicated code that seems to remap all such
blocks into a larger block.
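
So in effect it does something like this (my sketch of the idea, not
the collector's actual code; error handling omitted):

#include <sys/mman.h>

// Re-map a retired block as inaccessible: the virtual address range
// stays reserved by the process, but the kernel can discard the
// backing pages.
static void unmap_block(void *start, size_t bytes) {
    mmap(start, bytes, PROT_NONE,
         MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED | MAP_NORESERVE, -1, 0);
}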

> I don't think it affects
> anything else.

I am still hoping it does. ;-)

Erik


