>> Now the test I am running is attached.  It indexes a very simple
>> document with a unique id each time, first making sure it is deleted.
>> And each loop, it reopens the index reader and searcher.  This test
>> starts to get into trouble above 10,000,000 loops (documents).  The
>> problem is that when I remove code (I tested systematically), it only
>> takes longer for the heap to explode.  The only test that ran properly
>> was when I only created Documents and did not index them.  So perhaps
>> it has something to do with I/O.
>
> Just as a clue: there are thousands of unclosed FileInputStreams and
> FileDescriptors.

Thanks for trying. The last good dump I have from the test after 12
million cycles (it then got killed) has nothing like File stuff in it at
all. I also saw other suspicious objects, but they all disappeared later
on. The collector really works well! See the dump extract below (full
dump attached). What can you suggest from this? What does (Java) mean?

*** Memory Usage Sorted by Total Size ***

    Total Size      Count        Size   Description
 --------------   --------   ---------  -----------------------------------
 17%  3,958,024 =   70,679 *        56  - (Java)
 15%  3,426,048 =   71,376 *        48  - GC_PTRFREE
  9%  2,097,152 =        1 * 2,097,152  - GC_NORMAL
  9%  2,085,160 =        7 *   297,880  - [I
  8%  1,908,240 =   79,510 *        24  - (Java)
  6%  1,376,928 =       42 *    32,784  - [C
  5%  1,279,104 =   79,944 *        16  - (Java)
  4%  1,048,592 =        1 * 1,048,592  - [I
  4%    954,480 =   19,885 *        48  - GC_NORMAL
  4%    917,952 =       28 *    32,784  - [B
  2%    642,896 =        2 *   321,448  - [I
  2%    622,896 =       19 *    32,784  - [I
  1%    355,840 =    8,896 *        40  - (Java)

> At a mad guess, someone is not closing their files but
> hoping that finalization will do it instead.

It crossed my mind too, but I see no traces of that.

Next hypothesis: from analyzing graphs from the logs and comparing them
to those of the OpenJDK, I get the feeling that the collector loses
control by not collecting often enough. The heap is largely unused/free,
and remains so during the run. It seems that at some point the heap
fills up very quickly, and the collector then decides to expand the heap
instead of collecting (the algorithm for deciding this seems rather
complicated). However, a larger heap also makes the collector collect
less frequently. So the next time the heap fills up rapidly, it again
decides to expand the heap, again causing even less frequent
collections. And so on. I'll post the graph data in a separate post if
you want it.

And the next hypothesis: perhaps the program allocates many different
(possibly large) sizes, which remain on the free lists but cannot be
reused because the next objects requested are slightly bigger. I have to
study this some more.

Just two questions:

1. What is a reasonable number of heap sections? I have 131 here.
2. What is a reasonable number of free lists? I have 60, which hold
   13,000+ entries.

Erik
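
P.S. For anyone following along without the attachment: below is a rough
sketch of the kind of loop described at the top of this mail, not the
attached test itself. It assumes a Lucene 2.x-style API (IndexWriter,
IndexReader and IndexSearcher opened by path); the class name, index
directory and the 12,000,000 iteration count are placeholders.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;

public class IndexLoopSketch {
    public static void main(String[] args) throws Exception {
        String dir = "loop-test-index";

        // Create an empty index once, up front.
        new IndexWriter(dir, new StandardAnalyzer(), true).close();

        for (long i = 0; i < 12000000L; i++) {
            String id = Long.toString(i);

            // First make sure no document with this id exists.
            IndexReader deleter = IndexReader.open(dir);
            deleter.deleteDocuments(new Term("id", id));
            deleter.close();

            // Index one very simple document with a unique id.
            IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), false);
            Document doc = new Document();
            doc.add(new Field("id", id, Field.Store.YES, Field.Index.UN_TOKENIZED));
            writer.addDocument(doc);
            writer.close();

            // Reopen the reader and searcher every cycle, explicitly
            // closing both so nothing is left to finalization.
            IndexReader reader = IndexReader.open(dir);
            IndexSearcher searcher = new IndexSearcher(reader);
            // ... search with `searcher` here ...
            searcher.close();
            reader.close();
        }
    }
}

Note that every reader, writer and searcher is closed in the same cycle
that opened it; if any of those close() calls were missing, the unclosed
FileInputStreams/FileDescriptors would be left waiting for finalization,
which is exactly what the earlier guess was about.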