GC leaks debugging
Fri Apr 8 13:56:00 GMT 2011
On 04/07/2011 06:42 PM, Erik Groeneveld wrote:
> Thanks for trying.
> The last good dump I have from the test after 12 million cycles (it
> then got killed) has nothing like File stuff at all. I also saw other
> suspicious objects, but they all disappeared later on. The collector
> really works well!
> See dump extract below (full dump attached).
> What can you suggest from this?
> What does (Java) mean?
I'm not exactly sure. This will take a bit of digging.
> *** Memory Usage Sorted by Total Size ***
> Total Size Count Size Description
> -------------- ----- -------- -----------------------------------
> 17% 3,958,024 = 70,679 * 56 - (Java)
> 15% 3,426,048 = 71,376 * 48 - GC_PTRFREE
> 9% 2,097,152 = 1 * 2,097,152 - GC_NORMAL
> 9% 2,085,160 = 7 * 297,880 - [I
> 8% 1,908,240 = 79,510 * 24 - (Java)
> 6% 1,376,928 = 42 * 32,784 - [C
> 5% 1,279,104 = 79,944 * 16 - (Java)
> 4% 1,048,592 = 1 * 1,048,592 - [I
> 4% 954,480 = 19,885 * 48 - GC_NORMAL
> 4% 917,952 = 28 * 32,784 - [B
> 2% 642,896 = 2 * 321,448 - [I
> 2% 622,896 = 19 * 32,784 - [I
> 1% 355,840 = 8,896 * 40 - (Java)
>> At a mad guess, someone is not closing their files but
>> hoping that finalization will do it instead.
> It crossed my mind also, but I see no traces of that.
> Next hypothesis:
> From analyzing graphs from the logs and comparing them to those of the
> OpenJDK, I get the feeling that the collector loses control by not
> collecting often enough.
> The heap is quite unused/free, and remains so during the process. It
> seems that at some point, the heap fills up very quickly, and then the
> collector decides to expand the heap instead of collecting (the
> algorithm for deciding this seems rather complicated). However, a
> larger heap also causes the collector to collect less frequently. So
> the next time the heap fills up rapidly, it again decides to expand
> the heap, again causing less frequent collections. And so on. I'll
> post the graph data in a separate post if you want it.
That makes sense as an explanation.
It looks, then, as though there isn't a leak at all. The collector
does what it's supposed to do. There is always the risk of this with
any non-compacting dynamic memory allocator.
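If you want to confirm the expand-instead-of-collect behaviour from
inside the process, the collector exports enough counters to log it
directly. A minimal C sketch (assuming your libgc exports
GC_get_heap_size(), GC_get_free_bytes() and the GC_gc_no collection
counter, as the 7.x headers do; the loop is just a stand-in for your
real workload):

#include <gc/gc.h>
#include <stdio.h>

/* One line per call: current heap size, bytes sitting on the free
   lists, and how many collections have run so far.  If the hypothesis
   is right, heap size will jump while the collection counter barely
   moves. */
static void gc_report(void)
{
    fprintf(stderr, "heap=%lu free=%lu collections=%lu\n",
            (unsigned long) GC_get_heap_size(),
            (unsigned long) GC_get_free_bytes(),
            (unsigned long) GC_gc_no);
}

int main(void)
{
    GC_INIT();
    for (long i = 0; i < 1000000; i++) {
        (void) GC_MALLOC(64);          /* stand-in allocation */
        if (i % 100000 == 0)
            gc_report();
    }
    return 0;
}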
> And the next hypothesis:
> Perhaps the program allocates many different (possibly large) sizes,
> which remain on the free list, but cannot be used because the next
> objects requested are slightly bigger. I have to study this somewhat.
> Just two questions:
> 1. What is a reasonable number of heap sections? I have 131 here.
> 2. What is a reasonable number of free lists? I have 60, which have
> 13,000+ entries.
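I can't say offhand whether 131 heap sections or 60 free lists with
13,000+ entries is pathological. But you can test the
slightly-bigger-each-time hypothesis in isolation and watch what the
free lists do. A C sketch (the sizes are invented for illustration;
GC_dump() prints the heap sections and free lists, though only if
libgc was built with the debugging dumps enabled):

#include <gc/gc.h>

int main(void)
{
    GC_INIT();
    /* Hypothesized worst case: every request is slightly larger than
       the last, so blocks freed earlier are never big enough to
       satisfy the next allocation, and the heap expands even though
       most of it is nominally free. */
    for (unsigned long sz = 8 * 1024; sz <= 256 * 1024; sz += 1024)
        (void) GC_MALLOC_ATOMIC(sz);
    GC_dump();   /* heap sections and free lists go to the GC log */
    return 0;
}

If the heap stays flat here, the collector's block coalescing is doing
its job and fragmentation probably isn't your problem; if it grows,
you've reproduced it.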
Paging Hans Boehm: can you suggest ways to get the system to GC more
frequently? Would doing so avoid this scenario?
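Until he answers: the knob I'd try is the free-space divisor, which
controls how much free space the collector wants to see before it
expands the heap rather than collecting. A sketch, assuming your build
exports the GC_free_space_divisor variable (recent versions also read
a GC_FREE_SPACE_DIVISOR environment variable at startup, which avoids
rebuilding):

#include <gc/gc.h>

int main(void)
{
    /* The default divisor is 3.  Raising it makes the collector
       collect after a smaller fraction of new allocation instead of
       growing the heap, trading CPU time for a smaller heap.  Safest
       to set it before the collector initializes. */
    GC_free_space_divisor = 8;
    GC_INIT();
    /* ... run the workload.  GC_gcollect() can also be called at
       known quiet points to force a full collection by hand. ... */
    return 0;
}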