This is the mail archive of the java@gcc.gnu.org mailing list for the Java project.



Re: Heap fragmentation (Was: Debugging "Leaks" With Boehm-GC)


Hans Boehm wrote:
> On Sat, 14 Jan 2006, David Daney wrote:
>
> > The larger the divisor, the more time spent in GC, but the less
> > likely you are to end up in the pathological situation where there
> > is plenty of free memory, but it is all in pools for objects of a
> > size other than the one you are trying to allocate.
> >
> > I think the default value is probably appropriate for cases where
> > there is no upper bound on memory size.  For bounded memory size,
> > we have found that a larger divisor is needed.
> >
> > David Daney
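
A minimal sketch of that tuning against the collector's C API. The
header path, the divisor value of 6, and the GC_get_heap_size()
printout are illustrative assumptions, not recommendations; libgcj
reaches the same knob through the collector it embeds:

/* Minimal sketch, assuming a C program linked against the Boehm
 * collector (libgc).  The divisor value is illustrative. */
#include <gc.h>          /* sometimes installed as <gc/gc.h> */
#include <stdio.h>

int main(void)
{
    /* Set before the collector initializes, so even the first
     * heap-growth decisions see it.  A larger divisor makes the GC
     * collect more often instead of growing the heap. */
    GC_free_space_divisor = 6;
    GC_INIT();

    void *p = GC_MALLOC(1024);   /* ordinary collectable allocation */
    (void)p;
    printf("heap size: %lu bytes\n",
           (unsigned long)GC_get_heap_size());
    return 0;
}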


> If the issue here is really fragmentation, it would be nice to
> understand it better.

I already understand perfectly what was happening to me WRT this issue.


> A call to GC_dump() or setting the GC_DUMP_REGULARLY environment
> variable should tell you what's in the heap.

And GC_dump was instrumental in achieving the understanding.
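
As a sketch of how such a checkpointed dump might look; the
report_heap() wrapper and its labeling are invented for illustration,
while GC_dump() itself is the collector's own entry point and writes
heap block and free-list information to stderr:

/* Sketch of dumping collector state at interesting points. */
#include <gc.h>
#include <stdio.h>

static void report_heap(const char *phase)
{
    /* Label the dump so successive snapshots can be told apart. */
    fprintf(stderr, "=== heap dump: %s ===\n", phase);
    GC_dump();
}

Alternatively, setting GC_DUMP_REGULARLY in the environment produces
dumps with no code changes at all.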


> Really fragmentation per se can only occur if either:
>
> 1) The application drastically changes the object size mix it needs
> for different phases, and some other things go wrong.  And even then,
> things shouldn't get too bad.  Or


For me, this was the problem.


For certain definitions of 'drastically', and for heaps with certain size limits using the default divisor, "shouldn't get too bad" is equivalent to "out of memory".

Now I don't know if you would call it classical memory fragmentation, but what ends up happening is that the memory pools for small-sized objects grow to the point that allocation of objects of a newly seen size fails, even though there is plenty of space for objects for which there is a large pool.
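
A toy illustration of the mechanism (this is not the libgcj test case;
the sizes, counts, and the 8 MB bound are invented for the example):

#include <gc.h>
#include <stdio.h>

#define SMALL_COUNT 100000

static void *keep[SMALL_COUNT];   /* static data is scanned as a GC root */

int main(void)
{
    GC_INIT();
    GC_set_max_heap_size(8UL * 1024 * 1024);   /* bounded heap */

    /* Phase one: fill the heap with one small size class. */
    for (int i = 0; i < SMALL_COUNT; i++)
        keep[i] = GC_MALLOC(16);

    /* Drop every other object; the blocks stay half occupied, so the
     * freed space remains on the 16-byte free lists. */
    for (int i = 0; i < SMALL_COUNT; i += 2)
        keep[i] = NULL;
    GC_gcollect();

    /* Phase two: a new size class.  Free 16-byte slots inside
     * still-live blocks cannot satisfy 512-byte requests, so these
     * allocations need fresh blocks and may hit the heap bound. */
    for (int i = 0; i < 1000; i++) {
        if (GC_MALLOC(512) == NULL) {
            printf("allocation of the new size failed at i=%d\n", i);
            return 1;
        }
    }
    printf("all phase-two allocations succeeded\n");
    return 0;
}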

I forget the actual algorithm the GC uses, but IIRC it only considers overall memory usage when deciding when to run a GC cycle. For limited-memory situations it might be better to also consider each pool's size relative to the total available memory, and not let the pools take too much of it. Perhaps if a certain pool is larger than 5-10% of the total available memory, always run GC instead of expanding the pool; in effect, a dynamic divisor based on pool size.
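
Expressed as code, that heuristic might look like the following. To be
clear, this is not how the collector actually decides; pool_bytes(),
heap_limit_bytes(), and POOL_FRACTION are all invented stand-ins for
collector internals:

#include <stdbool.h>
#include <stddef.h>

/* Collect if a single pool exceeds 1/10 of the memory bound. */
#define POOL_FRACTION 10

/* Assumed accessors standing in for collector internals. */
extern size_t pool_bytes(size_t size_class);   /* bytes held by one pool */
extern size_t heap_limit_bytes(void);          /* configured memory bound */

/* Decide whether to run a collection instead of growing the pool for
 * `size_class`.  In effect the free-space divisor tightens as any one
 * pool starts to dominate a bounded heap. */
static bool should_collect_before_expanding(size_t size_class)
{
    return pool_bytes(size_class) > heap_limit_bytes() / POOL_FRACTION;
}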

I should also note that all my research into these problems was with libgcj-3.3.1. Since then we have always used a larger divisor and never had any more problems.

David Daney


