Hi again,
I have another question:
If I keep allocating chunks of memory - say, with "new
byte[10000]" - and of course keep references to the allocated
arrays - I will at some point run out of "free memory" in the sense of
"Runtime#freeMemory()". At that point some mechanism in GCJ requests
more memory from the system, so that "freeMemory()" goes back up, and
the same goes for "Runtime#totalMemory()".
That's fine...
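For reference, this is roughly the kind of loop I use to observe it
(just a minimal sketch; the chunk size and print interval are arbitrary):

    import java.util.ArrayList;
    import java.util.List;

    public class HeapGrowth {
        public static void main(String[] args) {
            List keep = new ArrayList();          // hold the references so the GC cannot reclaim them
            Runtime rt = Runtime.getRuntime();
            for (int i = 0; ; i++) {
                keep.add(new byte[10000]);        // allocate another chunk
                if (i % 100 == 0) {
                    System.out.println("total=" + rt.totalMemory()
                            + " free=" + rt.freeMemory());
                }
            }
        }
    }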
However, the amount of memory claimed each time this low-memory
boundary is approached seems to be proportional to the previous
total-memory increase - in the log below, each chunk is roughly 4/3 of
the previous one, so the requests grow geometrically.
In many cases this may be the simplest and most sensible algorithm. But
when I hit the ceiling while allocating the next chunk - say, going
from 20 MB total to what would have been 26 MB total (according to
Runtime#totalMemory()) - my application terminates because that amount
simply isn't available from the system:
*** MEM CHUNK TAKEN: 98304
*** MEM CHUNK TAKEN: 131072
*** MEM CHUNK TAKEN: 176128
*** MEM CHUNK TAKEN: 233472
*** MEM CHUNK TAKEN: 311296
*** MEM CHUNK TAKEN: 417792
*** MEM CHUNK TAKEN: 557056
*** MEM CHUNK TAKEN: 741376
*** MEM CHUNK TAKEN: 987136
*** MEM CHUNK TAKEN: 1318912
*** MEM CHUNK TAKEN: 1757184
*** MEM CHUNK TAKEN: 2342912
*** MEM CHUNK TAKEN: 3125248
*** MEM CHUNK TAKEN: 4165632
Terminated
I would have expected one of two things to happen:
1) The allocation is adjusted down to the largest amount actually
available - so that the increase in the example above might be from
20 MB to, say, 24 MB.
2) An OutOfMemoryError is thrown.
But neither happens - my application is simply terminated by the Linux
kernel...
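To be clear about case 2), this is the kind of handling I had hoped for
(a minimal sketch with an arbitrary chunk size) - but the catch block is
never reached, because the process is killed before the error can be
thrown:

    import java.util.ArrayList;
    import java.util.List;

    public class OomExpectation {
        public static void main(String[] args) {
            List keep = new ArrayList();
            try {
                while (true) {
                    keep.add(new byte[10000]);   // keep allocating until memory runs out
                }
            } catch (OutOfMemoryError e) {
                // What I would have expected to reach eventually - instead the
                // process is terminated by the kernel before we ever get here.
                keep.clear();
                System.out.println("Caught OutOfMemoryError - backing off");
            }
        }
    }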
Now, the real question: is there a way to circumvent this? Can I
configure the chunk-allocation mechanism so that it never grabs
chunks larger than, say, 100 KB at a time...
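The only user-level workaround I can think of is to guess a safe
ceiling for the machine by hand and stop growing before the fatal chunk
request happens - sketched below, where MAX_BYTES is a limit I would
have to pick myself (and pick conservatively, since totalMemory() jumps
in large steps). That is obviously fragile, so a real knob on the
allocator would be much nicer.

    import java.util.ArrayList;
    import java.util.List;

    public class BoundedCache {
        // Hypothetical hard limit, hand-tuned for this particular machine.
        private static final long MAX_BYTES = 20L * 1024 * 1024;

        public static void main(String[] args) {
            List keep = new ArrayList();
            Runtime rt = Runtime.getRuntime();
            // Stop growing the cache before the collector has a reason to
            // request another (possibly huge) chunk from the system.
            while (rt.totalMemory() < MAX_BYTES) {
                keep.add(new byte[10000]);
            }
            System.out.println("Stopped at totalMemory()=" + rt.totalMemory());
        }
    }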