This is the mail archive of the java@gcc.gnu.org mailing list for the Java project.



RE: GC Trouble on Embedded System


I'm in the process of building a current gcc3.4 tree so that I can test this,
as well as try some other patches.

The next debugging step is generally to look at GC_dump() output at several
points during the heap growth.
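
If it helps, here is a minimal pure-Java sketch for watching the heap between
points in the test.  It is not GC_dump() (that is a C-level entry point that
reports the collector's internal block and free-list state), just the standard
Runtime calls, so it only shows overall growth:

// Minimal sketch: coarse Java-level heap monitoring.  Not a substitute
// for GC_dump(), but enough to watch overall heap growth from inside
// the test itself.
public class HeapReport {
    public static void report(int iteration) {
        Runtime rt = Runtime.getRuntime();
        long total = rt.totalMemory();   // current heap size
        long free  = rt.freeMemory();    // unused space within that heap
        System.out.println("[" + iteration + "] heap=" + total
                           + " bytes, used=" + (total - free) + " bytes");
    }

    public static void main(String[] argv) {
        report(0);
    }
}

Calling something like HeapReport.report(numAllocations) every few hundred
iterations of the test loop would show whether the heap grows steadily or in
bursts, which is useful context for reading the GC_dump() output later.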

To be honest, if your application's allocation pattern looks like your
test program's, I doubt you will be happy with the current garbage collector,
or any garbage collector that doesn't compact.  And you may find compacting
collectors atrociously slow on this kind of test.

There are two issues here that make this very hard, especially for a nonmoving collector:

1) The average object size is 750K.  This is several orders of magnitude
higher than in most applications.  Any garbage collector will basically require
per-object allocation + GC time that's proportional to the average object size,
so no collector is likely to process many allocations per second with objects
this large.  A conservative collector often runs into additional trouble with this
kind of object size, since it becomes hard to find contiguous space that is
not "referenced" by false pointers.

2) The object sizes and lifetimes are random.  Garbage collectors generally
try to gain performance by taking advantage of regularity in object sizes and
lifetimes.

For any nonmoving allocator (garbage collector or not), the worst-case fragmentation
overhead is on the order of at least a factor of log(max_obj_size/min_obj_size).
(This is a 1971 theorem due to Robson.)  The optimum space overhead in your case
is on the order of a factor of 10.  For any realistic nonmoving garbage collector,
it is probably unrealistic to expect less than a factor of 30 or so in the worst case.
(For malloc/free you might be able to get that down to a factor of 20.)  Thus
I wouldn't expect this to work absolutely reliably in less than about 200MB of memory.
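
To make the arithmetic concrete, here is a rough back-of-the-envelope sketch.
The minimum object size used below (about 1.5K) is an assumed figure, chosen
only to illustrate the factor-of-10 bound; the other numbers come from your
test parameters and the factors above:

// Rough arithmetic behind the estimates above.  The minimum object size
// is an assumed illustrative value; the rest comes from the test
// parameters (max allocation ~1.5MB, ~6MB of live data) and the
// worst-case factor of ~30 for a realistic nonmoving collector.
public class FragmentationEstimate {
    public static void main(String[] argv) {
        double maxObj = 1500000.0;   // largest allocation in the test
        double minObj = 1500.0;      // assumed smallest typical allocation
        double live   = 6000000.0;   // maxTotalAllocation: live data kept by the test

        // Robson-style bound: worst-case overhead grows like log(max/min).
        double factor = Math.log(maxObj / minObj) / Math.log(2.0);
        System.out.println("log2(max/min) ~= " + factor);                     // about 10

        // Worst-case heap for a realistic nonmoving collector (factor ~30):
        System.out.println("worst case ~= " + (30.0 * live / 1000000.0) + " MB");  // about 180MB
    }
}

The ~180MB result is where the "about 200MB" estimate above comes from, after
allowing a little headroom.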

There is some argument that if you perform random allocation operations for long
enough, you will eventually hit the worst case.  I'm a little surprised the heap
gets that large that quickly, though, and there may be other problems affecting this.

Having said that, I have never seen a real program with this kind of allocation behavior.
For realistic allocation behavior, fragmentation overhead is typically less than a factor
of two, and in my experience good nonmoving collectors typically end up using less space
than a simple two-space copying collector.

Hans

> -----Original Message-----
> From: Craig A. Vanderborgh [mailto:craigv@voxware.com]
> Sent: Sunday, July 20, 2003 12:42 PM
> To: java@gcc.gnu.org
> Subject: GC Trouble on Embedded System
> 
> 
> Hello Everyone,
> 
> We are working on a port of GCJ/libgcj to the arm-wince-pe 
> platform.  We
> have the port working very well and it is now very close to being able
> to run our application.
> 
> However, we have run into big problems with the Boehm GC.  We have
> created a very small but very nasty test of garbage collection that
> isolates the problem we're having in the application to a half-page of
> very simple Java code.  This code is the GCTest class presented below.
> What it does is make some allocations of random length, keep references
> to the allocations around in a Vector, and then release references to
> allocations to keep the grand total of allocated bytes below some
> command-line-defined value.  Here is the GCTest:
> 
> import java.util.Vector;
> 
> public class GCTest {
> 
>   public static int maxNumAllocations = 10000;
>   public static int maxAllocationSize = 1000000;
>   public static int maxTotalAllocation = 10000000;
> 
>   public static void main(String[] argv) {
> 
>     Vector allocations = new Vector();
>     int    grandTotalAllocated = 0;
> 
>     if (argv.length > 0) {
>       maxNumAllocations = Integer.parseInt(argv[0]);
>       if (argv.length > 1) {
>         maxAllocationSize = Integer.parseInt(argv[1]);
>         if (argv.length > 2) maxTotalAllocation = Integer.parseInt(argv[2]);
>       }
>     }
> 
>     int numAllocations = 0;
>     int totalAllocation = 0;
>     while (numAllocations < maxNumAllocations) {
>       // Pick a random allocation size (array lengths must be int, not long).
>       int allocationSize = (int)Math.round(Math.random() * (double)maxAllocationSize);
>       totalAllocation += allocationSize;
>       // Drop randomly chosen blocks until the live total is back under the limit.
>       while (totalAllocation > maxTotalAllocation) {
>         int index = (int)(allocations.size() * Math.random());
>         byte[] deallocation = (byte[])allocations.elementAt(index);
>         totalAllocation -= deallocation.length;
>         allocations.removeElementAt(index);
>         System.out.println("Deallocated " + deallocation.length + " bytes, leaving "
>                            + totalAllocation + " bytes allocated in "
>                            + allocations.size() + " blocks");
>       }
>       byte[] allocation = new byte[allocationSize];
>       allocations.addElement(allocation);
>       numAllocations++;
>       grandTotalAllocated += allocationSize;
>       System.out.println("[" + numAllocations + "]: Allocated " + allocationSize
>                          + " bytes, for a running total of " + grandTotalAllocated
>                          + " bytes");
>     }
>   }
> }
> 
> For the current discussion, I am running the test as:
> 
> GCTest 100000 1500000 6000000
> 
> The 3 arguments are:
> 1. iterations
> 2. max individual allocation permitted
> 3. grand total (sum) allocation limit
> 
> Running on Windows CE 3.0 on my ipaq 3765, this test fails gracefully
> after about 65K iterations with a java.lang.OutOfMemoryError.  Leading
> up to this failure, I can see that the Java heap is growing without
> bound.  Why would the above use of Java memory allocation cause
> unbounded heap growth?  This seems wrong.
> 
> On other platforms, notably Xscale-based CE machines (instead of
> StrongARM), the above test is producing "prefetch aborts" and "data
> aborts".  In other words, low-level memory management faults that kill
> the process, or in many cases cause a system crash.
> 
> We are embarking on a project to debug the Boehm GC with a view toward
> achieving operation that is robust enough to handle a large application
> like ours.  Any ideas on how to proceed would be greatly appreciated.
> The first question is obviously: should the above GCTest work without
> heap growth?  And, if not, why would there be the kind of heap growth
> we're experiencing?  Fragmentation?
> 
> TIA,
> craig vanderborgh
> voxware incorporated
> 

