Porting Boehm-gc to embedded m68k environment
Wed Nov 5 13:22:00 GMT 2003
On Wed, 5 Nov 2003, John Neil wrote:
> 1. The GC_alloc function(s) call a clear-stack function just prior to
> returning. I assume this function is used to clear old addresses from the
> unused portion of the stack; however, as there seems to be no bound on the
> range cleared, this function overwrote the end of the thread stacks, which
> are relatively small (~2k). I was wondering if this stack clearing is
> really necessary if the correct stack bounds (top to current) are registered
> for scanning via the GC_push_stack function.
It isn't necessary but is sometimes beneficial. Even if you know the
exact top of stack for each thread, portions of the stack within that
range are frequently uninitialized, so the GC takes the opportunity to
clear some of the stack when it can.
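A minimal sketch of the idea (this is an illustration, not the collector's own GC_clear_stack; the function name and bounds handling are assumptions):

```c
#include <string.h>

/* Hypothetical helper: on a downward-growing stack, the dead region
 * lies between the registered stack limit and the current stack
 * pointer.  Zeroing it removes stale values that a conservative scan
 * might otherwise mistake for live pointers. */
static void clear_dead_stack(char *stack_limit, char *current_sp)
{
    if (current_sp > stack_limit)
        memset(stack_limit, 0, (size_t)(current_sp - stack_limit));
}
```

With correct per-thread bounds, a function like this never writes past the end of a small thread stack, which addresses the overwrite described above.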
> I have replaced this function
> internally and was wondering if this should be provided via the
> "os_dep.c" file.
That's a reasonable suggestion. There is already special handling for a
number of platforms in os_dep.c, so a target-specific override would fit
there.
> 2. Are the memory pages returned via the GET_MEM function automatically
> added to the list of ranges to be scanned for addresses, or is it necessary
> to manually add these, via add-roots or push all when the stacks are pushed
> for scanning.
No. Those memory pages are only traced when referenced from a live
pointer. (If you did register all of your heap as roots, nothing could
ever be collected.)
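A toy mark phase makes the point concrete (an illustration only, not Boehm's implementation; the object layout is invented):

```c
#include <stdbool.h>

/* Heap objects are traced only if a root, or an already-marked object,
 * points at them.  Registering the whole heap as a root would mark
 * everything, so nothing could ever be reclaimed. */
struct obj { int next; bool marked; };  /* next: index of referenced object, or -1 */

static void mark_from(struct obj *heap, int root)
{
    for (int i = root; i >= 0 && !heap[i].marked; i = heap[i].next)
        heap[i].marked = true;
}
```

Objects that no root reaches stay unmarked and are candidates for reclamation, which is why GET_MEM pages must not be registered as roots themselves.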
> 3. Why do the Jv_realloc and Jv_malloc functions in prims.cc use
> realloc/malloc directly rather than the corresponding GC functions
> GC_malloc/GC_alloc? Is this for efficiency reasons, to reduce the amount of
> memory the GC has to scan during a collection?
I think these are for uncollectable memory. There may be no good reason
for not using the GC equivalents. (At one time the libgcj designers were
careful to avoid locking in to a single GC implementation; however, no
real alternative to Boehm's collector has ever been widely used.)
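For comparison, here is a hypothetical sketch of a _Jv_Malloc-style wrapper (the name and out-of-memory behaviour are assumptions, not libgcj's exact code):

```c
#include <stdio.h>
#include <stdlib.h>

/* The returned block comes from plain malloc, so the collector neither
 * scans nor frees it.  Boehm's GC_malloc_uncollectable would instead
 * give memory that the collector scans for pointers but never reclaims,
 * which is the "uncollectable" behaviour mentioned above. */
static void *jv_malloc_sketch(size_t size)
{
    void *p = malloc(size ? size : 1);  /* malloc(0) may legally return NULL */
    if (p == NULL) {
        fputs("jv_malloc_sketch: out of memory\n", stderr);
        abort();
    }
    return p;
}
```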
> 4. As garbage collection takes considerable time (1.5 seconds for 6M) I
> was looking at limiting the amount of memory scanned. I noticed that static
> final primitive fields (int, byte, ...) are stored in the .data section,
> with all other static fields stored in the .bss section. I figured that
> static final fields which are java primitives should actually be stored in
> read-only data sections (.text/.rodata), and static primitive fields which
> are initialized (by constants) should be stored in the .data section (and
> initialized at load time) rather than the .bss section (and initialized at
> runtime). Is there any reason why initialized static primitive fields are
> not stored in the .data section?
None that I can think of, except that moving fields from .bss to .data
will increase the binary size slightly.
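The section placement the question describes can be illustrated by C analogy (typical ELF conventions; actual placement is toolchain-dependent, not a guarantee):

```c
/* Analogues of the three kinds of static primitive fields discussed: */
static const int final_prim = 42;  /* usually .rodata: read-only, never needs scanning */
static int initialized_prim = 7;   /* usually .data: value written at load time        */
static int zeroed_prim;            /* usually .bss: zero-filled at program startup     */
```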
> As no class pointers are stored in the .data section, is it true that only
> the .bss section need be scanned by the garbage collector?
Definitely not. The .data section also contains GC-allocated pointers.
To a large extent .data is pointer-free, but the collector needs help to
discern this. If a large contiguous region is known to be pointer-free,
it can be excluded from the roots by an API call. However, the way libgcj
is currently organized, most pointer-free regions are small and littered
among pointer-containing data. Class metadata is the largest contributor
to libgcj's root size. The best plan may be to avoid scanning any classes
that are not yet initialized.
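The interval arithmetic behind excluding a pointer-free region from the roots looks like this (the collector exposes GC_exclude_static_roots() for the real thing; the helper below is a hypothetical model, not Boehm API):

```c
struct range { char *lo; char *hi; };  /* half-open interval [lo, hi) */

/* Excluding a pointer-free hole from a root range leaves at most two
 * sub-ranges that still need to be scanned. */
static int exclude_hole(struct range r, struct range hole, struct range out[2])
{
    int n = 0;
    if (hole.lo > r.lo)
        out[n++] = (struct range){ r.lo, hole.lo < r.hi ? hole.lo : r.hi };
    if (hole.hi < r.hi)
        out[n++] = (struct range){ hole.hi > r.lo ? hole.hi : r.lo, r.hi };
    return n;
}
```

This also shows why many small holes are not worth excluding: each one fragments the root set further while saving only a few bytes of scanning.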