This is the mail archive of the
java-patches@gcc.gnu.org
mailing list for the Java project.
GC / GCJ / and binutils (ld)
- From: "Peter Blemel" <pblemel at hotmail dot com>
- To: java-patches at gcc dot gnu dot org
- Date: Fri, 03 Sep 2004 14:43:01 -0600
- Subject: GC / GCJ / and binutils (ld)
Hello World,
Right now my GCJ port uses newlib's malloc (plus an sbrk that I've
supplied). Since garbage collection is a Good Thing (tm), I've been working
with the boehm-gc port and have found a couple of issues. I'll put the one
I care about most first:
The binutils ELF ld script creates a symbol for the end of the data segment
(_edata), but I'm currently configured to find the beginning of the data
segment by walking backwards through VirtualQuery. This seems to be
something of a hack, as it looks like it results in walking back until
VirtualQuery returns 0. In my RTOS I *think* this means the code
space winds up being added as a root (my RTOS has a Windows-like
Virtual{Query|Alloc|Free}, but it's a limited subset). It seems like this
will just slow down GC.
It seems to me that it would be a lot easier if I simply changed the linker
script to emit a symbol at the beginning of the data segment. This assumes
that I know what I am doing (which I don't :-) The linker script currently
has:
/* We want the small data sections together, so single-instruction offsets
can access them all, and initialized data all before uninitialized, so
we can shorten the on-disk segment size. */
.sdata : { *(.sdata) }
_edata = .;
PROVIDE (edata = .);
PROVIDE (__edata = .);
Do I simply want to emit the _data symbol before the .sdata section? I.e.:
_data = .;
PROVIDE (data = .);
PROVIDE (__data = .);
.sdata : { *(.sdata) }
_edata = .;
PROVIDE (edata = .);
PROVIDE (__edata = .);
so the data segment will be bounded by the _data and _edata symbols?
Is there perhaps a better place to ask this question?
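For what it's worth, here is a sketch of what I'd expect the C side to look
like once both symbols exist. The _data name is my hypothetical addition,
and I'm assuming the Boehm GC's GC_add_roots(low, high) call; since the
symbols only exist with the modified script, a static array stands in for
the segment so the range arithmetic can be checked:

```c
#include <stddef.h>

/* Sketch only.  With the proposed script change the data segment is the
 * half-open range [_data, _edata), declared in C as
 *
 *     extern char _data[];    -- hypothetical new start symbol
 *     extern char _edata[];   -- end symbol the script already emits
 *
 * and handed to the collector in one call, e.g. with the Boehm GC:
 *
 *     GC_add_roots(_data, _edata);
 *
 * A static array stands in for the real segment below so the bounds
 * arithmetic can be exercised without the custom linker script. */
static char fake_segment[256];

/* Size of a segment bounded by two linker-emitted symbols. */
static size_t segment_size(const char *lo, const char *hi)
{
    return (size_t)(hi - lo);
}
```

With explicit bounds the collector would scan only [_data, _edata) instead
of whatever VirtualQuery happens to walk back to.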
The next issue is somewhat trivial, but it causes a determinism problem.
The clock() function used by the GC (in general) returns an approximation of
the processor time used by the program. POSIX requires that CLOCKS_PER_SEC
equal 1000000 independent of the actual resolution. This has implications
for GCJ, because on a 32-bit system where CLOCKS_PER_SEC equals 1000000, this
function will wrap around to the same value approximately every 72 minutes.
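The arithmetic behind that 72-minute figure, as a quick sketch:

```c
/* A 32-bit clock_t holds 2^32 distinct tick values, so with
 * CLOCKS_PER_SEC == 1000000 (one tick per microsecond) the value
 * returned by clock() repeats after 2^32 / 1000000 seconds,
 * i.e. roughly 71.6 minutes. */
static double clock_wrap_minutes(void)
{
    return 4294967296.0 / 1000000.0 / 60.0;
}
```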
The macros in GCJ (specifically MS_TIME_DIFF) seem to ignore the fact that a
DWORD tick count, even expressed in milliseconds, will wrap around every 49.7
days (e.g. on MS Windows). This is a problem if garbage collection is running
right around that time, because attempts to limit the time used by GC will
not succeed if the clock rolls over during GC. It's not terribly
unusual for an embedded application to run for this long, and in the
(unlikely) event that this occurs, the application will stop until GC
completes.
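One common way to sidestep the roll-over (a sketch of the usual idiom, not
what GCJ's macro currently does) is to take the difference of the two tick
counts in unsigned 32-bit arithmetic, which is correct across a single wrap:

```c
#include <stdint.h>

/* Wrap-safe elapsed time between two 32-bit millisecond tick counts.
 * Unsigned subtraction is modulo 2^32, so the result is correct even
 * when the counter wrapped once between 'earlier' and 'later'. */
static uint32_t tick_diff_ms(uint32_t later, uint32_t earlier)
{
    return later - earlier;
}
```

For example, a sample taken 5 ms after the counter wraps, compared against
one taken 5 ms before the wrap, still yields an elapsed time of 10 ms.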
Thanks and Regards,
Peter