
Re: a .NET alternative (GJC et al)


On 09-Aug-2001, Florian Weimer <Florian.Weimer@RUS.Uni-Stuttgart.DE> wrote:
> Fergus Henderson <fjh@cs.mu.oz.au> writes:
> 
> > If there *is* a collector, however, the back-end can't optimize *away*
> > the stack marking, because the collector gets passed (via the
> > global variable) a linked list of all the stack frames, and
> > it traverses them.
> 
> This won't work in a multi-threading environment where the collector
> might be completely invisible to the compiler.

OK, I wasn't thinking about multi-threaded environments.

However, I think multi-threaded environments can be handled without too
much difficulty.  One standard approach is to advance each thread to a
GC-safe point before performing GC.

This can be done by having each heap allocation check whether a GC is
pending and, if so, suspend until the GC is complete.  You also need
to insert "GC pending" checks (e.g. `(void) allocate(0)') into any loops
that don't perform any heap allocations.
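
For example, such a check might be dropped into an otherwise
allocation-free loop like this (just a sketch; sum() and its arguments
are invented for illustration):

	/* Sketch only: an allocation-free loop with a "GC pending" check
	   inserted.  sum() and its arguments are hypothetical. */
	long sum(const long *a, int n) {
		long total = 0;
		for (int i = 0; i < n; i++) {
			(void) allocate(0);	/* acts as the GC-pending check */
			total += a[i];
		}
		return total;
	}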

The "GC pending" check can be combined with the "heap overflow" check,
which you typically need anyway, by having the thread which invokes a
GC just temporarily reset the heap_max of the other threads:

	__THREADLOCAL__ char *heap_ptr;
	__THREADLOCAL__ char * volatile heap_max; /* This one is the only
						     `volatile' needed!  Note that
						     the pointer itself is volatile,
						     so a reset of heap_max by
						     another thread is re-read. */

	/* Bump-pointer allocation: carve `bytes' off this thread's heap
	   segment, falling into handle_overflow() when the segment is
	   exhausted (or when another thread has reset heap_max to
	   request a GC). */
	inline void *allocate(size_t bytes) {
		bytes = round_up_to_alignment_boundary(bytes);
		if (heap_ptr + bytes < heap_max) {
			void *p = heap_ptr;
			heap_ptr += bytes;
			return p;
		}
		return handle_overflow(bytes);
	}
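
The other half of the trick, the thread that wants to start a GC, is not
spelled out above, but under this scheme it could look something like the
following sketch (all of the names here -- request_gc(), thread_record,
all_threads, heap_max_addr, heap_start -- are invented; some runtime
structure recording each thread's heap_max and heap segment is assumed):

	/* Sketch only, assuming the runtime keeps a per-thread record
	   holding the address of that thread's heap_max and the start of
	   its heap segment.  All names below are hypothetical. */
	struct thread_record {
		char * volatile *heap_max_addr;	/* address of that thread's heap_max */
		char *heap_start;		/* start of that thread's heap segment */
		struct thread_record *next;
	};

	struct thread_record *all_threads;	/* maintained by the runtime */

	void request_gc(void) {
		struct thread_record *t;
		for (t = all_threads; t != NULL; t = t->next)
			*t->heap_max_addr = t->heap_start;  /* the next allocate()
							       in that thread falls
							       into handle_overflow() */
	}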

Here handle_overflow() will check whether heap_max has been set to a
special value, e.g. the start of the heap.  If so, the heap didn't really
overflow; another thread has scheduled a garbage collection, and this
thread should just suspend until the GC has finished.
(Or alternatively, if you have a multi-threaded collector, it can do
some of the work of collection itself... but that is a side issue.)
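
For example, handle_overflow() might be sketched as follows;
gc_is_pending(), wait_for_gc(), refill_heap_segment() and heap_start are
invented names standing in for whatever the runtime actually provides:

	/* Sketch only.  gc_is_pending(), wait_for_gc(), refill_heap_segment()
	   and heap_start are hypothetical runtime hooks, not something
	   given above. */
	void *handle_overflow(size_t bytes) {
		if (heap_max == heap_start && gc_is_pending()) {
			/* Not a real overflow: another thread reset our heap_max
			   to make us stop at a GC-safe point. */
			wait_for_gc();		/* suspend until the GC finishes;
						   the GC restores heap_max */
		} else {
			/* A genuine overflow: obtain a fresh heap segment
			   (possibly by triggering a GC ourselves). */
			refill_heap_segment(bytes);
		}
		return allocate(bytes);		/* retry the original request */
	}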

Calls to handle_overflow() are our GC-safe points.

Now, in our case, all we need to do to ensure that the back-end doesn't
optimize things too much at GC-safe points is to make those points call
an external routine, e.g. by writing handle_overflow in assembler (maybe
just as a jump to a C routine -- the point is simply that the back-end
compiler must not be able to analyze the contents of handle_overflow()).
After all, an external routine might do the GC itself; the back-end
compiler can't tell the difference between an external routine that does
the GC itself and one that suspends while a routine in another thread
does the GC.

So the only cooperation you need from the back-end is support for calls
to external routines whose bodies will not be analyzed for the purpose
of optimization.  In GCC this can be achieved in a number of ways, e.g.
by putting those routines in a separate compilation unit, by writing
them in assembler, or just by inserting a do-nothing inline assembler
fragment at the start of the routines in question:

	void *handle_overflow(size_t bytes) {
		asm volatile ("" ::: "memory");	/* do-nothing asm with a "memory"
						   clobber: a barrier the optimizer
						   can't see through */
		...
	}

You're right that *some* cooperation is required, but it is about as
minimal a degree of cooperation as I can imagine, and it is satisfied by
every C compiler I've ever heard of.

The only extra overhead is the GC-pending checks in loops that don't have
any heap allocations.  I would expect such loops to be relatively rare
(except in toy benchmarks), so the overhead should be small; if necessary,
such loops can be unrolled to reduce it further, as in the sketch below.
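
For instance, the hypothetical sum() loop sketched earlier could be
unrolled so that the check runs once per four iterations rather than on
every iteration (again, just a sketch):

	/* Sketch only: unrolling by four so the GC-pending check runs once
	   per four iterations instead of on every iteration. */
	long sum(const long *a, int n) {
		long total = 0;
		int i = 0;
		for (; i + 4 <= n; i += 4) {
			(void) allocate(0);	/* one GC-pending check per block */
			total += a[i] + a[i+1] + a[i+2] + a[i+3];
		}
		for (; i < n; i++)		/* at most three leftover iterations */
			total += a[i];
		return total;
	}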

-- 
Fergus Henderson <fjh@cs.mu.oz.au>  |  "I have always known that the pursuit
The University of Melbourne         |  of excellence is a lethal habit"
WWW: <http://www.cs.mu.oz.au/~fjh>  |     -- the last words of T. S. Garp.

