The problems with StringBuilders...

Bryce McKinlay
Thu Oct 6 20:31:00 GMT 2005

Andrew Haley wrote:

> > This is a problem with the GCJ approach.  We have this nice binary 
> > compatibility feature, but you have to disallow a wide range of 
> > optimizations in order to use it.  The JIT compilers (like Sun's 
> > HotSpot) can do these things because they know everything at run/compile 
> > time (as runtime and compile time are the same thing).
> No, that really isn't true.
> In a JIT compiler, when you devirtualize and inline you have to assume
> that you know everything about the program, in particular that a
> method you just inlined isn't about to be overridden by a subclass.
> But you don't, because at any time a ClassLoader can come along and
> create a new subclass of one of the classes you just inlined, and your
> assumptions are no longer true.  So what do you do?  The ClassLoader
> has to signal the runtime that the world has changed, and you have to
> fall back to the non-inlined version of the code.  This is *exactly*
> the same with ahead of time compilation.  We can do optimistic
> optimization in gcj and fall back if we have to, just like a JIT.

It's not quite the same. One difference is that an AOT compiler has 
nothing to fall back on but the interpreter. A JIT isn't going to fall 
back to interpreted code in this case, at least not when the method in 
question is called frequently or contains loops. If the method is worth 
the time to optimize, a JIT can simply recompile it without the 
devirtualization optimization.
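The situation Andrew describes - an inlined virtual call invalidated by 
a newly loaded subclass - might be sketched as follows. This is a 
minimal illustration with invented class names, not code from gcj:

```java
// If the compiler sees no subclasses of Shape, it may devirtualize
// s.area() to a direct call to Shape.area() and inline the constant 0.0.
class Shape {
    double area() { return 0.0; }
}

// But nothing stops a ClassLoader from later defining a subclass that
// overrides area(). The inlined assumption "s.area() == 0.0" is now
// wrong for Circle receivers, so the runtime must deoptimize the
// compiled code back to a true virtual dispatch.
class Circle extends Shape {
    final double r;
    Circle(double r) { this.r = r; }
    @Override double area() { return Math.PI * r * r; }
}

public class Devirt {
    static double total(Shape[] shapes) {
        double sum = 0.0;
        for (Shape s : shapes) sum += s.area();  // virtual call site
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(total(new Shape[] { new Shape(), new Circle(1.0) }));
    }
}
```

A JIT can throw away the compiled `total()` and regenerate it the moment 
`Circle` is loaded; an AOT compiler has already committed its code to disk.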

An AOT compiler, however, is flying blind - it has no reliable knowledge 
of which classes will be encountered at runtime, so there's a high 
chance that any given devirtualization will have to be thrown away at 
runtime. We could avoid the interpreter fallback by compiling each 
method twice - once with and once without virtual inlining and 
devirtualization - but this is wasteful in space, and application 
performance would degrade over time as dependent libraries get updated, 
etc. Profile-directed optimization feedback might help, but it's no 
substitute for the knowledge a JIT has.
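A related way to keep an optimistic inline without recompiling is 
guarded devirtualization: protect the inlined body with a cheap class 
check and fall through to a real virtual call otherwise. A minimal 
sketch, with invented class names, of what such a guard amounts to at 
the source level:

```java
class Base {
    int value() { return 1; }
}

class Sub extends Base {
    @Override int value() { return 2; }
}

public class Guarded {
    // What an optimizer might emit for "b.value()" after guessing that
    // the receiver is exactly Base:
    static int guardedValue(Base b) {
        if (b.getClass() == Base.class) {
            return 1;          // inlined body of Base.value()
        }
        return b.value();      // fallback: true virtual dispatch
    }
}
```

The guard stays correct even when new subclasses are loaded, but it only 
pays off if the guess usually holds - which is exactly the knowledge an 
AOT compiler lacks.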

Also, while devirtualization and inlining of virtual methods are 
optimizations that can be invalidated after compilation time even in a 
JIT, there are many other optimizations that can be trivially done at 
runtime but not at all ahead of time. These include inlining of final 
and static methods, various cases of "direct" dispatch, eliminating 
class initialization checks where the class is known to be already 
initialized at compilation time, etc.
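To illustrate the last point: the JVM guarantees that a class's static 
initializer runs before its first active use, so each static access 
conceptually carries an "is this class initialized yet?" check. A small 
example (the class and fields are invented for illustration):

```java
class Config {
    static int lookups = 0;
    static {
        // Runs exactly once, on the first active use of Config.
        lookups = 0;
    }
    static int lookup() { return ++lookups; }
}

public class InitCheck {
    public static void main(String[] args) {
        // Conceptually each static access compiles to:
        //   if (!Config.initialized) initialize(Config);
        //   ...actual access...
        // A JIT compiling main() *after* Config is initialized knows the
        // check can never fire and can drop it; an AOT compiler cannot
        // know this and must keep the check on every path.
        Config.lookup();
        Config.lookup();
        System.out.println(Config.lookups);  // prints 2
    }
}
```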

There are of course many advantages that an AOT compiler should have 
over a JIT as well - e.g. startup performance. The "ideal" approach 
probably consists of a pre-compiled "cache", well-optimized while 
retaining binary compatibility, combined with an optimizing JIT that can 
recompile select methods - i.e. the "hotspots" - to take advantage of 
runtime optimizations, in addition to compiling dynamically loaded 
bytecode.


More information about the Java mailing list