This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
RE: robustness vs. conservative GC
- To: "'Tom Lord'" <lord at regexps dot com>, gcc at gcc dot gnu dot org
- Subject: RE: robustness vs. conservative GC
- From: "Boehm, Hans" <hans_boehm at hp dot com>
- Date: Fri, 10 Aug 2001 13:59:32 -0700
- Cc: "Boehm, Hans" <hans_boehm at hp dot com>
> From: Tom Lord [mailto:email@example.com]
> > what would prevent [hidden pointer] bugs from biting GCC or
> > various Java implementations?
> The fact that the Java collector always treats interior pointers from the stack as valid,
> You push in here and it pops out there. This design would seem to
> raise the probability of incorrectly failing to collect some genuinely
> dead objects, which is an unpredictable and uncontrollable source of
> arbitrarily catastrophic failure, sufficiently improbable to escape
> most testing, and sufficiently serious that it ought not be ignored or
> swept under the rug with vague (and inaccurate) replies such as "the
> resulting leaks only waste a little memory" (a specific reply I do not
> attribute to Hans).
I usually distinguish between correctness issues (which can potentially
cause byzantine failures), and running out of resources (which can't cause
byzantine failures, and can sometimes be worked around by obtaining more
resources). Eventhough they can both cause systems to fail, I think it's
not a completely meaningless distinction. Gcc + glibc strives to be correct
in the first sense. I have never seen any correctness claims for the second
on general purpose platforms, other than something along the lines of "it
shouldn't be too much worse than anything else". Conservative collectors
should be and can be correct in the first sense. They can also make at best
very weak claims for the second.
In fact, the strongest space usage claims one can make for C with
malloc/free are also pretty weak. I'm not sure about the worst case
behavior of Doug Lea's malloc. I suspect that if you bound object sizes to
around 1 MB, you get a worst-case ratio of <maximum heap used>/<maximum live
memory> of at least around 20 or so. Conservative root scanning adds a
bigger worst-case factor than that. But for most applications (basically
anything that doesn't manipulate lazy infinite data structures) it's still
bounded. I suspect many applications would fail if they encountered either
worst case.
Once we get back to probabilities, there are other techniques you can use to
reduce leakage probabilities. I suspect the leakage probability for the gcj
collector is very similar to what it was for Guile when I last looked at the
code many years ago (which is of course an unfair comparison).
> That's misleading. One practical question is whether the programmer
> has useful pre-conditions that assure an object has been collected.
> Keeping track of object references isn't "hard to predict" for all
> objects; on the contrary -- it's part of how good programmers use
> good collectors.
I generally don't worry about whether a particular object gets collected and
when. The nice thing about a collector is that I usually don't have to. (A
bad thing about a garbage collector is that managing objects for which I do
have to worry about this tends to be very expensive.) A generational
collector typically also gives you no bound on the time to collection for a
particular object. I do worry about the total resource overhead.
Conservative collection seriously weakens the worst-case bounds, which is
admittedly a cost.
> [Precise, compiler-independent collectors cause slow
> calling conventions.]
> That's a definite maybe. It depends, of course, on how calls are
> implemented -- you and Fergus are taking a narrow view of that, but
> presenting that view as if it were universal.
> Regardless, even sticking to the particular implementations you
> have in mind, the size of the performance hit is unknown, and
> has to be weighed against robustness and correctness.
Agreed. And reducing that cost would certainly be interesting.