Locality problems caused by size based allocation?

Daniel Berlin dberlin@dberlin.org
Mon Dec 16 11:19:00 GMT 2002

In search of the real cause of the locality problems, I built a copying 
collector based on ggc-page. It specifically makes sure that we don't 
reuse pages we've just freed when copying (to keep the heap as
contiguous as possible).

Subtracting GC time, it ends up just as fast/slow as ggc-page
normally is. In fact, it's slightly slower if I make it not reuse
pages than if I do.

This leads me to believe that our locality problems might actually be
caused by our use of size-based allocation, which has basically no
locality whatsoever in gcc.

I tested this theory by changing the mark-and-sweep ggc-page not to
size-allocate, and it appears to be correct: ggc-page with mark and
sweep became just about as fast as my original copying collector tests
(for large files, fragmentation started to affect us).

What was the rationale behind size-based allocation?

I also made a ggc-boehm, based on an offhand suggestion by Zack.
It works, and it's faster than ggc-page with size allocation, running
at the same speed as the copying collector and the non-size-allocating
ggc-page (garbage collection times are much faster, too). It does
fail, though, if I have it collect whenever ggc_collect is called,
even when I tell it it's not safe to collect without me explicitly
calling the collection function. It also fails in a few other cases,
so we must be hiding pointers somehow.

It obviously doesn't collect as much garbage as the accurate
collectors (roughly half as much, in fact).
But the fact that it was just as fast lends credence to the theory
that it's size-based allocation that is doing us in.
