This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: Faster compilation speed


On Sat, 10 Aug 2002, Robert Dewar wrote:
>
> <<Well, at some point space "optimizations" do actually become functional
> requirements. When you need to have a gigabyte of real memory in order to
> compile some things in a reasonable timeframe, it has definitely become
> functional ;)
> >>
> 
> Interesting example, because this is just on the edge. We are just on the point
> where cheap machines have less than a gigabyte, but not by much (my notebook
> has a gigabyte of real memory). In two years' time, a gigabyte of real memory
> will sound small.

Careful. 

That's an extremely slippery slope, as I'm sure you are well aware.

Yes, all the machines I work on daily have a gigabyte of RAM these days,
and usually at least two CPUs. So it should be ok to have a compiler use
it up, assuming that the end result of the compilation is a really
well-optimized program, right?

Well, even if you could assume that machines have gigabytes of RAM (and
I'll give you that you probably _can_ assume it in another few years, and
not just on the kinds of machines I play with), it takes quite a while to
access that gigabyte. 

Yeah, the machine I'm working on gets a memory throughput of 1.5GB/s right
now. That's assuming good access patterns, though - it's a lot less if you
chase pointers and only use a few bytes per 128-byte cacheline loaded.

The difference between cache access times and memory access times is
already on the order of a factor of 200, and likely to go up. But since
nobody can expect hot gcc data to fit in the L1, it's probably fairer to
compare L2 times to main memory, which is "only" a factor of 20 or so.

And quite frankly, I _would_ expect gcc data to fit in a reasonable L2 in 
the same timeframe that you can sanely assume that machines have at least 
a gigabyte of memory.

So if we're talking about performance, I still say that gcc should aim at
fitting in the L2 (and maybe the TLB) of any reasonable CPU. Right now
that means that you want to try to fit the real working set in half a meg
or so, to reliably get the 20-times increase in performance.

			Linus

