This is the mail archive of the mailing list for the GCC project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: reduce compilation times?

Andrew Haley wrote:
Sven Eschenberg writes:

 > Anyway, what I meant: compiling a package like firefox, glibc,
 > etc. with ccache gives you some speed increase, but it is small
 > compared to uncompressing the source directly into a RAM disk and
 > building everything in there.

That sounds pretty surprising to me. How is a RAM disk going to be so
much faster than, say, /tmp? I suppose there's no overhead of writing
the files back to disk after updating them, but that's usually done in
the background anyway. "make -jN" is usually enough to swallow up any
rotational latency. But when I'm compiling, all CPU cores are usually
at 90% plus; the compiler certainly isn't waiting for disk. That RAM
disk is going to get me 10% more at best.
I assume this all depends on the usage scenario. If /tmp is on disk (which it often is, because it can grow pretty big), you
save quite some IO; ccache needs to do disk IO too, to access its cached data. I guess the major effect is bypassing the filesystem's
caching strategies: read the source package once (i.e. 50 MB = 1-2 sec), and after that all IO is in RAM - every source file is read from RAM,
objects are written to RAM and reread from there, etc. Though disk IO can use DMA, it still has to wait during reads if the data is not
yet there (which could be avoided with some read-ahead strategies).

As I said, certainly the combination of ccache with keeping both the ccache data and the build in RAM might be the fastest way.

Of course, if /tmp and your ccache data are on a RAID5 behind a controller with its own 1-2 GB of RAM, things look different than on a notebook
that only carries a 5400 RPM drive, I assume.
The question is whether ccache can read the cached preprocessed source from disk faster than gcc (resp. cpp) can read the source from RAM and preprocess it,
which certainly depends on what the sources look like (factoring), disk IO speed, processing speed, etc.
 > Combining both didn't seem to give additional reproducible
 > benefit, but I gotta admit, I never tried to put ccache's data into a
 > ramdisk too, since I don't have enough RAM for that on sufficiently
 > big packages. If -j2 speeds things up, it's mostly because of
 > the kernel's scheduling, I assume.
 >
 > The only box I got left which is uniprocessor and doesn't have
 > HT/multiple cores didn't really compile faster with -j2. Then
 > again, it is a server which has a certain minor load all the
 > time anyway; that's why I assume -j2 on a uniprocessor only benefits
 > from scheduling strategies.

The main purpose of -j2 on a uniprocessor is to absorb any disk
latency: when one process blocks because a file is not ready, another
process has something useful to do.  It's not a huge win when building
gcc, but it is significant.  It is very useful when building on an
NFS-mounted drive.
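That overlap is easy to see with a toy Makefile whose two targets just sleep, standing in for jobs that block on IO (a sketch; real compile jobs interleave CPU work and disk waits less neatly):

```shell
# Two independent "jobs" that each block for 2 seconds. Run serially
# they take ~4s; with -j2, make runs them concurrently, so ~2s.
printf 'all: a b\na:\n\tsleep 2\nb:\n\tsleep 2\n' > Makefile

start=$(date +%s)
make -j2
end=$(date +%s)
elapsed=$((end - start))
echo "elapsed: ${elapsed}s"
```

With `make` (no -j) the same Makefile takes about twice as long, since the second sleep cannot start until the first finishes.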

Ah okay, I forgot about the disk IO, but this makes perfect sense ...


