This is the mail archive of the
gcc@gcc.gnu.org
mailing list for the GCC project.
Re: compile server design document
- From: Michael Matz <matz at suse dot de>
- To: Per Bothner <per at bothner dot com>
- Cc: <gcc at gcc dot gnu dot org>
- Date: Tue, 4 Mar 2003 23:03:44 +0100 (CET)
- Subject: Re: compile server design document
Hi,
On Tue, 4 Mar 2003, Per Bothner wrote:
> > Simply install/run a daemon, and be done with it.
>
> So why not write a user-space daemon on the user side that can
> ship header files as the compilation daemon requests them?
Sure, sure. This all could be done. If someone does it ;-) (btw. a
poor man's version of such a daemon exists: gcc -E 1/2 ;-) )
> > Anyway, if you say that using preprocessed files increases network
> > load, the same happens for NFS/AFS-based systems. There is no big
> > difference whether the preprocessed content comes in directly as a
> > preprocessed file, or from header files over NFS, at least not in the
> > direction you indicate.
>
> If you ship preprocessed files you need to copy the header
> file contents for *each* file you compile. If you use a
> remote compile server it can (and will) cache the contents
> of the header files between compilation requests. It seems
> much more efficient to me.
It is. But it also adds a whole new level of complexity, namely a
consistency protocol. Think about headers which are generated by the
build process, maybe shadowing global headers. You would also need to
handle those, and then you have the problem of explaining why you didn't
simply use a caching network file system (like AFS), because _those_
already have means of ensuring consistency. This is the basic difference:
either you do the combining client-side (on the caller) --> simple,
maintainable system
or you do it server-side --> complex system.
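To make the complexity argument concrete, the consistency protocol a header-caching server would need might look like the following hypothetical sketch (the `sync_header` round is illustrative, not an existing tool): the client sends each header's path plus a content hash, and the server re-requests only headers whose hash no longer matches its cache. Build-generated headers, even ones shadowing global headers, are covered automatically because the hash reflects whatever file the client's include path actually resolved.

```shell
set -e
cache=$(mktemp -d)        # stands in for the server's header cache
src=$(mktemp -d)          # stands in for the client's source tree
echo '#define VERSION 1' > "$src/config.h"

sync_header() {           # one protocol round for one header
    h=$1
    sum=$(md5sum "$h" | cut -d' ' -f1)
    key=$cache/$(echo "$h" | tr / _)
    if [ -f "$key.sum" ] && [ "$(cat "$key.sum")" = "$sum" ]; then
        echo "cache hit: $h"
    else
        cp "$h" "$key"; echo "$sum" > "$key.sum"
        echo "shipped: $h"
    fi
}

sync_header "$src/config.h"      # first build: header is shipped
sync_header "$src/config.h"      # unchanged: cache hit
echo '#define VERSION 2' > "$src/config.h"
sync_header "$src/config.h"      # regenerated by the build: re-shipped
```

Even this toy version needs per-file state on the server and a round trip per header; a caching file system such as AFS gives you the same guarantee for free, which is exactly the point being made above.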
Given the expected (from experience) performance advantage of the complex
system (namely, not much) while compiling C++ code, I would choose the
simple system. Yes, I constrained the use to one language, in fact one
where compiling takes _much_ longer than preprocessing or transferring
files over the network. But this is indeed the most useful/realistic case.
Ciao,
Michael.