


Re: Faster compilation speed


dewar@gnat.com said:
> I seriously doubt that incremental compilation can help. Usually it is
> far better to aim at the simplest fastest possible compilation path
> without bothering with the extra bookkeeping needed for IC.

I'm not convinced that the bookkeeping has to be that expensive 
(difficult to get right, possibly, but expensive??).
 
> Historically the fastest compilers have not been incremental, and IC
> has only been used to make painfully slow compilers a little less
> painful 

History has its value and must be considered. Unfortunately, often 
only the conclusions are kept and not all the premises that led to 
them. I have often seen progress come from realizing that some 
historical rule that no one was questioning was no longer true...

Now, you are probably right in this case; you certainly know more 
than I do. Are you sure, though, that the quality of the code generated by 
these compilers was equal? I suppose so, but I am just asking for 
confirmation.

IC just looks like one step beyond PCH, and PCH nowadays seems to be 
an often-used technique to speed up compilation. But (see below) 
I agree that this (IC, PCH or whatever else) should not be
done at any cost...
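
For illustration only, here is a minimal sketch of how precompiled
headers are typically used with GCC (the file name "common.h" is made
up, and the exact invocation may vary between GCC versions and
languages):

    /* common.h -- a (hypothetical) header gathering the includes shared
       by most translation units, so they are parsed only once. */
    #include <stdio.h>
    #include <stdlib.h>

    /*
     * Precompile it once:
     *     gcc -x c-header common.h        (produces common.h.gch)
     *
     * Then, in each source file, include it before anything else:
     *     #include "common.h"
     * and GCC will load common.h.gch instead of reparsing the headers,
     * provided the compilation options match those used to build it.
     */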

dewar@gnat.com said:
> Actually COBOL programs are FAR FAR larger than C or C++ programs in
> practice. In particular, single files of hundreds of thousands of
> lines are common, and complete systems of millions of lines are
> common. That's why there is so much legacy COBOL around :-)

> My point is that a factor of ten is relative. 

File size is not the only parameter. Compilers for modern languages do more 
complicated things than the average COBOL compiler, I suppose...

> My point is that a factor of ten is relative. 
> If you have a million lines COBOL program and it takes 10 hours to
> compile, then cutting it down to 1 hour is a real win. If it takes 10
> minutes to compile, then cutting it down to 1 minute is a much smaller
> win in practice. 

Relativity is a strange beast...
Of course, your argument is sensible, but the problem is that we 
humans often do not work like this:

- Something that is faster than our reaction time is considered 
  zero cost.

- Something that is slow enough to bore us is considered 
  unacceptably slow. The limit is somewhat fuzzy, but for some tasks,
  people will always tend to push towards this limit.

And it also depends on what the nine minutes you gained allow you 
to do on your computer... If the nine minutes can be used to do what 
the average user considers a very important task, then nine 
minutes is a lot!

> My point is that if you embark on a big project that will take you two
> years to complete successfully, that speeds up the compiler by a
> factor of two, then it probably will seem not that worth while when it
> is finished. 

Well, I tend to slightly disagree. If you made your algorithm/compiler
faster, that is always a net gain for the future. Computers are 
faster, but they eventually also require more complex techniques for code 
generation, so that what is gained in terms of raw speed from the 
processor might be lost in terms of the more expensive algorithms 
that are needed to extract all the possible power out of the 
newest beasts. In some way, this is what happened to gcc: a lot of 
good things have been added (more reliability, better 
optimisation, ...), but somehow that seems to have counterbalanced
the increase in computing power (at least since 2.95, and possibly even
for earlier releases).

At the same time, people are getting new machines and expect their 
programs to compile faster... and that is not to mention that the "average 
source code" (to be defined by someone ;-) ) is also probably growing
in size and complexity...

Now, I agree that whatever is done has to be done in the proper 
way, so that it is maintainable and reliable (that is the first 
concern) AND so that the speed improvement is there for good, or at least
for an amount of time much larger than the time spent on development/
debugging/maintenance.

We should see any speed improvement as an opportunity to add 
more functionality to the compiler without changing much the 
increase in speed the user expects to see. Even though, for the time 
being (and given the current state of gcc compared to the competition),
it looks like a lot of people just want to see the compiler go faster...


--------------------------------------------------------------------
Theodore Papadopoulo
Email: Theodore.Papadopoulo@sophia.inria.fr Tel: (33) 04 92 38 76 01
--------------------------------------------------------------------


