This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: gcc 3.5 integration branch proposal


Nick Burrett wrote:

>> We should probably require developers to use slow machines with small
>> rams.  Semi ;-)
>
> There's no harm in that. I have a port of GCC 3.3.3 running on a 200MHz
> StrongARM that takes over 6 minutes to compile the following:

I would distinguish between speed and space here. It really is quite reasonable for gcc to require more memory as it goes along, since one can reasonably assume that modern developers' machines have ample memory. That's a technology change that is reasonable to take advantage of.

The original Realia COBOL ran on a 4.77MHz PC1 and compiled about
10,000 lines/minute on that machine in 640K bytes (the compiler was
about 100K bytes, with the remaining 540K used for file caching). That
was a full-featured COBOL compiler that could compile programs of
any size at all with no non-linear behavior. The code it generated was
good but not great (although at the time, in the '80s, it was the best
compiler for any language available on the PC in terms of generated code).

These days, the raw speed of the compiler is still significant,
because people compile HUGE COBOL programs, and in any case rapid
response when compiling a big program is always welcome. In the
case of the compiler itself, written in COBOL and about 100,000 lines,
it was always most welcome on a fast 386 to be able to bootstrap
in a couple of minutes. I don't know how fast that compiler is on
a modern PC; I assume a bootstrap takes seconds these days.

But the fact that the compiler can operate in very small memory is
simply not interesting today.

So in the quote above, I would agree with the "slow machines", but
not the "small RAM". I think it is just fine to trade off larger memory
requirements against better code. Given a world in which people are used
to the idea of a PC where 128 megabytes is marginal, using substantially
more memory is not an issue.

But trading off compilation *speed* against better code quality is
more dubious. Yes, machines get faster, but that's also an argument
that says that getting a few percent more efficiency from generated
code is no longer so critical.

You are far more likely to have someone compiling a million-line
program where a 20% performance difference is not an issue, but
compilation times of hours are a real issue that impedes development.

It would be interesting to have a set of benchmarks showing the
compile-time performance of GCC over a period of years, and comparing
that with other compilers.
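
Something like the following would collect those numbers for whatever
GCC releases happen to be installed side by side (a minimal sketch in
Python; the gcc-X.Y driver names, the test.c input, and the -O2 flag
are assumptions, not part of any existing harness):

    #!/usr/bin/env python3
    # Time a plain -O2 compile of one representative source file with
    # several installed GCC drivers, taking the best of a few runs.
    import subprocess
    import time

    COMPILERS = ["gcc-2.95", "gcc-3.3", "gcc-3.4"]  # hypothetical installed drivers
    SOURCE = "test.c"                               # representative translation unit
    RUNS = 3                                        # best-of-N to reduce timing noise

    for cc in COMPILERS:
        best = None
        for _ in range(RUNS):
            start = time.time()
            subprocess.run([cc, "-O2", "-c", SOURCE, "-o", "/dev/null"],
                           check=True)
            elapsed = time.time() - start
            best = elapsed if best is None else min(best, elapsed)
        print("%s: %.2f s (best of %d)" % (cc, best, RUNS))

Running the same script against each release in turn would give exactly
the kind of year-over-year curve described above.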

Robert Dewar


