This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: Of Bounties and Mercenaries


Compiler speed *should not* be an issue. If gcc 3.5 is ten times
slower compiling programs at -O2, that's a good thing: it means
that it does more to produce better code. A program is compiled
once but executed 100000 times. So compile time is *not* an issue
and it would be wrong if gcc developers dropped features because
they slow compile speed.

This is an extreme position which is untenable for most application developers.

Unfortunately this doesn't work. A program that works at -O0 may
reveal bugs at -O2. If you could guarantee that ``if something is
ok at -O0 it is also ok at -O2'' that'd be a real win.

That would be a *far* more severe constraint on generated code. Indeed, if by "ok" you mean that the code happens to "work" on at least some test cases, then the constraint you give above is likely total, i.e. no optimization could be done at all. Most likely you are not very familiar with compiler technology or with machine code, or you would not make such a suggestion :-)

Apart from that, I think it's up to the application programmer
to write code that does not require -O2, to test it at -O0,
and to run valgrind on it.

Serious testing must be done under the conditions of delivery, so the idea of doing all testing at -O0 and delivering at -O2 is flawed. Program correctness is of course undecidable, so the idea that tools like valgrind can guarantee correctness is naive.

gcc developers *should* completely ignore comments about the speed
of gcc. Do you get paid to improve the speed? No. Application
programmers should write good code.

Well, if this could be achieved merely by your exhortation, why not have these application programmers write optimal assembly code in the first place, thus removing dependencies on the compiler entirely :-) In fact, writing code that is 100% correct according to the language standard, and thus impervious to optimization effects, is very difficult, and no tool can check that you have achieved this goal.

If a big corp has incompetent
people and expects from gcc to get faster at -O2, they can hire
a filthy bounty hunter ;)

Well there is no danger of the gcc development community paying too much attention to this extreme advice :-)

As always the balance between code quality and compile speed is
a trade off. And not an easy one, since requirements definitely
differ. But if you are working on applications with millions
of lines of code where maximum performance is required, then
you ask for best possible code and best possible compile speed.
Of course you can't have both, so what you really ask for is
a reasonable trade off.

Actually in my experience, many/most optimizations are disappointing
and do not generate the improvements that are hoped for. Even if some
benchmarks show substantial improvement, the effect on real large
application programs is more limited.

What this says is that if you install an optimization that takes
significant compile time, then you need to be sure it is really
worthwhile.

It is true that machines are getting faster, but this is something
that weakens the need for both high performance and high compile
speed, so it does not necessarily change the balance that much.
On the other hand, compile speed is measured in human time:
there is a big difference between a system that requires 10
minutes to recompile and one that takes 10 hours, or even 10
days. It changes the entire approach to working on such a
system.

A factor of 10 degradation in compile time would be entirely
unacceptable to the majority of the community. That's clear
from past discussions. Some degradation is acceptable if it
really pays off.

I find the balance of gcc fairly reasonable. One indication of
this is that a few of our customers complain about compile
time, and a few complain about the performance of generated
code, but for the most part the balance meets the needs of our
customers (our = Ada Core Technologies here). On the other
hand, we have frequently heard from Apple that the balance is
not so good for their customer base, and that they really need
better compile time.

I do suspect that gcc compilation could be sped up without
noticeable loss of performance. Indeed we have found a few places
where relatively simple fixes to gcc speed up compilation hugely
for certain selected programs, with little or no loss in code
performance, and we should all be looking for such opportunities.
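One concrete way to hunt for such opportunities is GCC's own per-pass timing report via -ftime-report (a long-standing GCC option; the file name below is a stand-in for a real slow-compiling translation unit):

```shell
# Create a stand-in translation unit (big_unit.c is hypothetical;
# substitute a file that is actually slow to compile).
printf 'int main(void) { return 0; }\n' > big_unit.c

# Ask GCC where compile time is going, pass by pass. The report
# goes to stderr; its exact layout varies between GCC releases.
gcc -O2 -ftime-report -c big_unit.c -o big_unit.o 2> time_report.txt
cat time_report.txt
```

Comparing such reports across representative source files quickly shows which passes dominate, which is where the "relatively simple fixes" mentioned above tend to hide.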

Part of the trouble is that it's more interesting to work on
new algorithms and optimizations, than to work on profiling
and tuning old ones :-)

