This is the mail archive of the gcc@gcc.gnu.org
mailing list for the GCC project.
RE: Benchmarking theory
- To: dek_ml at konerding dot com, jsm28 at cam dot ac dot uk, znmeb at aracnet dot com
- Subject: RE: Benchmarking theory
- From: dewar at gnat dot com
- Date: Sun, 27 May 2001 23:21:29 -0400 (EDT)
- Cc: gcc at gcc dot gnu dot org, toon at moene dot indiv dot nluug dot nl
<<I'm not sure I want to jump into *this* thread, although I did reply to the
post with actual numbers. As I said there, for all practical purposes, on
that benchmark set, the two compilers have identical performance. That said,
there is plenty of theory behind benchmarking, both mathematical and
computer science. I personally think compiler writers waste a lot of effort
on fancy optimizers at the cost of hideously complex compilers and time to
market (if that concept means anything with free software :-). I didn't see
any performance payoff in the data presented here.>>
Well, that means you don't have critical applications. We have several
large customers who are right on the edge performance-wise, and the project
(more specifically, the use of gcc on their project) rises or falls with the
efficiency of the compiler. And don't tell them to get new hardware: in one
case it is an aerospace app where the hardware was set long ago, and in
another case they have a giant network of existing non-upgradable machines.
Now of course everyone would agree that spending time on fancy optimizations
that do nothing for the performance is not a good idea, and indeed I would
agree that compiler writers often waste time on meaningless optimizations.
But before you make that conclusion, you need tests on a wide variety of
benchmarks on a wide variety of architectures.
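As a side note on the methodology the thread keeps circling: a common way to summarize "compiler A vs compiler B" over a whole benchmark suite is the geometric mean of per-benchmark runtime ratios, which weights each benchmark equally regardless of its absolute runtime. A minimal sketch, with made-up benchmark names and timings (none of these numbers come from the messages above):

```python
import math

def geometric_mean_ratio(times_a, times_b):
    """Geometric mean of times_b / times_a over the benchmarks in times_a.

    Values below 1.0 favor compiler B (its binaries ran faster);
    values above 1.0 favor compiler A.
    """
    ratios = [times_b[name] / times_a[name] for name in times_a]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical per-benchmark wall-clock seconds for two compilers.
compiler_a = {"fft": 2.0, "sieve": 1.5, "matmul": 4.0}
compiler_b = {"fft": 2.1, "sieve": 1.4, "matmul": 4.1}

ratio = geometric_mean_ratio(compiler_a, compiler_b)
# A ratio close to 1.0 means the two compilers are, for practical
# purposes, indistinguishable on this suite.
print(f"geometric mean of B/A runtimes: {ratio:.3f}")
```

On the toy numbers above the ratio comes out within a percent of 1.0, i.e. the kind of "identical for all practical purposes" result described earlier in the thread; a real comparison would also need repeated runs per benchmark, since single timings are noisy.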
Robert Dewar (we here = Ada Core Technologies)