This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: Benchmarking theory
- To: "Joseph S. Myers" <jsm28 at cam dot ac dot uk>
- Subject: Re: Benchmarking theory
- From: Toon Moene <toon at moene dot indiv dot nluug dot nl>
- Date: Sun, 27 May 2001 01:01:38 +0200
- CC: gcc at gcc dot gnu dot org
- Organization: Moene Computational Physics, Maartensdijk, The Netherlands
- References: <Pine.LNX.firstname.lastname@example.org>
"Joseph S. Myers" wrote:
> Benchmark results seem to get posted to the gcc list as single figures for
> a test and old and new compilers, with assertions that results seem
> significant or are consistent between runs. Why are benchmarks done on
> this basis rather than using actual statistical significance tests?
Perhaps because we haven't included specific benchmarking tests in our
release criteria?
> Could someone point me to appropriate references on the theory of
> benchmarking that explain this?
Tsk. My theory of benchmarking is:
1. Take your own application.
2. Construct a sample self-contained application out of it.
3. Ship it to prospective hardware sellers.
4. Rank results.
OK - simplistic, but it works.
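For step 4, the kind of significance test Joseph asks about can be sketched in a few lines. This is only an illustration, not anything the GCC project actually runs: the timings are made-up numbers, and `welch_t` is a hypothetical helper implementing Welch's t statistic with Python's standard `statistics` module.

```python
# Sketch: decide whether two sets of benchmark runs (old vs. new
# compiler) differ significantly, rather than eyeballing single
# figures.  Timings below are invented for illustration only.
from statistics import mean, stdev

old_runs = [12.1, 12.3, 12.0, 12.4, 12.2]  # seconds, old compiler (hypothetical)
new_runs = [11.6, 11.8, 11.5, 11.9, 11.7]  # seconds, new compiler (hypothetical)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(old_runs, new_runs)
# For samples this small, |t| above roughly 2.1 indicates a
# difference at about the 5% level; a proper test would consult
# the t distribution with the Welch-Satterthwaite df.
print(f"t = {t:.2f}, looks significant: {abs(t) > 2.1}")
```

The point is simply that with a handful of runs per compiler, a standard two-sample test replaces the "results seem consistent between runs" assertion with a number.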
Toon Moene - mailto:email@example.com - phoneto: +31 346 214290
Saturnushof 14, 3738 XG Maartensdijk, The Netherlands
Maintainer, GNU Fortran 77: http://gcc.gnu.org/onlinedocs/g77_news.html
Join GNU Fortran 95: http://g95.sourceforge.net/ (under construction)