This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: Performance regression testing?


On Mon, Nov 28, 2005 at 04:38:58PM -0800, Mark Mitchell wrote:
> Clearly, performance testing is harder than correctness testing;
> correctness is binary, while performance is a continuum.  Machine load
> affects performance numbers.  It's reasonable to strive for no
> correctness regressions, but introducing new optimizations is often
> (always?) going to cause some code to perform worse.  If an optimization
> was unsafe, then correctness concerns may require that we generate
> inferior code.  So, it's a hard problem.

It would be possible to detect a performance regression after the fact,
but soon enough to consider reverting the offending patch.  For example,
given multiple machines doing SPEC benchmark runs every night, an alarm
could be raised whenever a significant performance regression is detected.
To guard against noise from machine hiccups, two different machines would
have to report the regression before the alarm fires.  But the big problem
is the non-freeness of SPEC; ideally there would be a benchmark that ...

... everyone can download and run
... is reasonably fast
... is non-trivial
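The two-machine alarm rule could be sketched roughly as below.  This is a
hypothetical illustration, not an existing tool; the 3% threshold, the
machine names, and the score format (SPEC-style ratios, higher is better)
are all invented for the example.

```python
# Hypothetical sketch of the two-machine alarm logic described above.
# The 3% threshold and machine names are invented for illustration.

THRESHOLD = 0.03  # flag runs more than 3% slower than baseline


def regressed(baseline, tonight, threshold=THRESHOLD):
    """True if tonight's score is a significant regression vs. baseline.

    Scores are SPEC-style ratios, where higher is better.
    """
    return tonight < baseline * (1.0 - threshold)


def raise_alarm(results):
    """results maps machine name -> (baseline score, tonight's score).

    The alarm fires only when at least two machines agree, which
    guards against noise from a single machine's hiccup.
    """
    agreeing = [m for m, (base, now) in results.items()
                if regressed(base, now)]
    return len(agreeing) >= 2


# One machine hiccups: no alarm.
print(raise_alarm({"a": (100.0, 92.0), "b": (100.0, 99.5)}))  # False
# Two machines agree: alarm.
print(raise_alarm({"a": (100.0, 92.0), "b": (100.0, 95.0)}))  # True
```

A real setup would pull nightly scores out of the benchmark logs and
mail the list when raise_alarm() returns True.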

> As a strawman, perhaps we could add a small integer program (bzip?) and
> a small floating-point program to the testsuite, and have DejaGNU print
> out the number of iterations of each that run in 10 seconds.

Would that really catch much?
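For what it's worth, the iterations-in-10-seconds harness from the
strawman could look something like the sketch below.  The bench()
workload here is a trivial placeholder, not bzip or any real testsuite
program; a real harness would invoke the compiled test program instead.

```python
# Hypothetical harness for the strawman above: run a workload
# repeatedly and report how many complete iterations fit in a
# fixed time budget.
import time


def iterations_in(budget_seconds, workload):
    """Count complete runs of workload() within budget_seconds."""
    count = 0
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:
        workload()
        count += 1
    return count


def bench():
    # Placeholder workload; a real harness would exec the compiled
    # integer or floating-point test program here.
    sum(i * i for i in range(10_000))


print(iterations_in(10.0, bench))
```

The iteration count is itself noisy under machine load, which is
presumably part of why one might doubt it would catch much.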

