This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: cc1 hog


I have been meaning to bring this up for a long time.

The current testing framework is good for catching regressions that cause
miscompilations and compiler crashes, but it does not test for regressions
in code quality.

I have seen countless examples over the years where a certain optimization
stopped being effective because some change more or less completely disabled
it.  I actually believe GCC would become much better if we spent a larger
fraction of our time studying a set of small code samples to make sure they
produce reasonable code.  But perhaps we could make something semi-automatic?

I don't think trying to set tight time limits for the c-torture/execute
tests would work in practice.  That would be unmanageable.  And as Jeff
points out, most tests are tiny and take zero time.

Instead we could introduce a new test category, c-torture/speed.  Either we
could maintain a database of timing results, or perhaps run these tests
using two compilers, one `old' compiler and one `new' one.  The test
framework would flag whenever the new compiler generates worse code than
the old.  Simple and maintenance-free!
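
To make this concrete, a c-torture/speed test body might look roughly like
the sketch below.  This is purely hypothetical (nothing like it is in the
tree); the kernel and iteration counts are placeholders.  The framework
would build it with both compilers, run each binary, and flag the test when
the new compiler's number is worse:

    /* Hypothetical c-torture/speed test body.  The framework compiles
       this with the `old' and `new' compilers and compares the times
       each binary prints.  */
    #include <stdio.h>
    #include <time.h>

    static volatile long sink;  /* keep the kernel from being optimized away */

    /* Placeholder kernel -- a real test would exercise one specific
       optimization (strength reduction, loop unrolling, ...).  */
    static void
    speed_test (long n)
    {
      long i, acc = 0;
      for (i = 0; i < n; i++)
        acc += i * 7;
      sink = acc;
    }

    int
    main (void)
    {
      enum { ITERATIONS = 1000 };
      clock_t start = clock ();
      long i;

      for (i = 0; i < ITERATIONS; i++)
        speed_test (100000);

      printf ("%f\n", (double) (clock () - start) / CLOCKS_PER_SEC);
      return 0;
    }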

The only problem with the latter approach would be getting accurate-enough
timing.  Some CPUs have great features for cycle-exact timing (alpha,
perhaps pentium, and sparcv9, but only under Linux since slowaris hides the
register), while on other systems we would have to stick to `getrusage' or
`clock'.
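
For what it's worth, here is a rough sketch of the two kinds of timing
sources I mean.  The cycle-counter read is pentium-specific (rdtsc);
`getrusage' is the portable fallback, just with much coarser resolution:

    /* Sketch of the two timing options, not actual testsuite code.  */
    #include <sys/time.h>
    #include <sys/resource.h>

    /* Cycle-exact timing on pentium and later: read the time-stamp counter.  */
    static inline unsigned long long
    read_cycles (void)
    {
      unsigned int lo, hi;
      __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
      return ((unsigned long long) hi << 32) | lo;
    }

    /* Portable fallback: user CPU time, typically 1/100 s resolution.  */
    static double
    user_seconds (void)
    {
      struct rusage ru;
      getrusage (RUSAGE_SELF, &ru);
      return ru.ru_utime.tv_sec + ru.ru_utime.tv_usec * 1e-6;
    }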

Torbjorn

