This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: GCC Benchmarks (coybench), AMD64 and i686, 14 August 2004


Joe Buck wrote:
There's an issue here.  I hesitate to call it "ethics", but it is
borderline.

The issue is whether the compiler should specifically detect certain
transformation opportunities in benchmarks that are unlikely to exist in
real code, to get an artificially high score on that benchmark that does
not reflect what the user can expect on code that the compiler developers
have not seen before.

A classic example was the Whetstone benchmark, where many compiler
developers started specifically recognizing a particular expression
involving transcendental functions and applying a mathematical identity to
speed it up.  This particular transformation is useless for anything
except speeding up Whetstone, but it slows down the compiler a bit for all
programs.

A good point. I wrote my first benchmark article for Micro Cornucopia back in 1988, and ran into exactly the problem above. When I wrote a review of Fortran compilers for Computer Language in about '90, we had trouble with compilers built for specific benchmarks, and vendors who complained loudly if we did not use the specific benchmarks they were best at optimizing. And today we have chicanery involving drivers for various video cards -- ugh.


To combat this, I use non-standard benchmarks, often reducing larger "real world" programs to find something easily understandable yet complex enough to stress compilers' code generation.

In this case, a transformation could be added as part of -ffast-math that
would specifically recognize

inline static double dv(double x)
{
    return 2.0 * sin((x < PI2) ? x : PI2) * cos((x < PI2) ? x : PI2);
}

and transform it into

inline static double dv(double x)
{
    return x >= PI2 ? 0.0 : sin(2.0 * x);
}

just to get a high score on mole.  But that strikes me as close to
immoral; any transformation should fall out of some generally useful
transformation sequence.

Consider that my "mole" benchmark is *not* a well-known benchmark program, and I published it *after* the release of the Intel compiler I'm using. I don't see how they could have anticipated my benchmark, unless another, better-known benchmark includes a similar function.


While I know Intel has taken some personal interest in my benchmarks, I sincerely doubt they have had the time or desire to make optimizations specific to my rather eclectic and obscure set of tests. I'm certain they spend much more time trying to co-opt SPEC, for example.

A generally useful sequence might instead begin by hoisting the common subexpression:

inline static double dv(double x)
{
    double arg = x < PI2 ? x : PI2;
    return 2.0 * sin(arg) * cos(arg);
}

This might then be turned into

inline static double dv(double x)
{
    if (x < PI2)
        return 2.0 * sin(x) * cos(x);
    else
        return 0.0;
}

and if ICC does it this way, I would not consider this "cheating" at all.
That could be justified.  The trig identity that 2*sin(x)*cos(x) is
sin(2*x) is more iffy; we'd waste time trying to apply such
transformations to every tree, and it's pretty much exactly what caused
such controversy when everyone did it to Whetstone.

I agree with your analysis; a compiler should not need to "know" trigonometric identities. I do find it interesting that Intel's compiler can make some very interesting "optimizations" of this sort, rather quickly.


Also, how deep should such an analysis go? Would the compiler need to recognize various transformations of the identity in order to replace it? The number of special cases is rather daunting; I think it better to rely on programmers for reducing such expressions.

..Scott

--
Scott Robert Ladd
Coyote Gulch Productions (http://www.coyotegulch.com)
Software Invention for High-Performance Computing

