This is the mail archive of the mailing list for the GCC project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: GCC beaten by ICC in stupid trig test!

On Thu, 25 Mar 2004, Robert Dewar wrote:
> > int foo(double a, double b, double c)
> > {
> >   return (a+b)+c == a+(b+c);
> > }
> If it is really true that icc treats this as true at compile
> time in default mode, that's simply appalling in my view, and
> not something gcc should copy!

It really is true.  icc performs two additions and then checks the
result to identify NaNs.  GCC, even with -ffast-math, performs four
additions, and then the comparison.

So my next experiment was to search the internet to see if anyone had
ever complained about the floating point accuracy of Intel's icc
compilers.  After an hour or two, I was unable to find a single report;
SPECcpu2000's fp self-tests all pass, POV-ray images are bit for bit
identical, etc...

However, what I did find was review after review, and benchmark after
benchmark after benchmark, showing that icc consistently outperformed
GCC at floating point math and trigonometry.  More annoying, personally,
is that most reviewers/comparisons never even tried -ffast-math until
it was pointed out...

A typical example of what can be found is:

So although we can all construct non-portable floating point cases where,
with a particular floating point representation on a particular target,
reassociation returns a different result, pragmatically these effects have
almost no impact in the wild.  Changing from double to float, or double to
long double, moving from VAX to Alpha, using IA-32, or even causing an
extra register spill can cause numeric codes that rely on reassociation
order differences to fail.  Any code that depends on the result of "foo"
above is already poorly written.

Many "serious" numerical codes make heavy use of matrix algebra via
libraries such as BLAS, LINPACK, ATLAS, EISPACK, etc., but the fact that
these libraries don't require/specify the order in which inner terms must
be multiplied and added would seem to support the view that, in real life,
reassociation is very well tolerated in numerical codes.  Indeed, one of
the major reasons for using BLAS in numerical codes is to take advantage
of hard-coded reassociation, with kernels tuned for different CPUs/cache
sizes performing multiplications and additions in dramatically different
orders.

I've heard it argued that people who are serious about floating point
don't use -ffast-math.  I consider myself serious, and make a very nice
living from selling software to solve finite-difference Poisson-Boltzmann
electrostatic calculations on regular grids, and molecular minimizations
using quasi-Newton numerical optimizers.  Toon does numerical weather
forecasting, and he seems happy with -ffast-math.  Laurent performs large
scale Monte-Carlo simulations, and he also seems happy with it.

Another common myth is that anyone serious about floating point doesn't
use the IA-32 architecture for numerical calculations, due to the excess
precision in floating point calculations.  But then it's a complete
mystery why so many of the top500 supercomputers are now Intel/AMD based.

Whilst I don't deny that there is a tiny population of GCC users whose
results depend upon the specific representation of their floating point
formats, for whom "discretization" to a fixed number of bits is a
requirement rather than an unfortunate side-effect of current hardware
limitations, it does seem very unfair to handicap GCC for the vast
majority of its users.

I completely disagree that reassociation is "not something gcc should
copy".  But perhaps one could argue that the reason GCC shouldn't ever
perform reassociation, even with -ffast-math, whilst icc performs it by
default, is that there's no overlap between our intended user bases,
or that Intel's superior performance is not something GCC's users want?

