Re: What is acceptable for -ffast-math? A numerical viewpoint


<<with a "clear set of criteria", whatever that means - and there will
always be someone who shows up with something that will be badly broken
by the optimization.>>

The fact that some particular program breaks with -ffast-math is not
by itself decisive. You can't eliminate surprises completely if you
start playing this game. If someone comes up with a response like

"Hey, I turned on optimization xxx in my program and the results are wrong"

then that conveys no information without further analysis. What we need in
that case is to understand *why* the results are wrong. It might, for
example, be that the computation in question is unstable, so that any
change, even a legitimate one, could discombobulate the results.

An example. I would definitely think that -ffast-math should allow extra
precision at any point (you can also argue this should be on by default;
it is certainly allowed to be on by default in Ada, and I believe that in
at least some cases it is on by default in GNU C, but it is definitely
NOT on by default in many other compilers, e.g. IBM avoids this on POWER
architectures unless a special switch is set).
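
Here is a minimal sketch of what "extra precision at any point" can mean in
practice (this example is mine, not part of the original argument): on x87
hardware the recomputed quotient in the comparison below may be carried in
80-bit registers while the stored double has been rounded to 64 bits, so
the two need not compare equal. Whether this actually happens depends on
the target and on flags such as -ffloat-store or -mfpmath=sse.

#include <stdio.h>

/* x and y are globals so the division cannot be folded at compile time. */
double x = 1.0, y = 3.0;

int main(void)
{
    double q = x / y;        /* rounded to 64-bit double when stored */
    if (q == x / y)          /* recomputed value may carry extra precision */
        printf("comparison held\n");
    else
        printf("extra precision changed the comparison\n");
    return 0;
}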

But there are certainly algorithms which blow up with extra precision.
A simple example of extra precision causing a loss of performance is an
iteration which you know from your analysis is stable to equality, but
now the equality test is between higher-precision numbers; if your
analysis did not cover this case, the computation may no longer converge,
or may take much longer to converge.
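
As a concrete sketch of that kind of loop (my illustration, assuming C and
an exact-equality stopping test), here is a Newton/Heron iteration for
sqrt that an analysis might show terminates in strict IEEE double; if
intermediates are kept in wider registers, the equality is between wider
values and that analysis no longer applies. The iteration cap is a safety
net for exactly that possibility.

#include <stdio.h>

double heron_sqrt(double a)
{
    double x = a, prev;
    int iters = 0;
    do {
        prev = x;
        x = 0.5 * (x + a / x);              /* Newton/Heron step for sqrt(a) */
    } while (x != prev && ++iters < 100);   /* "stable to equality" in one precision */
    return x;
}

int main(void)
{
    printf("sqrt(2) ~= %.17g\n", heron_sqrt(2.0));
    return 0;
}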

A specific example is using Simpson's rule for integration. Especially
with truncating arithmetic, you get a behavior where the result converges
as you reduce the interval, then starts to diverge. Changing the precision
can greatly change this curve of convergence.
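
A minimal sketch of that experiment (mine, not from the original mail; the
integrand exp(-x*x) is an arbitrary choice): composite Simpson's rule over
[0,1], halving the interval each time. Printing the results shows the
answer settling down as n grows and then drifting once rounding error in
the accumulation dominates; the precision used for the sum moves the point
where that turnover happens.

#include <math.h>
#include <stdio.h>

static double f(double x) { return exp(-x * x); }   /* arbitrary example integrand */

/* Composite Simpson's rule on [a,b] with n subintervals (n must be even). */
double simpson(double a, double b, int n)
{
    double h = (b - a) / n;
    double sum = f(a) + f(b);
    for (int i = 1; i < n; i++)
        sum += f(a + i * h) * ((i % 2) ? 4.0 : 2.0);
    return sum * h / 3.0;
}

int main(void)
{
    for (int n = 2; n <= (1 << 20); n *= 2)
        printf("n = %7d  integral ~= %.17g\n", n, simpson(0.0, 1.0, n));
    return 0;
}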

So the input from the "group 2" folks, who program in floating-point but
don't know or care enough to do careful analyses, or simply don't have the
time, must be considered carefully. We are talking about optimizations
that definitely have the potential for upsetting results, so the fact that
this indeed happens is not by itself an absolute argument against the
optimization.

