This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: Draft "Unsafe fp optimizations" project description.
- To: gcc at gcc dot gnu dot org, toon at moene dot indiv dot nluug dot nl
- Subject: Re: Draft "Unsafe fp optimizations" project description.
- From: dewar at gnat dot com
- Date: Sat, 4 Aug 2001 09:45:35 -0400 (EDT)
- Cc: dewar at gnat dot com, lucier at math dot purdue dot edu
Can't we have this in plain text? HTML is such a pain in the neck!
"does not introduce surprises"
is stronger than
"all of its numerical effects are well-documented"
And although subjective, I think it is an important criterion that must
be considered. Documenting surprises does not make them unsurprising :-)
So, for example, giving a result of 0.0 for all divisions meets all
three of your criteria, but is obviously unacceptable. Why? Because the
result would indeed be surprising.
There are also some claims that are quite wrong; I will try to go
over this document further.
First, be careful not to imply that the only issue is being a few
ULP off; we know that is not always the case.
Second, your comment about rounding modes is wrong. I can easily
see applications which work fine with rounding, even if some liberties
are taken with some rearrangements, but where going to truncating
arithmetic would introduce biases that would make the results
meaningless. It would be interesting to see whether the well-known
case of computing the orbit of Pluto (where the difference between
biased and unbiased rounding made a difference -- which is much more
surprising) would have been robust with respect to some of these transformations.
Rearrangements with no effects should not be mentioned or discussed; they
have nothing to do with the issue at hand. Such "rearrangements" are simply
possible choices of a code generator, and a decent code generator should
choose the most efficient one of them all the time.
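A + B => B + A is one such no-effect rearrangement: IEEE addition is commutative, bit for bit, including the sign of a zero result. A quick Python check (sampled values are my own choosing):

```python
import math
import random

random.seed(0)
for _ in range(10000):
    a = random.uniform(-1e6, 1e6)
    b = random.uniform(-1e6, 1e6)
    s1, s2 = a + b, b + a
    # IEEE addition is commutative: same value, same sign of zero.
    assert s1 == s2
    assert math.copysign(1.0, s1) == math.copysign(1.0, s2)
```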
However, -A + B => B - A is not such a rearrangement, since it can result
in differences in the sign of 0.0, which can be highly significant in some applications.
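The sign of zero is invisible to == but visible to plenty of downstream operations. A minimal Python sketch (illustrative values of my own choosing) of how two "equal" zeros can still behave differently:

```python
import math

pos = 0.0 - 0.0      # IEEE: the difference of equal values is +0.0
neg = -(0.0 - 0.0)   # negation flips the sign bit, giving -0.0

assert pos == neg                       # == cannot see the difference...
assert math.copysign(1.0, pos) == 1.0   # ...but the sign bit differs
assert math.copysign(1.0, neg) == -1.0

# and the sign of zero is highly significant to some operations:
assert math.atan2(pos, -1.0) == math.pi
assert math.atan2(neg, -1.0) == -math.pi
```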
A better example is /2.0 => * 0.5
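The reason /2.0 => * 0.5 is safe is that 0.5 is an exact power of two: multiplying by it changes only the exponent, never the significand, so both forms round identically for every finite double. A quick sanity check of my own:

```python
import random

random.seed(42)
for _ in range(10000):
    x = random.uniform(-1e300, 1e300)
    # Both expressions round the exact value x/2 the same way, so they
    # agree bit for bit; this sampling only spot-checks that identity.
    assert x / 2.0 == x * 0.5
```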
Rearrangements whose only effect is a loss of accuracy
An example is cases of multiplication by the reciprocal instead of
division in cases where overflow is not possible.
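The accuracy loss is real even when overflow is not: the reciprocal is rounded once and the multiply rounds again, so the rewritten form can be one ulp off even when the original division is exact. A small Python sketch of my own:

```python
# x / y rewritten as x * (1.0 / y): two roundings instead of one.
# n / n is exactly 1.0, but n * (1.0 / n) need not be.
bad = [n for n in range(1, 100) if (n * (1.0 / n)) != 1.0]
assert bad, "some n should show the one-ulp error (e.g. 49 is well known)"

# Still, the damage is bounded: at most about one ulp near 1.0.
assert all(abs(n * (1.0 / n) - 1.0) <= 2.0 ** -52 for n in range(1, 100))
```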
Another good example is a*a*a*a => (a*a)**2, which can lose accuracy, but
that's the only downside.
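Both orderings stay within a couple of ulps of the true value, but they need not agree bit for bit, and which one is closer varies. A sketch of my own comparing each against exact rational arithmetic:

```python
from fractions import Fraction

def ulps_off(approx, a):
    """Error of approx relative to the exact a**4, in units of 2**-52."""
    exact = Fraction(a) ** 4
    return abs(Fraction(approx) - exact) / (exact * Fraction(2) ** -52)

for i in range(1, 50):
    a = 1.0 + i / 7.0
    serial = ((a * a) * a) * a   # three rounded multiplies
    sq = a * a
    squared = sq * sq            # two rounded multiplies
    # Each rounding contributes at most half an ulp of relative error,
    # so both forms stay within a couple of ulps of the exact power.
    assert ulps_off(serial, a) < 4
    assert ulps_off(squared, a) < 4
```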
> Example: A/B/C -> A/(B*C). Will overflow for about half of the possible choices for B and C for
> which the original didn't overflow.
That's quite wrong; you have to be close to max_real, and half is way, way
overstating the case.
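For B*C to overflow, the product has to exceed max_real (about 1.8e308 for IEEE doubles), which takes genuinely huge operands. Here is one concrete case, with values of my own choosing, where the rewrite does change the answer:

```python
import math

A, B, C = 1.0, 1e200, 1e120

original = A / B / C     # about 1e-320: tiny but representable (subnormal)
denom = B * C            # 1e320 exceeds max_real, overflowing to +inf
rewritten = A / denom

assert original > 0.0          # the original form survives
assert math.isinf(denom)       # but B*C overflows...
assert rewritten == 0.0        # ...and the rewritten form returns 0.0
```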
One thing about this *document* is that even if it is going to be used
by people who are not floating-point experts, it had better be written
or at least thoroughly reviewed by someone who is :-)
What I wrote above, by the way, is definitely NOT a thorough review, just
some observations from a quick scan through HTML junk!