This is the mail archive of the
`gcc@gcc.gnu.org`
mailing list for the GCC project.

# Re: Draft "Unsafe fp optimizations" project description.

*To*: dewar at gnat dot com, toon at moene dot indiv dot nluug dot nl
*Subject*: Re: Draft "Unsafe fp optimizations" project description.
*From*: dewar at gnat dot com
*Date*: Sun, 5 Aug 2001 10:36:40 -0400 (EDT)
*Cc*: gcc at gcc dot gnu dot org, ross dot s at ihug dot co dot nz

<<However, if we just look at B*C then this product will overflow or
underflow when the sum of the exponents of B and C is higher than +N
or lower than -N. If we take B and C randomly from the FP-values-sack,
that will be true for half of the exponent values. Because there are
the same number of fp values for every exponent value, it follows that
the conclusion extends to "half of the fp values".
>>
Ah, OK, I see what you are driving at now, and yes, that is true. I thought
you were talking about the proportion of cases in which the result would
be different. Remember that in many cases, if B*C underflows, the final
result is going to be zero anyway, and if B*C overflows to infinity,
that is the result you get in any case.

Of course, in practice the distribution of floating-point numbers seen
in real programs is nothing like uniform.
In addition, on many architectures, and in particular on the x86, you
are using extra precision and range for intermediate computations, and
in this case the number of overflow cases is drastically reduced.

So I think the characterization in your original note, that half the
values cause trouble, is far too pessimistic in practice (and is
enough to scare even the most inexpert programmer excessively :-) :-)