
Re: Fourth Draft "Unsafe fp optimizations" project description.


And Stephen L Moshier writes:
 - 
 - IEEE tries to improve on that situation, and the point was that in
 - flush-to-zero arithmetic comparison of very small numbers is somewhat
 - problematic.

The real problem comes from two different views of floating-
point.  Some people want a floating point number to represent 
a small range of numbers.  For example, some people think that 
a number represents all those numbers within half an ulp 
around it (in round-to-nearest).  Treating floating-point 
numbers as implicit intervals makes equality fuzzy, among other 
things.  It would also make inequalities fuzzy, but they don't 
seem to mind.
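
For a taste of the trouble, here is a sketch with a made-up
roughly () helper and an arbitrary tolerance; fuzzy equality is
not even an equivalence relation, since it fails to be
transitive:

  #include <stdio.h>
  #include <math.h>

  /* Hypothetical "fuzzy" equality under the interval view.  */
  static int
  roughly (double a, double b)
  {
    return fabs (a - b) <= 1e-9;
  }

  int
  main (void)
  {
    double a = 0.0, b = 1e-9, c = 2e-9;
    /* a ~ b and b ~ c, yet a !~ c: prints 1 1 0.  */
    printf ("%d %d %d\n",
            roughly (a, b), roughly (b, c), roughly (a, c));
    return 0;
  }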

An IEEE floating-point number represents that particular number
and nothing else.  More specifically, any finite IEEE floating-
point number takes the form
	               n
	(-1)^s 2^E ---------
	            2^(p-1)
where s is a sign bit, E is a signed integer exponent, n is
an unsigned integer significand, and p is the maximum number 
of bits allowed in the significand.  Note that only p-1 bits 
are stored in the basic formats.  The sign is algebraic.
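
To make that form concrete, here is a sketch that pulls s, E,
and n out of a finite double, where p = 53; it assumes the
usual IEEE binary64 layout for double:

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  /* Decode a finite double into (-1)^s 2^E n/2^(p-1), p = 53.  */
  static void
  decode (double x)
  {
    uint64_t bits;
    memcpy (&bits, &x, sizeof bits);

    int s = bits >> 63;                         /* sign bit */
    int biased = (bits >> 52) & 0x7FF;          /* 11-bit biased exponent */
    uint64_t frac = bits & ((1ULL << 52) - 1);  /* the stored p-1 bits */

    /* Normal numbers carry an implicit leading bit; subnormals don't.  */
    uint64_t n = biased ? frac | (1ULL << 52) : frac;
    int E = biased ? biased - 1023 : -1022;

    printf ("%g = (-1)^%d 2^%d * %llu / 2^52\n",
            x, s, E, (unsigned long long) n);
  }

  int
  main (void)
  {
    decode (1.5);     /* (-1)^0 2^0 * 6755399441055744 / 2^52 */
    decode (-0.375);  /* (-1)^1 2^-2 * 6755399441055744 / 2^52 */
    return 0;
  }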

All operations governed by 754 return the number of that form
closest to the true result.  Closeness is determined by the
rounding mode; 99.99% of the time it's round-to-nearest.  In 
this regime, equality _is_ meaningful.  The only 'surprise'
is that -0.0 == 0.0, but that derives from the algebraic nature 
of the sign bit.
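
Both halves of that are easy to check:

  #include <stdio.h>
  #include <math.h>

  int
  main (void)
  {
    printf ("%d\n", -0.0 == 0.0);        /* 1: the zeros compare equal */
    printf ("%d\n", signbit (-0.0));     /* nonzero: the sign survives */
    printf ("%d\n", 0.5 + 0.25 == 0.75); /* 1: exact results stay exact */
    return 0;
  }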

Also, if you're using equality to avoid exceptional cases,
you will be sorely disappointed with flush-to-zero.  Unequal
entities may produce zero denominators, raising division-by-
zero and producing Inf.  So you can no longer trust
  z = Q / (x - y);
  if (isinf (z))   /* isinf() from <math.h>; catches +Inf and -Inf */
    report_error ("Equal quantities should never occur.");

Now you need something like the following:
  z = Q / (x - y);
  if (isinf (z))
    report_maybe_an_error
      ("Have a zero denominator.  Quantities may or may not be equal.");

Another reason people mistrust equality is that they are
often comparing with a decimal literal, and conversion to
binary is not exact.  In this case, one solution that would
preserve most users' expectations is to convert the binary
number to decimal using Gay's correctly rounded conversion
algorithm, then compare in decimal.  However, that can be
painfully slow.
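
The effect is easy to trip over:

  #include <stdio.h>

  int
  main (void)
  {
    /* Each literal rounds to binary on its own, and the sum
       rounds once more, so the familiar decimal identity fails.  */
    printf ("%d\n", 0.1 + 0.2 == 0.3);  /* 0 */

    /* The same literal also lands on different values at
       different precisions.  */
    float f = 0.1f;
    printf ("%d\n", f == 0.1);          /* 0: float 0.1 != double 0.1 */
    return 0;
  }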

I don't know of a reasonable solution here.  A compiler
warning will be ignored, as often the program will still
be correct.  Giving users more control over decimal<->binary
conversions won't help; users just want it to do the right
thing.  Decimal arithmetic in hardware _would_ help, but,
well, I'm not holding my breath.  An automatic translation
into an interval expression slightly scares me.  Ideas are 
welcome.

 - Whatever the IEEE 754 committee may say about it, however, the fact is
 - that vendors are going to continue to offer flush to zero machines
 - because there are good engineering reasons to do so.  

And those reasons are?  You're assuming common implementations 
of arithmetic.  There are other implementations where gradual 
underflow occurs naturally.  One requires two extra bits on
the exponent and a few tag bits; it may well be suitable for
vector registers.

 - It would be most helpful if the committee would seek to determine  
 - and codify the industry practice.

Flush to zero _will not_ be in 754r.  We have already made
that decision.  We will not be codifying rules that make 
programs more painful to write, slower, and less reliable.  
Gradual underflow keeps relative error constant around zero.  
Flush to zero makes it _grow_.  
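
Here is that difference in miniature, reusing the hypothetical
ftz () model from the earlier sketch:

  #include <stdio.h>
  #include <math.h>
  #include <float.h>

  /* Model of a flush-to-zero machine, as before.  */
  static double
  ftz (double v)
  {
    return (v != 0.0 && fabs (v) < DBL_MIN) ? 0.0 : v;
  }

  int
  main (void)
  {
    /* DBL_MIN/4 is a power of two, so the subnormal range holds
       it exactly.  Flushing it to zero loses everything.  */
    double t = DBL_MIN / 4;

    printf ("error with gradual underflow: %g\n",
            fabs (t - t) / t);        /* 0 */
    printf ("error with flush to zero:    %g\n",
            fabs (ftz (t) - t) / t);  /* 1, i.e. 100% */
    return 0;
  }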

To compensate for that, you find yourself scaling quantities 
to avoid the rare case of underflow being important.  The
scaling adds code, and that invariably introduces other bugs
and slows everything down.  For explicit examples, see James
Demmel's "Underflow and the Reliability of Numerical Software."

Besides, if everyone's already doing it in the same way (as far
as anyone can see), what's the point of standardizing?  It'd be
a standard just for the sake of having a standard.  Yuck.

The funny thing is that gradual underflow often makes underflow 
unimportant, and that makes it appear as if flush-to-zero won't 
have an effect.  Gradual underflow handles the rare cases for 
you, so you don't have to duplicate similar code all over the 
place, probably introducing bugs in half of it and slowing 
everything down.

Most of 754 is dedicated to making simple solutions work 
correctly.  754 allows experts to do whatever they need, but it tries
to support non-experts who don't even know it exists.  New ideas
should be tested against that metric.  Flush-to-zero fails.

Jason, who claimed to be a 754-head...  ;)

