
Re: Reorganize -ffast-math code.


Linus Torvalds wrote:

> In article <200103080023.TAA32688@zakon.math.purdue.edu> you write:
> >
> >But this is irrelevant to what I said; my point stands.  The current
> >GCC will, behind the user's back, stuff an object into a slot with
> >less precision and less dynamic range.
> 
> Your entire argument hinges on this, and it's NOT TRUE.
> 
> What is true is that gcc may end up using an intermediate representation
> that has _more_ precision and _more_ dynamic range than the user asked
> for.  The user asked for a "double" which is normally a 64-bit IEEE
> double, yet most of the math ends up being done in 80-bit arithmetic. 
> Or the user might ask for a 32-bit float, and the arithmetic might be
> done in either doubles or the 80-bit extended form. 
> 
> Is it IEEE-conforming? No.  The x86 makes true IEEE uncomfortably hard
> to achieve. 

The IEEE 754 Standard says the following:

<standard>

4. Rounding
...

4.3 Rounding Precision.  Normally a result is rounded to the precision
of its destination.  However, some systems deliver results only to
double or extended destinations.  On such a system the user, which may
be a high-level language compiler, shall be able to specify that a
result be rounded instead to single precision, though it may be stored
in the double or extended format with its wider exponent range.[4]
Similarly, a system that delivers results only to double extended
destinations shall permit the user to specify rounding to single or
double precision.  Note that to meet the specifications in 4.1, the
result cannot suffer more than one rounding error.

Footnote [4]: Control of rounding precision is intended to allow
systems whose destinations are always double or extended to mimic,
in the absence of over/underflow, the precisions of systems with
single and double destinations.  An implementation should not
provide operations that combine double or extended operands to
produce a single result, nor operations that combine double
extended operands to produce a double result, with only one
rounding.

</standard>

So, you're wrong.  Default x86 behavior is IEEE 754 conforming;
the destinations to which the standard refers are the entries
in the register stack, which are all extended precision, and
the hardware permits the compiler (the "user" in this case) to
set the rounding precision.  Kahan, who had a big part both in
writing the standard and in designing the 8087 arithmetic, made
sure of it.
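
For concreteness, here is one way the switch can be flipped on
Linux/x86.  This is just a sketch using glibc's <fpu_control.h>
macros (the helper name is mine; other systems have their own
equivalents):

    #include <fpu_control.h>

    /* Set the x87 precision-control field so that results delivered
       to the register stack are rounded to a 24-bit (single)
       significand.  The registers keep their extended exponent
       range; that is exactly the situation footnote [4] covers.  */
    static void
    set_x87_rounding_precision_single (void)
    {
      fpu_control_t cw;

      _FPU_GETCW (cw);                            /* read the control word */
      cw = (cw & ~_FPU_EXTENDED) | _FPU_SINGLE;   /* PC field := 24 bits */
      _FPU_SETCW (cw);                            /* write it back */
    }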

> Do some people expect IEEE? Yes.  But not many care about the exact
> rounding, and most people who _really_ depend on IEEE rounding etc had
> better know a lot about the problems they are working with, and
> hopefully can be aware of things like the strange x86 rules. 

This is a straw man argument, unless you're claiming that I am one
of those people.

> Your point? It doesn't seem to stand up at all. 

Perhaps not, but not because of a falsehood and a straw man.

> Floating point is dangerous.  It's not infinite-precision exact math. 
> People don't always know that, or even when they know, they sometimes
> forget.  So people are sometimes surprised by a result that they _think_
> should be exactly 10 being calculated as 9.99999999999 and ending up
> rounded to 9. 

Again, are you trying to imply that this has something to do with
what I said?  I don't think so.

> But your point that gcc silently truncates values is bogus.  It
> sometimes silently uses _more_ precision, and that can be problematic. 
> But as people have tried to explain on this list, it's really hard to
> avoid on x86 (people have pointed out errors in the original suggestion
> of defaulting to a 53-bit mantissa rounding - it still doesn't give IEEE
> even for "double" due to exponent size issues, much less for
> "float").
>
> If you know what you're doing, you should set the rounding mode by hand. 
> It's nasty, yes.  The compiler cannot do a very good job of it
> automatically on x86, at least not until everybody uses the new FP
> extensions that don't exist on most PCs at this writing.

Here is how gcc can mimic what you seem to think of as "true"
IEEE-conforming single-precision arithmetic on any x86 floating-point
processor.  I'll give the examples for x+y and x*y; one first
sets the rounding precision to single.  Let the magic number
M = 126 - 16382 + 23 - 64 = -16297.  (It's not so magic: it's just
the difference between the minimum exponents of the single and
double extended formats, plus the difference between their numbers
of explicit significand bits.)

x+y: Load x into a register.  Multiply x by 2^M to give x1.
     Load y into a register.  Multiply y by 2^M to give y1.
     Add x1 and y1 to give z1.
     Multiply z1 by 2^{-M} to give z2.
     Store z2 into the single z.

x*y: Load x into a register.  Multiply x by 2^M to give x1.
     Load y into a register.
     Multiply x1 by y to give z1.
     Multiply z1 by 2^{-M} to give z2.
     Store z2 into the single z.

This works for normalized and denormalized numbers, +-0, +-inf, and
quiet NaNs.  It also preserves all the flags.  I don't see what's so
mysterious about it, and gcc could do it right now if it wanted to,
even without the new FP extensions.
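
In C the x+y sequence comes out as below.  This is a sketch only,
assuming long double is the 80-bit extended format kept in the x87
registers (gcc's default on x86), that the rounding precision has
already been set to single, and with single_add a name I made up
for illustration:

    #include <math.h>

    /* The scale factor from above: 126 - 16382 + 23 - 64 = -16297.  */
    #define M (126 - 16382 + 23 - 64)

    float
    single_add (float x, float y)
    {
      long double x1 = (long double) x * ldexpl (1.0L, M);  /* scale x down */
      long double y1 = (long double) y * ldexpl (1.0L, M);  /* scale y down */
      long double z1 = x1 + y1;  /* the one rounding, to 24 bits, performed
                                    near the bottom of the extended exponent
                                    range so that single-precision underflow
                                    and denormalization are reproduced */
      long double z2 = z1 * ldexpl (1.0L, -M);              /* scale back up */
      return (float) z2;         /* exact store: z2 already fits in a single */
    }

x*y is the same except that only one of the operands needs the 2^M
scaling, as in the steps above.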

I suggest we stop talking about 100 * .1 not being equal to 10 and what
the Fortran 77 standard says (or rather doesn't say) about precision and
accuracy of floating-point arithmetic.  We don't have to talk about how
gcc stores extended double temporaries into double stack slots without
telling the programmer, either, if you're tired of the topic after that
big argument nearly two years ago.  

Brad

