This is the mail archive of the
gcc-patches@gcc.gnu.org
mailing list for the GCC project.
Re: Reorganize -ffast-math code.
- To: toon at moene dot indiv dot nluug dot nl (Toon Moene)
- Subject: Re: Reorganize -ffast-math code.
- From: Brad Lucier <lucier at math dot purdue dot edu>
- Date: Wed, 7 Mar 2001 19:23:58 -0500 (EST)
- Cc: lucier at math dot purdue dot edu (Brad Lucier), rth at redhat dot com (Richard Henderson), gcc-patches at gcc dot gnu dot org
Toon Moene wrote:
>
> Brad Lucier wrote:
>
> > The essence of the problem is as you said. Now, this does not
> > differ *one iota* from the fact that gcc on i386 spills extended
> > precision registers to double precision on the stack. In doing
> > so, it stores objects into slots that have less precision and
> > less range than the objects themselves. This is *exactly* what
> > happened with Ariane. And gcc does it in a way that (a) does
> > not notify the programmer that it is being done and (b) the
> > programmer cannot avoid even if he/she examines the generated
> > assembly code. So, on the 386, I believe
> >
> > int flag_Ariane_class_disaster_math_optimizations = 1;
> >
> > is the default. And there should be a way to set this flag
> > to zero.
>
> Although I appreciate your comments, I think this one is mistaken,
> because the problem is language dependent. As this "discussion" raged
> over our mailing lists in the summer of '99, I'll just write this one
> `response' - if you want to discuss this further, it's better taken off
> the list ...
Toon:
I appreciate your concern, but don't reply and then suggest that I take
any response I might have "off list". I, too, recall the previous
discussion, which is why my first reference to the real problem
was in the form of a joke.
> If I write in Fortran (which is my bread-and-butter):
>
> REAL X, Y, Z
> X = 2.71828
> Y = 3.14159
> Z = X + Y
>
> the `processor' (compiler + run-time library in Fortran-speak) is free
> to use whatever arithmetic is necessary to arrive at the value of Z.
> The only restriction the Standard "enforces" is that it is an
> "approximation" to the value of X + Y.
>
> If the `processor' deems it expedient to use the ix86's 80-bit
> intermediate values it's free to do so. It could also use 64-bit
> intermediates, or 32-bit ones, or a particularly clever decimal
> representation.
>
> Remember that Fortran is derived from the actions of *human* computers.
> In essence it allows what those people did (or their trade-unions
> fought to be allowed). So far, so good.
Yes, I have a well-thumbed copy of the Fortran 77 standard on my
shelf, and, yes, a "Fortran processor" would be allowed to return
-1.0 for Z if the users of that "Fortran processor" agreed that
this is a good enough approximation. Precision and accuracy in the
Fortran standard are political concepts, not scientific concepts.
But this is irrelevant to what I said; my point stands. The current
GCC will, behind the user's back, stuff an object into a slot with
less precision and less dynamic range. And that can make whatever
algorithm the user wrote, even one as simple as
z = sqrt (x*x + y*y)
return an incorrect answer. (Or, in the "Fortran processor"
parlance, return +inf. as an approximation to an otherwise
finite value of z.)
> Now that's a whole different kettle of fish as to the question what the
> intermediate language of a multi-lingual compiler suite like GCC is
> supposed to do.
The "intermediate language" for gcc is fine; it's what is done with
it that is the problem.
> I'm the first to agree (with Stephen Moshier) that the current
> implementation doesn't even give the front-end implementer the
> *opportunity* to use exact IEEE arithmetic.
>
> However, that's a different problem.
Brad