This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.


# Re: Second Draft "Unsafe fp optimizations" project description.

• To: Toon Moene <toon at moene dot indiv dot nluug dot nl>
• Subject: Re: Second Draft "Unsafe fp optimizations" project description.
• From: Stephen L Moshier <moshier at mediaone dot net>
• Date: Sun, 5 Aug 2001 20:44:18 -0400 (EDT)
• cc: gcc at gcc dot gnu dot org
• Reply-To: moshier at moshier dot ne dot mediaone dot net

```
> Attached is the second draft of the proposed description of the
> "Unsafe floating point optimizations" project.

I think it will be very useful to improve the documentation of what the
fast-math transformations do.  If you are going to attempt a
motivational tutorial, it ought to be fairly balanced, however, and
that seems hard to achieve.  Even if you include all the points of
application raised so far, you will be omitting many others.

If you stick to documentation, I think you can offer some useful
education nevertheless along with the dry facts.  Anyone familiar with
the "scientific notation" for numbers can easily appreciate the
various floating-point effects.  I suggest it would help the
non-experts if you include concrete numerical examples, something like this:

A * B + A * C  is not the same as  A * (B + C)

Example (in decimal scientific notation, with 3-place decimal arithmetic):

A = 3.00e-01
B = 1.00e+00
C = 5.00e-03

First case:

    A * B = 3.00e-1
  + A * C = 1.50e-3
            --------
            3.015e-1, which rounds to 3.02e-1

Second case:

    B = 1.000
  + C = 0.005
        -----
        1.005, which rounds to 1.00e+0

  hence A * (B + C) = 3.00e-1

... and give a concrete example of your overflow case as well.

Although you say your purpose is to provide a "classification of
rearrangements," much of the discussion so far reads as no more than
assertions and rehashing of various people's parochial opinions about
what is important or not important.  I am not persuaded by any of it
that there needs to be even one fast-math category, never mind two or
more of them.  I suspect the real reason for fast-math is to get better
scores on some benchmark program.  That may be a legitimate business
reason, but it does not count as any sort of technical reason to be
supported by technical analysis.

There are some technically legitimate reasons for a programmer to make
associative law transformations, for example in the effort to keep a
pipeline filled or to do vectorizing.  These tend to be both
machine-specific and algorithm-specific and I think that trusting the