This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: What is acceptable for -ffast-math? (Was: associative law in combine)


dewar@gnat.com wrote:

> By the way, I said I would be shocked to find a Fortran compiler that did
> associative redistribution in the absence of parens. I am somewhat surprised
> that no one stepped forward with a counter-example, but I suspect in fact that
> there may not be any shocking Fortran implementations around.

Well, that might be because we're on a public list and I am certainly
not going to name names of competitors.  Granted, I do not have actual
proof of a Fortran compiler substituting a*(b+c) for a*b+a*c, but I
have seen those that:

1. Included an option to ignore parentheses in the source program.

2. Included an option to use floating point induction variables, i.e.,
   change:

      DO I = 1, N
         A(I) = I * 0.1
      ENDDO

   into:

      T = 0.0
      DO I = 1, N
         T = T + 0.1
         A(I) = T
      ENDDO
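
   (An illustration of mine, not from the original discussion: the
   sketch below shows why that second transformation changes results.
   The induction variable T picks up one rounding error per addition,
   while I * 0.1 commits only a single rounding per element, so the
   two loops drift apart as N grows.)

      PROGRAM INDUCT
C     Illustrative sketch only: compare the original loop, which
C     recomputes I * 0.1 from the integer index, with the "floating
C     point induction variable" form, which accumulates T by repeated
C     addition.  Since 0.1 is not exactly representable in binary,
C     the running sum drifts away from I * 0.1.
      INTEGER I, N
      PARAMETER (N = 1000)
      REAL A(N), B(N), T
C     Original form: one rounding error per element.
      DO 10 I = 1, N
         A(I) = I * 0.1
   10 CONTINUE
C     Transformed form: rounding errors accumulate in T.
      T = 0.0
      DO 20 I = 1, N
         T = T + 0.1
         B(I) = T
   20 CONTINUE
      PRINT *, 'A(N) =', A(N), '   B(N) =', B(N)
      PRINT *, 'difference in the last element:', B(N) - A(N)
      END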

> It is an old argument, the one that says that fpt is approximate, so why bother
> to be persnickety about it. Seymour Cray always took this viewpoint, and it
> did not bother him that 81.0/3.0 did not give exactly 27.0 on the CDC 6000
> class machines.

In one of the papers I read during the 1999 discussion, I recall seeing
a quote by Kahan along these lines (paraphrased, because I can't find it
anymore): Yes, there are floating point codes that are robust against
Cray's floating point implementation ...

The discussion actually boils down to: "What is the answer we want to
obtain using these calculations?"

In physics, the computational model is in itself often already a
compromise - if I just confine myself to weather forecasting, which I
have been involved in for about a decade now:

1. A spherical Earth, rotating with constant angular velocity,
2. Ignoring the compressibility of the air, assuming hydrostatic
   equilibrium at all times,
3. Ignoring the vertical component of the Coriolis force,
4. A constant acceleration of gravity, independent of height.

Now the compromises to make this actually *computable*:

1. Finite difference discretisation of the continuous equations (a toy
   sketch follows after this list),
2. Filters to damp numerical noise (both in space and time dimensions),
3. Approximations to compute the compound effect of sub-grid scale
   physics (radiation, phase changes of water, etc.).
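
As announced under point 1, here is a toy sketch (mine, purely
illustrative - not a fragment of any real forecast model) of what such
a finite difference discretisation looks like, for the 1-D advection
equation du/dt + c du/dx = 0 with a first-order upwind scheme:

      PROGRAM ADVECT
C     Toy sketch only: first-order upwind finite differences for the
C     1-D advection equation du/dt + c du/dx = 0.  A real forecast
C     model is vastly more elaborate; this merely shows the kind of
C     discretisation meant above.
      INTEGER I, STEP, N, NSTEPS
      PARAMETER (N = 100, NSTEPS = 50)
      REAL U(N), UNEW(N), CVEL, DX, DT, UMAX
      CVEL = 1.0
      DX   = 1.0
      DT   = 0.5
C     Initial condition: a single bump in the middle of the domain.
      DO 10 I = 1, N
         U(I) = 0.0
   10 CONTINUE
      U(N/2) = 1.0
C     March forward in time with the upwind update (stable, since
C     c*dt/dx = 0.5 <= 1).
      DO 40 STEP = 1, NSTEPS
         DO 20 I = 2, N
            UNEW(I) = U(I) - CVEL * DT / DX * (U(I) - U(I-1))
   20    CONTINUE
         UNEW(1) = U(1)
         DO 30 I = 1, N
            U(I) = UNEW(I)
   30    CONTINUE
   40 CONTINUE
C     The bump moves to the right and is smeared by numerical
C     diffusion - itself a computational artifact of the scheme.
      UMAX = 0.0
      DO 50 I = 1, N
         IF (U(I) .GT. UMAX) UMAX = U(I)
   50 CONTINUE
      PRINT *, 'maximum of u after', NSTEPS, 'steps:', UMAX
      END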

Now let's look at the question above, again: 

"it did not bother [Cray] that 81.0/3.0 did not give exactly 27.0 on 
 the CDC 6000 class machines."

In the light of the above, does this observation about "Cray floating
point" matter?  It depends, of course:

1. Help, on the CDC's, 81.0 / 3.0 is zero!  Yes, *that* would be a
   problem.
2. Hmmm, the CDC 6000s seem to get as much as 5 ULPs wrong on
   division ...  Well, I wouldn't care.
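
To put "a few ULPs" in concrete terms (a sketch of mine, using the
Fortran 90 intrinsic SPACING, which returns the value of one unit in
the last place near its argument - nothing here is specific to CDC
hardware):

      PROGRAM ULPERR
C     Illustrative sketch: express the error of a computed quotient in
C     ULPs (units in the last place).  On an IEEE 754 machine this
C     prints 0, because 81.0, 3.0 and 27.0 are all exactly
C     representable; a divide that is off by a few ULPs would show a
C     small non-zero count.
      REAL X, EXACT, ERRULP
      X      = 81.0 / 3.0
      EXACT  = 27.0
      ERRULP = ABS(X - EXACT) / SPACING(EXACT)
      PRINT *, 'computed quotient:', X
      PRINT *, 'error in ULPs    :', ERRULP
      END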

So, I wouldn't mind a "-fcray-fp" option in the latter case.  Sure, the
answers from our weather forecast model would be different under that
floating point model than under the IEEE-754 one.  However, I'm not much
interested in the *computational* answer of the model; I'm interested in
the weather forecast I can derive from it.  So if one answer is 10.2
degrees Celsius tomorrow morning at 6 a.m. and the other is 10.1 degrees
Celsius, which is the "correct" one?  The actual temperature will
probably be somewhere between 11 and 12 ... not bad.

"Predicting is hard, especially as far as the future is concerned"

Again, the issue is: in my use of these computational cycles to solve a
physics problem, what constitutes an answer?  *Is* there only one
correct answer?  Is an answer in the vicinity of the computationally
correct answer really "incorrect"?  If not, what is "vicinity" in this
case?  Do I have a method to determine "vicinity" based on the
computational compromises I made?  On the physical compromises?

Fortunately, due to the physical nature of the problem I'm trying to
solve, I can actually use a measurement to determine the "success" of my
endeavor.  So what's the accuracy of the measurement device?  How does
the fact that the computational model deals with masses of air of
20 km x 20 km x 100 m, instead of the few cubic meters of air the
instrument samples, disturb my "verification"?  *Is* the difference I
measure due to this effect, or due to the poor physics approximation, or
due to the computational approximation, ... or due to the 5 ULPs of the
Crayzy division?

-- 
Toon Moene - mailto:toon@moene.indiv.nluug.nl - phoneto: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
Maintainer, GNU Fortran 77: http://gcc.gnu.org/onlinedocs/g77_news.html
Join GNU Fortran 95: http://g95.sourceforge.net/ (under construction)

