


Re: What is acceptable for -ffast-math? A numerical viewpoint



On 1 Aug 2001, Gabriel Dos Reis wrote:
> | Well, the claim was that the result is not _degraded_, but just _altered_.
> 
> Must we quibble?

I was honest: degraded only with respect to a particular set of definitions
of which transformations a program may undergo during compilation. See, if
you discretize a partial differential equation, you often have an error
(compared to the analytical solution of the problem) of 1-10% anyway, due
to the discretization. What is the point of solving the discretized
problem to absurd accuracy, rather than solving it quickly with an error
that stays below the discretization error anyway? Comparing -mieee and
-ffast-math, you simply get two solutions, both of roughly the same
accuracy with respect to the analytical solution. That's not degraded,
it's just different.
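
To put rough numbers on that scale argument, here is a minimal sketch: it
approximates d/dx sin(x) with a central difference quotient and prints the
discretization error next to the machine epsilon of a double. The test
function and step sizes are of course arbitrary choices; the point is the
gap of many orders of magnitude between the two error sources.

  /* Illustrative sketch only: compare the discretization (truncation)
     error of a central difference quotient with the granularity of
     double precision rounding, which is the level at which -ffast-math
     reorderings perturb a result.  Compile e.g. with: gcc -O2 demo.c -lm */
  #include <math.h>
  #include <stdio.h>
  #include <float.h>

  int main(void)
  {
      const double x = 1.0;
      const double exact = cos(x);        /* d/dx sin(x) at x */
      double h, approx;

      for (h = 1e-1; h >= 1e-3; h /= 10.0) {
          /* central difference: the error is O(h^2), a property of the
             discretization, not of the arithmetic */
          approx = (sin(x + h) - sin(x - h)) / (2.0 * h);
          printf("h = %-8g  discretization error = %.3e\n",
                 h, fabs(approx - exact));
      }

      /* relative rounding granularity of a single double operation */
      printf("machine epsilon              = %.3e\n", DBL_EPSILON);
      return 0;
  }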


> | Solving a linear system of equations with IEEE is just one way to get at
> | an approximation of the true inverse. It would be interesting to see whether
> | the residual   || A * A^-1 - I ||  is smaller if you compute an approximate
> | inverse A^-1 using IEEE or -ffast-math, using the same algorithm. I'd claim
> | it does not make much of a difference.
> 
> Firstly, no competent programmer would ever dream of using -ffast-math
> for doing serious numerical computations.  And that point is already
> understood in this discussion.

Maybe you think so, yet I do numerical computations for a living and still
use -ffast-math. So do the other ~20 people in our workgroup, and I
certainly know of more. They do so for the reasons stated above: the
accuracy of our programs is in any case limited by the discretization, or
by the model with which we describe reality, not by the details of the
floating-point arithmetic.
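
To make the residual comparison quoted further up concrete, here is a
minimal sketch: it inverts a small, arbitrarily chosen test matrix by
Gauss-Jordan elimination with partial pivoting and prints
max |(A*A^-1 - I)_ij|. Building it once with plain -O2 and once with
-O2 -ffast-math and comparing the printed residuals is exactly the
experiment meant above; the matrix and the algorithm are only placeholders.

  /* Illustrative sketch only: arbitrary test matrix, textbook
     Gauss-Jordan inversion, residual printed at the end. */
  #include <math.h>
  #include <stdio.h>

  #define N 6

  int main(void)
  {
      double A[N][N], M[N][N], inv[N][N];
      double d, f, t, s, res = 0.0;
      int i, j, k, r, col, piv;

      /* a reproducible, reasonably conditioned test matrix */
      for (i = 0; i < N; ++i)
          for (j = 0; j < N; ++j) {
              A[i][j] = 1.0 / (1.0 + i + j) + (i == j ? 1.0 : 0.0);
              M[i][j] = A[i][j];
              inv[i][j] = (i == j) ? 1.0 : 0.0;
          }

      /* Gauss-Jordan with partial pivoting: reduce M to the identity and
         apply the same row operations to inv, which then holds A^-1 */
      for (col = 0; col < N; ++col) {
          piv = col;
          for (r = col + 1; r < N; ++r)
              if (fabs(M[r][col]) > fabs(M[piv][col]))
                  piv = r;
          for (j = 0; j < N; ++j) {
              t = M[col][j];   M[col][j] = M[piv][j];   M[piv][j] = t;
              t = inv[col][j]; inv[col][j] = inv[piv][j]; inv[piv][j] = t;
          }
          d = M[col][col];
          for (j = 0; j < N; ++j) { M[col][j] /= d; inv[col][j] /= d; }
          for (r = 0; r < N; ++r) {
              if (r == col)
                  continue;
              f = M[r][col];
              for (j = 0; j < N; ++j) {
                  M[r][j]   -= f * M[col][j];
                  inv[r][j] -= f * inv[col][j];
              }
          }
      }

      /* residual: max_ij |(A * A^-1 - I)_ij| */
      for (i = 0; i < N; ++i)
          for (j = 0; j < N; ++j) {
              s = 0.0;
              for (k = 0; k < N; ++k)
                  s += A[i][k] * inv[k][j];
              s -= (i == j) ? 1.0 : 0.0;
              if (fabs(s) > res)
                  res = fabs(s);
          }

      printf("residual max|A*A^-1 - I| = %.3e\n", res);
      return 0;
  }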


> Secondly, Linear Algebra is just a tiny part of numerical
> computations, and matrix residuals are an even smaller part; thus
> the point you're trying to make is unclear.

Then let me state it differently: in our work, inverting matrices makes up
probably more than 50% of the computing time. We don't actually do it
using LU decomposition or Gaussian elimination, but iteratively, for
example with a Conjugate Gradient method. These methods are inherently
stable, i.e. they can cope with slight inaccuracies, at worst at the
price of one or two extra iterations, but just as likely by saving one or
two. If these iterations can be made faster, then that's a worthy goal.
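
For concreteness, here is a minimal, unpreconditioned Conjugate Gradient
sketch, applied to the 1D Laplacian stencil [-1 2 -1] as a stand-in for a
real discretized operator; the matrix, right hand side and tolerance are
arbitrary. The method checks its own residual in every iteration, which is
why slight perturbations of the arithmetic at worst cost, or save, an
iteration or two.

  /* Illustrative sketch only: unpreconditioned CG for A x = b with A
     symmetric positive definite (here the 1D Laplacian). */
  #include <math.h>
  #include <stdio.h>
  #include <string.h>

  #define N 100

  /* y = A*x for the 1D Laplacian stencil [-1 2 -1] (Dirichlet boundaries) */
  static void apply_A(const double *x, double *y)
  {
      int i;
      for (i = 0; i < N; ++i) {
          double left  = (i > 0)     ? x[i - 1] : 0.0;
          double right = (i < N - 1) ? x[i + 1] : 0.0;
          y[i] = 2.0 * x[i] - left - right;
      }
  }

  static double dot(const double *a, const double *b)
  {
      double s = 0.0;
      int i;
      for (i = 0; i < N; ++i)
          s += a[i] * b[i];
      return s;
  }

  int main(void)
  {
      double x[N] = {0.0};           /* initial guess x = 0 */
      double b[N], r[N], p[N], Ap[N];
      double rr, rr_new, alpha, beta;
      const double tol2 = 1e-12;     /* stop when ||r||^2 <= tol2 */
      int i, it;

      for (i = 0; i < N; ++i)
          b[i] = 1.0;                /* some right hand side */

      memcpy(r, b, sizeof r);        /* r = b - A*x = b, since x = 0 */
      memcpy(p, r, sizeof p);
      rr = dot(r, r);

      for (it = 0; it < 10 * N && rr > tol2; ++it) {
          apply_A(p, Ap);
          alpha = rr / dot(p, Ap);

          for (i = 0; i < N; ++i) {
              x[i] += alpha * p[i];
              r[i] -= alpha * Ap[i];
          }

          rr_new = dot(r, r);
          beta = rr_new / rr;
          rr = rr_new;

          for (i = 0; i < N; ++i)
              p[i] = r[i] + beta * p[i];
      }

      printf("stopped after %d iterations, ||r|| = %.3e\n", it, sqrt(rr));
      return 0;
  }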


> Finally, as a concrete example, look at the classical problem of
> approximating the zeros of a univariate polynomial.  There you have
> true degradation.  Just take the Wilkinson polynomial of 20th degree,
> make a slight perturbation to its coefficients (preferably the three
> leading coefficients) and see what happens.

That's understood, but it is in an entirely different part of
computational mathematics. There you obviously need high and known
precision because the problems themselves are ill-conditioned. PDE
solvers, OTOH, usually use stable methods. It is only for the latter
applications that I said it would be useful to have more
aggressive/dubious optimizations.
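
To see how violently the quoted Wilkinson example reacts, here is a small
sketch that prints, for each root x = k of w(x) = (x-1)(x-2)...(x-20), the
first-order factor k^19/|w'(k)| by which a perturbation of the x^19
coefficient is amplified. The first-order estimate is only indicative once
the shifts get large, but it already shows why a perturbation as tiny as
2^-23 wrecks the roots near k = 14..20.

  /* Illustrative sketch only: first-order sensitivity of the Wilkinson
     roots to a change delta in the x^19 coefficient,
         shift of root k  ~=  |delta| * k^19 / |w'(k)|,
     with w'(k) = product over j != k of (k - j). */
  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      const double delta = pow(2.0, -23.0);   /* Wilkinson's classic perturbation */
      int j, k;

      for (k = 1; k <= 20; ++k) {
          double wprime = 1.0;
          double amplification;

          /* w'(k) = product over j != k of (k - j) */
          for (j = 1; j <= 20; ++j)
              if (j != k)
                  wprime *= (double)(k - j);

          amplification = pow((double)k, 19.0) / fabs(wprime);
          printf("root %2d: amplification %9.3e, first-order shift %9.3e\n",
                 k, amplification, delta * amplification);
      }
      return 0;
  }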

Regards
  Wolfgang

-------------------------------------------------------------------------
Wolfgang Bangerth          email: wolfgang.bangerth@iwr.uni-heidelberg.de
                             www: http://gaia.iwr.uni-heidelberg.de/~wolf


