This is the mail archive of the
`gcc@gcc.gnu.org`
mailing list for the GCC project.


*To*: Wolfgang Bangerth <wolfgang dot bangerth at iwr dot uni-heidelberg dot de>
*Subject*: Re: What is acceptable for -ffast-math? A numerical viewpoint
*From*: Gabriel Dos Reis <gdr at codesourcery dot com>
*Date*: 01 Aug 2001 16:44:43 +0200
*Cc*: Gabriel Dos Reis <gdr at codesourcery dot com>, dewar at gnat dot com, gcc at gcc dot gnu dot org
*Organization*: CodeSourcery, LLC
*References*: <Pine.SOL.4.10.10108011602310.29695-100000@eros>

Wolfgang Bangerth <wolfgang.bangerth@iwr.uni-heidelberg.de> writes:

| > Secondly, Linear Algebra is just a tiny part of numerical
| > computations, and matrix residuals are even a much smaller part; thus
| > the point you're trying to make is unclear.
|
| Then let me state it differently: in our work, inverting matrices makes up
| probably more than 50% of the computing time. We don't actually do it
| using LU decompositions or Gaussian elimination, but iteratively, for
| example with a Conjugate Gradient method. These methods are inherently
| stable, i.e. they can cope with slight inaccuracies, at worst at the
| price of one or two more iterations, but equally likely by saving one or
| two. If these iterations can be made faster, then that's a worthy goal.
|
| > Finally, as a concrete example, look at the classical problem of
| > approximating the zeros of a univariate polynomial. There you have
| > true degradation. Just take the Wilkinson polynomial of 20th degree,
| > make a slight perturbation to its coefficients (preferably the three
| > leading coefficients), and see what happens.
|
| That's understood, but it is in an entirely different part of
| computational mathematics, just as inverting matrices is.
| There you obviously need high and known precision because the
| problems are unstable. PDE solvers, OTOH, usually use stable
| methods. It is only for the latter applications that I said it
| would be useful to have more aggressive/dubious optimizations.

You seem to believe that we were using unstable methods to approximate
polynomial roots. That is untrue. We are using stable methods, combined
with separation algorithms. The trouble is not really the methods but the
problems at hand: polynomial roots are very sensitive to perturbations of
the coefficients. My point was to bring in a concrete counter-example to
the claim that, no matter how dubious the transformations, it suffices to
use stable algorithms.

-- Gaby
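The Wilkinson-polynomial counter-example discussed above can be reproduced in a few lines. This sketch is not from the original thread; it uses NumPy to show that a tiny perturbation of one leading coefficient moves the roots dramatically, even though the root finder itself (a backward-stable companion-matrix eigenvalue method) is doing nothing wrong. The choice of perturbing the x^19 coefficient by 2^-23 is Wilkinson's classic example.

```python
import numpy as np

# Coefficients of the degree-20 Wilkinson polynomial
# W(x) = (x-1)(x-2)...(x-20), highest power first.
coeffs = np.poly(np.arange(1, 21))

# Wilkinson's classic perturbation: subtract 2**-23 from the x^19
# coefficient (-210), a relative change of roughly one part in 10^9.
perturbed = coeffs.copy()
perturbed[1] -= 2.0 ** -23

roots = np.roots(perturbed)

# Several of the originally real roots (1, 2, ..., 20) collide and
# become complex pairs with imaginary parts larger than 1 -- the
# ill-conditioning lies in the problem, not in the method.
print("largest imaginary part of a perturbed root:",
      np.abs(roots.imag).max())
```

The point matches Gaby's argument: no amount of algorithmic stability rescues a problem whose answer is this sensitive to its input data, which is why "just use stable algorithms" is not a blanket license for dubious floating-point transformations.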

**Follow-Ups**:
* **Re: What is acceptable for -ffast-math? A numerical viewpoint** *From:* Wolfgang Bangerth

**References**:
* **Re: What is acceptable for -ffast-math? A numerical viewpoint** *From:* Wolfgang Bangerth
