
Re: numerical results differ after irrelevant code change


On 5/8/2011 8:25 AM, Michael D. Berger wrote:

-----Original Message-----
From: Robert Dewar [mailto:dewar@adacore.com]
Sent: Sunday, May 08, 2011 11:13
To: Michael D. Berger
Cc: gcc@gcc.gnu.org
Subject: Re: numerical results differ after irrelevant code change

[...]

This kind of result is quite expected on x86 using the old-style (default) x87 floating point, because of extra precision in intermediate results.


How does the extra precision lead to variable results? Also, is there a way to prevent it? It is a pain in regression testing.

If you don't need to support CPUs more than 10 years old, consider -march=pentium4 -mfpmath=sse, or use a 64-bit OS and gcc (where SSE math is the default).
Note the resemblance of your quoted differences to DBL_EPSILON from <float.h>. That's 1 ULP relative to 1.0. I have a hard time imagining real applications that don't need to tolerate differences of 1 ULP.
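
A minimal sketch of the effect (assuming x87 code generation; whether an intermediate stays in an 80-bit register depends on the compiler version, flags, and surrounding code, which is exactly why an "irrelevant" edit can change the output):

    #include <stdio.h>

    int main(void)
    {
        /* volatile keeps the compiler from folding these at compile time */
        volatile double a = 1e16;   /* exactly representable as a double */
        volatile double b = 1.0;

        /* a + b needs 54 significand bits: it fits in an 80-bit x87
           register, but rounds back to 1e16 when stored as a 64-bit
           double (round-to-nearest-even). */
        double kept = (a + b) - a;       /* intermediate may stay in a register */
        volatile double stored = a + b;  /* volatile forces a store, rounding to double */

        printf("in-register intermediate: %g\n", kept);       /* often 1 with -mfpmath=387 */
        printf("stored intermediate:      %g\n", stored - a); /* 0 once rounded to double */
        return 0;
    }

Saving the sketch as demo.c, the two code-generation strategies mentioned above can be compared directly:

    gcc -O2 -mfpmath=387 demo.c && ./a.out                  # x87: extended-precision intermediates
    gcc -O2 -march=pentium4 -mfpmath=sse demo.c && ./a.out  # SSE: every operation rounds to double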
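
For regression testing, the usual workaround is to compare with a small ULP tolerance rather than demanding bit-exact equality. A hypothetical helper along those lines (nearly_equal is not a standard function, just an illustration):

    #include <float.h>
    #include <math.h>

    /* Accept results that agree to within `ulps` units in the last
       place, scaled to the operands' magnitude; DBL_EPSILON is 1 ULP
       relative to 1.0, as noted above. */
    static int nearly_equal(double a, double b, double ulps)
    {
        double scale = fmax(fabs(a), fabs(b));
        if (scale < DBL_MIN)                /* near zero, use an absolute floor */
            return fabs(a - b) <= ulps * DBL_MIN;
        return fabs(a - b) <= ulps * DBL_EPSILON * scale;
    }

e.g. nearly_equal(expected, actual, 4.0) tolerates a few final-bit differences while still catching real regressions.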


--
Tim Prince

