This is the mail archive of the gcc-help@gcc.gnu.org mailing list for the GCC project.
I have a problem figuring out the precision of floating-point operations. Basically, DBL_EPSILON seems to give a wrong value. [snip]
Consider this small C program which tries to detect the smallest possible value of a double that still makes a difference in a "1 + x > 1" floating point comparison (some people refer to that value as "machine epsilon"):
Compiling and running this with a variety of GCC versions (4.3.x through 4.5.x), on a variety of systems (32-bit, 64-bit, multilib), with -m64 (with or without optimization) or with -m32 *with* optimization, results in the following output:
epsilon = 2.220446e-16 DBL_EPSILON: 2.220446e-16
The detected value and the value provided by DBL_EPSILON match.
However, compiling with -m32 and *without* optimization (-O0) always results in:
epsilon = 1.084202e-19 DBL_EPSILON: 2.220446e-16
The values don't match, and DBL_EPSILON gives a much bigger value than the detected one. Why is that? It would seem that compiling on 32-bit with -O0 yields higher precision. Is this a result of the FPU being used (or not used)?
-- Marc Glisse