Floating point: just 20 digits of precision

Elias Gabriel Amaral da Silva tolkiendili@gmail.com
Fri Nov 26 14:39:00 GMT 2010


Hello,

This simple program

#include <stdio.h>

int main(void)
{
        long double c, d, e;

        c = 1.0 / 7.0;          /* both literals are double */
        d = 1.0; d /= 7.0;      /* long double divided by a double literal */
        e = 1.0L / 7.0L;        /* both literals are long double */

        /* sizeof yields size_t, so use %zu rather than %u */
        printf("sizeof(7.0) = %zu, sizeof(7.0L) = %zu\n",
               sizeof(7.0), sizeof(7.0L));
        printf("%3.60Lf %3.60Lf\n", c, c * 7.0);
        printf("%3.60Lf %3.60Lf\n", d, d * 7.0);
        printf("%3.60Lf %3.60Lf\n", e, e * 7.0);

        return 0;
}

seems to yield the same precision for long double on both ia32 and
amd64 - that is, just about 20 significant digits. That seems too few, judging by:

http://en.wikipedia.org/wiki/IEEE_754-2008#Basic_formats

Shouldn't I expect more than 30 digits on amd64?

This doesn't seem to vary between 387 and SSE FP either. Here's how I'm compiling:

$ gcc -mfpmath=387 prec.c; ./a.out
sizeof(7.0) = 8, sizeof(7.0L) = 12
0.142857142857142849212692681248881854116916656494140625000000 0.999999999999999944488848768742172978818416595458984375000000
0.142857142857142857140921067549133027796415262855589389801025 1.000000000000000000000000000000000000000000000000000000000000
0.142857142857142857140921067549133027796415262855589389801025 1.000000000000000000000000000000000000000000000000000000000000
$ gcc -mfpmath=sse -msse2 prec.c; ./a.out
sizeof(7.0) = 8, sizeof(7.0L) = 12
0.142857142857142849212692681248881854116916656494140625000000 0.999999999999999944488848768742172978818416595458984375000000
0.142857142857142857140921067549133027796415262855589389801025 1.000000000000000000000000000000000000000000000000000000000000
0.142857142857142857140921067549133027796415262855589389801025 1.000000000000000000000000000000000000000000000000000000000000

I'm checking the result with:

$ echo 'scale=100; 1/7' | bc -l
.1428571428571428571428571428571428571428571428571428571428571428571\
428571428571428571428571428571428

And here is my compiler:

$ gcc --version
gcc (Gentoo 4.4.3-r2 p1.2) 4.4.3


I think my expectation of precision may be unreasonable - but why? Is it
because the compiler is doing the FP math itself (in order to fold a
constant expression at compile time)? Or is it something else?

What is the trick for computing 1/7 (or anything else) with a precision
closer to the full 52 bits of double's mantissa?

(PS: I'm aware of FP limitations, such as
http://gcc.gnu.org/bugs/#nonbugs_general - I just don't know why the
result is limited to 20 digits while double already has a 52-bit mantissa
available)


