The following program, when compiled by GCC 3.2.2 on a Pentium (running Red Hat Linux 9), produces unexpected results:

    #include <stdio.h>

    double func(void) { return 100.001; }

    int main(void)
    {
        double result;
        double y;

        y = 1000.0004;
        result = (func() + y) - (func() + y);
        printf("%g\n", result);
        printf("%08X %08X\n", ((int *)&result)[0], ((int *)&result)[1]);
        return 0;
    }

The output of the first line is 8.52651e-14, not 0 as expected. The important instructions are:

    0x0804832b <func+3>:  fldl   0x8048448
    0x08048351 <main+30>: call   0x8048328 <func>
    0x08048356 <main+35>: faddl  0xfffffff0(%ebp)
    0x08048359 <main+38>: fstpl  0xffffffe8(%ebp)
    0x0804835c <main+41>: call   0x8048328 <func>
    0x08048361 <main+46>: faddl  0xfffffff0(%ebp)
    0x08048364 <main+49>: fsubrl 0xffffffe8(%ebp)
    0x08048367 <main+52>: fstpl  0xfffffff8(%ebp)

The problem is that the first "fstpl" instruction stores an 80-bit FPU register into a 64-bit temporary, causing some significance to be lost; when the same value is later recomputed in the 80-bit FPU register and the 64-bit temporary is subtracted from it, the significance that was lost becomes the result of the subtraction.

I don't consider this an "inherent limitation of the floating-point types" (as discussed in the Non-bugs section of your web site), since the problem is avoidable by generating code that does not lose significance (e.g. by using 80-bit temporaries).

gcc -v output:

    Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/specs
    Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --disable-checking --with-system-zlib --enable-__cxa_atexit --host=i386-redhat-linux
    Thread model: posix
    gcc version 3.2.2 20030222 (Red Hat Linux 3.2.2-5)

No special options were used when compiling: gcc test2.c -o test2
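To make the mechanism explicit, here is a minimal sketch of the same rounding step, assuming (as on this target) that long double is the x87 80-bit extended format; the volatile store plays the role of the fstpl:

    #include <stdio.h>

    int main(void)
    {
        double a = 100.001;
        double b = 1000.0004;
        /* The 80-bit sum of two doubles is exact here: together the
           significands need fewer than the 64 bits available. */
        long double extended = (long double)a + (long double)b;
        /* Storing to a double rounds the sum to a 53-bit significand,
           which is exactly what the first fstpl does. */
        volatile double rounded = (double)extended;
        /* The double-rounding error; should print the same 8.52651e-14. */
        printf("%Lg\n", (long double)rounded - extended);
        return 0;
    }

The printed value is the rounding error of the 64-bit store, which is exactly what the subtraction in the program above leaves behind.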
Read http://gcc.gnu.org/bugs.html#nonbugs_general. Note that this is expected behavior in GCC.

*** This bug has been marked as a duplicate of 323 ***
I do not understand why this bug, and similar bugs, are simply dismissed as "expected behavior" when it is possible to generate reasonable code that makes things work correctly. In this case, the problem is the fstpl instruction that stores an 80-bit float into a 64-bit temporary. Since the Pentium has (as far as I can tell from the processor manual) instructions to load and store 80-bit floats from/to memory without rounding, there appears to be no good reason why the compiler *must* generate code that rounds. Thus this seems to me to be a bug, not simply a consequence of "excess precision in the FPU" or an "inherent limitation of floating-point types" or the like.
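For what it's worth, the loss can also be sidestepped in source by forcing both intermediates through 64-bit objects; this is only a sketch of a workaround, not a fix for the code generator. With -ffloat-store (or at -O0, where every assignment is spilled to memory) both temporaries are rounded identically before the subtraction:

    #include <stdio.h>

    double func(void) { return 100.001; }

    int main(void)
    {
        double y = 1000.0004;
        /* Naming both intermediates makes each one a double object, so
           both are rounded to 64 bits before the subtract. */
        double t1 = func() + y;
        double t2 = func() + y;
        printf("%g\n", t1 - t2);   /* prints 0 once both are rounded */
        return 0;
    }

On SSE2-capable hardware, -mfpmath=sse avoids the excess precision entirely, since doubles are then evaluated in 64-bit registers throughout.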
Subject: Re: Incorrect floating-point result due to loss of significance

adam at irvine dot com wrote:
> I do not understand why this bug, and similar bugs, are simply dismissed ...

Yes, it is a bug, but it is a complicated one. FP support is a historical weakness of gcc. There have never been many of us who cared enough about FP support to work on it. Thus gcc has poor FP performance and some FP bugs like this one. In the 10+ years that this problem has been known about, no one has ever volunteered to try to fix it, or to pay someone else to fix it. The bug will remain until that situation changes.
Or until this particular platform finally goes away and we are left with FP implementations that don't use different precisions for internal and external variables...