Under some conditions a member initializer of a double field is handled in single precision. Below is a rather fragile minimized test program; the bug only surfaces in inlined code, i.e. when compiled with -O.

At least GCC 3.3, 3.3.3 and 3.4.1 are affected. A test with GCC 2.95.4 on i686-pc-linux-gnu didn't show the bug. It might be x86-only, as the test program doesn't show it with GCC 3.3 on sparc-sun-solaris2.8.

For example, "gcc -v" of the 3.3.3:

Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.3.3/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --disable-checking --disable-libunwind-exceptions --with-system-zlib --enable-__cxa_atexit --host=i386-redhat-linux
Thread model: posix
gcc version 3.3.3 20040412 (Red Hat Linux 3.3.3-7)

/* $ g++ -Wall -O gccbug.cpp && ./a.out
 *  -1.19209e-07 0
 *  0 1.19209e-07
 * $ g++ -Wall -O0 gccbug.cpp && ./a.out
 *  0 1.19209e-07
 *  0 1.19209e-07
 */

int printf(const char*, ...);

struct C {
    double x;
    C(double x_) : x(x_) {
        printf(" ");
        printf("%g %g\n", x - x_, x - float(x_));
    }
};

void f(float x) {
    const double y = x * .9f;
    if (x) {
        new C(y);
        new C(y);
    }
}

int main() {
    f(3);
    return 0;
}
This is a dup of bug 323.

*** This bug has been marked as a duplicate of 323 ***
(In reply to comment #1)
> This is a dup of bug 323.

No it's not. Please read the code and the example output.
Yes it is, read the whole bug next time.

*** This bug has been marked as a duplicate of 323 ***
(In reply to comment #3)
> Yes it is, read the whole bug next time.
> *** This bug has been marked as a duplicate of 323 ***

I'm sorry, but I must insist on the opposite. I hope you'll spare me a little more of your time and explain where my thinking goes wrong, if this still doesn't make sense.

I've read all of http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323 a few times already, and agree that "bug" 323 (about x != y in floating point) is clearly a case of rounding. This bug has nothing to do with that.

You can see that my program prints two lines, each from an identical call to new C(y). In the flawed case (compiled with -O), the first printout is completely different from the second one, which is correct. Between the two calls (and also in relation to x_ in the wrong case), the value of x changes by the printed value of ~1e-7, which is very much more than the precision of the double x_, whose value is ~2.7. The effect of rounding would be of the same magnitude as the double epsilon, i.e. less than 1e-15.

Note also that no floats are used after the initialization of const double y. There is no way the value in y or x_ could fluctuate by more than the precision of a double.

I understand that your time is limited and that seeing identical bug reports for non-bugs is frustrating. I should have documented from the start why I think the result is invalid and not due to rounding. If all this is wrong, I wholeheartedly apologize and will stop reopening the bug. :)
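To put concrete numbers on the magnitudes above, here is a small standalone snippet (my own addition, not part of the test case; it only uses std::numeric_limits):

#include <cstdio>
#include <limits>

int main() {
    // ~1.19209e-07: the same magnitude as the difference printed at -O,
    // i.e. single-precision granularity.
    std::printf("float  epsilon: %g\n", std::numeric_limits<float>::epsilon());
    // ~2.22045e-16: the largest relative error plain double rounding could explain.
    std::printf("double epsilon: %g\n", std::numeric_limits<double>::epsilon());
    return 0;
}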
To show the error in the generated code clearly, here is part of the original program's assembly, for essentially this piece of code:

struct C {
    double x;
    C(double x_) : x(x_) { }
};

void f(float x) {
    const double y = x * .9f;
    new C(y);
}

(This snippet alone doesn't compile to exactly the assembly below; it is only here to show which part of the program the assembly corresponds to.)

_Z1ff:
        ; snip stack frame setup
        flds    8(%ebp)
        fld     %st(0)
        fmuls   .LC0            ; .LC0 is ".9f", obviously
        fsts    -28(%ebp)       ; float version saved for some reason (*bug*)
        fstpl   -24(%ebp)       ; this is what the other uses of y later read
        ; snip code for the "if" which is in the original program
        pushl   $8
        call    _Znwj
        movl    %eax, %ebx
        flds    -28(%ebp)       ; the float version loaded (*bug*)
        fstpl   (%eax)          ; and stored as the double member x

I forgot to reopen the bug last time; doing it now.
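In source terms, the generated code behaves roughly like the sketch below. This is only my reconstruction from the assembly (the name f_as_compiled is made up); it is not anything GCC actually emits:

struct C {
    double x;
    C(double x_) : x(x_) { }
};

void f_as_compiled(float x) {
    double product  = x * .9f;        // the x87 product, still carrying extra precision
    float  y_single = float(product); // fsts  -28(%ebp): rounded down to single precision
    double y_double = product;        // fstpl -24(%ebp): the copy all other uses of y read
    new C(y_single);                  // the inlined constructor loads the float copy (*bug*)
    (void)y_double;                   // kept only to mirror the two stores in the assembly
}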
I'm taking it back now. (Skip to --- if you're in a hurry.)

It took me so long to understand that I now feel ashamed. I'm not a floating-point guru, as you might have guessed. I had read about the extra precision inside the FPU, but didn't realize the same might apply between float and double, /and/ that grammatically, in double = float * float, the right-hand side is actually of type float, so the result /may or may not/ contain extra precision relative to that float. On top of that, it evidently can both have the extra precision and not have it within the same run.

It's quite odd that only in this one case the compiler jumps through extra hoops (at a minor performance cost, too) to lose the extra precision. If the program is modified in any significant way, and in all other uses of y even as it is, the value with extra precision is used. This came up in such a complicated program, spread across function calls, that I just couldn't imagine the cause being so simple. Sorry for overlooking the possibility for so long.

---

I'm still wondering whether the standards allow this extra precision to vary across function calls, albeit ones inlined by the compiler. If they do, feel free to once again mark this as a duplicate of report 323, forget this issue, and I'll go repent and not bother you any more. I'm sorry I couldn't find an authoritative source and must still speculate; I don't have the standard, nor do I know where else to look.

I've learned from this in any case. The lesson, sadly, is that even within a single function I can't rely on a double argument to hold its value to more precision than that of a float. I'm still hoping this is not the case, so a confirmation either way would be greatly appreciated.
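In case it helps anyone hitting the same thing, here is a small sketch of the distinction I had missed. It's my own example, not from the original program, and whether the cast is honoured at -O is of course part of what bug 323 discusses:

#include <cstdio>

int main() {
    float x = 3;

    // The right-hand side has type float; whether the stored double keeps
    // the x87 excess precision of the product is left to the compiler.
    const double y = x * .9f;

    // An explicit cast to float spells out the single-precision rounding
    // I had silently assumed was happening everywhere.
    const double z = static_cast<float>(x * .9f);

    // May print 0 or something around 1e-7 depending on how y is treated.
    std::printf("%g\n", y - z);
    return 0;
}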
The C standard is actually quite clear that *none* of the problems covered by bug 323 are allowed. We agree with you that this is a bug. The thing is, it's a *hardware* bug: the 80387 floating-point unit is broken as designed, such that it is impossible to get standard-compliant floating-point semantics and good performance at the same time. Thus we have -ffloat-store, which lets you choose which one you want, and that's the best we can do on x86.

It's usually possible to structure floating-point code so that it's robust against the sorts of errors this bug introduces. You can also get a computer without the hardware bug - neither AMD64 nor PowerPC has this problem.

*** This bug has been marked as a duplicate of 323 ***
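As an illustration of that kind of restructuring, one common source-level approximation of what -ffloat-store does is to force the intermediate through a volatile object of the narrow type. A sketch only, applied to the reduced test case from this report (f and C as in the original; the volatile variable t is an addition):

struct C {
    double x;
    C(double x_) : x(x_) { }
};

void f(float x) {
    // Writing the product to a volatile float makes the compiler spill it to
    // memory, so the x87 excess precision is discarded before it is widened.
    volatile float t = x * .9f;
    const double y = t;
    if (x) {
        new C(y);
        new C(y);   // both constructions should now see the same value of y
    }
}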
Personally, I don't mind floats being imprecise in their last few bits, or doubles being imprecise in their last few bits. What I do mind is that after assigning float * float to a double, that double can "simultaneously" hold two values that differ in almost half of their low bits. If this is allowed, one can never count on double having any more practical precision than float, especially with the duality being carried across [inlined] function calls. I'm sure many programs rely on this, but since it breaks so rarely, it goes unnoticed until a new compiler or a small unrelated change in code structure brings it out. (Ouch!)

The "double = float * float" case could be fixed without performance issues, too. The compiler could simply generate code that always uses the value with extra precision (that of double, or more). As I noted, the compiler already does this in every case but the one. At least in my example, keeping two forms (single and double precision) of the result costs an unnecessary instruction, so there would actually be a small performance gain. I suppose a compiler patch may be too laborious compared to the gains, especially as it mustn't affect non-x86 targets.

My personal fix will be converting all uses of float to double, but obviously that is not an option for every program.
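For completeness, this is roughly what that fix looks like on the reduced test case from the top of this report. It is just a sketch of the approach; I'm not claiming any particular output for it:

int printf(const char*, ...);

struct C {
    double x;
    C(double x_) : x(x_) {
        printf("%g %g\n", x - x_, x - float(x_));
    }
};

void f(double x) {              // was: void f(float x)
    const double y = x * .9;    // double * double: no narrower intermediate to round
    if (x) {
        new C(y);
        new C(y);
    }
}

int main() {
    f(3);
    return 0;
}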