Here's hoping this doesn't get marked as a duplicate of bug 323, since the summary contains the keywords "floating-point" and "error." :) With the following trivial program, which simply does 20 subtractions, a logic error occurs during a comparison against the floating-point value: o < 0.05 evaluates to true when o should equal 0.05. This is reproducible on multiple processors. I've tried it with gcc 3.3 on Mac OS X 10.3 (PowerPC G4), as well as gcc 3.3 on Red Hat 9.0 (i686), and get the same result. (For kicks, I did try -ffloat-store, as suggested in the bug 323 thread, but this had no effect.) The problem occurs at all optimization levels I tried.

#include <stdio.h>

int main(void)
{
    float o = 1.0;

    while (1) {
        printf("o: %f\n", o);
        if (o < 0.05)
            break;
        o -= 0.05;
    }
    printf("final o: %f\n", o);
    return 0;
}
This is your bug; learn how floating point is represented. The point is that 0.05 is not exactly representable in binary floating point, so repeatedly subtracting it never lands exactly on 0.05.
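For illustration, here is a small variant of the reported program (a sketch, not part of the original report) that prints the values with more digits, making the representation and accumulated rounding error visible:

/* Minimal illustrative sketch (not from the original report): the same loop,
 * but printing enough digits to show that neither the literal 0.05 nor the
 * running value of o is exactly the decimal 0.05. */
#include <stdio.h>

int main(void)
{
    float o = 1.0f;
    int step = 0;

    /* The double literal 0.05 is already an approximation. */
    printf("0.05 as a double: %.20f\n", 0.05);

    while (1) {
        /* Print the stored value with 20 digits; plain %f hides the error. */
        printf("step %2d: o = %.20f\n", step, o);
        if (o < 0.05)
            break;
        o -= 0.05;
        step++;
    }
    printf("final o: %.20f\n", o);
    return 0;
}

With the extra digits you can see the stored values drifting away from exact multiples of 0.05, which is why the comparison o < 0.05 succeeds a step earlier than a decimal calculation would suggest.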