On most systems (Alpha and MIPS, for example), floating-point
binary operations are performed at the highest precision of
the operands. For example, a float * float multiplication is
performed at 32 bits. On i386, the precision of the operands
is ignored and the operation is performed according to the
current setting of the FPU control word. This makes it
extremely difficult to produce exact matches between programs
that use both float and double types.
A compiler option that forces operation precision
to match the highest precision of the operands
would be very helpful.
On Alpha or MIPS systems the attached test program
should produce the following output:
the first number is the correct 32-bit answer,
the second is the correct 64-bit answer.
On i386 the output will be:
I.e. the operation is performed according to the
setting of the control word.
Setting the control word to single precision will
produce the single-precision answer in both cases.
State-Changed-Why: Yes, that would be nice. One can get that on P4 machines with
gcc 3.1 with -mfpmath=sse, but that's not quite the same thing.
*** Bug 9736 has been marked as a duplicate of this bug. ***
Isn't this a dup of bug 323, really?
*** This bug has been marked as a duplicate of 323 ***