Serious bug

Richard Hadsell
Tue Sep 29 22:26:00 GMT 1998

Craig Burley wrote:
> This means that, in languages like Fortran, C, and, I would guess,
> C++, it is permitted for evaluation of
>   3.1 - 3.1
> to return a non-zero number, because the approximations of the two
> `3.1' constants need not be identical.

That's silly.  I don't care what approximation the compiler comes up
with, but it ought to be consistently the same, not randomly chosen.

> The above program is not necessarily going to print "Equal!" on all
> standard-conforming compilers.

I'm not arguing that a compiler fails to conform to some standard.  I'm
saying that it fails to conform to the common sense established by over
30 years of compilers that deal with all kinds of floating-point
representations.  There had better be a really good performance reason
before a compiler declines to support the basic behavior that so many of
us take for granted.

> That's because what some people consider "a number" is different
> from what others consider "a number" to be.  "D = 3.1" is the
> problematic statement -- is the number contained by D in the
> subsequent comparison:
>   1.  precisely equal to 3.1
>   2.  equal to "the" double-precision approximation of 3.1 (assuming
>       there is only one such approximation on that processor, aka
>       compiler/runtime/OS/hardware)
>   3.  equal to "the" single-precision approximation of 3.1
>   4.  equal to some other approximation of 3.1
>   5.  equal to any of a variety of possible approximations of 3.1
>       at any given time

As I said, I'm not talking about a compiler's choice of representation,
approximation, rounding mode, or anything like that.  It's more basic. 
I just want to keep track of floating-point numbers in a consistent
way.  The bits that represent a number of any given type should always
be the same, no matter how many times I move it around, as long as I
don't convert it to another type or do any arithmetic on it.  Some might
even argue that I should be able to multiply it by 1 and get the same
number back.

If the processor might randomly change any of the bits, then those bits
should not be involved in making a comparison to see whether the number
is the same as another.  If this means that, on a particular processor,
the compiler has to specify a particular mode that slows it down, then
-- too bad -- that's a real problem with the processor.  Compiler
developers should raise a stink about something this basic, and the
processor vendor should at least put out a warning that comparison of a
number with itself will not necessarily give equality.

This kind of processor problem seems very unlikely to me.  We will
always need to make floating-point comparisons, and we will always need
consistent results of equality.

I'm not saying that this is part of anyone's standard.  I'm saying that
it is part of our common experience with a multitude of computers,
languages, and compilers.  Give me a good reason why any
compiler/machine should not support those basic assumptions.  Don't tell
me that it can't, because that is not credible.

Dick Hadsell			914-381-8400 x5446 Fax: 914-381-9790
Blue Sky | VIFX       
1 South Road, Harrison, NY 10528
