Serious bug

Craig Burley <burley@gnu.org>
Tue Sep 29 22:26:00 GMT 1998


>(1) If you subtract a number from itself, you get 0.
>
>(2) If you compare a number with itself, you get equality.
>
>(3) If you assign a number to another of the same type, you get the same
>number.
[...]
>This has nothing to do with rounding questions.

Actually, it *does* have to do with rounding.  In most imperative
languages, "a number" is defined as "a finite-rational approximation
of a real number".

And, the approximation method used to obtain *any* "incarnation" of
a number (computed from a text constant, an operation on other
numbers, etc.) is rarely required to be the exact same approximation
method for all numbers of that type.

This means that, in languages like Fortran, C, and, I would guess,
C++, it is permitted for evaluation of

  3.1 - 3.1

to return a non-zero number, because the approximations of the two
`3.1' constants need not be identical.  That specific example is
unlikely to be a problem, but when one or both constants are replaced
by a variable containing that constant, it *can* be a problem, because,
of course, the variable *cannot* contain that constant -- it contains
only an *approximation* of that constant (which, for a finite number
of possible constants, might be exactly equal to that constant).
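
To make the variable case concrete, here's a quick sketch; I'm
deliberately forcing two *different* approximations of 3.1 by mixing
precisions, so the exact (tiny, nonzero) result depends on your
processor's representations:

      REAL X
      DOUBLE PRECISION Y

      X = 3.1
      Y = 3.1D0
C     X and Y each hold an approximation of 3.1, but not the *same*
C     approximation, so their difference is tiny yet nonzero.
      PRINT *, DBLE(X) - Y
      END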

In Fortran, anyway, this leads to situations that quite contradict
what I think you're saying is "guaranteed" of a language like Fortran:

	REAL R
	DOUBLE PRECISION D

	R = 3.1
	D = 3.1
	IF (R .EQ. D) PRINT *, 'Equal!'
	END

The above program is not necessarily going to print "Equal!" on all
standard-conforming compilers.  (It probably will for g77, probably
won't on most any other compiler...but if you have that program
print D, under g77, it probably won't print "3.1" like it probably
would under other compilers.)

That's because what some people consider "a number" is different
from what others consider "a number" to be.  "D = 3.1" is the
problematic statement -- is the number contained by D in the
subsequent comparison:

  1.  precisely equal to 3.1

  2.  equal to "the" double-precision approximation of 3.1 (assuming
      there is only one such approximation on that processor, aka
      compiler/runtime/OS/hardware)

  3.  equal to "the" single-precision approximation of 3.1

  4.  equal to some other approximation of 3.1

  5.  equal to any of a variety of possible approximations of 3.1
      at any given time

g77 answers #3, which is the same answer for "R = 3.1", hence the
comparison happens to work (and this is what the Fortran
standards basically imply).

But, many other Fortran compilers answer #2, and, for those that
don't also answer #2 for "R = 3.1", that leads to an inequality.
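
One way to see which answer a particular compiler gives is to print
D with more digits than "3.1"; this is just a sketch, and the exact
digits depend on your runtime's decimal conversion:

      DOUBLE PRECISION D

      D = 3.1
C     A compiler answering #3 stores the single-precision
C     approximation of 3.1 widened to double, roughly
C     3.09999990463256836; one answering #2 stores the
C     double-precision approximation, roughly 3.10000000000000009.
      PRINT '(F22.17)', D
      END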

(A few Fortran compilers -- perhaps more in the future -- will answer
#1, probably making the equality work, at least *most* of the time,
since such compilers will probably be *permitted* to provide a
lower-precision substitute whenever they like.)

So, a key aspect of the overall problem -- the issue of whether a
compiler is permitted, required, or forbidden to use more precision
than the linguistically pertinent type(s) of an operation to compute
the operation, or to carry forward the result of that operation
into a subsequent operation -- is exposed by the x86, which doesn't
permit any answer other than "permitted" for high-performance
computation.

That is, on the x86, if excess precision is *required* for intermediate
computations, then temporaries spilled to memory must be larger than
the explicitly declared IEEE 64-bit doubles; if it is *forbidden*,
then *all* intermediate computations must be spilled to memory as
explicitly declared IEEE 64-bit doubles (since the processor doesn't
provide any other means to achieve that effect).

And, since "permitted" is the only high-performance answer, that leaves
it up to the *compiler* to decide, for any set of operations, which
will be computed in excess precision and which won't.  And, it can
even generate code that changes these decisions in mid-stream (though
I don't think gcc does that).
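
Here's a sketch of how that can bite on the x86; whether the message
actually prints depends on which of the two evaluations of X / Y the
compiler spills to a 64-bit temporary and which it keeps in an 80-bit
register (and on options like -ffloat-store, which, as I understand
it, forces user *variables*, though not compiler temporaries, out to
memory):

      DOUBLE PRECISION X, Y, Q

      X = 1D0
      Y = 3D0
      Q = X / Y
C     If Q is stored to memory, it gets rounded to a 64-bit double,
C     while the X / Y in the comparison may still carry 80 bits in a
C     register, so the two can compare unequal.
      IF (Q .NE. X / Y) PRINT *, 'Excess precision at work'
      END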

If the above doesn't make a fair amount of sense to someone -- technical
details on my part aside -- that person should not be writing, or
maintaining, any production code using floating-point.  Understanding
the distinction between real numbers and floating-point numbers, and
the linguistic and processor issues revolving around approximations
(required to extract the latter from the former), is crucial to getting
any FP code to work properly.

        tq vm, (burley)


