Serious bug

Craig Burley <burley@gnu.org>
Wed Sep 30 13:19:00 GMT 1998


[Sorry, I thought I'd already covered all your emails on this topic....]

>As I said, I'm not talking about a compiler's choice of representation,
>approximation, rounding mode, or anything like that.  It's more basic. 
>I just want to keep track of floating-point numbers in a consistent
>way. The bits that represent a number of any given type should always
>be the same, no matter how many times I move it around, as long as I
>don't convert it to another type or do any arithmetic on it.  Some might
>even argue that I should be able to multiply it by 1 and get the same
>result.

In other words, you're talking about a compiler's choice of
representation, approximation, rounding mode, and so on (such as
its choice of these for intermediate results).  Even though you *think*
you aren't, you are!  Putting on blinders is *not* going to help you.
(I know, I tried it.)
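
To make this concrete, here's a minimal C sketch (mine, not from the
original thread) of how "the same number" can compare unequal at the
source level.  It assumes an x87-style FPU that holds intermediates
in 80-bit registers, which is what gcc does on IA-32 unless you use
something like `-ffloat-store':

    #include <stdio.h>

    /* On an x87 target, q is rounded to 64 bits when stored to
       memory, while the fresh x/y below may still carry 80 bits of
       precision -- so the "same" expression can compare unequal.  */
    int
    differs (double x, double y)
    {
      double q = x / y;
      return q != x / y;
    }

    int
    main (void)
    {
      printf (differs (1.0, 3.0)
              ? "rounded store != fresh intermediate\n"
              : "comparison held on this target/options\n");
      return 0;
    }

Nothing here violates IEEE 754; the compiler simply chose a wider
representation for the unstored intermediate than for the stored
variable.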

>If the processor might randomly change any of the bits, then those bits
>should not be involved in making a comparison to see whether the number
>is the same as another.  If this means that, on a particular processor,
>the compiler has to specify a particular mode that slows it down, then
>-- too bad -- that's a real problem with the processor.  Compiler
>developers should put up a stink about something this basic, and the
>processor vendor should at least put out a warning that comparison of a
>number with itself will not necessarily give equality.

Read newsgroups like comp.arch and Smell The Stink.  But please don't
say things like "comparison of a number with itself will not necessarily
give equality" -- AFAIK that's not true for anything but a clearly
broken chip.  The problem is that you, and lots of other people who try
to do floating-point programming, are unclear on what "a number"
means in imperative, finite-state programming languages.  It took me
over 20 years of nearly constant programming to understand this
myself.
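
(One footnote, my addition: IEEE 754 does define a single conforming
case of self-inequality -- NaN -- but a NaN is by definition not "a
number", so the statement above stands.  A quick C illustration,
assuming IEEE 754 arithmetic with default non-trapping exceptions:

    #include <stdio.h>

    int
    main (void)
    {
      volatile double zero = 0.0;  /* volatile defeats constant folding */
      double n = zero / zero;      /* NaN under IEEE 754 */
      printf (n != n ? "NaN compares unequal to itself\n"
                     : "no IEEE NaN semantics here\n");
      return 0;
    }

That's a deliberate feature of the arithmetic, not a broken chip.)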

>This kind of processor problem seems very unlikely to me.  We will
>always need to make floating-point comparisons, and we will always need
>consistent results of equality.

What we *really* need is to be sure that programmers understand the
languages and tools they are using.  Changing languages and compilers
out from under programmers who *already* understand them won't help.

E.g. you might be happier using a system that is built entirely
around *decimal* arithmetic, whether fixed-point or floating-point.
(GNU `bc' and/or `dc' might be examples of this -- I don't know.)
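
For what it's worth, GNU bc does compute in decimal, so the classic
binary-fraction surprise goes away (a quick shell check of mine, not
something verified in the original message):

    $ echo '0.1 + 0.2' | bc
    .3

whereas in binary `double' arithmetic, 0.1 + 0.2 == 0.3 is famously
false, because none of those three constants is exactly representable.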

>I'm not saying that this is part of anyone's standard.  I'm saying that
>it is part of our common experience with a multitude of computers,
>languages, and compilers.  Give me a good reason why any
>compiler/machine should not support those basic assumptions.  Don't tell
>me that it can't, because that is not credible.

It *can*.  Please go use tools that support those basic assumptions
-- they should be easy enough to find.  (For example, you could
restrict yourself to using integer arithmetic in C and Fortran.)
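
A sketch of what that restriction buys you (my example): represent
quantities in the smallest unit you care about, and every value, copy,
and comparison is exact:

    #include <assert.h>

    int
    main (void)
    {
      /* Dollars held as whole cents: pure integer arithmetic.  */
      long a = 10;           /* $0.10 */
      long b = 20;           /* $0.20 */
      assert (a + b == 30);  /* exact on any conforming implementation */
      return 0;
    }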

But, if you want bare-to-the-metal performance, which is what
products like gcc, g77, and so on try fairly hard to deliver, then
you have to learn the rules whereby that game is played.  And learn
to recognize that there are plenty of brilliant, experienced
programmers out there who know how to exploit that performance *and*
get the floating-point consistency you seem to expect of `==' and `!='.
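
Two of the standard idioms those programmers use, sketched in C (the
function names are mine; gcc's `-ffloat-store' flag is real and applies
the first discipline to every assignment):

    #include <math.h>

    /* Idiom 1: assign intermediates to volatile temporaries so they
       are rounded to their declared 64-bit type before comparison,
       instead of lingering in extended-precision registers.  */
    int
    products_equal (double a, double b, double c, double d)
    {
      volatile double p = a * b;  /* rounded to double here */
      volatile double q = c * d;  /* and here */
      return p == q;
    }

    /* Idiom 2: compare within a tolerance chosen for the data.  */
    int
    nearly_equal (double x, double y, double tol)
    {
      return fabs (x - y) <= tol;
    }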

        tq vm, (burley)


