This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: Ada front-end depends on signed overflow
- From: Robert Dewar <dewar at adacore dot com>
- To: Paul Schlie <schlie at comcast dot net>
- Cc: Florian Weimer <fw at deneb dot enyo dot de>, Andrew Pinski <pinskia at physics dot uc dot edu>, GCC List <gcc at gcc dot gnu dot org>, bosch at gnat dot com
- Date: Wed, 08 Jun 2005 10:16:23 -0400
- Subject: Re: Ada front-end depends on signed overflow
- References: <BECC6D55.A6C5%schlie@comcast.net>
Paul Schlie wrote:
What's silly is claiming that such operations are bit exact when even
something as basic as their representational base radix number system
isn't defined by the standard, nor need it be the same between
different FP types; thereby an arbitrary value is never guaranteed to
be exactly representable as an FP value in all implementations
(therefore a test for equivalence with an arbitrary value is equally
ambiguous, as would be any operations on that value, unless it is
known that within a particular implementation its value and any
resulting intermediate operation values are correspondingly precisely
representable, which is both implementation and target specific,
although hopefully constrained to be as closely approximated as
possible within its representational constraints.)
You are really just digging yourself into a hole here. It is clear
that you know very little about floating-point arithmetic. If you
are interested in learning, there are quite a lot of good references.
I would suggest Michael Overton's new book as a good starting point.
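For concreteness, a minimal C sketch (assuming IEEE 754 binary64
doubles, as on typical GCC targets) of the representability point
under discussion: a decimal constant such as 0.1 has no exact binary
representation, so equality tests against values derived from it are
fragile, while 0.5 is exact.

    #include <stdio.h>

    int main(void)
    {
        double half  = 0.5;   /* exactly representable: 2^-1 */
        double tenth = 0.1;   /* rounded to the nearest binary64 value */
        double sum   = 0.0;
        int i;

        for (i = 0; i < 10; i++)
            sum += tenth;     /* accumulates rounding error */

        printf("half == 0.5 : %d\n", half == 0.5);  /* 1 */
        printf("sum  == 1.0 : %d\n", sum == 1.0);   /* typically 0 */
        printf("sum = %.17g\n", sum);
        return 0;
    }

On a conforming binary64 implementation the last line typically prints
0.99999999999999989, which is why exact comparison against 1.0 is
unreliable here.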
- Agreed; because FP is inherently an imprecise representation, and
bit-exact equivalence between arbitrary real numbers and their
representational form is not warranted, it should never be relied
upon; it therefore seems reasonable to enable optimizations which may
alter the computed results as long as they are reasonably known to
constrain the result's divergence to some small number of least
significant bits of precision. (As no arbitrary value is warranted to
be representable, with the possible exception of some
implementation/target-specific whole-number integer values, whose
overflow semantics are also correspondingly undefined.)
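As an aside on the whole-number exception mentioned above, a small C
sketch (again assuming IEEE 754 binary64): integers are represented
exactly only up to 2^53, beyond which adjacent integers collapse onto
the same double.

    #include <stdio.h>

    int main(void)
    {
        double exact = 9007199254740992.0;   /* 2^53: exactly representable   */
        double next  = 9007199254740993.0;   /* 2^53 + 1: rounds back to 2^53 */

        printf("equal: %d\n", exact == next);   /* typically 1 */
        printf("%.0f\n%.0f\n", exact, next);
        return 0;
    }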
There is nothing imprecise about IEEE floating-point operations
- only if it is naively relied upon to be precise to some arbitrary
precision, which as above is not warranted in general, so an
algorithm's implementation should not assume it to be; as given in
your example, neither operation is warranted to compute an equivalent
value in any two arbitrary implementations (although each is hopefully
consistent within its respective implementation).
More complete nonsense. Of course we do not rely on fpt operations being
precise to arbitrary precision; we just expect well-defined IEEE results
which are defined to the last bit, and all modern hardware provides this
capability.
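The "defined to the last bit" point can be illustrated with a short C
sketch (assuming IEEE 754 binary64 and round-to-nearest): 0.1 + 0.2 is
not 0.3, but it is one specific, reproducible bit pattern rather than
an approximate fuzz.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        double x = 0.1 + 0.2;
        uint64_t bits;

        memcpy(&bits, &x, sizeof bits);       /* inspect the exact bit pattern */
        printf("x == 0.3 : %d\n", x == 0.3);  /* 0 */
        printf("bits     : 0x%016llx\n",      /* expected 0x3fd3333333333334 */
               (unsigned long long) bits);
        return 0;
    }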
- yes, I understand C/C++ etc. have chosen to define overflow and
evaluation order (among a few other things) as being undefined.
a) the programmer should not have written this rubbish.
- or the language need not have enabled a potentially well-defined
expression to be turned into rubbish by allowing an implementation to
do things like evaluate interdependent sub-expressions in arbitrary
orders, or by not requiring an implementation to at least optimize
expressions consistently with its target's native semantics.
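A minimal C sketch of the evaluation-order complaint being made here:
the operands of + may legally be evaluated in either order, so the
final value of a variable modified by both sides can differ between
compilers even though the expression looks well-defined.

    #include <stdio.h>

    static int i = 0;

    static int left (void) { i = 1; return 10; }
    static int right(void) { i = 2; return 20; }

    int main(void)
    {
        int sum = left() + right();   /* order of the two calls is unspecified */
        printf("sum = %d, i = %d\n", sum, i);   /* sum is 30, but i may be 1 or 2 */
        return 0;
    }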
Well it is clear that language designers generally disagree with you.
Are you saying they are all idiots and you know better, or are you
willing to try to learn why they disagree with you?
- Agreed: an operation defined as being undefined enables an implementation
to produce an arbitrary result (which is therefore reliably useless).
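For a concrete C example of how an operation defined as undefined
licenses an arbitrary result: because signed overflow is undefined, a
compiler may fold x + 1 > x to a constant 1, even though a wrapping
evaluation at INT_MAX would have yielded 0.

    #include <stdio.h>
    #include <limits.h>

    /* May be folded to a constant 1 by an optimizing compiler, since
       signed overflow is undefined and so x + 1 > x "cannot" be false. */
    static int always_bigger(int x)
    {
        return x + 1 > x;
    }

    int main(void)
    {
        /* With optimization this often prints 1; a naive wrapping
           evaluation at INT_MAX would have produced 0. */
        printf("%d\n", always_bigger(INT_MAX));
        return 0;
    }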
Distressing nonsense; it sounds like you have learned nothing from this
thread. Well, hopefully others have, but anyway, this is the last
contribution from me; I think everything useful has been said.