This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: Ada front-end depends on signed overflow
- From: Paul Schlie <schlie at comcast dot net>
- To: Robert Dewar <dewar at adacore dot com>
- Cc: Florian Weimer <fw at deneb dot enyo dot de>,Andrew Pinski <pinskia at physics dot uc dot edu>,GCC List <gcc at gcc dot gnu dot org>,<bosch at gnat dot com>
- Date: Wed, 08 Jun 2005 10:53:04 -0400
- Subject: Re: Ada front-end depends on signed overflow
> From: Robert Dewar <dewar@adacore.com>
> Date: Wed, 08 Jun 2005 10:16:23 -0400
> To: Paul Schlie <schlie@comcast.net>
> Cc: Florian Weimer <fw@deneb.enyo.de>, Andrew Pinski <pinskia@physics.uc.edu>,
> GCC List <gcc@gcc.gnu.org>, <bosch@gnat.com>
> Subject: Re: Ada front-end depends on signed overflow
>
> You are really just digging yourself into a hole here. It is clear
> that you know very little about floating-point arithmetic. If you
> are interested in learning, there are quite a lot of good references.
> I would suggest Michael Overton's new book as a good starting point.
>
>> - Agreed; because FP is an inherently imprecise representation, and
>> bit-exact equivalence between arbitrary real numbers and their
>> representational form is not warranted, it should never be relied
>> upon. It therefore seems reasonable to enable optimizations which may
>> alter the computed results, as long as they are reasonably known to
>> constrain the result's divergence to some few least significant bits
>> of precision. (As no arbitrary value is warranted to be representable,
>> with the possible exception of some implementation/target-specific
>> whole-number integer values, whose overflow semantics are also
>> correspondingly undefined.)
>
> There is nothing imprecise about IEEE floating-point operations
- Agreed; however, IEEE conformance is not mandated by most language
specifications either, so that seems irrelevant.
>> - Only if it is naively relied upon to be precise to some arbitrary
>> precision, which as above is not warranted in general, so an
>> algorithm's implementation should not assume it. As in your example,
>> neither operation is warranted to compute an equivalent value in any
>> two arbitrary implementations (although each will hopefully be
>> consistent within its own implementation).
>
> More complete nonsense. Of course we do not rely on fpt operations being
> precise to arbitrary precision, we just expect well defined IEEE results
> which are defined to the last bit, and all modern hardware provides this
> capability.
- As above. (Actually most processors in production do not directly
implement fully compliant IEEE FP math; many closely approximate it,
and others provide no FP support at all. As an aside: far more
processors implement wrapping signed-overflow semantics than provide
IEEE FP support, since most do not differentiate between basic signed
and unsigned 2's-complement integer operations. So if expectations are
based on the likelihood of an arbitrary production processor supporting
one versus the other, one would expect wrapped overflow with a high
likelihood, and fully compliant IEEE support with a lower likelihood.)
>>> a) the programmer should not have written this rubbish.
>
>> - or the language need not have enabled a potentially well defined
>> expression to be turned into rubbish by enabling an implementation
>> to do things like arbitrarily evaluate interdependent sub-expressions
>> in arbitrary orders, or not require an implementation to at least
>> optimize expressions consistently with their target's native semantics.
>
> Well it is clear that language designers generally disagree with you.
> Are you saying they are all idiots and you know better, or are you
> willing to try to learn why they disagree with you?
- I'm saying/implying nothing of the sort. I happen to believe that the
reason things are the way they are, for the most part, is that although
most knew better, the committees needed to politically accommodate
varied implementation practices and assumptions; doing otherwise would
have forced some companies to invest a great deal of time and money to
re-implement their existing compilers, processor implementations, or
application programs to accommodate a stricter set of specifications
(which most commercial organizations would lobby strongly against).
This is one of the things that Java had the luxury of being able to
somewhat side-step.
>> - Agreed; an operation defined as undefined enables an implementation
>> to produce an arbitrary result (which is therefore reliably useless).
>
> Distressing nonsense, sounds like you have learned nothing from this
> thread. Well hopefully others have, but anyway, last contribution from
> me, I think everything has been said that is useful.
- I would have, if someone could provide a concrete example of an
undefined behavior that produces a reliably useful/predictable result.