This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: Ada front-end depends on signed overflow


> From: Robert Dewar <dewar@adacore.com>
> Paul Schlie wrote:
> 
>> - yes, it certainly enables an implementation to generate more efficient
>>   code which has no required behavior; so in effect basically produce more
>>   efficient programs which don't reliably do anything in particular; which
>>   doesn't seem particularly useful?
> 
> You keep saying this over and over, but it does not make it true. Once
> again, the whole idea of making certain constructs undefined, is to
> ensure that efficient code can be generated for well defined constructs.

- Can you give an example of an operation which may yield an undefined,
  non-deterministic result and yet is reliably useful for anything?
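
For concreteness, here is a minimal C sketch (not from the original thread)
of the freedom being described: because signed overflow is undefined, a
compiler may assume it cannot happen and simplify otherwise well-defined
code; GCC's -fwrapv restores wrapping semantics for code that relies on
them.

#include <limits.h>
#include <stdio.h>

static int always_true(int x)
{
    /* If x + 1 can never overflow, then x + 1 > x holds for every x,
       so the compiler is free to fold the whole expression to 1. */
    return x + 1 > x;
}

int main(void)
{
    printf("%d\n", always_true(42));       /* well defined: prints 1 */
    printf("%d\n", always_true(INT_MAX));  /* undefined: often 1 when folded,
                                              0 if the addition actually wraps */
    return 0;
}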

>> - Essentially yes; as FP is an approximate not absolute representation
>>   of a value, therefore seems reasonable to accept optimizations which
>>   may result in some least significant bits of ambiguity.
> 
> Rubbish, this shows a real misunderstanding of floating-point. FP values
> are not "approximations", they are well defined values in a system of
> arithmetic with precisely defined semantics, just as well defined as
> integer operations. Any compiler that followed your misguided ideas
> above would be a real menace and completely useless for any serious
> fpt work.

- I'm sorry, I wasn't aware that C/C++, for example, specify bit-exact
  representation and semantic requirements for FP (possibly because they
  don't?).

   What's silly is claiming that such operations are bit-exact when even
   something as basic as the radix of the underlying representation isn't
   defined by the standard, nor required to be the same between different
   FP types. An arbitrary value is therefore never guaranteed to be
   exactly representable as an FP value in all implementations, so a test
   for equivalence with an arbitrary value is just as ambiguous, as are
   any operations on that value, unless it is known that within a
   particular implementation the value and any resulting intermediate
   values are precisely representable; that is both implementation and
   target specific, although each is hopefully approximated as closely as
   its representational constraints allow.
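
As an aside, C does expose the radix through FLT_RADIX in <float.h>, but
it is implementation-defined rather than fixed. A small sketch of the
representability point, assuming a typical IEEE 754 binary64 double:

#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("FLT_RADIX = %d\n", FLT_RADIX);   /* 2 on most targets */

    /* 0.1 and 0.2 have no exact binary representation, so the "obvious"
       equality below normally fails even though every individual
       operation is well defined by the implementation. */
    double sum = 0.1 + 0.2;
    if (sum == 0.3)
        printf("equal\n");
    else
        printf("not equal: %.17g\n", sum);   /* usually this branch */
    return 0;
}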

> As it is, the actual situation is that most serious fpt programmers
> find themselves in the same position you are with integer arithmetic.
> They often don't like the freedom given by the language to e.g. allow
> extra precision (although they tend to be efficiency hungry, so one
> doesn't know in practice that this is what they really want, since they
> want it without paying for it, and they don't know how much they would
> have to pay :-)

- Agreed; and because FP is inherently an imprecise representation, and
  bit-exact equivalence between arbitrary real numbers and their
  representational form is not warranted, it should never be relied upon.
  It therefore seems reasonable to enable optimizations which may alter
  the computed results, as long as they are reasonably known to confine
  the divergence to a few least-significant bits of precision. (No
  arbitrary value is warranted to be representable, with the possible
  exception of some implementation/target-specific whole-number integer
  values, whose overflow semantics are also correspondingly undefined.)
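
A sketch of the "extra precision" freedom mentioned above, assuming a
C99 implementation: FLT_EVAL_METHOD (from <float.h>) reports whether
intermediate results may be kept in a wider format, e.g. 2 on x87
targets that evaluate in long double, 0 where plain double is used.

#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);

    double a = 1.0e16, b = 1.0, c = -1.0e16;
    /* With strict double arithmetic the contribution of b is rounded
       away and the result is 0; with excess intermediate precision
       (e.g. x87 long double) it may survive and the result is 1. */
    printf("(a + b) + c = %g\n", (a + b) + c);
    return 0;
}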

>>   Where integer operations are relied upon for state representations,
>>   which in general must remain precisely and deterministically
>>   calculated, as otherwise catastrophic semantic divergences may result.
> 
> Nonsense, losing the last bit in an FP value can be fatal to many algorithms.
> Indeed, some languages allow what seems to FP programmers to be too much
> freedom, but not for a moment can a compiler writer contemplate doing an
> optimization which is not allowed. For instance, in general replacing
> (a+b)+c by a+(b+c) is an absolute no-no in most languages.

- Only if it's naively relied upon to be precise to some arbitrary
  precision, which, as above, is not warranted in general, so an
  algorithm's implementation should not assume it. In your example,
  neither grouping is warranted to compute an equivalent value in any
  two arbitrary implementations (although each is hopefully consistent
  within its own implementation).
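
A concrete instance of the (a+b)+c versus a+(b+c) point, as a small C
sketch with values chosen only to make the difference obvious; the two
groupings differ by far more than a last bit on any IEEE 754 double
implementation:

#include <stdio.h>

int main(void)
{
    double a = 1.0e300, b = -1.0e300, c = 1.0;

    printf("(a + b) + c = %g\n", (a + b) + c);  /* 0 + 1   -> 1 */
    printf("a + (b + c) = %g\n", a + (b + c));  /* c is lost -> 0 */
    return 0;
}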

>> - No, exactly the opposite, the definition of an order of evaluation
>>   eliminates ambiguities, it does not prohibit anything other than the
>>   compiler applying optimizations which would otherwise alter the meaning
>>   of the specified expression.
> 
> No, the optimizations do not alter the meaning of any C expression. If the
> meaning is undefined, then

- Yes, I understand that C/C++ etc. have chosen to leave overflow and
  evaluation order (among a few other things) undefined.

> a) the programmer should not have written this rubbish.

- Or the language need not have allowed a potentially well-defined
  expression to be turned into rubbish by permitting an implementation to
  evaluate interdependent sub-expressions in arbitrary orders, or by not
  requiring an implementation to at least optimize expressions
  consistently with the target's native semantics.
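
A small C sketch of the evaluation-order point (the function names are
illustrative only): the order in which the two calls below are made is
unspecified, so conforming implementations may legitimately print
different traces for the same source.

#include <stdio.h>

static int g(void) { printf("g "); return 1; }
static int h(void) { printf("h "); return 2; }

int main(void)
{
    /* Unspecified order: the output may be "g h 3" or "h g 3". */
    int r = g() + h();
    printf("%d\n", r);

    /* By contrast, this would be undefined, not merely unspecified:
       two unsequenced modifications of the same object.
       int i = 1; int bad = i++ + i++;                            */
    return 0;
}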

> b) any optimization leaves the semantics undefined, and hence unchanged.

- Agreed: an operation whose behavior is undefined enables an
  implementation to produce an arbitrary result (which is therefore
  reliably useless).
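
For completeness, a hedged sketch of why relying on that arbitrary
result is fragile in practice, and of a well-defined alternative; the
helper names here are illustrative, not from any library:

#include <limits.h>
#include <stdio.h>

/* Fragile: this post-hoc test only "works" if x + y wraps on overflow,
   which is undefined, so a compiler that assumes overflow cannot happen
   may fold it to (y < 0).  GCC's -fwrapv restores wrapping semantics. */
int add_overflowed(int x, int y)
{
    return x + y < x;
}

/* Well defined: check the range before performing the addition. */
int safe_add(int x, int y, int *out)
{
    if ((y > 0 && x > INT_MAX - y) ||
        (y < 0 && x < INT_MIN - y))
        return 0;               /* would overflow; *out left untouched */
    *out = x + y;
    return 1;
}

int main(void)
{
    int r;
    if (safe_add(INT_MAX, 1, &r))
        printf("sum = %d\n", r);
    else
        printf("overflow detected without overflowing\n");
    return 0;
}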



