This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: PATCH COMMITTED: Don't break tests for enum in range


On 6/7/07, Richard Kenner <kenner@vlsi1.ultra.nyu.edu> wrote:
[Moved to gcc list from gcc-patches].

> > So now objects can have values outside of their type?
>
> If we accept that it is correct that TYPE_PRECISION is not synonymous
> with TYPE_MIN_VALUE and TYPE_MAX_VALUE, then, yes, objects can have
> values outside of their type (isn't that the whole point of
> check'valid, or whatever it is called?).

As you say, I think we need to define the *precise* semantics of what it
means if a value of a variable is outside the range of TYPE_{MIN,MAX}_VALUE.
The simplest example of that is an uninitialized variable.

It can conceivably mean a number of things:

(1) The effect of such a value is undefined and the compiler may assume
any consequences when this occurs.

(2) The compiler can treat the variable as having any value in the range
given by its TYPE_PRECISION that is convenient, but need not choose the same
value every time.

(3) The same as (2) except that the same value must be chosen every
time (for example the actual value or one of the bounds).

I think the best approach is to use flags set by the front end to indicate
which of these is to be the case.  For C, I believe (1) is always the
proper meaning.  I don't know what it is for C++, Fortran, and Java.  For
Ada, (3) is the normal case, but there are many situations where the front
end can prove that (2) or (1) is acceptable.

Note that types whose TYPE_MIN/MAX_VALUE differ from the natural values
implied by the type's signedness and precision also require us to define
what happens if arithmetic on them is allowed, and, if it is allowed, what
the semantics for overflow are.  Even more interesting is how to represent,
for example, a + 1 for a of type int with a range of [5, 10] -- note that
1 is _not_ in that range.

So, the most obvious answer to these points is that arithmetic is
always performed in a type where TYPE_MIN/MAX_VALUE is
naturally defined and so we can rely on two's complement arithmetic.

The remaining question is: when we expose types with non-natural
TYPE_MIN/MAX_VALUE to the middle-end, how do we handle
conversions between the "base" and the "derived" type?  Is the
conversion value-preserving, even if the value is outside of the
"derived" type's bounds?  If not, how is it "truncated"?

(1) conversion is only defined if the result is within the target type's range
     (this is what we assume now in VRP, if you don't use VIEW_CONVERT_EXPR)

(2) truncation happens explicitly (which is what we obviously don't want)

(3) we allow all values in the base type's range, even for the derived type
(this is what we get with VIEW_CONVERT_EXPR now)

So from the last discussion(s) we agreed it only makes sense to define
arithmetic in "base" types.  We also agreed to the current state of
using (1) for conversions.

Of course, all of the previous discussion mattered only to Ada and 'Valid,
but as we now see it obviously applies to types from other front ends as
well, iff they set TYPE_MIN/MAX_VALUE to non-natural values.

Richard.

