This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: Handling non-constant bounds in extract_range_from_cond
- From: kenner at vlsi1 dot ultra dot nyu dot edu (Richard Kenner)
- To: law at redhat dot com
- Cc: gcc-patches at gcc dot gnu dot org
- Date: Mon, 29 Nov 04 16:53:32 EST
- Subject: Re: Handling non-constant bounds in extract_range_from_cond
> I don't recall that discussion.  That doesn't mean it didn't happen, I
> just don't remember it...  Can you summarize the problem in more detail?
It was back in early July; I originally had been holding off on these
patches until I got ACATS clean, but then decided not to.
It's filed as PR18663.
> I'll assume that there's some Ada construct that allows a subtype to
> have a varying range?
Correct. Indeed the basic definition of a subtype allows arbitrary bounds.
The problem was that we had a variable of such a subtype in an ordered
comparison; extract_range_from_cond was returning the (non-constant) upper
bound of the type, but its caller assumes what it returns is an INTEGER_CST.
My first cut at this was to have the function fail for a non-constant bound
of the subtype, but when I suggested that patch, you pointed out that there
was another usage, whose result wasn't checked. So I tried this approach.
> Using the bounds of the underlying type in that case seems appropriate
> as long as the bounds of that underlying type are guaranteed to be at
> least as big as the subtype.
That's what a subtype means.
> However, it doesn't seem wise to do that in the more general case.  Can
> you clarify why we would always want to look at the subtype?
It's tricky.  Basically, it boils down to a language question of what the
compiler is allowed to assume for a subtype: are you allowed to assume the
object's value is within the subtype?  I'm not a "language lawyer", but I
know that in Ada (pretty much the only time this will come up), the answer
isn't that clear.  But we can ignore that because there's another issue.
Suppose I have

    subtype Foo is Integer range 1..10;
    X : Foo;

I now read in a value for X from a binary file.  Suppose that value is 15.
According to the Ada standard (at least as I understand it), there's nothing
wrong so far. If I were to *use* X, I'd have what's called a "bounded error".
But I am allowed to test whether X is valid using the predicate X'Valid,
which is supposed to return True if X is in the range 1 to 10.
But what code does that generate?  Switching to C syntax, we'd write

    X >= 1 && X <= 10
However, if the compiler is entitled to assume that X is within the range
of its subtype, this test gets folded to "true", which is wrong, since it
must be allowed to fail.
We can try to fix this by writing it as

    ((int) X) >= 1 && ((int) X) <= 10

but optimizers have been known to remove those casts.
It gets even trickier.
Suppose somebody writes

    X in Foo
Theoretically, that's not the same as X'Valid and it is perhaps valid
for the compiler to remove that test, but just yesterday some ACT
customer expected it not to. (Interesting coincidence!)
So this whole area is tricky.
As somebody who's been working in optimization for so many years, I
too rebel against the optimizer ignoring some potentially-useful piece
of information.  But in this case, using it either makes things like
'Valid tricky to implement or produces programmer surprises.
These sorts of optimizations (knowing that comparisons against known
ranges are true) tend to be useful in code that has lots of macros, since
they let you eliminate choices in many macro invocations.  But that is not
a style of coding used in Ada, which is where the subtypes are present.
So I think, on balance, that the "Principle of Least Surprise" trumps the
possible optimization benefit here.