This is the mail archive of the mailing list for the GCC project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [range-ops] patch 01/04: types for VR_UNDEFINED and VR_VARYING

On 7/23/19 5:33 AM, Richard Biener wrote:

What irange contains internally is a bit arbitrary.  It's really an API
for managing a set of subranges.  We could have used trees internally
just as easily, then we wouldn't need a type field.  Since we were doing
lots of operations, rather than going back and forth from trees all the
time, we just used the underlying wide_int directly.  We could have
fiddle-farted around with HOST_WIDE_INT or whatever, but wide_int is
there, has all the operations, and it works fine, so that's what it
currently is on the branch.
But a range has no type.  Period.  The fact that with the in-tree implementation
there's access to a type is artificial.  All workers in the in-tree
implementation get a type for the operation they carry out.

But somehow irange doesn't get that.

In fact, an irange shouldn't be bound to any type, and the usual
"carry out multiplication of two integer typed vars with the result
in integer" on irange should be multiply two iranges (with unbound
result) plus a "truncate to integer type" operation.

So, following through on the implementation details of doing that: we do the exact same thing VRP does for multiply; we eventually call wide_int_range_multiplicative_op.
The code from tree-vrp.c:

  wide_int res_lb, res_ub;
  wide_int vr0_lb = wi::to_wide (vr0_min);
  wide_int vr0_ub = wi::to_wide (vr0_max);
  wide_int vr1_lb = wi::to_wide (vr1->min ());
  wide_int vr1_ub = wi::to_wide (vr1->max ());
  bool overflow_undefined = TYPE_OVERFLOW_UNDEFINED (type);
  unsigned prec = TYPE_PRECISION (type);

  if (wide_int_range_multiplicative_op (res_lb, res_ub,
                                        code, TYPE_SIGN (type), prec,
                                        vr0_lb, vr0_ub, vr1_lb, vr1_ub,
                                        overflow_undefined))
    vr->set (VR_RANGE, wide_int_to_tree (type, res_lb),
             wide_int_to_tree (type, res_ub));

Which, lo and behold, requires a type in order to get the signop right in the 4th argument.  We also use the type to figure out the precision in the 5th argument, as well as the overflow condition at the end.

So it *is* bound to a type in order to do the operation; it's just a matter of whether you pass that type around or include it with the object.  I simply can't imagine why you think it isn't.

Sure, in this case the LHS, vr0, and vr1 are all the same type (or should be types_compatible_p), so we can pass in one type, but not all operations follow that pattern: casts, shifts, comparisons, etc. can have different types in the operand positions.  We include the type with each range, and we always have accurate info associated with the operand.

How is that a bad thing?

We are treating a range object as a unique, self-contained object.
Therefore the range has a type, so we know how to print it, and can
confirm before any operation that the ranges being operated on are
appropriately matched.  We found and opened bugzillas over the past
couple of years for places where our code caught bugs because a range was
created and then operated on in a way that was not compatible with
another range.  I think there is still an open one against Ada(?)
where the switch and case are different precisions.
I'm fine with sanity-checking where it makes sense but looking at
the irange code the fact that there is a type is a fundamental requirement.
IMHO that's bad design.

We could remove the type from the range object; we aren't critically tied to it.  But then virtually everywhere we pass a range, we'll be passing in the type to be operated on, or the sign flag, or the overflow flag, or some combination of those.  It's only a matter of time until someone gets them out of whack.  It seems far more logical to simply keep the type associated so we can pick up the overflow, signop, precision and such as needed, and do some sanity checking along the way.  That's what the type field is for, after all: to consolidate all the info you might want.  Why pass extra parameters when you don't need to?

From my point of view, a range object is similar to a tree node.  A tree
node has the bits to indicate what the value is, but also associates a
type with those bits within the same object.  This is less error prone
than passing around the bits and the type separately.  As ranges are
starting to be used in many places outside of VRP, we should do the same
thing with ranges.  With value_range it would actually be free, since
there is already a tree for the bounds which contains the type.

Personally, I'd put it in both for consistency.  I view a range as an object we are associating with either an expression or an ssa-name.  That range should then have a type consistent with that name or expression.  Even if the range is empty, I would still expect it to be compatible, since I'm usually associating it with an ssa-name or expression.

Because you seem so dead set against it, I can also see some consistency in not having a type for undefined.  Since there are 0 subranges, we can't ask for a lower_bound () or upper_bound () on an empty range, so I can see an argument for extending that to not being able to ask for the type () either.  I agree that this is also a plausible viewpoint, and we are changing our code to make that happen, even though we have to pass a few extra types around and will lose some minor sanity checking when doing intersection or union with an empty range.  To me an empty range is akin to a NaN: it isn't a real float value, but it does still have a type associated with it.

Being against a type for varying makes no sense to me, since varying is simply shorthand for [MIN, MAX].  This is no different from any other range like [3, 45]; it's just a range with 2 end points.  VRP may happen to special-case it to represent the lattice value that maps to this and doesn't bother filling it in, but that doesn't change the fact that it represents [MIN, MAX].  And VRP does occasionally expand varying into its component limits.  Aldy found cases where 2 ranges combined would result in VR_RANGE [MIN, MAX], but would not be considered varying because it wasn't normalized to VR_VARYING, so there are 2 representations of the same value, which is also bad.

So we can now normalize that code to not special-case varying: simply ask for lower_bound () and upper_bound ().  It's less special casing, it's consistent, and it costs no memory.  It all seems like a really good cleanup, which you appear to be opposing for reasons I can't quite fathom.

to fold_range/op_range?  The API also seems to be oddly
constrained to binary ops.  Anyway, the way you build
the operator table requires an awful lot of global C++ ctor
invocations, sth we generally try to avoid.  But I'm getting
into too many details here.
It's "oddly constrained" because what you are looking at is just the
standardized unary/binary ops code, i.e. the equivalent of
extract_range_from_binary_expr () and extract_range_from_unary_expr ().
The other ops we care about have specialized requirements, like PHIs
and the arbitrary numbers of parameters in a call, or anything less
common than one or two operands.  You are just not seeing those parts.

So - to answer your question above, I'd like you to pass down
a type to operations.  Because that's what is fundamentally
required - a range doesn't have a "type" and the current
value_range_base doesn't fall into the trap of needing one.


Why is having a type associated with the data a "trap"?  Perhaps the
old VRP lattice idea didn't need a type with the UNDEFINED and VARYING
lattice values, but we're moving past lattice values and into a realm
where we have ranges as useful things outside of VRP, and trying to
shoehorn in lattice values no longer seems appropriate.

I looked at implementing range-ops without a type in the range, and we
ended up passing 2 parameters everywhere each time we did anything with a
range.  This doubled the number of parameters in most routines, and when
we had chains of calls, we were simply passing the type along with the
range.  It seems archaic to be constantly passing information around
instead of associating it with the range itself.  It's just another place
for an error to creep in.  Aldy found a place where we were creating
varying nodes for floats, IIRC; the type checking in the range
operations caught it precisely because the range returned wasn't the
type it was assumed to be.
Well, I don't agree.  It's not archaic, it's the way GCC works everywhere
else.  So it's consistent.  Archaically consistent, in your view.

Are you really arguing we should write code the same way for 30+ years and never change?

Indeed, I would argue that when we have the opportunity to modernize code in our code base, we really ought to be doing it... and we don't do it nearly enough.

Why don't we just pass a type along with every tree node instead of including it in the node?  We'd save memory in each tree node.  I *really* fail to understand this argument for keeping the type separate from the range when the range by definition represents sub-ranges of a specific type.

But [1, 2] doesn't have a "type".  [1, 2] and [10000, 10004] can
be added just fine even if [1, 2] is from 'char' and [10000, 10004] is
from 'int'.  If you want sanity-checking you can do
But of course it does.  I defy you to create a value_range for [1, 2] where I can't ask for the type of either the 1 or the 2.

When I write

 int foo (unsigned x, char y)
 {
   return x + y;
 }

 we get

  _1 = (unsigned int) y_3(D);
  _2 = _1 + x_4(D);
  _5 = (int) _2;
  return _5;

In order to make sure everything works as expected, GCC inserts code to make sure the operands of the '+' are types_compatible_p.  It then uses the information in the TYPE field to perform the operation.  If GCC isn't going to add a 'char' to an 'unsigned int', why should the range code?  We're using the same code under the covers.  I would say the same arguments for ensuring the operands are compatible in GCC's '+' apply to determining the results of a range's '+' operation.  If we don't, I'm sure we'll get the wrong results at some point.  Either that, or GCC is wasting its time doing all this work.

What we are doing to calculate results is *exactly* the same from my point of view.  We use the type checking mostly for intersection and union, and while we don't do much type checking on operations like plus (we could, but we assume GCC has already done that), we do use the fields of the TYPE to execute the operation in case we need to know the signop or overflow condition of the type.

This is where I do not understand what kind of point you are trying to make.

When we perform an operation like '+' on a range, we are calling GCC's routines to do the add on each of the operands of the range, which means we are trying to map range operations as directly to the IL operation as possible.  That is, PLUS is still the same PLUS that would be applied to the two const tree values in 3 + 4; we are just teaching the compiler how ranges are different from constants, and require a few more operations combined to come up with the result.  So for plus, we're teaching it that [L1, U1] + [L2, U2] results in [L1+L2, U1+U2], plus variants based on overflow and sign, and we use GCC's add operation for whatever the type of L1, L2, U1, and U2 is.  We need the info from the types of those endpoints to do the operation.

Why on earth we'd want to go out of our way to not include the type when we'll need pieces of it passed in separately, I cannot imagine.

In the case of irange, we added the type field since it isn't available from the wide_int structure, and the primary reason we 'need' it is so we can pick up the correct signop value and overflow style when we are performing certain wide_int operations.  In the case of value_range, the end points are already trees, so we have the type when we need it and don't need to add anything extra.  So it's inherently free in value_range, which should make it an even easier sell.

irange
irange_plus (tree in_type, irange op0, irange op1)
{
  gcc_assert (range_fits_type (in_type, op0)
              && range_fits_type (in_type, op1));
  irange res = op0 + op1;
  return res.truncate_to (in_type);
}

or so easily.  See - even the checking part doesn't need the
type in the range.
That code could easily be the same, except our assert would read:

gcc_assert (types_compatible_p (op0.type (), op1.type ()));

which ensures that those ranges came from places with the kinds of values we expect, and looks a lot like everything else we see in GCC.

We don't actually do that, but we could if you like.  It's just a bit redundant, since virtually every place we call into the fold operations has had ranges extracted from a gimple statement, and the resulting queries have a type matching the operand, so at that point we're pretty sure they will pass the types_compatible_p test.  And even if they don't, the underlying wide_int operational routines will complain.

Most of the type checking is for the benefit of the ranger, which goes off and finds ranges and brings them back.  It typically has to perform a chain of operations over multiple statements, and when you are traveling over the CFG and combining computations, that includes unions and intersections.  It's very helpful to ensure the range you are getting back at each stage of the evaluation matches the type you were expecting.  It's been a critical debugging aid.

There are also some operations, like shift, where the type of the operands may not even be the same sign as 'in_type', so we need to pass in the right type for each to truly get it right.  I noticed the existing VRP code does some fudging for that particular case; I'm not convinced there isn't a lurking bug there.  But in any case we'll just have the precise info right there in the range and won't have to do anything special.

a = b < c is another example.  The LHS is a different type (boolean) than op1 and op2.  It's obscured in VRP because there is special code for comparison ops, but range-ops treats relationals exactly like any other generic binary operation.  We'd have to pass the type or signop for the RHS to the operation because it's different from 'in_type'.  Basically, the API would have to have a type for each operand to be sure we are correct for all operations.

Range-ops provides a generic API that works for *all* binary or unary ops, and as such, you sometimes need the types of each operand in ways that can be hard to predict until you actually implement them.  We've found it very useful to have the type of the range easily available for some of today's tree codes... and who knows what's next, especially if we expand ranges to other types like floats, which we are also preparing for.

I also do not have to "invent" a type when I want to work with irange
on widest_ints.  Oh, another comment when reading the code was

Well, you had to get those wide_ints from somewhere, and you certainly have to "invent" a signop and possibly an overflow flag to do many operations on wide_ints.  In all our code, I have yet to "invent" a type.  Any range operation I want to do is associated with something, and that something always has a type.

Are you thinking of something in particular that is type-less that you need or want to do range operations on?

that irange storage should not be wide_int but trailing-widest-int.
Because obviously irange has storage requirement.  It makes
in-place modification of iranges impossible but that's the way modern
C++ code works (and be consistent with how wide-int works).

I don't really like you invented an API unlike any existing in GCC.

I'd say you are kidding, but I'm afraid you aren't.

Regardless, we don't need to discuss irange_storage details, as we aren't proposing integration of irange or irange_storage in the near future... so I don't think the internal details of that prototype matter at this point, only what we are trying to do with value_range.

That said, I believe we can do away with the need for a type with an
'UNDEFINED' range.  That is not too onerous, and there don't really
seem to be too many ripple effects from having a typeless undefined range.
I think we can contain that, and it prevents us from having to add a hack
to value_range for that situation.

VARYING is another situation completely.  We adopted the term 'varying'
for ease of compatibility with VRP, but varying is simply a range going
from MIN to MAX for a type.  Removing the type from that would then
require that we always pass a type in with every range, which gets back
to doubling the number of parameters everywhere for no good reason.

If we standardize value_range so that MIN and MAX are set for varying,
then everything works very smoothly; we can make value_range and irange
interchangeable and facilitate getting the range-ops code into trunk.

It seems like the only reason we can't do this right now is the CONST
varying nodes that are returned from get_value_range ().
Looking at that routine, it seems there are only 2 cases where that can
be returned:
   1) we ask for an ssa-name beyond the end of the local ssa-name vector
   2) once values_propagated is set to true and an ssa-name has no entry.

Both seem pretty straightforward to fix:
1) If we ask for an ssa-name beyond the end, simply reallocate the
vector to be big enough.  I doubt this will trigger a lot, and if we
initially allocate it with room for an extra 10% of names it'll probably
never trigger.  Or pick whatever number seems appropriate.
2) If values_propagated is true, simply allocate a node for the
ssa-name, set it to varying, and return it.  This accomplishes the same
But it's completely pointless to do this work!  I refuse to make the
existing code slower just to make ranger easier to merge (and your
compile-time comparisons worse for the old code - not that all the
ranger-driven C++ification helped here).
Slower?  You really think you will *ever* measure this to be slower?  Mostly this is filling in the min and max fields for varying.  That's hardly expensive.

Most of the "C++ification" that Aldy did was simply factoring out the common code so we could also call it, and he did a lot of bug fixing along the way.  Code reuse should be a *good* thing, and I would be very surprised if that refactoring has any tangible effect on execution speed.

I fail to see how any of that, or what we are proposing here, will make things slower in any kind of measurable way.

And on top of that, thanks to this work of Aldy's, we are also able to use value_range internally in the ranger now.  When we wish to try to integrate range-ops or the ranger, it will be using value_range as well.  So if we are making value_range slower, we will be making ourselves slower by the same amount, since we'll share the code base.  There is no nefarious plan to slow VRP down so we look better...  we're actually trying to make it easier to compare what's in trunk to what we have.

The number of additional nodes we allocate will be pretty minimal, and
it will never exceed the number of ssa-names.  Then we never have to
worry about having CONST set for a varying node either.  I see places
where there is special processing to avoid calling set_varying because
we don't know if the value_range in hand is in constant memory and would
cause the compiler to trap.  This seems like a really bad situation, and
this would eliminate that issue.  We can also then eliminate the places
where VARYING is expanded to min/max for a given type; instead we can
just pick up min/max directly.  It seems much cleaner overall.

Something like the simple attached patch would resolve that issue, and
remove any lurking concerns/bugs with the CONST code.

Then we can associate a type with varying, canonicalize it consistently,
and set MIN/MAX appropriately.  This will give us full
interchangeability between irange and value_range, establish a
solid/consistent API, and we can move on to the next thing :-)

Does this not seem reasonable?
I still do not agree with the design decision of the ranger API.  You do
want that to be included in the end, no?  Of course I have no veto power
or whatnot but the API and implementation is awkward.  You didn't

You mean you disagree with our range API?  The ranger API is completely disjoint from this and is not being proposed right now... and in fact EVRP can easily be mapped to the ranger API.  It's a pretty simple API for querying ranges.  Or maybe you meant the range-ops code and API for evaluating ranges (which we have also not proposed yet).  It's being reworked, since it was in pure prototype mode and hasn't been sanitized.

The API we are proposing here applies only to ranges, and specifically value_range.  It is designed to mirror what we do with tree nodes.  It is designed to promote ranges to be "just another kind of type" in GCC, one that we could eventually represent with a tree_node if we wanted to.  It's really a type which specifies one or more sub-ranges for a type.  As such, it contains values which are expected to be of that type.  That's it.

Our value_range type () query simply extracts TREE_TYPE () from one of the bounds; it's really shorthand for the tree equivalent of TREE_TYPE (TREE_OPERAND (t, 0)), if there were a RANGE_TYPE tree node.

If we want to know the bounds of what is referred to as a varying node ([MIN, MAX]), you have to have a type available.  And if we store min and max, the type is always there.

I still fail to understand why you are so dead set against representing varying as [MIN, MAX].  It's what varying *is*.

And really, that's the only fundamental change we are proposing here...  And we aren't even adding a field to the structure to do it!

answer my concern about all the global ctors, did you?


We aren't proposing anything with global constructors yet, so I didn't see the need to comment on it.

1) The entire range-ops code base is being restructured before we actually propose it.  Aldy provided what's currently there for completeness, so you could see how we were using things if you wanted.  It needs some cleaning up, and I'm in the middle of that now.  It still retains a lot of the old switch structure that doesn't need to be there.  Operational flow will be much easier to read and localized.

2) You don't like global constructors.  What is the actual concern?  To the best of my knowledge, the tidbit of things that would be happening there is not going to have an impact on anything.  Each of the, what, 30-ish constructors sets an 'enum tree_code' field and sets an array index to point to this object.  That's it.

I'm not aware of any policy that says we can't have compile-time constructors.  Regardless, if the number of small constructors is really a big concern, they can all be rolled into a single initializer for the table... assuming it even looks like this when we're ready to submit a proposal.  It's a similar amount of executable code; it just puts the onus on the author to register each operator, since they no longer self-register with the table.  Just an extra thing you have to remember to do when writing code for a new opcode.  And it would take 20 minutes to change.

Six of one, half a dozen of the other, by my reckoning...  I just prefer to use useful language features when available and remove the human-error element.
