This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: patch to canonize unsigned tree-csts


Kenneth Zadeck <zadeck@naturalbridge.com> writes:
> On 10/04/2013 01:00 PM, Richard Sandiford wrote:
>> I was hoping Richard would weigh in here.  In case not...
>>
>> Kenneth Zadeck <zadeck@naturalbridge.com> writes:
>>>>>> I was thinking that we should always be able to use the constant as-is
>>>>>> for max_wide_int-based and addr_wide_int-based operations.  The small_prec
>>>>> Again, you can get edge-cased to death here.  I think it would work
>>>>> for max because that really is bigger than anything else, but it is
>>>>> possible (though unlikely) to have something big converted to an address
>>>>> by truncation.
>>>> But I'd have expected that conversion to be represented by an explicit
>>>> CONVERT_EXPR or NOP_EXPR.  It seems wrong to use addr_wide_int directly on
>>>> something that isn't bit- or byte-address-sized.  It'd be the C equivalent
>>>> of int + long -> int rather than the expected int + long -> long.
>>>>
>>>> Same goes for wide_int.  If we're doing arithmetic at a specific
>>>> precision, it seems odd for one of the inputs to be wider and yet
>>>> not have an explicit truncation.
>>> You miss the second reason why we needed addr-wide-int.  A large number
>>> of the places where the addressing arithmetic is done are not "type
>>> safe".  Only the gimple and rtl that are translated from the source
>>> code are really type safe.  In passes like strength reduction, which
>>> just "grab things from all over", addr-wide-int and max-wide-int are
>>> safe-haven structures that are guaranteed to be large enough not to
>>> matter.  So what I fear here is something like a very wide loop
>>> counter being grabbed into some address calculation.
>> It still seems really dangerous to be implicitly truncating a wider type
>> to addr_wide_int.  It's not something we'd ever do in mainline, because
>> uses of addr_wide_int are replacing uses of double_int, and double_int
>> is our current maximum-width representation.
>>
>> Using addr_wide_int rather than max_wide_int is an optimisation.
>> IMO part of implementing that optimisation should be to introduce
>> explicit truncations whenever we try to use addr_wide_int to operate
>> on inputs that might be wider than addr_wide_int.
>>
>> So I still think the assert is the way to go.
> addr_wide_int is not as much an optimization as it is documentation of
> what you are doing - i.e. this is addressing arithmetic.  My
> justification for putting it in was that we wanted a sort of abstract
> type to say that this was not just user math, it was addressing
> arithmetic, and that the ultimate result is going to be slammed into a
> target pointer.
>
> I was only using that as an example to try to indicate that I did not
> think it was wrong if someone did truncate.  In particular, would you
> want the assert to be that the value was truncated, or that the type of
> the value would allow numbers that would be truncated?  I actually
> think neither.

I'm arguing for:

    gcc_assert (precision >= xprecision);

in wi::int_traits <const_tree>::decompose.

IIRC one of the reasons for wanting addr_wide_int rather than wide_int
was that we wanted a type that could handle both bit and byte sizes.
And we wanted to be able to convert between bits and bytes seamlessly.
That means that shifting is a valid operation for addr_wide_int.  But if
we also implicitly (and that's the key word) used addr_wide_int
directly on tree constants that are wider than addr_wide_int, and say
shifted the result right, the answer would be different from what you'd
get if you did the shift in max_wide_int.  That seems like new behaviour,
since all address arithmetic is effectively done to maximum precision on
mainline.  It's just that the maximum on mainline is rather small.

If code is looking through casts to see wider-than-addr_wide_int types,
I think it's reasonable for that code to have to explicitly force the
tree to addr_wide_int size, via addr_wide_int::from.  Leaving it implicit
seems too subtle and also means that every caller to wi::int_traits
<const_tree>::decompose does a check that is usually unnecessary.

> If a programmer uses a long long on a 32-bit machine for some index
> variable and slams that into a pointer, he either knows what he is doing
> or has made a mistake.  Do you really think that the compiler should ICE?

No, I'm saying that passes that operate on addr_wide_ints while "grabbing
trees from all over" (still not sure what that means in practice) should
explicitly mark places where a truncation is deliberately allowed.
Those places then guarantee that the dropped bits wouldn't affect any of
the later calculations, which is something only the pass itself can know.

We already forbid direct assignments like:

   addr_wide_int x = max_wide_int(...);

at compile time, for similar reasons.

Thanks,
Richard

