[wide-int] int_traits <tree>

Richard Sandiford rdsandiford@googlemail.com
Sun Oct 20 08:47:00 GMT 2013


Kenneth Zadeck <zadeck@naturalbridge.com> writes:
> On 10/19/2013 10:18 AM, Richard Sandiford wrote:
>> Kenneth Zadeck <zadeck@naturalbridge.com> writes:
>>> On 10/19/2013 05:01 AM, Richard Sandiford wrote:
>>>> Mike Stump <mikestump@comcast.net> writes:
>>>>> +  // We optimize x < y, where y is 64 or fewer bits.
>>>>> +  // We have to be careful to not allow comparison to a large positive
>>>>> +  // unsigned value like 0x8000000000000000, those would be encoded
>>>>> +  // with a y.len == 2.
>>>>> +  if (y.precision <= HOST_BITS_PER_WIDE_INT
>>>>> +      && y.len == 1)
>>>> I don't get this.  If y.precision <= HOST_BITS_PER_WIDE_INT then
>>>> y.len must be 1.  I realise that tree constants can be stored with
>>>> TREE_INT_CST_NUNITS > TYPE_PRECISION / HOST_BITS_PER_WIDE_INT
>>>> (so that extensions beyond TYPE_PRECISION are free).  But the
>>>> wide-int code is shielded from that by the ::decompose routine.
>>>> A wide int never has len > precision / HOST_BITS_PER_WIDE_INT.
>>>>
>>>> Thanks,
>>>> Richard
>>> I think that part of this is that neither Mike nor I really understand
>>> how this stuff works anymore.
>>>
>>> in the old version where we had precision 0, we would wait to
>>> canonicalize a constant or a simple integer until we saw what the
>>> precision of the other operand was.  That was what precision 0 meant.
>>> It was kind of like what you are now proposing with this new trait,
>>> but more because we actually did not know what to do than out of some
>>> concern about escapement.
>>>
>>> What I do not understand is what you get when you bring in an integer
>>> variable that is an unsigned HOST_WIDE_INT with the top bit set.  In
>>> the precision 0 days, if the precision of the other side was 64 or
>>> less, the incoming integer took one HWI, and if the precision was
>>> larger, it took two HWIs.  The canonicalization happened late enough
>>> that there was never a question.
>> Ah, I think I know what you mean.  The original implementation was:
>>
>>    template <typename T>
>>    static inline const HOST_WIDE_INT*
>>    to_shwi1 (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p, const T& x)
>>    {
>>      s[0] = x;
>>      if (signedp(x)
>>          || sizeof (T) < sizeof (HOST_WIDE_INT)
>>          || ! top_bit_set (x))
>>        {
>>          *l = 1;
>>        }
>>      else
>>        {
>>          s[1] = 0;
>>          *l = 2;
>>        }
>>      *p = 0;
>>      return s;
>>    }
>>
>>    void
>>    wide_int_ro::check_precision (unsigned int *p1, unsigned int *p2,
>>                                  bool check_equal ATTRIBUTE_UNUSED,
>>                                  bool check_zero ATTRIBUTE_UNUSED)
>>    {
>>      gcc_checking_assert ((!check_zero) || *p1 != 0 || *p2 != 0);
>>
>>      if (*p1 == 0)
>>        *p1 = *p2;
>>
>>      if (*p2 == 0)
>>        *p2 = *p1;
>>
>>      gcc_checking_assert ((!check_equal) || *p1 == *p2);
>>    }
>>
>>    /* Return true if C1 < C2 using signed comparisons.  */
>>    template <typename T1, typename T2>
>>    static inline bool
>>    lts_p (const T1 &c1, const T2 &c2)
>>    {
>>      bool result;
>>      HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS];
>>      HOST_WIDE_INT ws2[WIDE_INT_MAX_ELTS];
>>      const HOST_WIDE_INT *s1, *s2;  /* Returned data */
>>      unsigned int cl1, cl2;         /* array lengths  */
>>      unsigned int p1, p2;           /* precisions */
>>      
>>      s1 = to_shwi1 (ws1, &cl1, &p1, c1);
>>      s2 = to_shwi1 (ws2, &cl2, &p2, c2);
>>      check_precision (&p1, &p2, false, true);
>>      
>>      if (p1 <= HOST_BITS_PER_WIDE_INT
>>          && p2 <= HOST_BITS_PER_WIDE_INT)
>>        {
>>          HOST_WIDE_INT x0 = sext_hwi (s1[0], p1);
>>          HOST_WIDE_INT x1 = sext_hwi (s2[0], p2);
>>          result = x0 < x1;
>>        }
>>      else
>>        result = lts_p_large (s1, cl1, p1, s2, cl2, p2);
>>      
>> #ifdef DEBUG_WIDE_INT
>>      debug_vaa ("wide_int_ro:: %d = (%s lts_p %s\n", result, s1, cl1, p1, s2, cl2, p2);
>> #endif
>>      return result;
>>    }
> You need to be careful about asserting too much from the old code.  The
> timeline was:
>
> 1) we developed the stuff on x86-64
> 2) you did your patch
> 3) we ported everything to ppc and our private port.
>
> I really only became very sensitive to this issue during step 3 because
> the x86 does not exhibit these bugs.
>
>
>> So if you had a 128-bit wide_int and T == unsigned HOST_WIDE_INT,
>> this lts_p would zero-extend the unsigned HOST_WIDE_INT to 128 bits and
>> then do a signed comparison.
>>
>> The thing here is that the "check_equal" argument is false.
>> So if instead you were comparing a 128-bit wide_int with a 64-bit tree
>> constant, lts_p would treat that tree constant as a signed 64-bit number,
>> even if it was TYPE_UNSIGNED.  Similarly if you were comparing a 128-bit
>> tree constant and a 64-bit tree constant.  You also allowed a comparison
>> of a 128-bit wide_int with a 64-bit rtx, again treating the 64-bit rtx
>> as signed.
> I do not think that this is what check_equal meant because the 0 
> precision was a wild card.  The 0 precision allowed the values to come 
> in from simple vars and constants and be converted on the fly.   

Right, that's what I mean.  I agree the 0 precision case did what you
say (the "128-bit wide_int and T == unsigned HOST_WIDE_INT" thing).
But the last paragraph above was about what happened for !check_equal
operations like lts_p when p1 and p2 were different and both nonzero.
In those cases we left both parameters in their original precision
without requiring them to be equal.  So...

>> So when doing the wi:: conversion, I'd interpreted the desired semantics
>> for lts_p as being "treat both inputs as signed without extending them",
>> since that's what the function did in most cases.  It seemed inconsistent
>> to treat a 64-bit unsigned primitive integer differently from a
>> 64-bit unsigned tree constant.  So at the moment, it doesn't matter
> I do not see this inconsistency the way you do.  If I have a 6 in the
> GCC source, I really mean that I want to compare that 6 with a 6 of any
> type that happens to appear in the user's source program.  My 6 has to
> be generic enough to match anything that the user might throw at it.
> This was Richi's big argument against me having to write
> foo.lts_p (wide_int (6, foo.get_precision ())).  The GCC source code
> writer needs his 6 to be special.  Richi was right!

...using lts_p to compare (say) addr_wide_int with a:

     (unsigned HOST_WIDE_INT) -1

in GCC's source code behaved differently from using lts_p to compare
addr_wide_int with a:

     (unsigned HOST_WIDE_INT) -1

in the user's input and represented as a tree.  The former would
be zero-extended to 2 HWIs because it had precision 0.  The latter
would stay in its original HWI precision and be treated as signed.
That's the inconsistency that bothered me.

(Of course, both cases treated (unsigned HOST_WIDE_INT) -1 as signed when
compared with a HWI-sized input.)
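
To make that concrete, here is a minimal standalone sketch of the two
behaviours just described (ordinary C++ using GCC's __int128 in place of
a 128-bit addr_wide_int, assuming a 64-bit HOST_WIDE_INT; it is not the
wide-int code itself):

#include <cassert>
#include <cstdint>

int
main ()
{
  __int128 addr = 100;             /* a 128-bit quantity being compared */
  uint64_t c = (uint64_t) -1;      /* "(unsigned HOST_WIDE_INT) -1" */

  /* Old precision-0 path: the primitive constant was zero-extended to
     two HWIs, so it compared as +0xffffffffffffffff.  */
  __int128 as_precision_0 = (__int128) c;
  assert (addr < as_precision_0);      /* 100 < 2^64 - 1 */

  /* A 64-bit unsigned tree constant holding the same value stayed in
     its own 64-bit precision and was treated as signed, i.e. as -1.  */
  __int128 as_tree_cst = (int64_t) c;
  assert (!(addr < as_tree_cst));      /* !(100 < -1) */

  return 0;
}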

All I'm saying is that I think the two should be treated the same way.
They are in the current version, but that way is to make the first case
behave like the second.  But...

>> whether any HOST_WIDE_INT input to lts_p is signed or unsigned, just like
>> it didn't and doesn't matter whether any tree input is signed or unsigned.
>>
>> If instead we want lts_p to operate to a single unified precision,
>> like eq_p did:
>>
>>      s1 = to_shwi1 (ws1, &cl1, &p1, c1);
>>      s2 = to_shwi1 (ws2, &cl2, &p2, c2);
>>      check_precision (&p1, &p2, true, false);
>>
>> and still does, then that's easy enough to change.  Then all extendable
>> inputs will be extended according to their natural signedness and then
>> compared signed.  Mixed-precision rtx comparisons would be forbidden.

...that could be changed.

>> But that's tangential to the point I was trying to make above,
>> which is that the rules about valid values for "len" and post-
>> check_precision "precision" are still the same as in your original
>> version.  So I think Mike's original patch was right and that this extra
>> "y.len == 1" check is redundant.  That would still be true if we changed
>> lts_p as above.
> The relative comparisons and the equality comparisons are different.
> The equality comparisons allowed the precision mismatch because there
> were places in the front end that hashed tree-csts and so did
> comparisons on things whose types were not even similar.  We likely
> could have fixed this by changing the code around to do the type
> comparison first, but we chose to make the equality case more general -
> hence the false for the parameter for check_equal.  We never made
> similar changes for the relative comparisons.

Hmm, it's the other way around, isn't it?  The eq_p code quoted above
passed "check_equal == true".  I.e. the eq_p inputs had to have the
same precision (after precision 0 was handled).  And they still do in
the current version.

It's the relative comparisons that allowed the inputs to be different
precisions (and still do).
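
For reference, here is a small standalone restatement of the quoted
check_precision (with assert standing in for gcc_checking_assert, so
not the real GCC code) that makes the difference concrete:

#include <cassert>

static void
check_precision (unsigned int *p1, unsigned int *p2,
                 bool check_equal, bool check_zero)
{
  assert (!check_zero || *p1 != 0 || *p2 != 0);
  if (*p1 == 0)
    *p1 = *p2;
  if (*p2 == 0)
    *p2 = *p1;
  assert (!check_equal || *p1 == *p2);
}

int
main ()
{
  unsigned int p1 = 128, p2 = 64;

  /* lts_p style: mixed precisions are allowed and left as they are.  */
  check_precision (&p1, &p2, /*check_equal=*/false, /*check_zero=*/true);
  assert (p1 == 128 && p2 == 64);

  /* eq_p style: the same pair would trip the check_equal assert:
     check_precision (&p1, &p2, true, false);  */

  return 0;
}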

> My point is that I do not see how that works now because there is no
> tying of the precisions of the two operands of binary operations.  I
> agree that we do not need the length test for the short-circuit code,
> but we do need to be at a point where (unsigned HWI) 0xffffffffffffffff
> is canonicalized as two HWIs if it is being compared with a number with
> 128-bit precision.

This happens for equality and binary arithmetic through FLEXIBLE_PRECISION.
E.g. the direct equivalent of the original:

>>      if (*p1 == 0)
>>        *p1 = *p2;
>>
>>      if (*p2 == 0)
>>        *p2 = *p1;

is:

template <typename T1, typename T2>
inline wide_int
wi::int_traits <wide_int_storage>::get_binary_result (const T1 &x, const T2 &y)
{
  /* This shouldn't be used for two flexible-precision inputs.  */
  STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
		 || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
  if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
    return wide_int::create (wi::get_precision (y));
  else
    return wide_int::create (wi::get_precision (x));
}

The point of doing it this way was to force the conditions to be done
at compile time.
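
As a purely illustrative standalone model of that, here is a toy version
(toy_traits and toy_wide are made-up names, not the real wi:: machinery)
in which precision_type is a class constant, so the "if" below folds at
compile time and the static_assert rejects the two-flexible-inputs case:

enum precision_kind { FLEXIBLE_PRECISION, VAR_PRECISION };

struct toy_wide { unsigned int precision; };

/* Every type carries its own precision unless it specializes the trait.  */
template <typename T>
struct toy_traits
{ static const precision_kind precision_type = VAR_PRECISION; };

/* Bare ints are flexible: they take their precision from the other
   operand, like a 6 written in the GCC source.  */
template <>
struct toy_traits <int>
{ static const precision_kind precision_type = FLEXIBLE_PRECISION; };

static unsigned int get_precision (int) { return 0; }
static unsigned int get_precision (const toy_wide &w) { return w.precision; }

template <typename T1, typename T2>
unsigned int
binary_result_precision (const T1 &x, const T2 &y)
{
  static_assert (toy_traits <T1>::precision_type != FLEXIBLE_PRECISION
                 || toy_traits <T2>::precision_type != FLEXIBLE_PRECISION,
                 "at least one input must carry a precision");
  if (toy_traits <T1>::precision_type == FLEXIBLE_PRECISION)
    return get_precision (y);
  else
    return get_precision (x);
}

int
main ()
{
  toy_wide w = { 128 };
  return binary_result_precision (6, w) == 128 ? 0 : 1;
}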

Thanks,
Richard


