[wide-int] int_traits <tree>

Richard Sandiford rdsandiford@googlemail.com
Sat Oct 19 17:51:00 GMT 2013


Kenneth Zadeck <zadeck@naturalbridge.com> writes:
> On 10/19/2013 05:01 AM, Richard Sandiford wrote:
>> Mike Stump <mikestump@comcast.net> writes:
>>> +  // We optimize x < y, where y is 64 or fewer bits.
>>> +  // We have to be careful to not allow comparison to a large positive
>>> +  // unsigned value like 0x8000000000000000, those would be encoded
>>> +  // with a y.len == 2.
>>> +  if (y.precision <= HOST_BITS_PER_WIDE_INT
>>> +      && y.len == 1)
>> I don't get this.  If y.precision <= HOST_BITS_PER_WIDE_INT then
>> y.len must be 1.  I realise that tree constants can be stored with
>> TREE_INT_CST_NUNITS > TYPE_PRECISION / HOST_BITS_PER_WIDE_INT
>> (so that extensions beyond TYPE_PRECISION are free).  But the
>> wide-int code is shielded from that by the ::decompose routine.
>> A wide int never has len > precision / HOST_BITS_PER_WIDE_INT.
>>
>> Thanks,
>> Richard
> I think part of this is that neither Mike nor I really understand
> how this stuff works anymore.
>
> In the old version, where we had precision 0, we would wait to
> canonicalize a constant or a simple integer until we saw what the
> precision of the other operand was.  That was what precision 0 meant.
> It was kind of like what you are now proposing with this new trait,
> but for the reason that we actually did not know what to do, rather
> than from some concern about escapement.
>
> What I do not understand is what you get when you bring in an integer
> variable that is an unsigned HOST_WIDE_INT with the top bit set.  In
> the precision 0 days, if the precision of the other side was 64 or
> less, the incoming integer took one HWI, and if the precision was
> larger, it took two HWIs.  The canonicalization happened late enough
> that there was never a question.

Ah, I think I know what you mean.  The original implementation was:

  /* Convert the primitive integer X into the block array S, returning
     its length in *L and its precision in *P.  A precision of zero
     means "adopt the precision of the other operand"; see
     check_precision below.  */
  template <typename T>
  static inline const HOST_WIDE_INT *
  to_shwi1 (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p, const T &x)
  {
    s[0] = x;
    if (signedp (x)
        || sizeof (T) < sizeof (HOST_WIDE_INT)
        || ! top_bit_set (x))
      {
        /* X fits in a single HWI without changing value.  */
        *l = 1;
      }
    else
      {
        /* X is unsigned with the top bit set: append a zero block
           so that the value stays positive.  */
        s[1] = 0;
        *l = 2;
      }
    *p = 0;
    return s;
  }
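
So in the version quoted above, the length was chosen before the other
operand's precision was known.  For Kenny's example (my illustration,
not a quote from the old source):

  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
  unsigned int len, prec;
  unsigned HOST_WIDE_INT x = 0x8000000000000000ULL;
  to_shwi1 (ws, &len, &prec, x);
  /* Here signedp (x) is false, sizeof (T) == sizeof (HOST_WIDE_INT)
     and top_bit_set (x) is true, so ws[0] holds the bits of X,
     ws[1] == 0, len == 2 and prec == 0: the zero block keeps the
     value positive, and the zero precision defers canonicalization
     to check_precision.  */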

  /* Resolve the precision pair *P1/*P2: a zero ("deferred") precision
     adopts the other operand's precision.  If CHECK_ZERO, at least one
     of the inputs must be nonzero; if CHECK_EQUAL, the resolved
     precisions must match.  */
  void
  wide_int_ro::check_precision (unsigned int *p1, unsigned int *p2,
                                bool check_equal ATTRIBUTE_UNUSED,
                                bool check_zero ATTRIBUTE_UNUSED)
  {
    gcc_checking_assert ((!check_zero) || *p1 != 0 || *p2 != 0);

    if (*p1 == 0)
      *p1 = *p2;

    if (*p2 == 0)
      *p2 = *p1;

    gcc_checking_assert ((!check_equal) || *p1 == *p2);
  }
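
The effect, as I read it, is that a deferred precision is resolved
against the other operand.  A small sketch:

  unsigned int p1 = 0, p2 = 128;
  check_precision (&p1, &p2, false, true);
  /* p1 is now 128: the primitive-integer operand adopts the
     wide_int's precision.  With check_equal == false, an
     already-nonzero pair such as 64 and 128 is left alone;
     with check_equal == true (the eq_p case) that mixed pair
     would trip the final assert.  */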

  /* Return true if C1 < C2 using signed comparisons.  */
  template <typename T1, typename T2>
  static inline bool
  lts_p (const T1 &c1, const T2 &c2)
  {
    bool result;
    HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS];
    HOST_WIDE_INT ws2[WIDE_INT_MAX_ELTS];
    const HOST_WIDE_INT *s1, *s2;  /* Returned data */
    unsigned int cl1, cl2;         /* array lengths  */
    unsigned int p1, p2;           /* precisions */
    
    s1 = to_shwi1 (ws1, &cl1, &p1, c1);
    s2 = to_shwi1 (ws2, &cl2, &p2, c2);
    check_precision (&p1, &p2, false, true);
    
    if (p1 <= HOST_BITS_PER_WIDE_INT
        && p2 <= HOST_BITS_PER_WIDE_INT)
      {
        /* Fast path: both values fit in one HWI, so sign-extend each
           from its own precision and compare directly.  */
        HOST_WIDE_INT x0 = sext_hwi (s1[0], p1);
        HOST_WIDE_INT x1 = sext_hwi (s2[0], p2);
        result = x0 < x1;
      }
    else
      /* General arbitrary-precision comparison.  */
      result = lts_p_large (s1, cl1, p1, s2, cl2, p2);
    
#ifdef DEBUG_WIDE_INT
    debug_vaa ("wide_int_ro:: %d = (%s lts_p %s)\n", result, s1, cl1, p1, s2, cl2, p2);
#endif
    return result;
  }

So if you had a 128-bit wide_int and T == unsigned HOST_WIDE_INT,
this lts_p would zero-extend the unsigned HOST_WIDE_INT to 128 bits and
then do a signed comparison.
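
Spelled out with values (my worked example, assuming the obvious
lts_p_large semantics):

  /* c1: 128-bit wide_int with value -1.
     c2: unsigned HOST_WIDE_INT 0x8000000000000000.
     to_shwi1 gives s2 == {c2, 0}, cl2 == 2, p2 == 0;
     check_precision copies p1 into p2, so p2 == 128.
     p1 > HOST_BITS_PER_WIDE_INT, so lts_p_large compares -1
     against the positive value 2^63 and returns true.  */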

The thing here is that the "check_equal" argument is false.
So if instead you were comparing a 128-bit wide_int with a 64-bit tree
constant, lts_p would treat that tree constant as a signed 64-bit number,
even if it was TYPE_UNSIGNED.  Similarly if you were comparing a 128-bit
tree constant and a 64-bit tree constant.  You also allowed a comparison
of a 128-bit wide_int with a 64-bit rtx, again treating the 64-bit rtx
as signed.
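
Contrast that with the tree case (again my worked example):

  /* c1: 128-bit wide_int with value 0.
     c2: 64-bit unsigned tree constant 0x8000000000000000.
     c2 decomposes to s2 == {c2}, cl2 == 1, p2 == 64, and since
     neither precision is zero, check_precision changes nothing.
     The value is then read as a signed 64-bit number, i.e. as
     -2^63, so lts_p returns false even though c2's type is
     unsigned.  */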

So when doing the wi:: conversion, I'd interpreted the desired semantics
for lts_p as being "treat both inputs as signed without extending them",
since that's what the function did in most cases.  It seemed inconsistent
to treat a 64-bit unsigned primitive integer differently from a
64-bit unsigned tree constant.  So at the moment it doesn't matter
whether a HOST_WIDE_INT input to lts_p is signed or unsigned, just as
it didn't, and still doesn't, matter whether a tree input is signed or
unsigned.

If instead we want lts_p to operate to a single unified precision,
like eq_p did:

    s1 = to_shwi1 (ws1, &cl1, &p1, c1);
    s2 = to_shwi1 (ws2, &cl2, &p2, c2);
    check_precision (&p1, &p2, true, false);

and still does, then that's easy enough to change.  Then all extendable
inputs will be extended according to their natural signedness and then
compared signed.  Mixed-precision rtx comparisons would be forbidden.

But that's tangential to the point I was trying to make above,
which is that the rules about valid values for "len" and post-
check_precision "precision" are still the same as in your original
version.  So I think Mike's original patch was right and that this extra
"y.len == 1" check is redundant.  That would still be true if we changed
lts_p as above.
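
I.e. the test could simply be (a sketch, using Mike's original
condition):

  // y.len == 1 is implied here: ::decompose never produces
  // len > (precision + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT,
  // so precision <= HOST_BITS_PER_WIDE_INT already forces y.len == 1.
  if (y.precision <= HOST_BITS_PER_WIDE_INT)
    ...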

Thanks,
Richard


