This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


Re: wide-int more performance fixes for wide multiplication.


Kenneth Zadeck <zadeck@naturalbridge.com> writes:
>>> The current world
>>> is actually structured so that we never ask about overflow for the two
>>> larger classes because the reason that you used those classes was that
>>> you never wanted to have this discussion. So if you never ask about
>>> overflow, then it really does not matter because we are not going to
>>> return enough bits for you to care what happened on the inside.  Of
>>> course that could change and someone could say that they wanted overflow
>>> on widest-int.   Then the comment makes sense, with revisions, unless
>>> your review of the code that wants overflow on widest int suggests that
>>> they are just being stupid.
>> But widest_int is now supposed to be at least 1 bit wider than widest
>> input type (unlike previously where it was double the widest input type).
>> So I can definitely see cases where we'd want to know whether a
>> widest_int * widest_int result overflows.
>>
>> My point is that the widest_int * widest_int would normally be a signed
>> multiplication rather than an unsigned multiplication, since the extra
>> 1 bit of precision allows every operation to be signed.  So it isn't
>> a case of whether the top bit of a widest_int will be set, but whether
>> we ever reach here for widest_int in the first place.  We should be
>> going down the sgn == SIGNED path rather than the sgn == UNSIGNED path.
>>
>> widest_int can represent an all-1s value, usually interpreted as -1.
>> If we do go down this sgn == UNSIGNED path for widest_int then we will
>> instead treat the all-1s value as the maximum unsigned number, just like
>> for any other kind of wide int.
>>
>> As far as this function goes there really is no difference between
>> wide_int, offset_int and widest_int.  Which is good, because offset_int
>> and widest_int should just be wide_ints that are optimised for a specific
>> and fixed precision.
>>
>> Thanks,
>> Richard
> I am now seriously regretting letting richi talk me into changing the
> size of the wide-int buffer from being 2x the largest mode on the
> machine.  It was a terrible mistake, and I would guess making it
> smaller does not provide any real benefit.
>
> The problem is that when you use widest_int (and by analogy offset_int)
> it should NEVER EVER overflow.  Furthermore, we need to change the
> interfaces for these two so that you cannot even ask!  (I do not
> believe that anyone does ask, so the change would be small.)

offset_int * offset_int could overflow too, at least in the sense that
there are combinations of valid offset_ints whose product can't be
represented in an offset_int.  E.g. (1ULL << 67) * (1ULL << 67).
I think that was always the case.
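To make that concrete with a toy check (illustrative Python, not the actual wide-int API; the 128-bit width here is just a stand-in for offset_int's fixed precision):

```python
# Illustrative only -- a toy fixed-precision check, not GCC's wide-int API.
def fits_signed(value, precision):
    """Return True if VALUE is representable in PRECISION signed bits."""
    lo = -(1 << (precision - 1))
    hi = (1 << (precision - 1)) - 1
    return lo <= value <= hi

# Two values that each fit comfortably in 128 signed bits...
a = 1 << 67
b = 1 << 67
assert fits_signed(a, 128) and fits_signed(b, 128)

# ...but whose product needs far more than 128 bits.
assert not fits_signed(a * b, 128)
```

So the overflow question is meaningful for any fixed precision, however wide.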

> There are a huge set of bugs on the trunk that are "fixed" with wide-int 
> because people wrote code for double-int thinking that it was infinite 
> precision.    So they never tested the cases of what happens when the 
> size of the variable needed two HWIs.   Most of those cases were 
> resolved by making passes like tree-vrp use wide-int and then being 
> explicit about the overflow on every operation, because with wide-int 
> the issue is in your face since things overflow even for 32 bit 
> numbers.  However, with the current widest-int, we will only be safe for 
> add and subtract by adding the extra bit.  In multiply we are exposed.   
> The perception is that widest-int is as good as infinite precision and no
> one will ever write the code to check if it overflowed because it only 
> rarely happens.

All operations can overflow.  We would need 2 extra bits rather than 1
extra bit to stop addition overflowing, because the 1 extra bit we already
have is to allow unsigned values to be treated as signed.  But 2 extra bits
is only good for one addition, not a chain of two additions.
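A quick sketch of that bit budget (illustrative Python; the width of the widest input type is picked arbitrarily):

```python
# Illustrative sketch of the bit-budget argument, not actual wide-int code.
def min_signed_bits(value):
    """Smallest number of bits that can hold VALUE as a signed integer."""
    n = 1
    while not (-(1 << (n - 1)) <= value < (1 << (n - 1))):
        n += 1
    return n

k = 64                       # width of the widest input type (assumed)
max_unsigned = (1 << k) - 1  # largest unsigned k-bit value

# One extra bit lets the unsigned maximum be treated as signed...
assert min_signed_bits(max_unsigned) == k + 1

# ...a second extra bit survives a single addition...
assert min_signed_bits(max_unsigned + max_unsigned) == k + 2

# ...but a chain of two additions already needs a third bit.
assert min_signed_bits(3 * max_unsigned) == k + 3
```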

That's why ignoring overflow seems dangerous to me.  The old wide-int
way might have allowed any x * y to be represented, but if nothing
checked whether x * y was bigger than expected then x * y + z could
overflow.
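For example, modelling the old scheme with toy widths (k = 8 bits for the widest input and a 2k-bit buffer; the numbers are purely illustrative):

```python
# Toy model of the old 2x-width scheme; widths are illustrative only.
k = 8                         # widest input type: 8 bits
buffer_bits = 2 * k           # old widest_int buffer: 16 bits
buffer_max = (1 << buffer_bits) - 1

# The product of two k-bit operands always fits in the 2k-bit buffer...
x = y = (1 << k) - 1
product = x * y
assert product <= buffer_max

# ...but a chained operand can itself use the full buffer width, and then
# x * y + z no longer fits -- with no overflow check, it would just wrap.
z = 600
assert product + z > buffer_max
```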

Thanks,
Richard


