[PATCH 03/10] tree-object-size: Use tree instead of HOST_WIDE_INT

Siddhesh Poyarekar siddhesh@gotplt.org
Mon Nov 22 10:11:57 GMT 2021

On 11/20/21 00:31, Siddhesh Poyarekar wrote:
>> This doesn't match what the code did and I'm surprised if it works at 
>> all.
>> TREE_OPERAND (pt_var, 1), while it is an INTEGER_CST or POLY_INT_CST,
>> has in its type encoded the type for aliasing, so the type is some 
>> pointer
>> type.  Performing size_binop etc. on such values can misbehave, the code
>> assumes that it is sizetype or at least some integral type compatible 
>> with
>> it.  Also, mem_ref_offset is signed and offset_int has bigger precision
>> than pointers on the target such that it can be always signed.  So
>> e.g. if MEM_REF's second operand is bigger or equal than half of the
>> address space, in the old code it would appear to be negative, wi::sub 
>> would
>> result in a value bigger than sz.  Not really sure right now if that is
>> exactly how we want to treat it, would be nice to try some testcase.
> Let me try coming up with a test case for it.

So I played around a bit with this.  Basically:

char buf[8];

__SIZE_TYPE__ test (void)
{
  char *p = &buf[0x90000004];
  return __builtin_object_size (p + 2, 0);
}

when built with -m32 returns 0x70000002, but on 64-bit returns 0 as
expected.  Of course, with the subscript as 0x9000000000000004, 64-bit
gives 0x7000000000000002 as the result.

With the tree conversion, this is at least partly taken care of, since 
an offset larger than the size, to the extent that it fits into 
sizetype, returns a size of zero.  So in the above example, the size 
returned is zero for both -m32 and -m64.  Likewise for a negative 
offset, i.e. &buf[-4]: the old code returns 10 while with trees it 
returns 0, which seems correct to me since it is an underflow.

It's only partly taken care of because, e.g.

   char *p = &buf[0x100000004];

ends up truncating the offset, returning an object size of 2.  This, 
however, is an unrelated problem; it is the folding of offsets that is 
responsible, since it truncates the offset to shwi bounds.  Perhaps 
there's an opportunity in get_addr_base_and_unit_offset to warn of 
overflow if the offset exceeds pointer precision before truncating it.

So for this patch, may I simply ensure that the offset is converted to 
sizetype and keep everything else the same?  It appears to demonstrate 
better behaviour than the older code.  I'll also add these tests.
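Concretely, the change I have in mind is just a conversion along these
lines (a sketch against the tree APIs; the exact spot and variable names
in tree-object-size.c may differ):

   /* The MEM_REF's second operand carries a pointer type for aliasing
      purposes, so give it sizetype before any size_binop arithmetic.  */
   tree offset = fold_convert (sizetype, TREE_OPERAND (pt_var, 1));
   sz = size_binop (MINUS_EXPR, sz, offset);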

