[PATCH 0/3] Fix PR47654 and PR49649

Sebastian Pop sebpop@gmail.com
Mon Jul 18 17:48:00 GMT 2011

On Mon, Jul 18, 2011 at 11:11, Tobias Grosser <tobias@grosser.es> wrote:
>> We introduce casts when we generate code for an expression with a type
>> smaller than the types of its constituents.
> So the next question is: What is the type of the expression. As far as I
> understand we decide ourselves which type the expression should have. Our
> correctness constraint is that the type must be large enough to contain [lb,
> ub] of the result as well as all operands. We can choose any larger type to
> reduce the number of casts that will generated.

Right, and so we cannot rely only on the lb_ub info: we have to
walk the CLAST expression and determine the largest type used by any
sub-expression; that is what gcc_type_for_clast_expr tries to do.

>> OK.
>> Here is an example: suppose that we have already generated code for an IV
>> graphite_IV with a signed short type, that we are then asked to generate
>> code for the expression "42 * graphite_IV", and finally that
>> lb_ub returns the interval [-100, 100] for this expression. Then we
>> would have a signed char type for the expression
> So the signed char type is the smallest type we can use for the expression.
>> and so we would
>> introduce a cast of graphite_IV to signed char before doing the
>> multiplication: I guess the code generated for this expression
>> would be something like "42 * (char) graphite_IV", of type char.
>> Without taking the max of the types of the sub-expressions, you would
>> have to generate casts to a shorter type.
> Your example is a little incomplete, as 42 is missing a type. At the moment
> gcc_type_for_value() returns the smallest type that contains the scalar
> value. So I assume the type is a signed char.
> If we choose signed char as the type of the expression we get this:
> 42 * (char) graphite_IV
> On the other hand, if we choose signed short for the expression we get:
> ((short) 42) * graphite_IV

That would be just "42 * graphite_IV" of type short, as widening from
char to short is possible without an extra cast.

> Both expressions contain a cast. Which one is better for the optimizer is
> something I do not yet understand. However, I believe this choice is
> basically a choice that can be taken locally for each expression. Using
> either max(validity_type, min(op1_type, op2_type, ...)) to favor small types
> or max(validity_type, max(op1_type, op2_type, ...)) to favor large types.
>>> Can we find a solution that solves this problem more locally? I mean,
>>> instead of using an imprecise algorithm that leads to larger types which
>>> (accidentally?) lead to fewer casts, we could understand the type
>>> calculated by lb_ub_... as the minimal type needed for correctness. For
>>> each instruction that we generate code for, we derive the type by
>>> electing a type that is large enough to be correct and that at the same
>>> time is not smaller than the smallest type of each operand.
>> You surely mean "not smaller than the *largest* type of each operand".
> No. As said above, both seem to be valid choices and this seems to be

Here is a counter-example where taking the type "not smaller than the
smallest type of each operand" produces wrong code:

expression "a + b"
type of a is char
type of b is short
a goes from -100 to 100
b goes from 100 to 200

Computing the type of "a + b" as the type "not smaller than the
smallest type of each operand" leads to char.  Then, we would have to
generate the expression "(char) a + (char) b" of type char.  Casting
200 into a char overflows and is undefined.

> I honestly do not like this function chain. The problem is that we would
> calculate huge types if we derive the type of an expression by only taking
> into account the types of its sub(sub, ...) expressions. At the moment we do
> not get these huge types, as our calculations are not 100% correct.
> Let's assume we have a clast consisting of a single addition of several
> integers:
> 127 + 127 + 127 + 127
> Each integer would get the type signed char. Now the function
> gcc_type_for_clast_red() would calculate the necessary type as max(char,
> char, char, char) => char. However, the correct type is actually
> max(type([lb, ub]), min/max(char, char, char, char)), where type([lb, ub]) =
> type([508, 508]) = short, and the result is max(short, char) = short.

Right.  We should be able to represent both the sub-expressions and
their results in the computed type.  The current implementation of
gcc_type_for_clast_* is wrong.

> gcc_type_for_clast_term() also seems to just use the type of the variable
> instead of the type that would actually be needed to calculate <integer> *
> variable.
> The types would become even larger, as they would increase while walking up
> the ClAST. This is because we never take the actual lb, ub values into
> account, but always derive the minimal type needed for correctness from the
> types of the operands.
> I believe that, to derive correct types that are not overly large, doing
> these calculations locally would be better. For each expression we use lb/ub
> to calculate the minimal type needed for correctness, and then we choose a
> type equal to or larger than this type that minimizes the number of bad casts.

You would also need to make sure that every sub-expression can be
expressed in the computed type.  The example here is a min or max
expression in a loop bound: even if we know from the iteration
domain that the IV belongs to [-128, 127], we cannot use a char IV
for this loop:

for (i = -128; i < min (127, j + 200); i++)

we need a type large enough to represent "j + 200", and that
would be short.  So we again need to take the "max of the types
of the sub-expressions".


More information about the Gcc-patches mailing list