This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [Patch, fortran, RFC] PR28105 Overflow check for ALLOCATE statement
- From: Tobias Burnus <burnus at net-b dot de>
- To: Janne Blomqvist <blomqvist dot janne at gmail dot com>
- Cc: Fortran List <fortran at gcc dot gnu dot org>, GCC Patches <gcc-patches at gcc dot gnu dot org>
- Date: Tue, 23 Nov 2010 16:50:48 +0100
- Subject: Re: [Patch, fortran, RFC] PR28105 Overflow check for ALLOCATE statement
- References: <AANLkTimWJMty0jEWVc8d3wZvmfw=aXxXY3tFCNa7tEaz@mail.gmail.com>
On 11/22/2010 11:37 PM, Janne Blomqvist wrote:
> the attached patch implements overflow checking for the ALLOCATE
> statement. It also fixes the minor regression introduced a few days
> ago when the old "< 0" overflow check was removed as a side-effect of
> fixing the unsignedness of size_t.
Frankly, I do not like the way the overflow check is done, namely by
relying on an overflow actually occurring:
D.1506 = MAX_EXPR <my_size, 0>;
D.1507 = D.1506 != 0 ? 2147483647 / D.1506 <= 0 ? 1 : 0 : 0;
D.1508 = NON_LVALUE_EXPR <D.1506>;
D.1509 = ((<unnamed-unsigned:32>) D.1508 > -1 ? 1 : 0) + D.1507;
overflow.1 = D.1509;
if (overflow.1 != 0)
_gfortran_runtime_error (&"Integer overflow when
calculating the amount of memory to allocate"[1]{lb: 1 sz: 1});
Namely, doing the "MAXSIZE/my_size <= 0" check, where my_size is the
signed requested size of the array. Explicitly relying on an overflow
seems to be wrong. Additionally, I have no idea what the supposed result
is: MAXSIZE (= "TYPE_MAX_VALUE (sizetype)") is unsigned and does not fit
into a signed variable, yet the calculation is supposedly done signed,
as both D.1506 and D.1507 are signed.
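To illustrate the objection, here is a rough C rendition of what the dump above computes. This is an illustration only, assuming a 32-bit signed my_size; the GIMPLE type semantics may differ. As written, neither summand can ever become nonzero for an ordinary int, which matches the complaint that the intended result is unclear:

```c
#include <limits.h>

/* Rough C rendition of the dumped check (assumption: 32-bit int).
   The "2147483647 / D.1506 <= 0" test can only hold if something
   wrapped, and the final unsigned comparison against -1 (i.e.
   UINT_MAX) is never true. */
static int check_from_dump(int my_size)
{
    int s = my_size > 0 ? my_size : 0;          /* MAX_EXPR <my_size, 0> */
    int overflow = (s != 0 && INT_MAX / s <= 0) ? 1 : 0;
    overflow += ((unsigned) s > (unsigned) -1) ? 1 : 0; /* never true */
    return overflow;                            /* nonzero => error */
}
```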
Additionally, you completely ignore one important ingredient: the size
of the actual object which is to be allocated; with derived types, that
size can be large. Example:
use iso_c_binding, only: c_intptr_t
integer(1), allocatable :: a(:)
integer(4), allocatable :: b(:)
allocate(a(huge(0_c_intptr_t)*i)) ! OK
allocate(b(huge(0_c_intptr_t)*i)) ! Too big
end
That's the reason I was thinking of:
if (my_size > MAXSIZE/sizeof(type)) error()
if one wants to include the multiplications for
allocate(array(sizeA,sizeB)) then one can do:
if (sizeB > 0 && sizeA > MAXSIZE/sizeB)
That way one always avoids an overflow. I am not sure whether this
rejects some values close to MAXSIZE, but at least it avoids
signed/unsigned trickery.
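The division-based checks above can be sketched in C as follows. This is a sketch only: alloc_size_ok and elem_size are illustrative names, not gfortran internals, and MAXSIZE is written as SIZE_MAX.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the proposed check: reject the allocation before any
   multiplication can wrap.  elem_size stands for sizeof(type);
   sizeA and sizeB are the extents of allocate(array(sizeA,sizeB)). */
static int alloc_size_ok(size_t sizeA, size_t sizeB, size_t elem_size,
                         size_t *total)
{
    /* First fold the two extents: sizeB > 0 && sizeA > MAXSIZE/sizeB */
    if (sizeB != 0 && sizeA > SIZE_MAX / sizeB)
        return 0;                  /* sizeA * sizeB would overflow */
    size_t count = sizeA * sizeB;

    /* Then account for the element size, e.g. 4 for integer(4). */
    if (elem_size != 0 && count > SIZE_MAX / elem_size)
        return 0;                  /* count * elem_size would overflow */

    *total = count * elem_size;
    return 1;
}
```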
For the conditions, one should consider using __builtin_expect to give
the compiler some additional optimization potential.
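A minimal sketch of the __builtin_expect suggestion; the function name and message handling here are stand-ins, not gfortran's actual runtime interface:

```c
#include <stdio.h>

/* Mark the overflow branch as unlikely so GCC lays out the common
   (no-overflow) path as the fall-through. */
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Stand-in for the generated check: in real generated code,
   _gfortran_runtime_error would be called instead.  Returns nonzero
   when the allocation must be rejected. */
static int reject_allocation(int overflow)
{
    if (unlikely(overflow != 0)) {
        fputs("Integer overflow when calculating the amount of memory "
              "to allocate\n", stderr);
        return 1;
    }
    return 0;
}
```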
> Wrt whether these checks should be done always, or only when
> -fcheck=mem is enabled, IMHO some benchmarks should be run and if no
> difference can be seen they can be left enabled. So, does anybody know
> of suitable benchmarks that are influenced by ALLOCATE performance?
> Polyhedron, maybe not so much?
I don't. The issue is presumably largest for calls to small functions
called in loops, which are later inlined. The fewer conditions and
other trickery there are, the more likely the compiler can move the
malloc out of the loop. I think it is trivial to construct such a
program, but it is hard to know how representative it would be of the
real world. The probability has to be weighed against the risk of using
a too-large value for ALLOCATE without -fcheck=all/-fcheck=mem. I am
inclined to make it conditional on -fcheck=mem; but on the other hand,
I live in a 64-bit world where the chance of an overflow is rather low
(for somewhat reasonable programs).
Tobias