[Bug tree-optimization/63148] [4.8/4.9/5 Regression] r187042 causes auto-vectorization failure for X86 for -m32.

rguenth at gcc dot gnu.org gcc-bugzilla@gcc.gnu.org
Fri Sep 5 08:07:00 GMT 2014


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63148

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|rguenther at suse dot de    |

--- Comment #7 from Richard Biener <rguenth at gcc dot gnu.org> ---
(In reply to Doug Gilmore from comment #6)
> > The input to the vectorizer is already bogus:
> >
> >   _12 = i.0_5 + 536870911;
> >   _13 = global_data.b[_12];
> 
> Note that the GIMPLE output generated by the front end
> is already problematic:
> 
> Before r187042:
>   D.1747 = i.0 + -1;
> With r187042:
>   D.1747 = i.0 + 536870911;
> Any idea what the intent of the changes in r187042 that transform
> signed constants to unsigned ones is?  To me, that is the problematic issue.

Well, before r187042 the constants had an unsigned type but were
sign-extended (and only the constants were!).  This has caused similar
issues elsewhere.  Now constants are consistent with their types, but
we run into a different issue: because POINTER_PLUS_EXPR forces the
offset to be 'sizetype' (which is unsigned), we lose information when
translating C array[index] into *(&array + index * element_size).
So we can't go "back" to array[index] by dividing the pointer offset
by the element_size, because we have no idea whether the offset is
really signed or not (and even then the index may be obfuscated by the
programmer, so you can't reliably recover array[index] from pointer
arithmetic).
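
Concretely, here is a minimal C sketch of the arithmetic involved (an
illustration only, not the actual folding code; the 8-byte element
size and 32-bit unsigned 'sizetype' are assumptions, chosen because
they would explain the constant 2^29 - 1 = 536870911 on -m32).  Once
the index has been scaled into an unsigned byte offset, dividing by
the element size cannot give back -1:

  #include <stdio.h>
  #include <stdint.h>

  int main (void)
  {
    /* Hypothetical illustration: index -1 into an array of 8-byte
       elements, with a 32-bit unsigned 'sizetype' as on -m32.  */
    int32_t index = -1;
    uint32_t byte_offset = (uint32_t) index * 8;  /* 0xfffffff8 */
    uint32_t recovered = byte_offset / 8;         /* 0x1fffffff */
    printf ("recovered index: %u\n", recovered);  /* 536870911, not -1 */
    return 0;
  }

Under these assumptions the unsigned division produces exactly the
constant seen in the GIMPLE quoted above.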


