This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.



[Bug middle-end/83004] [8 regression] gcc.dg/vect/pr81136.c fail


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83004

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2017-11-21
                 CC|                            |jakub at gcc dot gnu.org
     Ever confirmed|0                           |1

--- Comment #1 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
I think this test has failed with -mavx and later ISAs ever since it was
introduced.
The test uses the VECTOR_BITS macro and assumes it is the vector size, but
tree-vect.h hardcodes VECTOR_BITS to 128 on all targets and for all ISAs.
Strangely, various tests check for VECTOR_BITS > 128, > 256, etc.
So, shall we define VECTOR_BITS to higher values based on preprocessor macros?
For x86, the question then would be whether __AVX__ without __AVX2__ should
enable VECTOR_BITS 256 or not: floating-point vectors are 256-bit, but integer
vectors are only 128-bit.
Also, the -mprefer-avx{128,256} options change this as well.
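A minimal sketch of that first option, assuming tree-vect.h would switch on the
usual x86 ISA macros (whether plain __AVX__ should bump the value is exactly the
open question above):

    /* Hypothetical sketch for tree-vect.h, not actual GCC code.  */
    #ifndef VECTOR_BITS
    # if defined (__AVX512F__)
    #  define VECTOR_BITS 512
    # elif defined (__AVX2__)
    #  define VECTOR_BITS 256
    # else
    #  define VECTOR_BITS 128   /* Current hardcoded default.  */
    # endif
    #endif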
Or shall we keep VECTOR_BITS as the usual vector size and add MAX_VECTOR_BITS as
the maximum for the current options?
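Such a split might look like the following; this is a sketch only, with
MAX_VECTOR_BITS being the name floated above rather than an existing macro:

    /* Hypothetical sketch: VECTOR_BITS stays 128, MAX_VECTOR_BITS tracks the ISA.  */
    #if defined (__AVX512F__)
    # define MAX_VECTOR_BITS 512
    #elif defined (__AVX__)
    # define MAX_VECTOR_BITS 256
    #else
    # define MAX_VECTOR_BITS VECTOR_BITS
    #endif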
Or shall the test use its own macro, defaulting to VECTOR_BITS but defined to
something different for some ISAs?
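For the last option, a sketch local to gcc.dg/vect/pr81136.c might look like
this; the TEST_VECTOR_BITS name is purely illustrative:

    /* Hypothetical test-local macro, not actual GCC testsuite code.  */
    #if defined (__AVX__) && !defined (__AVX2__)
    # define TEST_VECTOR_BITS 256   /* FP vectors are 256-bit with plain AVX.  */
    #else
    # define TEST_VECTOR_BITS VECTOR_BITS
    #endif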
