i386: Improve avx* vector concatenation [PR93594]
author    Jakub Jelinek <jakub@redhat.com>
          Thu, 6 Feb 2020 10:08:59 +0000 (11:08 +0100)
committer Jakub Jelinek <jakub@redhat.com>
          Thu, 6 Feb 2020 10:08:59 +0000 (11:08 +0100)
commit 3f740c67dbb90177aa71d3c60ef9b0fd2f44dbd9
tree   da4f56c7d249b3940ba60ff223273b8326db16fe
parent cb3f06480a17f98579704b9927632627a3814c5c

The following testcase shows that for the _mm256_set*_m128i and similar
intrinsics we sometimes generate bad code.  All four routines express the
same thing, a 128-bit vector zero-padded to a 256-bit vector, but only the
3rd one actually emits the desired vmovdqa %xmm0, %xmm0 insn; the others
emit vpxor %xmm1, %xmm1, %xmm1; vinserti128 $0x1, %xmm1, %ymm0, %ymm0
instead.  The problem is that the cast builtins use UNSPEC_CAST, which is
simplified using a splitter after reload, but which prevents optimizations
during combine.  We already have avx_vec_concat* patterns that generate
efficient code, both for this low part + zero concatenation special case
and for other cases too, so the following define_insn_and_split simply
recognizes an avx_vec_concat made of the low half of a cast and some
other register.
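
For illustration, a minimal sketch of four such equivalent routines
(the function names and exact shape are hypothetical, not necessarily
those of the committed testcase); compiled with -O2 -mavx2, each should
ideally boil down to a single vmovdqa %xmm0, %xmm0:

/* Sketch only: four equivalent ways to zero-pad a 128-bit vector to
   256 bits; the names f1..f4 are illustrative.  */
#include <immintrin.h>

__m256i
f1 (__m128i x)
{
  /* Low half = x, high half = zero.  */
  return _mm256_setr_m128i (x, _mm_setzero_si128 ());
}

__m256i
f2 (__m128i x)
{
  /* _mm256_set_m128i takes the high half first.  */
  return _mm256_set_m128i (_mm_setzero_si128 (), x);
}

__m256i
f3 (__m128i x)
{
  /* Cast (high half undefined), then insert zero into the high lane.  */
  return _mm256_insertf128_si256 (_mm256_castsi128_si256 (x),
				  _mm_setzero_si128 (), 1);
}

__m256i
f4 (__m128i x)
{
  /* Same as f3, but with the AVX2 integer insert.  */
  return _mm256_inserti128_si256 (_mm256_castsi128_si256 (x),
				  _mm_setzero_si128 (), 1);
}

All four should assemble to the same single move, since a VEX-encoded
write to an xmm register already zeroes the upper 128 bits of the
corresponding ymm register.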

2020-02-06  Jakub Jelinek  <jakub@redhat.com>

PR target/93594
* config/i386/predicates.md (avx_identity_operand): New predicate.
* config/i386/sse.md (*avx_vec_concat<mode>_1): New
define_insn_and_split.

* gcc.target/i386/avx2-pr93594.c: New test.

gcc/ChangeLog
gcc/config/i386/predicates.md
gcc/config/i386/sse.md
gcc/testsuite/ChangeLog
gcc/testsuite/gcc.target/i386/avx2-pr93594.c [new file with mode: 0644]