Bug 85048 - [missed optimization] vector conversions
Summary: [missed optimization] vector conversions
Status: NEW
Alias: None
Product: gcc
Classification: Unclassified
Component: target
Version: 8.0.1
Importance: P3 enhancement
Target Milestone: ---
Assignee: Not yet assigned to anyone
URL:
Keywords: missed-optimization
Depends on:
Blocks: genvector
Reported: 2018-03-23 11:25 UTC by Matthias Kretz (Vir)
Modified: 2024-04-22 11:52 UTC

See Also:
Host:
Target: x86_64-*-*, i?86-*-*
Build:
Known to work:
Known to fail:
Last reconfirmed: 2018-03-23 00:00:00


Description Matthias Kretz (Vir) 2018-03-23 11:25:48 UTC
The following testcase lists all integer and/or floating-point conversions applied to vector builtins with the same number of elements. Every one of these functions can be compiled to a single instruction (the one the function is named after, plus `ret`) when `-march=skylake-avx512` is active. AFAICS, many conversion instructions in the SSE and AVX ISA extensions are likewise not used.

I would expect this code to compile to optimal conversion sequences even at -O2 (and lower), since the conversions are applied directly to vector builtins. If this is not in scope, I'd like to open a feature request for something like clang's __builtin_convertvector (it could even be done via static_cast) that produces optimal conversion sequences on vector builtins without involving the auto-vectorizer.

#include <cstdint>

template <class T, int N, int Size = N * sizeof(T)>
using V [[gnu::vector_size(Size)]] = T;

template <class From, class To> V<To, 2> cvt2(V<From, 2> x) {
    return V<To, 2>{To(x[0]), To(x[1])};
}
template <class From, class To> V<To, 4> cvt4(V<From, 4> x) {
    return V<To, 4>{To(x[0]), To(x[1]), To(x[2]), To(x[3])};
}
template <class From, class To> V<To, 8> cvt8(V<From, 8> x) {
    return V<To, 8>{
        To(x[0]), To(x[1]), To(x[2]), To(x[3]),
        To(x[4]), To(x[5]), To(x[6]), To(x[7])
    };
}
template <class From, class To> V<To, 16> cvt16(V<From, 16> x) {
    return V<To, 16>{
        To(x[0]), To(x[1]), To(x[2]), To(x[3]),
        To(x[4]), To(x[5]), To(x[6]), To(x[7]),
        To(x[8]), To(x[9]), To(x[10]), To(x[11]),
        To(x[12]), To(x[13]), To(x[14]), To(x[15])
    };
}
template <class From, class To> V<To, 32> cvt32(V<From, 32> x) {
    return V<To, 32>{
        To(x[0]), To(x[1]), To(x[2]), To(x[3]),
        To(x[4]), To(x[5]), To(x[6]), To(x[7]),
        To(x[8]), To(x[9]), To(x[10]), To(x[11]),
        To(x[12]), To(x[13]), To(x[14]), To(x[15]),
        To(x[16]), To(x[17]), To(x[18]), To(x[19]),
        To(x[20]), To(x[21]), To(x[22]), To(x[23]),
        To(x[24]), To(x[25]), To(x[26]), To(x[27]),
        To(x[28]), To(x[29]), To(x[30]), To(x[31])
    };
}
template <class From, class To> V<To, 64> cvt64(V<From, 64> x) {
    return V<To, 64>{
        To(x[ 0]), To(x[ 1]), To(x[ 2]), To(x[ 3]),
        To(x[ 4]), To(x[ 5]), To(x[ 6]), To(x[ 7]),
        To(x[ 8]), To(x[ 9]), To(x[10]), To(x[11]),
        To(x[12]), To(x[13]), To(x[14]), To(x[15]),
        To(x[16]), To(x[17]), To(x[18]), To(x[19]),
        To(x[20]), To(x[21]), To(x[22]), To(x[23]),
        To(x[24]), To(x[25]), To(x[26]), To(x[27]),
        To(x[28]), To(x[29]), To(x[30]), To(x[31]),
        To(x[32]), To(x[33]), To(x[34]), To(x[35]),
        To(x[36]), To(x[37]), To(x[38]), To(x[39]),
        To(x[40]), To(x[41]), To(x[42]), To(x[43]),
        To(x[44]), To(x[45]), To(x[46]), To(x[47]),
        To(x[48]), To(x[49]), To(x[50]), To(x[51]),
        To(x[52]), To(x[53]), To(x[54]), To(x[55]),
        To(x[56]), To(x[57]), To(x[58]), To(x[59]),
        To(x[60]), To(x[61]), To(x[62]), To(x[63]),
    };
}

#define _(name, from, to, size) \
auto name(V<from, size> x) { return cvt##size<from, to>(x); }
// integral -> integral; truncation
_(vpmovqd , uint64_t, uint32_t,  2)
_(vpmovqd , uint64_t, uint32_t,  4)
_(vpmovqd , uint64_t, uint32_t,  8)
_(vpmovqd ,  int64_t, uint32_t,  2)
_(vpmovqd ,  int64_t, uint32_t,  4)
_(vpmovqd ,  int64_t, uint32_t,  8)
_(vpmovqd_, uint64_t,  int32_t,  2)
_(vpmovqd_, uint64_t,  int32_t,  4)
_(vpmovqd_, uint64_t,  int32_t,  8)
_(vpmovqd_,  int64_t,  int32_t,  2)
_(vpmovqd_,  int64_t,  int32_t,  4)
_(vpmovqd_,  int64_t,  int32_t,  8)

_(vpmovqw , uint64_t, uint16_t,  2)
_(vpmovqw , uint64_t, uint16_t,  4)
_(vpmovqw , uint64_t, uint16_t,  8)
_(vpmovqw ,  int64_t, uint16_t,  2)
_(vpmovqw ,  int64_t, uint16_t,  4)
_(vpmovqw ,  int64_t, uint16_t,  8)
_(vpmovqw_, uint64_t,  int16_t,  2)
_(vpmovqw_, uint64_t,  int16_t,  4)
_(vpmovqw_, uint64_t,  int16_t,  8)
_(vpmovqw_,  int64_t,  int16_t,  2)
_(vpmovqw_,  int64_t,  int16_t,  4)
_(vpmovqw_,  int64_t,  int16_t,  8)

_(vpmovqb , uint64_t,  uint8_t,  2)
_(vpmovqb , uint64_t,  uint8_t,  4)
_(vpmovqb , uint64_t,  uint8_t,  8)
_(vpmovqb ,  int64_t,  uint8_t,  2)
_(vpmovqb ,  int64_t,  uint8_t,  4)
_(vpmovqb ,  int64_t,  uint8_t,  8)
_(vpmovqb_, uint64_t,   int8_t,  2)
_(vpmovqb_, uint64_t,   int8_t,  4)
_(vpmovqb_, uint64_t,   int8_t,  8)
_(vpmovqb_,  int64_t,   int8_t,  2)
_(vpmovqb_,  int64_t,   int8_t,  4)
_(vpmovqb_,  int64_t,   int8_t,  8)

_(vpmovdw , uint32_t, uint16_t,  4)
_(vpmovdw , uint32_t, uint16_t,  8)
_(vpmovdw , uint32_t, uint16_t, 16)
_(vpmovdw ,  int32_t, uint16_t,  4)
_(vpmovdw ,  int32_t, uint16_t,  8)
_(vpmovdw ,  int32_t, uint16_t, 16)
_(vpmovdw_, uint32_t,  int16_t,  4)
_(vpmovdw_, uint32_t,  int16_t,  8)
_(vpmovdw_, uint32_t,  int16_t, 16)
_(vpmovdw_,  int32_t,  int16_t,  4)
_(vpmovdw_,  int32_t,  int16_t,  8)
_(vpmovdw_,  int32_t,  int16_t, 16)

_(vpmovdb , uint32_t,  uint8_t,  4)
_(vpmovdb , uint32_t,  uint8_t,  8)
_(vpmovdb , uint32_t,  uint8_t, 16)
_(vpmovdb ,  int32_t,  uint8_t,  4)
_(vpmovdb ,  int32_t,  uint8_t,  8)
_(vpmovdb ,  int32_t,  uint8_t, 16)
_(vpmovdb_, uint32_t,   int8_t,  4)
_(vpmovdb_, uint32_t,   int8_t,  8)
_(vpmovdb_, uint32_t,   int8_t, 16)
_(vpmovdb_,  int32_t,   int8_t,  4)
_(vpmovdb_,  int32_t,   int8_t,  8)
_(vpmovdb_,  int32_t,   int8_t, 16)

_(vpmovwb , uint16_t,  uint8_t,  8)
_(vpmovwb , uint16_t,  uint8_t, 16)
_(vpmovwb , uint16_t,  uint8_t, 32)
_(vpmovwb ,  int16_t,  uint8_t,  8)
_(vpmovwb ,  int16_t,  uint8_t, 16)
_(vpmovwb ,  int16_t,  uint8_t, 32)
_(vpmovwb_, uint16_t,   int8_t,  8)
_(vpmovwb_, uint16_t,   int8_t, 16)
_(vpmovwb_, uint16_t,   int8_t, 32)
_(vpmovwb_,  int16_t,   int8_t,  8)
_(vpmovwb_,  int16_t,   int8_t, 16)
_(vpmovwb_,  int16_t,   int8_t, 32)

// integral -> integral; zero extension
_(vpmovzxbw , uint8_t,  int16_t,  8)
_(vpmovzxbw , uint8_t,  int16_t, 16)
_(vpmovzxbw , uint8_t,  int16_t, 32)
_(vpmovzxbw_, uint8_t, uint16_t,  8)
_(vpmovzxbw_, uint8_t, uint16_t, 16)
_(vpmovzxbw_, uint8_t, uint16_t, 32)

_(vpmovzxbd ,  uint8_t,  int32_t,  4)
_(vpmovzxbd ,  uint8_t,  int32_t,  8)
_(vpmovzxbd ,  uint8_t,  int32_t, 16)
_(vpmovzxwd , uint16_t,  int32_t,  4)
_(vpmovzxwd , uint16_t,  int32_t,  8)
_(vpmovzxwd , uint16_t,  int32_t, 16)
_(vpmovzxbd_,  uint8_t, uint32_t,  4)
_(vpmovzxbd_,  uint8_t, uint32_t,  8)
_(vpmovzxbd_,  uint8_t, uint32_t, 16)
_(vpmovzxwd_, uint16_t, uint32_t,  4)
_(vpmovzxwd_, uint16_t, uint32_t,  8)
_(vpmovzxwd_, uint16_t, uint32_t, 16)

_(vpmovzxbq ,  uint8_t,  int64_t, 2)
_(vpmovzxbq ,  uint8_t,  int64_t, 4)
_(vpmovzxbq ,  uint8_t,  int64_t, 8)
_(vpmovzxwq , uint16_t,  int64_t, 2)
_(vpmovzxwq , uint16_t,  int64_t, 4)
_(vpmovzxwq , uint16_t,  int64_t, 8)
_(vpmovzxdq , uint32_t,  int64_t, 2)
_(vpmovzxdq , uint32_t,  int64_t, 4)
_(vpmovzxdq , uint32_t,  int64_t, 8)
_(vpmovzxbq_,  uint8_t, uint64_t, 2)
_(vpmovzxbq_,  uint8_t, uint64_t, 4)
_(vpmovzxbq_,  uint8_t, uint64_t, 8)
_(vpmovzxwq_, uint16_t, uint64_t, 2)
_(vpmovzxwq_, uint16_t, uint64_t, 4)
_(vpmovzxwq_, uint16_t, uint64_t, 8)
_(vpmovzxdq_, uint32_t, uint64_t, 2)
_(vpmovzxdq_, uint32_t, uint64_t, 4)
_(vpmovzxdq_, uint32_t, uint64_t, 8)

// integral -> integral; sign extension
_(vpmovsxbw , int8_t,  int16_t,  8)
_(vpmovsxbw , int8_t,  int16_t, 16)
_(vpmovsxbw , int8_t,  int16_t, 32)
_(vpmovsxbw_, int8_t, uint16_t,  8)
_(vpmovsxbw_, int8_t, uint16_t, 16)
_(vpmovsxbw_, int8_t, uint16_t, 32)

_(vpmovsxbd ,  int8_t,  int32_t,  4)
_(vpmovsxbd ,  int8_t,  int32_t,  8)
_(vpmovsxbd ,  int8_t,  int32_t, 16)
_(vpmovsxwd , int16_t,  int32_t,  4)
_(vpmovsxwd , int16_t,  int32_t,  8)
_(vpmovsxwd , int16_t,  int32_t, 16)
_(vpmovsxbd_,  int8_t, uint32_t,  4)
_(vpmovsxbd_,  int8_t, uint32_t,  8)
_(vpmovsxbd_,  int8_t, uint32_t, 16)
_(vpmovsxwd_, int16_t, uint32_t,  4)
_(vpmovsxwd_, int16_t, uint32_t,  8)
_(vpmovsxwd_, int16_t, uint32_t, 16)

_(vpmovsxbq ,  int8_t,  int64_t, 2)
_(vpmovsxbq ,  int8_t,  int64_t, 4)
_(vpmovsxbq ,  int8_t,  int64_t, 8)
_(vpmovsxwq , int16_t,  int64_t, 2)
_(vpmovsxwq , int16_t,  int64_t, 4)
_(vpmovsxwq , int16_t,  int64_t, 8)
_(vpmovsxdq , int32_t,  int64_t, 2)
_(vpmovsxdq , int32_t,  int64_t, 4)
_(vpmovsxdq , int32_t,  int64_t, 8)
_(vpmovsxbq_,  int8_t, uint64_t, 2)
_(vpmovsxbq_,  int8_t, uint64_t, 4)
_(vpmovsxbq_,  int8_t, uint64_t, 8)
_(vpmovsxwq_, int16_t, uint64_t, 2)
_(vpmovsxwq_, int16_t, uint64_t, 4)
_(vpmovsxwq_, int16_t, uint64_t, 8)
_(vpmovsxdq_, int32_t, uint64_t, 2)
_(vpmovsxdq_, int32_t, uint64_t, 4)
_(vpmovsxdq_, int32_t, uint64_t, 8)

// integral -> double
_(vcvtdq2pd ,  int32_t, double, 2)
_(vcvtdq2pd ,  int32_t, double, 4)
_(vcvtdq2pd ,  int32_t, double, 8)
_(vcvtudq2pd, uint32_t, double, 2)
_(vcvtudq2pd, uint32_t, double, 4)
_(vcvtudq2pd, uint32_t, double, 8)
_(vcvtqq2pd ,  int64_t, double, 2)
_(vcvtqq2pd ,  int64_t, double, 4)
_(vcvtqq2pd ,  int64_t, double, 8)
_(vcvtuqq2pd, uint64_t, double, 2)
_(vcvtuqq2pd, uint64_t, double, 4)
_(vcvtuqq2pd, uint64_t, double, 8)

// integral -> float
_(vcvtdq2ps ,  int32_t, float,  4)
_(vcvtdq2ps ,  int32_t, float,  8)
_(vcvtdq2ps ,  int32_t, float, 16)
_(vcvtudq2ps, uint32_t, float,  4)
_(vcvtudq2ps, uint32_t, float,  8)
_(vcvtudq2ps, uint32_t, float, 16)
_(vcvtqq2ps ,  int64_t, float,  4)
_(vcvtqq2ps ,  int64_t, float,  8)
_(vcvtqq2ps ,  int64_t, float, 16)
_(vcvtuqq2ps, uint64_t, float,  4)
_(vcvtuqq2ps, uint64_t, float,  8)
_(vcvtuqq2ps, uint64_t, float, 16)

// float <-> double
_( cvttpd2ps, double, float,  2)
_(vcvttpd2ps, double, float,  4)
_(vcvttpd2ps, double, float,  8)
_( cvttps2pd, float, double,  2)
_(vcvttps2pd, float, double,  4)
_(vcvttps2pd, float, double,  8)

// float -> integral
_( cvttps2dq, float, int32_t,  4)
_(vcvttps2dq, float, int32_t,  8)
_(vcvttps2dq, float, int32_t, 16)
_( cvttps2qq, float, int64_t,  4)
_(vcvttps2qq, float, int64_t,  8)
_(vcvttps2qq, float, int64_t, 16)

_( cvttps2udq, float, uint32_t,  4)
_(vcvttps2udq, float, uint32_t,  8)
_(vcvttps2udq, float, uint32_t, 16)
_( cvttps2uqq, float, uint64_t,  4)
_(vcvttps2uqq, float, uint64_t,  8)
_(vcvttps2uqq, float, uint64_t, 16)

// double -> integral
_( cvttpd2dq, double, int32_t, 2)
_(vcvttpd2dq, double, int32_t, 4)
_(vcvttpd2dq, double, int32_t, 8)
_(vcvttpd2qq, double, int64_t, 2)
_(vcvttpd2qq, double, int64_t, 4)
_(vcvttpd2qq, double, int64_t, 8)

_(vcvttpd2udq, double, uint32_t, 2)
_(vcvttpd2udq, double, uint32_t, 4)
_(vcvttpd2udq, double, uint32_t, 8)
_(vcvttpd2uqq, double, uint64_t, 2)
_(vcvttpd2uqq, double, uint64_t, 4)
_(vcvttpd2uqq, double, uint64_t, 8)

// no change in type; nop
_(nop,   int8_t,   int8_t, 16)
_(nop,  uint8_t,  uint8_t, 16)
_(nop,   int8_t,   int8_t, 32)
_(nop,  uint8_t,  uint8_t, 32)
_(nop,   int8_t,   int8_t, 64)
_(nop,  uint8_t,  uint8_t, 64)
_(nop,  int16_t,  int16_t,  8)
_(nop, uint16_t, uint16_t,  8)
_(nop,  int16_t,  int16_t, 16)
_(nop, uint16_t, uint16_t, 16)
_(nop,  int16_t,  int16_t, 32)
_(nop, uint16_t, uint16_t, 32)
_(nop,  int32_t,  int32_t,  4)
_(nop, uint32_t, uint32_t,  4)
_(nop,  int32_t,  int32_t,  8)
_(nop, uint32_t, uint32_t,  8)
_(nop,  int32_t,  int32_t, 16)
_(nop, uint32_t, uint32_t, 16)
_(nop,  int64_t,  int64_t,  2)
_(nop, uint64_t, uint64_t,  2)
_(nop,  int64_t,  int64_t,  4)
_(nop, uint64_t, uint64_t,  4)
_(nop,  int64_t,  int64_t,  8)
_(nop, uint64_t, uint64_t,  8)
_(nop,   double,   double,  2)
_(nop,   double,   double,  4)
_(nop,   double,   double,  8)
_(nop,    float,    float,  4)
_(nop,    float,    float,  8)
_(nop,    float,    float, 16)
Comment 1 Matthias Kretz (Vir) 2018-03-23 11:27:27 UTC
Godbolt link: https://godbolt.org/#z:OYLghAFBqd5QCxAYwPYBMCmBRdBLAF1QCcAaPECAKxAEZSAbAQwDtRkBSAJgCFufSAZ1QBXYskwgA5NwDMeFsgYisAag6yAwskEF8LAhuwcADAEFTZgpgC2AB2bX1WpU0GDVAFVKqFBVQByPn6qAMp4AF6YzgAigaoAVKqCkZioAGYQngCURpYiKWyqAGrqAKx8FcAsIiAgAG6YyETEAPopURDhUbllMRx9sV4afOaW1vaO0RrazO6qAGLEqDY%2BrvOeqEYlM5s%2BXNvI9QRcEMUzSyv72wAe2eoA7KNmqq%2BqxJgEYiw7WnuqB1kxiemwgNwGPBMAxi2R8oPBFVo0NyD36smeHFR41sDiYThm6w8l1WqkJXi2QN%2Bmn%2BABZDscaWcLssSXTKXdHs83u9Pt8qbS8iDUGCIVC%2BrDySLEci4cKETwDuLZVKeLJkZi0RiseYJrj8S45kSWWtDeTtuc/qgfAAOekEa1MrTEm23e6YrlvD5fYg/C3Uq2qW1A92Wbnc%2BGimWS%2BVIpXRiGKmHK%2BVqpWhsOvCMVOlxrM8MpRvMANkLcohD3V5m5GpGlhrOpxU2cszcRqupNNm3NuwDtBLlKOBD7js0ztUfddnPTry9fL9/wnwae07DebFSfj0tzZYqiYledTSZX4Z3PBzG7zBe3KpL1/lFbTVYzm54trvEIAnKWVbR1/vT7QsZHk%2BGZ5rQe7JhCtCHv%2BP7nrBMZXjCK41uidbalYjZ4tMBqtosxodnhXaUvOAayIC2Cksc5EjmO5GTiGIE8t6vo9j49FLh6oGnn%2BkFbhep4QS%2BMGkMebx5vBfH5t%2B8q3gJKoPsBLzPnmb7yfKX7voivEvoBMpiZmAFCWBIm6ZJulIbCBm6XJCFQYpdmImpjk8LQmlKc%2BL5cDpeZcEBLlcMZgkidZvnmb5llSVwtlRQ5olMSeKpcM5UXuS5sg%2Baesj%2BShWJoWMGG6k2BKmmOZLEZRpE%2BEWbKUYONW0QRNUMcuTGzj6/IBs1nGhaeqiZSq44yRCALDRUqghQl4l9eFfWRS%2BqgxQtcW9YNKULWlUm/mNrn%2BfFyncT%2BQU/qZYGzT%2BkWrTGS1gXFunrWBm1XQmA3yn5O2BR9k0HaugnnW982%2BTdgkrVNhlJQ9gmbcJr0Qtl%2Blg8Jx0pqdWX/XDl2IwewMqrId0HpDuNPVjp40rD2Z7S%2BNLIxCNLfZ5Eno9mgOkzj8o0qDP2JezhPs9Dl7k/me3PRUZQ06LqMqmUTP5pjXPTVLbMQmU%2BOnmUvPK8T8vg7JgtFpTxbizwRb028qFav0YxjFwshYOkCjRK0EAsEwNiYD46QEUQPgdJgbplJolhMCIRCqC7bsjp77a%2By1PDMXyg5yHIMdaFHJJEEYYK5OijyW2YAD0%2Be%2BAYmDAMQTAMKoAC02x%2BKX5cMCMqgEMQIiKHieCoCwlhO/Udg2Kg9QAI7oKoPgiH4NWtAQ49%2BOR08%2BACuTmL3/eDyPY%2BqBPBhTzPW9z1wC%2BvDSy9mKvA/D6Ps87zSR/bwQ8974Gp/n%2BvV%2BvJPt97/fj%2BL1wL8QD7hfDei9P53wPkfVQJ8e6ALXpfTexcCC72vg/Q%2BT9rQAKAW/VoKDkEfwML/V4/8YFYMvjg/eN9IEQKftAlesDgHoHIffPBiDCHPxIXAke5DEEsOoX/TBnDGGgMoU/Phx8BEMO4WA0RBC0GLwwXWOhpCh4AHdN7MK/igvskDiFKM4Wo3BmiKFDiLJA2hZ96HrwMcYvB99tHoIkVYhB0itGmKfroixyjrE8KMXYtxi9zGv2Ht4lxxj7HyMccEphoTEHhKIZE1R0SRHCJMWYhJKiklIKMbE/xrwFF6OARklJvCDBxKXhwwpUjkn4NSTQ9JVSslUNKbk9hVtPGcIAEbqJiffa0OiEldMMZA3paSKnr0GTY7JIyHFjOHhMnxwy/B9PcQM5x1TjHLICas4pUylmQPye04BHTMksMQZs%2BJsyh7HKGU/M5oyCnjJOdku5MyHlzIaacvZKzLnXJqZ8gw5yoEDI%2Bc8r5ETFGHMHugaxP85FhJaYEyx9RoXdNkeA5p%2BzIkopQYQvxR8%2BxYpCWi7%2BfgymItIdimpuLSUtIOUEylrC4V4r3gS2Z0LMlsJpfcyFyKinGM5Ri15PL2U4rhTk/FRZCUNIFbUrZbK%2BWMqabKvJUqUkyriaytp9KJmwsWQC7l2rUWoL1faTFbKdViI2RKrF8zLXTLlW89AtriWL3tSq81azjVPzdZq4VvzdW3LBeI81HKxUvPBY6/1lrw3jklSGtVYag1AvjVSxN%2BqhXaulWm01LK41aqRSoi1grXVJrpQWotyqrW5siYWo1ZS3XkRrc6ytbrWk8trSk%2BtQbfVBI7TUrt6b2IeN7VG4tbxS1NsyWUmNPby1TpaTGxtszC0NOnRO5dvzxWBsHbGydnaF1BqXW0wuiD64V2rrXEuZcK5NyiMsVQmAbjWBYCkLusyIg3A6TCpNXKM1Io/V%2Bo1gLf0%2BFnaQgD36d1bqHZEiDmTznMojTyuDKCEMgd3e%2Bz9fK3WIYmh4zDHT35VoTQa/9n6iOtstWW8D5GEGUZdRht5H6VHv1w4qupmGWN1paVR2DNwuOuKVYQsDcCANCNeDhy15LRPke4ZJhjbaglibk0GgNoG83If4%2BJ3Dang1Ma0/O9FXqkNKYM4JklDHfUEaHnRn91Th1kY6TZktUHQnSYvgB5zEm7ONL3tRmTKivNsdCQ5mjgXuNKrwe5wezGgvoYWX5vj6A4sKZC0llLxm/lGOi/UD9yWjUyrwf5jzn6h4qZ3Roo%2BoWZNOfKzmm5Pgcuebq2h6pxWYv8bK%2BZhr5T9OBcMxZ3zjW%2BP9e65Mo%2B7Xcs3GS6GozeDqslZm6Kub2X0uzcG0V0%2BlgT112vZXGulJdsNybikaoD6n2YBfZ3bubzBBYc3j5spk27uAeCK5sdImL4vesY93JR6eXffIap%2BLz2sNA4q%2Bhz7g9Advfq/Co%2B/3ZkvaIz5thOXkeeuAwp0HhHMdCbhVD%2Bod2BPQdTaR0hxP37xd40jrTD2x3sfU5E5HDTWuZeTbd2TKS2dsJx%2BJmNunCeU/B8q3THOAdmdJ2L0HLGRdxMFxp2nTm8cyKG71iXyvueRdW0rrzMaEs2mZ51%2Bnla0u0/C7DspbmjcW9JwbxTSK7v5dh4VoxC3ofTa85a63tPneM6yxNo3tWtcbbd0HrrNS2dRfD6zlbgfzcR/Q5Vve7uiedbl7k5Pw2E8Z7j4l33ieGNZ/V0Ep3hfMvF/R57kX1K2tbfMDtq9DcL2UnQKIDpDBMAkMHMlrgdgUcMbbyIDv7sS%2BAJ70PPvA/MtD5H9npRE%2Bp%2BesIbPzvhvu/HBUJP/vy296r9H6FwcW%2Bp%2B758Pv%2Bfnij%2B953/yuF5%2BHf1EHEPbf0%2B8H38P8cZ/S%2BdlH3v%2BSp/L%2Bnqb%2B7ea%2BD%2BR%2BX%2BN%2Bxe7%2BmC4BL%2BPWf%2BMBm%2BEB6A8BIBo%2BBy22RcR256B2lE6QDAqAeIG%2Beg2%2BHgJGe8%2BBhBHGC%2Bxw1%2BZBZOFBBBeIJm4%2BtBpBy%2BcKlBzBjGl%2Bm%2BdBp%2BqgXB1BvBXw/Bt%2BR8QhLBj%2BfBpBAhkhPBvcABfe9B9u8h4uihn%2B7BP%2BjBVBUhShdgKhoS8hYGsByhPWah/%2BSBsh42Oh3BYBVhZhNhHsTB1aEKJ6QhzYuBqg%2B%2BMCVEBABA/ejh9%2BahH%2B/hgRBhZ%2BaBzhuhemIhAR6AQRURghLhLBfhARggJ%2ByRMR9%2BY%2B0hYRGRN%2B8hORlh%2BRmRRRSRbamBWReILelE2BjcdCaRBhXAyW0R3BUmiBpRrR1Re8NONBXRQ8bRvRlmGmTsTRGRz%2BQxsOLCJR6RXAkxPR0x2S1Gg4cxCx8hoSVmjRqxzRW%2BUx4hwhGhpRexixBxehxwcxJx8hCuAC4xXAIg6xKRThsRRxlxjxMRxe9hxx7x3BxeWxBcRc%2B%2BtRp6e2vhOxCR3R9%2BYioR8RLRgx3hSRYisxgRkJiJDGKxFxgRCxUJ9mnRsJ2JaJauyJCRBJw%2BoBoSGB/RsJJx9%2BumMJgRNJSRumxJ9xqJZJo%2BumGJYRCRDx8JtJZuVJDJpJc%2BzxLJvJkR7JPWlJAJYcqApICArAwA0QCgzcAAnnYF3jnCwKgHYDAtqXYIvPrjOqMc7DqS5nDsRgoaaQaeOlBoug5vqeaYCg2g6WabaRaTGjVAAo6d5lBm6l6XqW6XbiDt6W6TpiGYGTacGR9iaT6eGTGaGVGdTmOv9k7HGUnuhqmdaeQTmS8dmWcQWeofmf7iWWWumUXn0RYj6SWWIrOuWRXrWbGUGTEjEq6TacXp8W2dod2XmfWbYjEoitWS2esmWWGaEp8aOVGQiZKW8DkV2bORUdOSKUWdWUuaAa8DkZOYaacW8BYYmduWoa8GoVudyIeacQSlILCAwNIGUFIKQCwNICYHeagNIIHLwPwMkKIOINMDbLQHeQQI%2BZebCAANYgA0hFgAB0H4sgZQ0FJg1oDwDwNUZQJsjA0gNId5NgdAJgJg95gFpAL5Ugd5ggIAuFAFUgT5sIcAsASAaA9geAneZAFAEAdFdgDFmAxAIAwAggLsBhCAqAM89sDA1gxAJFEAHS%2BFHSCgTAxAqp0gf5pAdFbsBgAA8iwAwHJRRXeVgDYIqZ3vhfgB8M0HgI0CRVpaQI%2Bk0CHJIFIApXXNebZf%2BcQHgFhVpVeYqSgPwPwIwHgB0iRZALCDqQQNdmZVXLoOgBoDEJwO%2BbwLQA8NXCpbINXAAOoVz7a6XiAICRWCDAWqnMDAWYBVxMD1A3BlDgTEVfkSB0BXk3l3kPnmWEU3DWhFhVw1SqDADIDICqAQAtxtzAX3AQC4CEAkDqC/k%2BCaArBsWMWjXZT3Bvl8C8D/mAXZCwgICYBMBYCcUQAgUgBlCyAQUmA1SHUfjgQ2zSx4xoVSAYWkBYW/i4X1VPkEXSDEWkWkDkWUU1VSBcB1X4WEVLVuWwiNCiXXZgVAA
Comment 2 Richard Biener 2018-03-23 11:45:48 UTC
If there's a good specification of __builtin_convertvector it certainly makes sense to support that in a compatible way for the generic vector extension.
It would need to be handled by tree-vect-generic.c lowering it to
VEC_PACK_* / VEC_UNPACK_* / VIEW_CONVERT (for noop) sequences.

I suppose this bug is about the suboptimal code being generated currently.

If so, please open a separate enhancement request for __builtin_convertvector.
Comment 3 Matthias Kretz (Vir) 2018-03-23 13:09:19 UTC
Just opened PR85052 for tracking __builtin_convertvector support.
Comment 4 Marc Glisse 2018-03-30 09:36:50 UTC
See PR77399.
Comment 5 Devin Hussey 2019-01-06 00:38:36 UTC
ARM/AArch64 NEON use these:

From            To           Intrinsic      ARMv7-a          AArch64
intXxY_t     -> int2XxY_t    vmovl_sX       vmovl.sX         sshll #0?
uintXxY_t    -> uint2XxY_t   vmovl_uX       vmovl.uX         ushll #0?
[u]int2XxY_t -> [u]intXxY_t  vmovn_[us]X    vmovn.iX         xtn
floatXxY_t   -> intXxY_t     vcvt[q]_sX_fX  vcvt.sX.fX       fcvtzs
floatXxY_t   -> uintXxY_t    vcvt[q]_uX_fX  vcvt.uX.fX       fcvtzu
intXxY_t     -> floatXxY_t   vcvt[q]_fX_sX  vcvt.fX.sX       scvtf
uintXxY_t    -> floatXxY_t   vcvt[q]_fX_uX  vcvt.fX.uX       ucvtf
float32x2_t  -> float64x2_t  vcvt_f64_f32   2x vcvt.f64.f32  fcvtl
float64x2_t  -> float32x2_t  vcvt_f32_f64   2x vcvt.f32.f64  fcvtn

Clang optimizes vmovl to vshll by zero for some reason. 

float32x2_t <-> float64x2_t requires 2 VFP instructions on ARMv7-a.
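
A minimal sketch of how a few of the rows above map onto <arm_neon.h> calls (the wrapper function names here are made up for illustration; the intrinsics themselves are the ACLE ones from the table):

#include <arm_neon.h>

// Wrapper names are illustrative only; each body is a single intrinsic call.
int16x8_t   widen_s (int8x8_t x)    { return vmovl_s8(x);      }  // sign extension
uint16x8_t  widen_u (uint8x8_t x)   { return vmovl_u8(x);      }  // zero extension
int8x8_t    narrow  (int16x8_t x)   { return vmovn_s16(x);     }  // truncation
float32x4_t to_float(int32x4_t x)   { return vcvtq_f32_s32(x); }  // int -> float
int32x4_t   to_int  (float32x4_t x) { return vcvtq_s32_f32(x); }  // float -> int (truncating)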
Comment 6 Matthias Kretz (Vir) 2023-03-21 20:07:20 UTC
Most of the conversions are optimized perfectly now. Only the following conversions are still missing for AVX-512:
https://godbolt.org/z/9afWbYod6

#include <cstdint>

template <class T, int N, int Size = N * sizeof(T)>
using V [[gnu::vector_size(Size)]] = T;

template <class From, class To> V<To, 4> cvt4(V<From, 4> x) {
    return V<To, 4>{To(x[0]), To(x[1]), To(x[2]), To(x[3])};
}
template <class From, class To> V<To, 8> cvt8(V<From, 8> x) {
    return V<To, 8>{
        To(x[0]), To(x[1]), To(x[2]), To(x[3]),
        To(x[4]), To(x[5]), To(x[6]), To(x[7])
    };
}
template <class From, class To> V<To, 16> cvt16(V<From, 16> x) {
    return V<To, 16>{
        To(x[0]), To(x[1]), To(x[2]), To(x[3]),
        To(x[4]), To(x[5]), To(x[6]), To(x[7]),
        To(x[8]), To(x[9]), To(x[10]), To(x[11]),
        To(x[12]), To(x[13]), To(x[14]), To(x[15])
    };
}

#define _(name, from, to, size) \
auto name(V<from, size> x) { return cvt##size<from, to>(x); }
// integral -> double
_(vcvtudq2pd, uint32_t, double, 4)
_(vcvtudq2pd, uint32_t, double, 8)

// integral -> float
_(vcvtqq2ps ,  int64_t, float, 16)
_(vcvtuqq2ps, uint64_t, float, 16)

// float -> integral
_(vcvttps2qq, float, int64_t, 16)

_( cvttps2udq, float, uint32_t,  4)
_(vcvttps2udq, float, uint32_t,  8)
_(vcvttps2uqq, float, uint64_t, 16)

// double -> integral
_(vcvttpd2udq, double, uint32_t, 4)
Comment 7 Hongtao.liu 2023-03-22 01:25:50 UTC
Yes, looks like the pattern name is misdefined.
It should be fixuns_trunc, but we have ufix_trunc.
Comment 8 Hongtao.liu 2023-03-22 01:40:20 UTC
(In reply to Hongtao.liu from comment #7)
> Yes, looks like the pattern name is misdefined.
> It should be fixuns_trunc, but we have ufix_trunc.

No, we have the right name, but we generate extra instructions for the unsigned case.

(define_expand "fixuns_trunc<mode><sseintvecmodelower>2"
  [(match_operand:<sseintvecmode> 0 "register_operand")
   (match_operand:VF1 1 "register_operand")]
  "TARGET_SSE2"
{
  if (<MODE>mode == V16SFmode)
    emit_insn (gen_ufix_truncv16sfv16si2 (operands[0],
                                          operands[1]));
  else
    {
      rtx tmp[3];
      tmp[0] = ix86_expand_adjust_ufix_to_sfix_si (operands[1], &tmp[2]);
      tmp[1] = gen_reg_rtx (<sseintvecmode>mode);
      emit_insn (gen_fix_trunc<mode><sseintvecmodelower>2 (tmp[1], tmp[0]));
      emit_insn (gen_xor<sseintvecmodelower>3 (operands[0], tmp[1], tmp[2]));
    }
  DONE;
Comment 9 Hongtao.liu 2023-03-22 02:10:28 UTC
With the patch, we can generate optimized code except for those 16-element {u,}qq cases, since the ABI doesn't support 1024-bit vectors.

1 file changed, 16 insertions(+), 2 deletions(-)
gcc/config/i386/sse.md | 18 ++++++++++++++++--

modified   gcc/config/i386/sse.md
@@ -8014,8 +8014,9 @@ (define_expand "fixuns_trunc<mode><sseintvecmodelower>2"
    (match_operand:VF1 1 "register_operand")]
   "TARGET_SSE2"
 {
-  if (<MODE>mode == V16SFmode)
-    emit_insn (gen_ufix_truncv16sfv16si2 (operands[0],
+  /* AVX512 support vcvttps2udq for all 128/256/512-bit vectors.  */
+  if (<MODE>mode == V16SFmode || TARGET_AVX512VL)
+    emit_insn (gen_ufix_trunc<mode><sseintvecmodelower>2 (operands[0],
 					  operands[1]));
   else
     {
@@ -8413,6 +8414,12 @@ (define_insn "*float<floatunssuffix>v2div2sf2_mask_1"
    (set_attr "prefix" "evex")
    (set_attr "mode" "V4SF")])
 
+(define_expand "floatuns<si2dfmodelower><mode>2"
+  [(set (match_operand:VF2_512_256VL 0 "register_operand")
+	(unsigned_float:VF2_512_256VL
+	  (match_operand:<si2dfmode> 1 "nonimmediate_operand")))]
+   "TARGET_AVX512F")
+
 (define_insn "ufloat<si2dfmodelower><mode>2<mask_name>"
   [(set (match_operand:VF2_512_256VL 0 "register_operand" "=v")
 	(unsigned_float:VF2_512_256VL
@@ -8694,6 +8701,13 @@ (define_insn "fix_truncv4dfv4si2<mask_name>"
    (set_attr "prefix" "maybe_evex")
    (set_attr "mode" "OI")])
 
+
+/* The standard pattern name is fixuns_truncmn2.  */
+(define_expand "fixuns_truncv4dfv4si2"
+  [(set (match_operand:V4SI 0 "register_operand")
+	(unsigned_fix:V4SI (match_operand:V4DF 1 "nonimmediate_operand")))]
+  "TARGET_AVX512VL && TARGET_AVX512F")
+
 (define_insn "ufix_truncv4dfv4si2<mask_name>"
   [(set (match_operand:V4SI 0 "register_operand" "=v")
 	(unsigned_fix:V4SI (match_operand:V4DF 1 "nonimmediate_operand" "vm")))]
Comment 10 GCC Commits 2023-03-31 01:04:23 UTC
The master branch has been updated by hongtao Liu <liuhongt@gcc.gnu.org>:

https://gcc.gnu.org/g:fe42e7fe119159f7443dbe68189e52891dc0148e

commit r13-6951-gfe42e7fe119159f7443dbe68189e52891dc0148e
Author: liuhongt <hongtao.liu@intel.com>
Date:   Thu Mar 30 15:43:25 2023 +0800

    Rename ufix_trunc/ufloat* patterns to fixuns_trunc/floatuns* to align with standard pattern name.
    
    The standard pattern names for unsigned_{float,fix} should be
    floatunsmn2/fixuns_truncmn2, not ufloatmn2/ufix_truncmn2 as currently
    used on trunk. This patch fixes the names and adjusts all
    ufix_trunc/ufloat patterns accordingly.
    
    Also vcvttps2udq is available under AVX512VL, so it can be generated
    directly instead of being emulated via vcvttps2dq.
    
    gcc/ChangeLog:
    
            PR target/85048
            * config/i386/i386-builtin.def (BDESC): Adjust icode name from
            ufloat/ufix to floatuns/fixuns.
            * config/i386/i386-expand.cc
            (ix86_expand_vector_convert_uns_vsivsf): Adjust comments.
            * config/i386/sse.md
            (ufloat<sseintvecmodelower><mode>2<mask_name><round_name>):
            Renamed to ..
            (<mask_codefor>floatuns<sseintvecmodelower><mode>2<mask_name><round_name>):.. this.
            (<mask_codefor><avx512>_ufix_notrunc<sf2simodelower><mode><mask_name><round_name>):
            Renamed to ..
            (<mask_codefor><avx512>_fixuns_notrunc<sf2simodelower><mode><mask_name><round_name>):
            .. this.
            (<fixsuffix>fix_truncv16sfv16si2<mask_name><round_saeonly_name>):
            Renamed to ..
            (fix<fixunssuffix>_truncv16sfv16si2<mask_name><round_saeonly_name>):.. this.
            (ufloat<si2dfmodelower><mode>2<mask_name>): Renamed to ..
            (floatuns<si2dfmodelower><mode>2<mask_name>): .. this.
            (ufloatv2siv2df2<mask_name>): Renamed to ..
            (<mask_codefor>floatunsv2siv2df2<mask_name>): .. this.
            (ufix_notrunc<mode><si2dfmodelower>2<mask_name><round_name>):
            Renamed to ..
            (fixuns_notrunc<mode><si2dfmodelower>2<mask_name><round_name>):
            .. this.
            (ufix_notruncv2dfv2si2): Renamed to ..
            (fixuns_notruncv2dfv2si2):.. this.
            (ufix_notruncv2dfv2si2_mask): Renamed to ..
            (fixuns_notruncv2dfv2si2_mask): .. this.
            (*ufix_notruncv2dfv2si2_mask_1): Renamed to ..
            (*fixuns_notruncv2dfv2si2_mask_1): .. this.
            (ufix_truncv2dfv2si2): Renamed to ..
            (*fixuns_truncv2dfv2si2): .. this.
            (ufix_truncv2dfv2si2_mask): Renamed to ..
            (fixuns_truncv2dfv2si2_mask): .. this.
            (*ufix_truncv2dfv2si2_mask_1): Renamed to ..
            (*fixuns_truncv2dfv2si2_mask_1): .. this.
            (ufix_truncv4dfv4si2<mask_name>): Renamed to ..
            (fixuns_truncv4dfv4si2<mask_name>): .. this.
            (ufix_notrunc<mode><sseintvecmodelower>2<mask_name><round_name>):
            Renamed to ..
            (fixuns_notrunc<mode><sseintvecmodelower>2<mask_name><round_name>):
            .. this.
            (ufix_trunc<mode><sseintvecmodelower>2<mask_name>): Renamed to ..
            (<mask_codefor>fixuns_trunc<mode><sseintvecmodelower>2<mask_name>):
            .. this.
    
    gcc/testsuite/ChangeLog:
    
            * g++.target/i386/pr85048.C: New test.
Comment 11 Hongtao.liu 2023-03-31 01:07:36 UTC
Fixed in GCC13.
Comment 12 Uroš Bizjak 2023-03-31 07:28:13 UTC
(In reply to Hongtao.liu from comment #9)
> With the patch, we can generate optimized code except for those 16-element
> {u,}qq cases, since the ABI doesn't support 1024-bit vectors.

Can't these be vectorized using partial vectors? GCC generates:

_Z9vcvtqq2psDv16_l:
	vmovq	56(%rsp), %xmm0
	vmovq	40(%rsp), %xmm1
	vmovq	88(%rsp), %xmm2
	vmovq	120(%rsp), %xmm3
	vpinsrq	$1, 64(%rsp), %xmm0, %xmm0
	vpinsrq	$1, 48(%rsp), %xmm1, %xmm1
	vpinsrq	$1, 96(%rsp), %xmm2, %xmm2
	vpinsrq	$1, 128(%rsp), %xmm3, %xmm3
	vinserti128	$0x1, %xmm0, %ymm1, %ymm1
	vcvtqq2psy	8(%rsp), %xmm0
	vcvtqq2psy	%ymm1, %xmm1
	vinsertf128	$0x1, %xmm1, %ymm0, %ymm0
	vmovq	72(%rsp), %xmm1
	vpinsrq	$1, 80(%rsp), %xmm1, %xmm1
	vinserti128	$0x1, %xmm2, %ymm1, %ymm1
	vmovq	104(%rsp), %xmm2
	vcvtqq2psy	%ymm1, %xmm1
	vpinsrq	$1, 112(%rsp), %xmm2, %xmm2
	vinserti128	$0x1, %xmm3, %ymm2, %ymm2
	vcvtqq2psy	%ymm2, %xmm2
	vinsertf128	$0x1, %xmm2, %ymm1, %ymm1
	vinsertf64x4	$0x1, %ymm1, %zmm0, %zmm0

where clang manages to vectorize the function to:

  vcvtqq2ps 16(%rbp), %ymm0
  vcvtqq2ps 80(%rbp), %ymm1
  vinsertf64x4 $1, %ymm1, %zmm0, %zmm0
Comment 13 Matthias Kretz (Vir) 2024-04-19 13:49:13 UTC
Should I open a new PR for the remaining ((u)int64, 16) <-> (float, 16) conversions?

https://godbolt.org/z/x3xPMYKj3

Note that __builtin_convertvector produces the code we want.

template <class T, int N, int Size = N * sizeof (T)>
using V [[gnu::vector_size (Size)]] = T;

template <class From, class To>
V<To, 16>
cvt16 (V<From, 16> x)
{
#if BUILTIN
  return __builtin_convertvector (x, V<To, 16>);
#else
  return V<To, 16>{ To (x[0]),  To (x[1]),  To (x[2]),  To (x[3]),
                    To (x[4]),  To (x[5]),  To (x[6]),  To (x[7]),
                    To (x[8]),  To (x[9]),  To (x[10]), To (x[11]),
                    To (x[12]), To (x[13]), To (x[14]), To (x[15]) };
#endif
}

#define _(name, from, to, size)                                               \
  auto name (V<from, size> x) { return cvt##size<from, to> (x); }
// integral -> float
_ (vcvtqq2ps, int64_t, float, 16)
_ (vcvtuqq2ps, uint64_t, float, 16)

// float -> integral
_ (vcvttps2qq, float, int64_t, 16)
_ (vcvttps2uqq, float, uint64_t, 16)
Comment 14 Hongtao Liu 2024-04-22 00:37:56 UTC
(In reply to Matthias Kretz (Vir) from comment #13)
> Should I open a new PR for the remaining ((u)int64, 16) <-> (float, 16)
> conversions?
> 
> https://godbolt.org/z/x3xPMYKj3
> 
> Note that __builtin_convertvector produces the code we want.
> 

With -mprefer-vector-width=512, GCC produces the same code.
Default tuning for -march=skylake-avx512 is -mprefer-vector-width=256.
Comment 15 Matthias Kretz (Vir) 2024-04-22 08:21:07 UTC
So it seems that if at least one of the vector builtins involved in the expression is 512 bits wide, GCC needs to locally increase prefer-vector-width to 512? Or, more generally:

prefer-vector-width = max(prefer-vector-width, 8 * sizeof(operands)..., 8 * sizeof(return-value))

The reason to default to 256 bits is to avoid zmm register usage altogether (and the associated clock-down). But if the surrounding code already uses zmm registers, that motivation is moot.

Also, I think this shouldn't be considered auto-vectorization but rather pattern recognition (recognizing a __builtin_convertvector).
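
A minimal source-level sketch of such a local override (assuming the x86 target attribute accepts prefer-vector-width here; the function name is illustrative):

#include <cstdint>

template <class T, int N, int Size = N * sizeof(T)>
using V [[gnu::vector_size(Size)]] = T;

// Raise the width preference for this one function only, so the 16-element
// int64 -> float conversion can use 512-bit vcvtqq2ps instead of being split
// into 256-bit pieces. (Assumes target("prefer-vector-width=512") is accepted
// as a function attribute on this target.)
[[gnu::target("prefer-vector-width=512")]]
V<float, 16> cvtqq2ps_wide(V<int64_t, 16> x)
{
  return __builtin_convertvector(x, V<float, 16>);
}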
Comment 16 Hongtao Liu 2024-04-22 11:52:47 UTC
(In reply to Matthias Kretz (Vir) from comment #15)
> So it seems that if at least one of the vector builtins involved in the
> expression is 512 bits GCC needs to locally increase prefer-vector-width to
> 512? Or, more generally:
> 
> prefer-vector-width = max(prefer-vector-width, 8 * sizeof(operands)..., 8 *
> sizeof(return-value))
> 
> The reason to default to 256 bits is to avoid zmm register usage altogether
> (clock-down). But if the surrounding code already uses zmm registers that
> motivation is moot.
> 
> Also, I think this shouldn't be considered auto-vectorization but rather
> pattern recognition (recognizing a __builtin_convertvector).

The related question is "should GCC set prefer-vector-width=512 when 512-bit intrinsics are used?". There may be situations where users don't want the compiler to generate zmm code except for the 512-bit intrinsics in their program, i.e. the hot loop is written with 512-bit intrinsics for performance, but elsewhere zmm usage is better avoided.