This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug target/66369] gcc 4.8.3/5.1.0 miss optimisation with vpmovmskb
- From: "marcus.kool at urlfilterdb dot com" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: Thu, 04 Jun 2015 17:50:34 +0000
- Subject: [Bug target/66369] gcc 4.8.3/5.1.0 miss optimisation with vpmovmskb
- Auto-submitted: auto-generated
- References: <bug-66369-4 at http dot gcc dot gnu dot org/bugzilla/>
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66369
--- Comment #8 from Marcus Kool <marcus.kool at urlfilterdb dot com> ---
(In reply to Uroš Bizjak from comment #5)
> Created attachment 35693 [details]
> Patch to add zero-extended MOVMSK patterns
>
> This patch adds zero-extended MOVMSK patterns.
>
> However, one more cast from (int) to (unsigned int) is needed in the source,
> due to the definition of the intrinsic:
>
> long v;
>
> regchx256 = _mm256_set1_epi8( ch );
> regset256 = _mm256_loadu_si256( (__m256i const *) set );
> v = (unsigned int) _mm256_movemask_epi8
> ( _mm256_cmpeq_epi8(regchx256,regset256) );
Can you confirm that the code also has the following?
return __builtin_ctzl(v);
Thanks for the patch, but the required cast to unsigned int is
counter-intuitive; it is likely that nobody will use this cast in their code
and will hence miss the optimisation. Isn't there a more elegant solution?