This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.



[Bug target/66369] gcc 4.8.3/5.1.0 miss optimisation with vpmovmskb


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66369

--- Comment #8 from Marcus Kool <marcus.kool at urlfilterdb dot com> ---
(In reply to Uroš Bizjak from comment #5)
> Created attachment 35693 [details]
> Patch to add zero-extended MOVMSK patterns
> 
> This patch adds zero-extended MOVMSK patterns.
> 
> However, one more cast from (int) to (unsigned int) is needed in the source,
> due to the definition of the intrinsic:
> 
>    long v;
> 
>    regchx256 = _mm256_set1_epi8( ch );
>    regset256 = _mm256_loadu_si256( (__m256i const *) set );
>    v = (unsigned int) _mm256_movemask_epi8
>                        ( _mm256_cmpeq_epi8(regchx256,regset256) );
Can you confirm that the code has:

     return __builtin_ctzl(v);

Thanks for the patch, but the required cast to unsigned int is
counter-intuitive: it is likely that nobody will write this cast in their
code, and they will therefore miss the optimisation.  Isn't there a more
elegant solution?
