This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [PATCH i386 AVX512] [20/n] AVX-512 integer shift pattern.


On Wed, Aug 20, 2014 at 3:08 PM, Kirill Yukhin <kirill.yukhin@gmail.com> wrote:
> Hello,
> On 15 Aug 20:35, Uros Bizjak wrote:
>> On Fri, Aug 15, 2014 at 1:56 PM, Kirill Yukhin <kirill.yukhin@gmail.com> wrote:
>> Again, please split the insn pattern to avoid:
>>
>> +  "TARGET_SSE2
>> +   && <mask_mode512bit_condition>
>> +   && ((<MODE>mode != V16HImode && <MODE>mode != V8HImode)
>> +       || TARGET_AVX512BW
>> +       || !<mask_applied>)"
>>
>> in the insn constraint. The insn constraint should use the baseline
>> TARGET_*, and the mode iterator should use the TARGET_* that results in
>> "baseline TARGET_ && iterator TARGET_" for a certain mode. If these are
>> properly used, then there is no need for <MODE>mode checks in the insn
>> constraint.
> I've refactored the pattern. It should be mentioned, though, that it still
> uses <MODE>mode checks implicitly. The problem is that masking is allowed
> either for ZMM, or for (XMM|YMM)+TARGET_AVX512VL, assuming that
> TARGET_AVX512F is on in both cases. That is what "mask_mode512bit_condition"
> checks:
>   (define_subst_attr "mask_mode512bit_condition" "mask" "1" "(<MODE_SIZE> == 64 || TARGET_AVX512VL)")
>
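> The AVX512BW side is gated the same way by a companion subst attr, which
> this patch adds to subst.md. It looks roughly like this (a sketch; see the
> patch below for the exact definition):
>   (define_subst_attr "mask_avx512bw_condition" "mask" "1" "TARGET_AVX512BW")
> 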
> So, this pattern:
> (define_insn "<shift_insn><mode>3<mask_name>"
>   [(set (match_operand:VI2_AVX2_AVX512BW 0 "register_operand" "=x,v")
>         (any_lshift:VI2_AVX2_AVX512BW
>           (match_operand:VI2_AVX2_AVX512BW 1 "register_operand" "0,v")
>           (match_operand:SI 2 "nonmemory_operand" "xN,vN")))]
>   "TARGET_SSE2 && <mask_mode512bit_condition> && <mask_avx512bw_condition>"
>
> is expanded to (V16HI, mask, ashift):
> (define_insn ("ashlv16hi3_mask")
>      [
>         (set (match_operand:V16HI 0 ("register_operand") ("=x,v"))
>             (vec_merge:V16HI (ashift:V16HI (match_operand:V16HI 1 ("register_operand") ("0,v"))
>                     (match_operand:SI 2 ("nonmemory_operand") ("xN,vN")))
>                 (match_operand:V16HI 3 ("vector_move_operand") ("0C,0C"))
>                 (match_operand:HI 4 ("register_operand") ("Yk,Yk"))))
>     ] ("(TARGET_AVX512F) && ((TARGET_SSE2 && (32 == 64 || TARGET_AVX512VL) && TARGET_AVX512BW) && (TARGET_AVX2))")
>
> (TARGET_AVX512F) came from the `mask' subst, (32 == 64 || TARGET_AVX512VL)
> is what I've described above, (TARGET_AVX512BW) came from the corresponding
> subst attr, and TARGET_AVX2 belongs to the mode iterator.
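>
> For reference, the per-mode enables live in the iterators themselves,
> roughly as follows (a sketch; the exact entries are in sse.md):
>   (define_mode_iterator VI2_AVX2_AVX512BW
>     [(V32HI "TARGET_AVX512BW") (V16HI "TARGET_AVX2") V8HI])
>   (define_mode_iterator VI48_AVX2
>     [(V8SI "TARGET_AVX2") V4SI
>      (V4DI "TARGET_AVX2") V2DI])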

This looks OK to me; enabling the insn using "&& <mask_..._condition>"
is the established practice. As a general approach, the final insn
enable condition should be formed from the baseline condition, the
conditional mode enables, and the define_subst enables in the most
logical way.
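
For illustration, the "mask" subst is what contributes the TARGET_AVX512F
conjunct and the vec_merge wrapper seen in the expansion above. It looks
roughly like this (a sketch; the exact definition is in subst.md):

  (define_subst "mask"
    ;; Input template: any single-set pattern over a SUBST_V mode...
    [(set (match_operand:SUBST_V 0)
          (match_operand:SUBST_V 1))]
    ;; ...with TARGET_AVX512F appended to the insn condition...
    "TARGET_AVX512F"
    ;; ...and the set source wrapped in a vec_merge with a merge source
    ;; and a Yk mask register (these become operands 3 and 4 in the
    ;; expanded ashlv16hi3_mask above, after renumbering).
    [(set (match_dup 0)
          (vec_merge:SUBST_V
            (match_dup 1)
            (match_operand:SUBST_V 2 "vector_move_operand" "0C")
            (match_operand:<avx512fmaskmode> 3 "register_operand" "Yk")))])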

> Bootstrapped and avx512-regtested.
>
> gcc/
>         * config/i386/sse.md
>         (define_mode_iterator VI248_AVX2): Delete.
>         (define_mode_iterator VI2_AVX2_AVX512BW): New.
>         (define_mode_iterator VI48_AVX2): Ditto.
>         (define_insn "<shift_insn><mode>3<mask_name>"): Add masking.
>         Split into two similar patterns, which use different mode
>         iterators: VI2_AVX2_AVX512BW and VI48_AVX2.
>         * config/i386/subst.md (define_subst_attr "mask_avx512bw_condition"):
>         New.

OK for mainline.

Thanks,
Uros.

