This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [patch, fortran] Enable FMA for AVX2 and AVX512F for matmul


On Thu, Mar 02, 2017 at 10:09:31AM +0200, Janne Blomqvist wrote:
> > Here's something from the new matmul_r8_avx2:
> >
> >     156c:       c4 62 e5 b8 fd          vfmadd231pd %ymm5,%ymm3,%ymm15
> >     1571:       c4 c1 79 10 04 06       vmovupd (%r14,%rax,1),%xmm0
> >     1577:       c4 62 dd b8 db          vfmadd231pd %ymm3,%ymm4,%ymm11
> >     157c:       c4 c3 7d 18 44 06 10    vinsertf128 $0x1,0x10(%r14,%rax,1),%ymm0,%ymm0
> >     1583:       01
> >     1584:       c4 62 ed b8 ed          vfmadd231pd %ymm5,%ymm2,%ymm13
> >     1589:       c4 e2 ed b8 fc          vfmadd231pd %ymm4,%ymm2,%ymm7
> >     158e:       c4 e2 fd a8 ad 30 ff    vfmadd213pd -0x800d0(%rbp),%ymm0,%ymm5
> 
> Great, looks good!
> 
> > ... and here from matmul_r8_avx512f:
> >
> >     1da8:       c4 a1 7b 10 14 d6       vmovsd (%rsi,%r10,8),%xmm2
> >     1dae:       c4 c2 b1 b9 f0          vfmadd231sd %xmm8,%xmm9,%xmm6
> >     1db3:       62 62 ed 08 b9 e5       vfmadd231sd %xmm5,%xmm2,%xmm28
> >     1db9:       62 62 ed 08 b9 ec       vfmadd231sd %xmm4,%xmm2,%xmm29
> >     1dbf:       62 62 ed 08 b9 f3       vfmadd231sd %xmm3,%xmm2,%xmm30
> >     1dc5:       c4 e2 91 99 e8          vfmadd132sd %xmm0,%xmm13,%xmm5
> >     1dca:       c4 e2 99 99 e0          vfmadd132sd %xmm0,%xmm12,%xmm4
> >     1dcf:       c4 e2 a1 99 d8          vfmadd132sd %xmm0,%xmm11,%xmm3
> >     1dd4:       c4 c2 a9 99 d1          vfmadd132sd %xmm9,%xmm10,%xmm2
> >     1dd9:       c4 c2 89 99 c1          vfmadd132sd %xmm9,%xmm14,%xmm0
> >     1dde:       0f 8e d3 fe ff ff       jle    1cb7 <matmul_r8_avx512f+0x1cb7>
> 
> Good, it's using fma, but why is this using xmm registers? That would
> mean it's operating only on 128 bit blocks at a time so no better than
> plain AVX. AFAIU avx512 should use zmm registers to operate on 512 bit
> chunks.

Well, it uses sd, i.e. the scalar FMA, not pd, so those are always xmm regs
with only a single double in them; this must be some scalar epilogue loop or
the like.  But matmul_r8_avx512f also has:
    140c:       62 72 e5 40 98 c1       vfmadd132pd %zmm1,%zmm19,%zmm8
    1412:       62 72 e5 40 98 cd       vfmadd132pd %zmm5,%zmm19,%zmm9
    1418:       62 72 e5 40 98 d1       vfmadd132pd %zmm1,%zmm19,%zmm10
    141e:       62 72 e5 40 98 de       vfmadd132pd %zmm6,%zmm19,%zmm11
    1424:       62 72 e5 40 98 e1       vfmadd132pd %zmm1,%zmm19,%zmm12
    142a:       62 e2 e5 40 98 c6       vfmadd132pd %zmm6,%zmm19,%zmm16
    1430:       62 f2 e5 40 98 c8       vfmadd132pd %zmm0,%zmm19,%zmm1
    1436:       62 f2 e5 40 98 f0       vfmadd132pd %zmm0,%zmm19,%zmm6
    143c:       62 72 e5 40 98 fd       vfmadd132pd %zmm5,%zmm19,%zmm15
    1442:       62 72 e5 40 98 f4       vfmadd132pd %zmm4,%zmm19,%zmm14
    1448:       62 72 e5 40 98 eb       vfmadd132pd %zmm3,%zmm19,%zmm13
    144e:       62 f2 e5 40 98 d0       vfmadd132pd %zmm0,%zmm19,%zmm2
    1454:       62 b2 e5 40 98 ec       vfmadd132pd %zmm20,%zmm19,%zmm5
    145a:       62 b2 e5 40 98 e4       vfmadd132pd %zmm20,%zmm19,%zmm4
    1460:       62 b2 e5 40 98 dc       vfmadd132pd %zmm20,%zmm19,%zmm3
    1466:       62 b2 e5 40 98 c4       vfmadd132pd %zmm20,%zmm19,%zmm0
etc., where 8 doubles in zmm regs are processed together.

	Jakub

