[Bug target/105617] [12/13 Regression] Slp is maybe too aggressive in some/many cases

rguenth at gcc dot gnu.org gcc-bugzilla@gcc.gnu.org
Tue May 17 06:48:34 GMT 2022


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105617

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
   Last reconfirmed|                            |2022-05-17
     Ever confirmed|0                           |1
             Status|UNCONFIRMED                 |NEW

--- Comment #10 from Richard Biener <rguenth at gcc dot gnu.org> ---
(In reply to Hongtao.liu from comment #9)
> (In reply to Hongtao.liu from comment #8)
> > (In reply to Hongtao.liu from comment #7)
> > > Hmm, we have specific code to add scalar->vector (vmovq) cost to vector
> > > construct, but it seems not to work here; guess it's because of &r0, and it
> > > was treated as a load, not a scalar?
> > Yes, true, since gimple_assign_load_p holds for it
> > 
> > 
> > (gdb) p debug_gimple_stmt (def)
> > # VUSE <.MEM_46>
> > r0.0_20 = r0;
> It's a load from the stack that is finally eliminated in RTL dse1, but the
> vectorizer doesn't know that here.

Yes, it's difficult for the SLP vectorizer to guess whether rN will come
from memory or not.  A friendlier middle-end representation for
add-with-carry might be nice - the x86 backend could, for example, fold
__builtin_ia32_addcarryx_u64 to use a _Complex unsigned long long for the
return, ferrying the carry in __imag.  Alternatively we could devise
some special GIMPLE_ASM kind ferrying RTL rather than assembly, so the
backend could fold it directly to RTL on GIMPLE, with asm constraints
doing the plumbing ... (we'd need some kind of match-scratch, and RTL
expansion would still need to allocate the actual pseudos).
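
Purely for illustration, a minimal C-level sketch of that _Complex idea
(the helper name and interface are hypothetical, nothing like this exists
in GCC today; _Complex on integer types and lvalue __real__/__imag__ are
GNU extensions):

  typedef _Complex unsigned long long addc_t;

  /* Model add-with-carry as a single complex value: the 64-bit sum in
     __real__, the carry-out in __imag__.  No address of an automatic
     variable is taken, so no memory round-trip is implied.  */
  static inline addc_t
  addc_sketch (unsigned long long cin, unsigned long long a,
               unsigned long long b)
  {
    unsigned long long t = a + b;
    unsigned long long sum = t + cin;
    addc_t r;
    __real__ r = sum;
    __imag__ r = (t < a) | (sum < t);   /* carry out */
    return r;
  }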

  <bb 2> [local count: 1073741824]:
  _1 = *srcB_17(D);
  _2 = *srcA_18(D);
  _30 = __builtin_ia32_addcarryx_u64 (0, _2, _1, &r0);
  _3 = MEM[(const uint64_t *)srcB_17(D) + 8B];
  _4 = MEM[(const uint64_t *)srcA_18(D) + 8B];
  _5 = (int) _30;
  _29 = __builtin_ia32_addcarryx_u64 (_5, _4, _3, &r1);
  _6 = MEM[(const uint64_t *)srcB_17(D) + 16B];
  _7 = MEM[(const uint64_t *)srcA_18(D) + 16B];
  _8 = (int) _29;
  _28 = __builtin_ia32_addcarryx_u64 (_8, _7, _6, &r2);
  _9 = MEM[(const uint64_t *)srcB_17(D) + 24B];
  _10 = MEM[(const uint64_t *)srcA_18(D) + 24B];
  _11 = (int) _28;
  __builtin_ia32_addcarryx_u64 (_11, _10, _9, &r3);
  r0.0_12 = r0;
  r1.1_13 = r1;
  _36 = {r0.0_12, r1.1_13};
  r2.2_14 = r2;
  r3.3_15 = r3;
  _37 = {r2.2_14, r3.3_15};
  vectp.9_35 = dst_19(D);
  MEM <vector(2) long unsigned int> [(uint64_t *)vectp.9_35] = _36;
  vectp.9_39 = vectp.9_35 + 16;
  MEM <vector(2) long unsigned int> [(uint64_t *)vectp.9_39] = _37;

so for the situation at hand I don't see any reasonable way out that
doesn't have the chance of regressing things in other places (like
treating loads from non-indexed auto variables specially, or similar).  The
only real solution is to find a GIMPLE representation for
__builtin_ia32_addcarryx_u64 that doesn't force the alternate output
to memory.
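
To make the difference concrete, the testcase's pattern could then be
written against the hypothetical addc_sketch helper sketched above
(again purely illustrative; the function name is made up):

  void
  add4 (unsigned long long *dst, const unsigned long long *srcA,
        const unsigned long long *srcB)
  {
    addc_t r = addc_sketch (0, srcA[0], srcB[0]);
    dst[0] = __real__ r;
    r = addc_sketch (__imag__ r, srcA[1], srcB[1]);
    dst[1] = __real__ r;
    r = addc_sketch (__imag__ r, srcA[2], srcB[2]);
    dst[2] = __real__ r;
    r = addc_sketch (__imag__ r, srcA[3], srcB[3]);
    dst[3] = __real__ r;
  }

  /* Here r0..r3 never exist as addressable automatics, so the SLP
     vectorizer would see plain SSA values feeding the stores instead
     of loads.  */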

