[Bug tree-optimization/87008] [8/9 Regression] gimple mem-to-mem assignment badly optimized

rguenth at gcc dot gnu.org gcc-bugzilla@gcc.gnu.org
Wed Aug 22 09:17:00 GMT 2018


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87008

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2018-08-22
                 CC|                            |jamborm at gcc dot gnu.org,
                   |                            |rguenth at gcc dot gnu.org
   Target Milestone|---                         |8.3
     Ever confirmed|0                           |1

--- Comment #3 from Richard Biener <rguenth at gcc dot gnu.org> ---
(In reply to Marc Glisse from comment #2)
> Or just:
> 
> struct A { double a, b; };
> struct B : A {};
> double f(B x){
>   B y;
>   A*px=&x;
>   A*py=&y;
>   *py=*px;
>   return y.a;
> }
> 
>   MEM[(struct A *)&y] = MEM[(const struct A &)&x];
>   y_6 = MEM[(struct A *)&y];
>   y ={v} {CLOBBER};
>   return y_6;
> 
> where y_6 should be read directly from x. SRA doesn't dare touch it. SCCVN
> does see that reading from y is equivalent to reading from x, but unless
> something else is already reading from x, it keeps the read from y.

Yeah, SCCVN doesn't change the read into one from x because in general it
cannot know that y will go away, and reading from x possibly enlarges x's
lifetime (preventing stack-slot sharing).
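To make the trade-off concrete, here is a hedged source-level sketch (f_rewritten is my name, not anything in GCC) of the replacement SCCVN declines: the rewrite is value-correct, but the load of x now happens after the copy, so x stays live longer than before:

```cpp
struct A { double a, b; };
struct B : A {};

double f_rewritten(B x) {
    B y;
    A* px = &x;
    A* py = &y;
    *py = *px;   // the copy is still here...
    return x.a;  // ...but the load now reads x directly, extending
                 // x's live range past the copy; a stack slot that
                 // could otherwise be shared with another local
                 // must now stay reserved for x
}
```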

To really handle this we need to expose, already at the GIMPLE stage, the
way we'd expand such aggregate copies to RTL.  SRA could be the pass that
should eventually do that (while of course avoiding exposing the copy loops
or calls to memcpy we might expand to ...).  So it boils down to heuristics
again...
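As a hedged illustration of the SRA-style direction (this is a source-level sketch of mine, not GCC code or its actual output): once the aggregate copy is split into per-field scalar assignments, ordinary value numbering and dead-code elimination can forward x.a to the return and delete y entirely:

```cpp
struct A { double a, b; };
struct B : A {};

// Hypothetical result of lowering *py = *px field by field:
double f_lowered(B x) {
    double y_a = x.a;  // scalar replacement of y.a
    double y_b = x.b;  // scalar replacement of y.b
    (void)y_b;         // dead after lowering; DCE would remove it
    return y_a;        // FRE forwards x.a straight to the return
}
```

For a two-double struct this lowering is clearly profitable; the heuristic question Richard raises is when to stop, since larger aggregates would expand to copy loops or memcpy calls that should not be exposed this early.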


More information about the Gcc-bugs mailing list