[Bug tree-optimization/83253] -ftree-slsr causes performance regression
rguenth at gcc dot gnu.org
gcc-bugzilla@gcc.gnu.org
Mon Dec 4 09:38:00 GMT 2017
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83253
Richard Biener <rguenth at gcc dot gnu.org> changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
           Keywords|                            |missed-optimization
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2017-12-04
                 CC|                            |amker at gcc dot gnu.org,
                   |                            |jakub at gcc dot gnu.org,
                   |                            |rguenth at gcc dot gnu.org,
                   |                            |wschmidt at gcc dot gnu.org
     Ever confirmed|0                           |1
--- Comment #1 from Richard Biener <rguenth at gcc dot gnu.org> ---
It looks more like a backend issue to me. I can reproduce it on x86_64 but
only with -march=nocona, not with generic or, say, core-avx2.
Note that I find it odd that SLSR does this replacement at all, since
(on GIMPLE, at least) no instruction is saved.
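To illustrate the kind of replacement meant (a made-up sketch, not the PR's
testcase; the function name, types and strides below are invented):

void
store2 (int *p, long i, int v)
{
  p[i]     = v;   /* address p + i*4, folds into a scaled addressing mode */
  p[2 * i] = v;   /* address p + i*8, likewise; SLSR can rewrite this as
                     "address of p[i] plus i*4" instead */
}

The rewrite trades one multiply for one add, so on GIMPLE nothing is saved,
and on targets with base + index*scale addressing the add can no longer be
folded into the memory operand.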
I guess the expmed costing stuff doesn't consider the computations in
address context (though it must consider using lea on x86 for x + C*y?).
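For reference, a plain x + C*y with a small power-of-two C typically maps to
a single lea on x86-64 (again a made-up example, not from the PR):

long
scaled_add (long x, long y)
{
  return x + 4 * y;   /* typically: leaq (%rdi,%rsi,4), %rax */
}

so a cost model that prices the multiply and the add separately, outside of
address (or lea) context, will overestimate the cost of the original form.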
Eventually the IVOPTs costing would be more accurate for address uses, but then
SLSR might want to create TARGET_MEM_REFs ...