[Bug rtl-optimization/70164] [6 Regression] Code/performance regression due to poor register allocation on Cortex-M0

law at redhat dot com gcc-bugzilla@gcc.gnu.org
Wed Mar 23 19:02:00 GMT 2016


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70164

Jeffrey A. Law <law at redhat dot com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Priority|P1                          |P2

--- Comment #13 from Jeffrey A. Law <law at redhat dot com> ---
Essentially this is the same problem we have with DOM using context-sensitive
equivalences to copy propagate into dominated subgraphs, but in CSE.  I'm
increasingly of the opinion that the equivalences DOM finds should be used for
simplification only, not for copy propagation.  That opinion applies to CSE as
well.
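
To illustrate (a minimal hypothetical sketch, not the testcase attached to
this PR): inside the block dominated by an a == b test, the pass records the
equivalence and copy propagates one register for the other, which drags that
register's live range into the subgraph and adds conflicts for the allocator:

  int f (int a, int b, int c)
  {
    if (a == b)
      {
        /* The use of 'a' here can be rewritten to 'b' via the
           conditional equivalence, so 'b' now stays live through
           this subgraph and conflicts with everything in it.  */
        c += a;            /* becomes: c += b; */
      }
    return c + a;
  }

Using the equivalence only for simplification (e.g. folding a - b to 0 under
the guard) leaves the live ranges as written.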

I'm not sure if we can put the pieces in place for gcc-6, but I think that's
the direction we ought to be going.

The alternative would be to do some kind of live-range splitting.  What we'd
want to know is whether we have a context-sensitive equivalence and whether
splitting the range in the dominated subgraph would result in a graph that is
more easily/better colorable.  In this case the subgraph creates all the
conflicts, so it's an obvious split point, but I'm not sure how easily we
could generalize that.
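
At the source level such a split might look roughly like this (my sketch, not
an existing pass): introduce a fresh copy at the entry to the dominated
subgraph so the conflicts created there attach to a short range the allocator
can color on its own:

  int f (int a, int b, int c)
  {
    if (a == b)
      {
        int b_local = b;   /* hypothetical split point: 'b_local' is
                              live only inside the dominated subgraph,
                              so the new conflicts stay confined there.  */
        c += b_local;
      }
    return c + a;
  }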

Either way, I don't think this should be a release-blocking issue.  Moving to
P2, but keeping the gcc-6 target.
