This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.



[Bug tree-optimization/71361] [7 Regression] Changes in ivopts caused perf regression on x86


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71361

amker at gcc dot gnu.org changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |amker at gcc dot gnu.org

--- Comment #1 from amker at gcc dot gnu.org ---
It doesn't look like a regression from the dump; I suspect it's because of how GCC
handles symbols (arr_1/arr_2) in -m32 PIE code.  I will have a look.
BTW, the patch itself is right; it triggers the cost model issue again, in which a
wrong/inaccurate cost gives a better result.  I am doing experiments rewriting
the whole cost computation part.
