This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug tree-optimization/71361] [7 Regression] Changes in ivopts caused perf regression on x86
- From: "amker at gcc dot gnu.org" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: Wed, 01 Jun 2016 07:59:25 +0000
- Subject: [Bug tree-optimization/71361] [7 Regression] Changes in ivopts caused perf regression on x86
- Auto-submitted: auto-generated
- References: <bug-71361-4 at http dot gcc dot gnu dot org/bugzilla/>
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71361
amker at gcc dot gnu.org changed:
           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |amker at gcc dot gnu.org
--- Comment #1 from amker at gcc dot gnu.org ---
It doesn't look like a regression from the dump; I suspect it's due to how GCC
handles symbols (arr_1/arr_2) in m32 PIE code. I will take a look.
BTW, the patch itself is right; it again triggers a cost-model issue in which a
wrong/inaccurate cost happens to give a better result. I am experimenting with
rewriting the whole cost-computation part.