This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.



[Bug tree-optimization/33928] [4.3/4.4 Regression] 22% performance slowdown from 4.2.2 to 4.3/4.4.0 in floating-point code



------- Comment #36 from lucier at math dot purdue dot edu  2008-09-04 20:39 -------
I don't really understand the status of this bug.

Before 4.3.0, it was P1, and Mark said he'd "like to see us start to
explain these kinds of dramatic performance changes."

There was quite a bit of detective work that ended with "for some reason
gcc-4.3 transforms only _some_ instructions (line 708+ in _.085t.fre dump)
...".

Richard opined that it was an "alias partitioning problem", but Uros noted
that, for the original code (rather than the reduced testcase), expanding some
parameter to its maximum still doesn't fix the problem.

So (a) we don't know what the current code is doing wrong, and (b) we don't
know why 4.2 got it right.

So I don't think Mark got what he wanted; it's now P2, and with each release
the target release for fixing it gets pushed back.

I've been testing mainline on this bug sporadically, especially when an entry
in gcc-patches mentions some words that also appear on this PR, to see if it's
fixed.  I'm a bit concerned that the target of 4.3.* is becoming increasingly
out of reach, as changes committed to that branch seem to be more and more
conservative because it's a release branch.

I don't think the code for this bug is terribly atypical of machine-generated
code; it would be nice to be able to remove this performance regression.
Unfortunately, I'm in no position to do so.


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33928

