This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug tree-optimization/18704] [4.0 Regression] Inlining limits cause 340% performance regression
- From: "rguenth at tat dot physik dot uni-tuebingen dot de" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: 29 Nov 2004 11:04:59 -0000
- Subject: [Bug tree-optimization/18704] [4.0 Regression] Inlining limits cause 340% performance regression
- References: <20041128181553.18704.rguenth@tat.physik.uni-tuebingen.de>
- Reply-to: gcc-bugzilla at gcc dot gnu dot org
------- Additional Comments From rguenth at tat dot physik dot uni-tuebingen dot de 2004-11-29 11:04 -------
Looking at the 3.4 branch, the defaults for the relevant inlining parameters are
the same. So the difference in performance must be attributed to different
tree-node counting (or to differences in the accounting during inlining).
As we throttle inlining params if -Os is specified in opts.c:

  if (optimize_size)
    {
      /* Inlining of very small functions usually reduces total size.  */
      set_param_value ("max-inline-insns-single", 5);
      set_param_value ("max-inline-insns-auto", 5);
      flag_inline_functions = 1;
may I suggest throttling inline-unit-growth there, too (though it
shouldn't have an effect with such a small max-inline-insns-single), and
then providing the documented limit (150) for inline-unit-growth?
One may even argue that limiting overall unit growth is not important,
as growth is already limited by max-inline-insns-* and large-function-*.
Also, both inline-unit-growth and large-function-growth cause inlining
to stop abruptly at the threshold, leaving one with unbalanced inlining decisions.
Why were these (growth) limits invented? Were there some particular testcases
that broke down otherwise?
--
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18704