This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: Inlining and estimate_num_insns


Steven Bosscher wrote:
On Sunday 27 February 2005 14:58, Jan Hubicka wrote:

is a problem for a few people, but if you look at the performance
numbers of things like SPEC, CSiBE, and MySQL, there doesn't appear
to be any performance problem.  But your patch will probably still
blow compile time through the roof for those applications as well.

I don't know. But I also think our inlining limits are way too high, or at least unnecessarily high with the patch applied. I'll try to find some more representative packages to test - any hints? In the meantime, would it be OK to apply the patch to mainline so the automatic testers can run some tests, and then maybe revert it afterwards if we do not like it? I'll treat Jan's previous approval as withdrawn for now.

Well, I still think that the metric without artificial assignments is more realistic.


And you know this how?  When I last toyed with this, I compared the
actual number of RTL insns generated for a function with the estimate
we made.  I have not seen a comparison of that so far.  Maybe we are
now way underestimating the sizes.

I have done some measurements with libiberty on ia32, comparing -Os text sizes to the estimated insns before and after the patch on 4.0. You can see estimate-vs-size graphs at

  http://www.tat.physik.uni-tuebingen.de/~rguenth/gcc/sz.before.png
  http://www.tat.physik.uni-tuebingen.de/~rguenth/gcc/sz.after.png

Both graphs more or less fit an a*x + b line, which is good. Before the patch, fitting results in

  a =  1.4144 +/- 0.01943 (1.373%)
  b = 10.7692 +/- 3.092   (28.71%)

and after the patch we get

  a = 1.9384  +/- 0.03411 (1.76%)
  b = 9.07103 +/- 3.96    (43.66%)

which may hint at the original 4.0 being "better", or may hint at nothing, as the sample size is only the 163 functions of libiberty.
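For reference, here is a minimal sketch of the kind of ordinary least-squares fit behind the a/b numbers above. This is just the fit itself, not GCC code, and the sample data in main() is made up for illustration; the real input would be the (estimate, text size) pairs for the 163 libiberty functions.

#include <stdio.h>

/* Ordinary least-squares fit of size ~= a * estimate + b over n
   (estimate, size) samples -- the same kind of fit that produced the
   a/b numbers quoted above.  */
static void
fit_line (const double *x, const double *y, int n, double *a, double *b)
{
  double sx = 0, sy = 0, sxx = 0, sxy = 0;
  int i;

  for (i = 0; i < n; i++)
    {
      sx += x[i];
      sy += y[i];
      sxx += x[i] * x[i];
      sxy += x[i] * y[i];
    }
  *a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  *b = (sy - *a * sx) / n;
}

int
main (void)
{
  /* estimate_num_insns output vs. -Os text size; hypothetical values,
     for illustration only.  */
  double est[]  = { 10, 25, 40, 80, 150 };
  double size[] = { 24, 46, 66, 122, 220 };
  double a, b;

  fit_line (est, size, 5, &a, &b);
  printf ("a = %g, b = %g\n", a, b);
  return 0;
}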

One could now start varying INSNS_PER_CALL and see if that changes
anything.  Or introduce another parameter that accounts for the
overhead of having a function separate rather than inlined (i.e. we
currently estimate the empty function at size zero, but it really is
at least a ret instruction).
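To make that concrete, here is a sketch of where such an overhead parameter could enter a size-based cost model. This is not GCC's actual estimate_num_insns; the constant values and the FUNCTION_OVERHEAD_INSNS name are hypothetical.

/* Sketch only -- not GCC's estimate_num_insns.  It illustrates charging
   a fixed overhead for keeping a function out of line, so an "empty"
   function is no longer estimated at size zero.  */

#define INSNS_PER_CALL           10  /* cost charged for a call site */
#define FUNCTION_OVERHEAD_INSNS   1  /* at least the ret instruction */

/* Estimated out-of-line size of a function whose body was counted
   as body_insns statements.  */
static int
estimated_function_size (int body_insns)
{
  return body_insns + FUNCTION_OVERHEAD_INSNS;
}

/* Estimated code-size growth from inlining that body at one call
   site: we pay for the body again, but save the call overhead.  */
static int
estimated_growth_per_call (int body_insns)
{
  return body_insns - INSNS_PER_CALL;
}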

Richard.

