This is the mail archive of the mailing list for the GCC project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: gcc compile-time performance

> Here's another oddity.
> Why is predict.c using the target floating-point emulation routines to do 
> its branch probability calculations?  There must be a faster way of doing 
> this that is good enough for the level of estimation needed here -- the 
> probabilities are at best approximate.
> When profiling a compilation of combine.c (a function with no floating 
> point code), I was amazed to find that we spend 2.5% of the total 
> compilation time in earith() and its children.

This is curious; I was benchmarking it on a similar testcase before sending
the patch, and about 0.5% of total compilation time was spent in the
branch probability pass...
> Surely either native floating-point code, or even some simple fixed-point 

Native floating point code is a problem, unfortunately, since on i386 you get
different results in optimized and non-optimized builds, breaking bootstrap.
Fixed point code is a problem because we are interested in comparisons relative
to the highest frequency in the program.  This may be the entry block for a
tree-structured function, but it may also be an inner loop, and at high loop
nests the difference between the two exceeds the 2^30 range we can afford
in integral arithmetic.

This has been discussed (originally I used short arithmetic) and we found
no other solution.  One alternative that may be interesting is logarithmic
computation, but we need additions to work reasonably well too.

Of course I could implement a simpler floating point emulator for this special
purpose, but that is ugly as well.
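Such a special-purpose emulator could plausibly be as small as the following sketch (hypothetical names and representation, not an actual GCC interface): an unsigned mantissa plus a binary exponent, supporting just multiply and compare, which covers most of what frequency scaling needs.

```c
#include <stdint.h>

typedef struct { uint32_t mant; int exp; } sfreal;  /* value = mant * 2^exp */

/* Shift the mantissa into [2^30, 2^31) so comparisons can use the exponent. */
static sfreal sf_normalize(sfreal a)
{
    if (a.mant == 0) { a.exp = 0; return a; }
    while (a.mant < (1u << 30)) { a.mant <<= 1; a.exp--; }
    return a;
}

sfreal sf_mul(sfreal a, sfreal b)
{
    uint64_t m = (uint64_t) a.mant * b.mant;
    sfreal r;
    r.exp = a.exp + b.exp;
    while (m >= (1ull << 31)) { m >>= 1; r.exp++; }  /* round toward zero */
    r.mant = (uint32_t) m;
    return sf_normalize(r);
}

int sf_cmp(sfreal a, sfreal b)  /* returns -1, 0, or 1 */
{
    a = sf_normalize(a); b = sf_normalize(b);
    if (a.mant == 0 || b.mant == 0)
        return (a.mant != 0) - (b.mant != 0);
    if (a.exp != b.exp) return a.exp > b.exp ? 1 : -1;
    if (a.mant != b.mant) return a.mant > b.mant ? 1 : -1;
    return 0;
}
```

Rounding on multiply loses low bits, but for branch probabilities that are at best approximate anyway, that is presumably acceptable.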


> code, would be good enough here.
> R.
