This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: compiling very large functions.


On 11/5/06, Richard Guenther <richard.guenther@gmail.com> wrote:
> I lean toward leaving the numbers static, even if they do increase as
> time goes by.  Otherwise you get two effects: the first optimizations
> get to be run more, and you get the weird non-linear step functions
> where small changes in some upstream function affect the downstream.

OK, I guess we can easily flag each function as having
 - many BBs
 - big BBs
 - complex CFG (many edges)
and set these flags at CFG construction time during the lowering phase
(which is after the early inlining pass, I believe); see the rough
sketch below for what I have in mind.
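
Something like the following standalone sketch, where the types and
thresholds are completely made up for illustration and are not the
real GCC data structures:

  /* Compute the size flags once, when the CFG is built, and cache
     them on the function.  Types and limits are invented here.  */

  #include <stdbool.h>
  #include <stddef.h>

  struct bb { size_t n_stmts; };

  struct cfg_info {
    struct bb *bbs;
    size_t n_bbs;
    size_t n_edges;
  };

  struct size_flags {
    bool many_bbs;     /* lots of basic blocks */
    bool big_bbs;      /* at least one very large basic block */
    bool complex_cfg;  /* many edges relative to blocks */
  };

  #define MANY_BBS_THRESHOLD   5000
  #define BIG_BB_THRESHOLD     1000
  #define EDGE_RATIO_THRESHOLD 4

  static struct size_flags
  compute_size_flags (const struct cfg_info *cfg)
  {
    struct size_flags f = { false, false, false };
    size_t i;

    f.many_bbs = cfg->n_bbs > MANY_BBS_THRESHOLD;
    f.complex_cfg = cfg->n_edges > EDGE_RATIO_THRESHOLD * cfg->n_bbs;

    for (i = 0; i < cfg->n_bbs; i++)
      if (cfg->bbs[i].n_stmts > BIG_BB_THRESHOLD)
        {
          f.big_bbs = true;
          break;
        }

    return f;
  }

Later passes would then look only at the cached flags, which is what
keeps the behaviour stable even as the CFG changes underneath them.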

IMHO any CFG-based criteria should use dynamic numbers, simply because
they are available at all times.  Large BBs are a more interesting
case, because in general they don't get smaller during optimization.
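
To make the contrast concrete, a gate based on dynamic numbers just
looks at the current counts each time it runs, rather than at flags
cached at lowering time.  A minimal standalone sketch, again with an
invented cfg_info type and made-up limits:

  #include <stdbool.h>
  #include <stddef.h>

  struct cfg_info { size_t n_bbs; size_t n_edges; };

  static bool
  gate_expensive_pass (const struct cfg_info *cfg)
  {
    /* Query the numbers as they are *now*; earlier passes may have
       split blocks, removed unreachable code, and so on.  */
    return cfg->n_bbs <= 5000 && cfg->n_edges <= 20000;
  }

The downside, as Richard notes, is that a small upstream change can
flip such a gate; the upside is that the numbers are always there and
always current.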

What Kenny suggests here is not new, BTW.  I know that gcse already
disables itself on very large functions (see
gcse.c:is_too_expensive()), and probably some other passes do this as
well. A grep for OPT_Wdisabled_optimization *should* show all the
places where we throttle or disable passes, but it appears that
warnings have not been added consistently when someone throttled a
pass.
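
The shape of such a throttle is roughly the following.  This is a
simplified standalone sketch with invented limits, not the actual code
in gcse.c, and the fprintf stands in for a real
warning (OPT_Wdisabled_optimization, ...) call:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  struct func_stats {
    const char *name;
    size_t n_bbs;
    size_t n_edges;
    size_t n_pseudos;   /* number of pseudo registers */
  };

  static bool
  is_too_expensive (const struct func_stats *f, const char *pass)
  {
    /* Made-up limits; real ones would be tunable via --param.  */
    if (f->n_bbs > 4000
        || f->n_edges > 8 * f->n_bbs
        || f->n_pseudos > 100000)
      {
        fprintf (stderr,
                 "warning: %s disabled in %s due to function size "
                 "[-Wdisabled-optimization]\n", pass, f->name);
        return true;
      }
    return false;
  }

The point of OPT_Wdisabled_optimization is that every such early-out
should tell the user about itself, which today it does not always do.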

AFAIK none of the tree optimizers disables itself, but perhaps some of
them should.  The obvious candidates would be the passes that require
recomputing alias analysis, and those that don't update SSA info on
the fly (i.e. that require update_ssa, which is a horrible
compile-time hog).

Gr.
Steven

