This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: Does gcc automatically lower optimization level for very large routines?


On Fri, Dec 20, 2019 at 02:57:57AM +0100, Dmitry Mikushin wrote:
> Trying to plan memory consumption ahead of work contradicts the nature
> of graph traversal. Estimation may work very well for something simple
> like linear or log-linear behavior.

Almost everything we do is (almost) linear.

> But many compiler algorithms are known
> to be polynomial or exponential

Many?  There are a few (register allocation is a well-known example),
but anything more than almost linear is quite easy to make blow up on
large inputs.  It is also not super hard in most cases to make things
linear; it just needs careful attention.

> (or even worse in case of bugs).

Well, sure, if there is a bug *anything* can go wrong ;-)


Segher

