This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: Does gcc automatically lower optimization level for very large routines?


Yes, much more. When you traverse a CFG, the analysis develops into a tree
(for example, a tree of uses). That is, every basic block can *recursively*
become the root of its own linear iteration over up to all basic blocks.
Sum those up and you get a polynomial expression. I don't insist that this
is the best way to do it, but it is often the way it is done.
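
To make the shape of that concrete, here is a minimal toy sketch (not GCC
code; the CFG representation below is made up just for illustration). The
per-block walk keeps no visited set, so starting it from every basic block
sums to quadratic work even on a plain chain of blocks:

#include <stdio.h>

#define N 4  /* number of basic blocks in this toy CFG */

struct bb {
    int index;
    int n_succs;
    struct bb *succs[2];
};

static long visits;  /* basic-block visits summed over all walks */

/* Walk everything reachable from BLOCK, bounded by DEPTH so the toy
   terminates even on cyclic CFGs.  No visited set is kept, which is
   what lets the total cost grow beyond linear.  */
static void walk_uses (struct bb *block, int depth)
{
  visits++;
  if (depth == 0)
    return;
  for (int i = 0; i < block->n_succs; i++)
    walk_uses (block->succs[i], depth - 1);
}

int main (void)
{
  struct bb blocks[N];

  /* Build a simple chain: bb0 -> bb1 -> bb2 -> bb3.  */
  for (int i = 0; i < N; i++)
    {
      blocks[i].index = i;
      blocks[i].n_succs = (i + 1 < N) ? 1 : 0;
      if (i + 1 < N)
        blocks[i].succs[0] = &blocks[i + 1];
    }

  /* Start an individual walk from every basic block: each walk is
     linear in what it can reach, but the sum over all roots is
     quadratic in N (here 4 + 3 + 2 + 1 visits).  */
  for (int i = 0; i < N; i++)
    walk_uses (&blocks[i], N);

  printf ("total visits: %ld (N = %d)\n", visits, N);
  return 0;
}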

On Fri, 20 Dec 2019 at 23:52, Segher Boessenkool
<segher@kernel.crashing.org> wrote:

> On Fri, Dec 20, 2019 at 02:57:57AM +0100, Dmitry Mikushin wrote:
> > Trying to plan memory consumption ahead of work contradicts the nature
> > of the graph traversal. Estimation may work very well for something
> > simple like linear or log-linear behavior.
>
> Almost everything we do is (almost) linear.
>
> > But many compiler algorithms are known
> > to be polynomial or exponential
>
> Many?  There are a few (register allocation is a well-known example),
> but anything more than almost linear is quite easy to make blow up.  It
> is also not super hard in most cases to make things linear, it just
> needs careful attention.
>
> > (or even worse in case of bugs).
>
> Well, sure, if there is a bug *anything* can go wrong ;-)
>
>
> Segher
>
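
For contrast, here is a minimal sketch (again not GCC code, and the same
made-up toy CFG as above) of the kind of careful attention that keeps the
walk linear: a single worklist pass with a visited bitmap, so each basic
block is processed at most once and the whole thing is O(blocks + edges):

#include <stdbool.h>
#include <stdio.h>

#define N 4  /* number of basic blocks in this toy CFG */

struct bb {
    int index;
    int n_succs;
    struct bb *succs[2];
};

int main (void)
{
  struct bb blocks[N];
  bool visited[N] = { false };
  struct bb *worklist[N];
  int sp = 0;
  long visits = 0;

  /* Same chain as before: bb0 -> bb1 -> bb2 -> bb3.  */
  for (int i = 0; i < N; i++)
    {
      blocks[i].index = i;
      blocks[i].n_succs = (i + 1 < N) ? 1 : 0;
      if (i + 1 < N)
        blocks[i].succs[0] = &blocks[i + 1];
    }

  /* One worklist walk from the entry block; the bitmap guarantees each
     block is pushed and processed at most once.  */
  worklist[sp++] = &blocks[0];
  visited[0] = true;
  while (sp > 0)
    {
      struct bb *b = worklist[--sp];
      visits++;
      for (int i = 0; i < b->n_succs; i++)
        if (!visited[b->succs[i]->index])
          {
            visited[b->succs[i]->index] = true;
            worklist[sp++] = b->succs[i];
          }
    }

  printf ("total visits: %ld (N = %d)\n", visits, N);
  return 0;
}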

