
Re: gcc/resource.c mark_target_live_regs Question


<<But I'm most definitely *not* in favor of achieving compile-time
performance by using quadratic (or worse) algorithms and then putting
artificial limiters on the amount of input data considered.>>

Why not? In some cases this may be the optimal approach, and I see no
reason to take a general position against it.
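To make the pattern concrete, here is a minimal C sketch (hypothetical
code; the names, types, and threshold are mine, not GCC's): a precise
but quadratic pass runs whenever the input fits under an explicit
limit, with a linear approximation as the fallback.

/* Sketch of the pattern under discussion: an O(n^2) pass guarded
   by an artificial limiter, with a linear fallback.  Illustrative
   only; not an actual GCC pass.  */

#include <stddef.h>

#define MAX_PRECISE_INSNS 500          /* the "artificial limiter" */

struct insn { int uid; };              /* stand-in for a real insn type */

/* O(n^2): examine every pair of insns -- better code, but costly.  */
static void
optimize_precisely (struct insn *insns, size_t n)
{
  for (size_t i = 0; i < n; i++)
    for (size_t j = i + 1; j < n; j++)
      {
        /* ... pairwise analysis of insns[i] and insns[j] ... */
      }
}

/* O(n): one pass with coarse assumptions -- always cheap.  */
static void
optimize_conservatively (struct insn *insns, size_t n)
{
  for (size_t i = 0; i < n; i++)
    {
      /* ... per-insn approximation using insns[i] ... */
    }
}

void
optimize_block (struct insn *insns, size_t n)
{
  if (n <= MAX_PRECISE_INSNS)
    optimize_precisely (insns, n);      /* small block: afford O(n^2) */
  else
    optimize_conservatively (insns, n); /* big block: stay linear */
}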

We know perfectly well that generating optimal code is NP-complete,
and thus (unless P = NP) requires super-polynomial algorithms in the
worst case. The issue is how to produce near-optimal code with
reasonable compile-time performance, and achieving that balance
requires many approaches.

It certainly seems reasonable to tolerate some non-linearity in the
handling of basic blocks, but then to limit the size of the blocks we
hand to such algorithms. The point is that we may get better code
overall by using them, even if we have to break up big blocks -- and
even for the pieces of those blocks themselves, as the sketch below
suggests.
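Building on the sketch above (still hypothetical), splitting an
oversized block into fixed-size windows and running the quadratic pass
on each window keeps the total cost at (n/k) * O(k^2) = O(n*k) for
window size k -- linear in the block length -- while every window
still benefits from the stronger algorithm:

/* Alternative to a wholesale fallback: run the O(k^2) pass over
   fixed windows of a large block.  Uses optimize_precisely from
   the sketch above; names and sizes remain illustrative.  */

#define WINDOW_SIZE 500

void
optimize_large_block (struct insn *insns, size_t n)
{
  for (size_t start = 0; start < n; start += WINDOW_SIZE)
    {
      size_t len = n - start < WINDOW_SIZE ? n - start : WINDOW_SIZE;
      optimize_precisely (insns + start, len);
    }
}

The trade-off is that optimizations spanning a window boundary are
lost, which is exactly the price of breaking up big blocks.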

