This is the mail archive of the
gcc@gcc.gnu.org
mailing list for the GCC project.
Re: gcc/resource.c mark_target_live_regs Question
- To: lucier at math dot purdue dot edu, mark at codesourcery dot com
- Subject: Re: gcc/resource.c mark_target_live_regs Question
- From: dewar at gnat dot com
- Date: Sun, 18 Feb 2001 09:35:08 -0500 (EST)
- Cc: gcc at gcc dot gnu dot org
<<But I'm most definitely *not* in favor of achieving compile-time
performance by using quadratic (or worse) algorithms and then putting
artificial limiters on the amount of input data considered.
>>
Why not? In some cases, this may be the optimal approach. I don't see
any reason for taking a general position against this approach.
We know perfectly well that generation of optimal code is NP-complete,
and thus, as far as anyone knows, requires exponential algorithms. The
issue is how to produce reasonably optimal code with reasonable
compile-time performance, and achieving that balance requires many
approaches.
It certainly seems reasonable to tolerate some non-linearity in the
handling of basic blocks, and then limit the size of the basic blocks
considered. The point is that such algorithms may produce better code
overall, even if large blocks have to be broken up, and even for those
large blocks themselves.