This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: PR 23551: why should we coalesce inlined variables?
On 7/8/07, Alexandre Oliva <aoliva@redhat.com> wrote:
> Richard writes:
>> I propose to revert this patch for now.
> I agree. I think the patch should be reverted as the benefit does
> not justify the cost.
> If we want to privilege memory use over debug information, I guess
> this patch is the way to go. Any privileging of non-inlined-function
> variables over inlined-function variables is arbitrary, since
> decisions on what to inline are mostly arbitrary nowadays, and they
> change from one compiler release to another, and even between targets.
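To be concrete about what is being discussed, here is a minimal,
hypothetical example (not taken from the PR): whether `scale' is still
printable under -O2 -g can depend entirely on whether the compiler chose
to inline compute(), and that choice can differ between releases and
targets.

  /* Hypothetical illustration only, not a testcase from the PR.  */
  static int compute (int x)
  {
    int scale = x * 3;  /* printable when compute() gets its own frame */
    return scale + 1;   /* may show as "<optimized out>" once inlined  */
  }

  int main (void)
  {
    /* With -O2 -g, whether the debugger can still print `scale' in the
       inlined copy of compute() depends on the inlining decision and on
       how the variable was represented afterwards.  */
    return compute (14);
  }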
Why do you think they are arbitrary? If they were arbitrary, we would
see weird results on benchmarks like tramp3d (which depends on
inlining). There is a heuristic, and it is well defined. And yes, that
heuristic is tuned every release, because the compiler is still
evolving and other things are going on, like the early optimization
work, which requires the heuristic to be retuned. Yes, the cost matrix
is target dependent, because different targets have different move
costs and different calling costs. Now if you don't like that, then say
so instead of calling it arbitrary, because it is obviously not
arbitrary. I guess you are mixing up two different issues here. If we
did not change the inlining heuristic between releases, people would
complain that we are getting slower. I have not seen that many people
complain about debugging info for optimized builds getting worse
between releases.
> I don't think it makes sense to try to offer useful meta-information
> for some compilations of a function while not offering it for other
> similar compilations of the same function.
Now the question comes down to this: how many users have complained
about memory usage inside GCC, compared to how many have complained
about debugging info for optimized code? Judging by the bug reports,
the former draws more complaints. You know why that is: people would
rather have their code compile at all. And most of them will drop to a
lower optimization level while debugging if they can't debug their code
at the current optimization level. And if they can't, they complain
about that; debugging templates, etc. should all be a higher priority
than debugging optimized code, really.
Now, tools that use debugging info to look at values at specific spots
should know that they can't always get the most accurate view with
optimized code (they never will be able to). In fact, higher-level loop
optimizers can do weird things: rotate the loop nest, twist it, change
it so much that even the user does not know what the compiler did. This
is true of any high-level optimization; we can only do so much for
debugging info at higher levels of optimization. Sometimes the code no
longer resembles anything the user wrote, so how do you think the
debugging info will look then? Yes, we can help this one case, but
others are just going to be out of luck.
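As a hypothetical example of the kind of loop nest meant here (again,
not taken from the PR), a transform such as loop interchange, blocking
or vectorization at -O3 can restructure the code so thoroughly that
single-stepping no longer follows the source lines:

  #define N 256
  static double a[N][N], b[N][N];

  void copy_transposed (void)
  {
    /* At -O3 this nest is a candidate for interchange, blocking or
       vectorization; afterwards there may be no instruction that
       corresponds line-by-line to the assignment below, and `i' and
       `j' may have no location the debugger can report.  */
    for (int i = 0; i < N; i++)
      for (int j = 0; j < N; j++)
        a[i][j] = b[j][i];
  }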
Note that any patch in this area should come with a testcase or two
showing where it helps, in either debugging or optimization. I think we
should reject any new patch (except one that reverts the current one)
that does not include a testcase.
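For the debug-info side, such a testcase might look something like the
following sketch (hypothetical, not an existing testsuite entry): build
it with -O2 -g and check in the debugger that `val' in the inlined copy
of use_val() is still printable rather than reported as optimized out.

  static inline int use_val (int val)
  {
    return val + 42;  /* break here; `print val' should still work */
  }

  int main (void)
  {
    return use_val (1) - 43;
  }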
-- Pinski