This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: Designs for better debug info in GCC


Richard Guenther wrote:
On Nov 22, 2007 8:22 PM, Frank Ch. Eigler <fche@redhat.com> wrote:
Mark Mitchell <mark@codesourcery.com> writes:

[...]
     Who is "we"?  What better debugging are GCC users demanding?  What
debugging difficulties are they experiencing?  Who is that set of users?
What functional changes would improve those cases?  What is the cost of
those improvements in complexity, maintainability, compile time, object
file size, GDB start-up time, etc.?
That's what I'm asking.  First and foremost, I want to know what,
concretely, Alexandre is trying to achieve, beyond "better debugging
info for optimized code".  Until we understand that, I don't see how we
can sensibly debate any methods of implementation, possible costs, etc.
It may be belabouring the obvious, but GCC users do not want to
have to compile with "-O0 -g" just to debug during development (or
during crash analysis *after deployment*!).  Developers would like to
be able to place breakpoints anywhere by reference to the source code,
and would like to access any variables logically present there.
Developers will accept that optimized code will by its nature make
some of these fuzzy, but incorrect data must be, and incomplete data
should be, minimized.
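
As a concrete illustration, here is a minimal C sketch of the kind of
trouble meant here (the file name, variable names, and flags are only
an example, not taken from this thread):

/* debug-demo.c -- illustrative sketch only.
 *
 * Built with something like:  gcc -O2 -g debug-demo.c
 * Under optimization, `scale' is likely folded into the multiply, so a
 * debugger may report it as <optimized out>, and a breakpoint placed on
 * the line computing `scaled' may not stop where the source suggests,
 * because the statement has been combined with its neighbours.
 */
#include <stdio.h>

int
main (void)
{
  int scale = 3;                /* may have no run-time location at -O2 */
  int value = 14;
  int scaled = value * scale;   /* a breakpoint here may bind elsewhere */

  printf ("%d\n", scaled);
  return 0;
}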

That they put up with the status quo at all is a historical artifact
of being told so long not to expect any better.

Since it is impossible to do both (without serious overhead), you have to live with either possibly incorrect but elaborate, or incomplete but correct, debug information for optimized code. Choose one ;)
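
A small C sketch of that choice (the code and the commentary are
illustrative only, not a description of either patch series):

/* tradeoff-demo.c -- illustrative sketch only.
 *
 * Compiled with  gcc -O2 -g , the store to `tmp' below is dead and gets
 * deleted.  The debug info then has two options at the point of the call:
 *
 *   - elaborate but possibly incorrect: keep saying `tmp' lives in some
 *     register or stack slot, even though nothing ever put x + 1 there;
 *     the debugger prints a plausible-looking but wrong value.
 *
 *   - incomplete but correct: give `tmp' no location here, so the
 *     debugger honestly prints <optimized out>.
 */
extern int compute (int);

int
wrapper (int x)
{
  int tmp = x + 1;      /* dead store, removed by the optimizers */
  return compute (x);   /* what should a debugger show for tmp?  */
}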

I don't think you can use the phrase "serious overhead" without rather extensive statistics. To me, -O1 should be reasonably debuggable, as it always was back in earlier gcc days. It is nice that -O1 is somewhat more efficient than it was in those earlier days, but not nice enough to warrant a severe regression in debug capabilities. To me, anyone who is so concerned about performance as to really appreciate this difference will likely be using -O2 anyway.

The trouble is that we have set as the criterion for -O1 all the
optimizations that are reasonably cheap in compile time. I think
it is essential that there be an optimization level that means:

All the optimizations that are reasonably cheap to implement
and that do not impact debugging information significantly
(except I would say it is OK to impact the ability to change
variables).

For me it would be fine for -O1 to mean that, but if there is a
consensus that an extra level (-Od or whatever) is worthwhile,
that's fine by me.
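
On the "ability to change variables" caveat, a hedged example of why
that gets lost under optimization (again only a sketch, with invented
names):

/* set-var-demo.c -- illustrative sketch only.
 *
 * At -O1 and above the constant 10 is typically propagated straight into
 * the loop bound, so even if the debugger still shows `limit' and lets
 * you do  set var limit = 20  at a breakpoint, the compiled code never
 * re-reads `limit' and the program's behaviour does not change.
 */
#include <stdio.h>

int
main (void)
{
  int limit = 10;       /* candidate for constant propagation */
  int total = 0;
  int i;

  for (i = 0; i < limit; i++)   /* comparison may use the literal 10 */
    total += i;

  printf ("%d\n", total);
  return 0;
}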

Working on the Ada front end, I find that it used to be that I could
always use -O1: OK for debugging, and OK for performance.  Now I have
to switch between -O0 for debugging and -O2 for performance (for me,
the debuggability of -O1 and -O2 is equivalent in this context; both
are hopeless, so I might as well use -O2).  So I no longer use -O1 at
all (the extra compile time for -O2 is negligible on my fast notebook).


What we (Matz and I) are trying to do is provide elaborate debug information, with the chance of wrong (I'd call it superfluous, or extra) debug information. Alexandre seems to aim at the world-domination solution (with the serious overhead in terms of implementation and verbosity).

Richard.

