
Designs for better debug info in GCC (was: Re: [vta] don't let debug insns get in the way of simple vect reduction)


On Nov  5, 2007, "Richard Guenther" <richard.guenther@gmail.com> wrote:

> On 11/5/07, Alexandre Oliva <aoliva@redhat.com> wrote:
>> libgfortran had some vectorization cases that wouldn't be applied in
>> the presence of debug stmts referencing the same variables.  Fixed
>> with the patch below, to be installed shortly.

> (I'm just picking a random patch of this kind for this mail)

> I see you have to touch lots of places to teach them about debug
> insns.

Yes.  There's no escaping that.  There are two options:

- keep them separate, and modify the code that manipulates the IL so
  as to update them as needed, or

- keep them in the IL, and modify the code to disregard them as
  needed.

I've pondered both alternatives, and decided that the latter was the
only testable path.  If we had a reliable debug information tester, we
could proceed incrementally with the first alternative; it might be
viable, but I don't really see that it would make things any simpler.
If anything, you'd need to introduce a lot of new code to manipulate
the separate representation, unless it was very similar in structure
to the existing IL, and in any case you'd have to add code all over
the place to keep it up to date.

With the approach I've taken, there's something that's testable: if
enabling the annotations changes the generated code, something needs
to be fixed.  Besides, the information is encoded in a form that most
compilation passes already handle automatically, so updates for
pretty much every transformation are already in place, without any
additional code.

The only additional code is what's needed to detect missing updates
and to ensure the debug notes don't interfere with code generation.
I've managed to implement these such that they don't take any
additional memory unless you actually request the additional debug
information, and such that they almost never incur any compile-time
performance hit.  That's one of the considerations that guided placing
DEBUG_INSN right next to the other INSNs: INSN_P is still optimized to
a range test (as before, just with a different boundary), and INSN_P
&& !DEBUG_INSN_P is optimized to the original range test.  In most
other places, it's just yet another entry in a switch table, so again
it's zero-cost in terms of performance.  And at the points where it
would be more costly, the complex processing is guarded by a test of
whether the feature that requires it is enabled.  Hard to beat that.
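
To make the range-test point concrete, here's a stand-alone sketch.
The names and the ordering below are made up for illustration; they
are not the actual rtl codes or macros, but they show the shape of
the trick:

enum insn_kind
{
  KIND_NOTE, KIND_BARRIER, KIND_CODE_LABEL,  /* non-insns */
  KIND_DEBUG_INSN,                           /* new, at one end of the range */
  KIND_INSN, KIND_JUMP_INSN, KIND_CALL_INSN  /* pre-existing insns */
};

/* "Is this a debug annotation?"  */
#define DEBUG_INSN_P(k)    ((k) == KIND_DEBUG_INSN)

/* "Any insn, debug included": still a single range comparison, just
   with the lower bound moved one slot earlier.  */
#define ANY_INSN_P(k)      ((k) >= KIND_DEBUG_INSN && (k) <= KIND_CALL_INSN)

/* "Any non-debug insn": exactly the original range comparison, which
   is what ANY_INSN_P (k) && !DEBUG_INSN_P (k) folds back into.  */
#define NONDEBUG_INSN_P(k) ((k) >= KIND_INSN && (k) <= KIND_CALL_INSN)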

> I believe in the long long thread earlier this year people suggested
> to use a on-the-side representation for the extra information.

Yes.  And I thought I'd already made it clear why this on-the-side
representation won't get you as far as I needed to go.  Basically, it
leads to a situation in which either you can't represent correct
debug information at all, or you end up adding annotations to the
instruction flow anyway, at which point you have to deal with them
throughout the compiler or give up correct debug information.

Since one of the requirements I was given was that debug information
be correct (as in, if I don't know where a variable is, debug
information must say so, rather than say the variable is somewhere it
really isn't), going without additional annotations just wouldn't
work.  Therefore, I figured I'd have to bite the bullet and take the
longer path, even though I don't dispute that it is possible to
achieve many improvements with the simpler approach.

However, eventually the simpler approach runs into a wall that can't
be gotten past, and I couldn't afford to get to that point and only
then backtrack to the complete approach.

> With the different approach I and Matz started (and to which we
> didn't yet spend enough time to get debug information actually
> output - but I hope we'll get there soon), on the tree level the
> extra information is stored in a bitmap per SSA_NAME (where
> necessary).

This will fail on a very fundamental level.  Consider code such as:

void f(int x, int y) {
  int c;
  /* other vars */

  c = x;
  do_something_with(c, ...); // doesn't touch x or y

  c = y;
  do_something_else_with(c, ...); // doesn't touch x or y
}

where the do_something_*with calls stand for complex computations, be
they explicit code, macros or inlined functions.

This can (and should) be trivially optimized to:

void f(int x, int y) {
  /* other vars */

  do_something_with(x, ...); // doesn't touch x or y

  do_something_else_with(y, ...); // doesn't touch x or y
}

But now, if I 'print c' in a debugger in the middle of one of the
do_something_*with expansions, what do I get?

With the approach I'm implementing, you should get x and y at the
appropriate points, even though variable c doesn't really exist any
more.
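
To make that concrete, after the optimization the IL would carry
something along these lines (the "# DEBUG c => ..." notation is meant
to be illustrative, roughly what the debug annotations amount to):

void f(int x, int y) {
  /* other vars */

  /* # DEBUG c => x -- annotation only: c would have held x here */
  do_something_with(x, ...); // doesn't touch x or y

  /* # DEBUG c => y -- annotation only: c would have held y here */
  do_something_else_with(y, ...); // doesn't touch x or y
}

The annotations generate no code; they just record, at each point,
what value c was supposed to hold, so that the debug info can later
say that c is to be found in x over the first region and in y over
the second.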

With your approach, what will you get?

There isn't any assignment to x or y you could hook your notes to.

Even if you were to set up side representations to model the
additional variables that end up mapped to the incoming arguments,
you'd have 'c' attached to both, and both at the entry point.  How
would you tell which applies where?

Sure, you could hand-wave that both assignments were effectively moved
to the entry point of the function, and that only the last one
prevails.  I guess this wouldn't be wrong per se.  But would it be the
best we could do for the users?

Say, if do_something_with is a loop, and you're monitoring some
condition that depends on c and other variables at a point in the
middle of an iteration, would you be happy if that didn't work because
the compiler told you 'c' evaluated to 'y' rather than 'x'?

Do you realize that the only way you could possibly make the above
work as expected by the user is by adding notes at the point of each
assignment?  And that, once you add such notes, they'd have to map
back to the value the variable was supposed to take on at that point,
and thus you'd have to keep them accurate as further optimizations
mess with that value?  E.g., when f is inlined into another function,
its x and y are certainly going to disappear, like they do now,
because of copy propagation, and then you'd have to update the notes
for the original assignments to c that you've already discarded.
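
For instance, say a caller does f(5, 7) and f gets inlined.  After
copy propagation x and y are gone entirely, but the notes can (and
must) be updated to keep saying what c was supposed to hold
(illustrative notation again):

  /* # DEBUG c => 5 */
  do_something_with(5, ...);

  /* # DEBUG c => 7 */
  do_something_else_with(7, ...);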

And, well, if you're going to have to add and update notes anyway, why
not just bite the bullet and use them all over?

Maybe to save on memory?  I guess that could be a good reason.  We
could indeed add a bitmap to gimple assignments indicating which user
variables, if any, they modify.  And then, if we move an assignment
out of place, we can drop the bitmap at the new location and replace
the original location with a note.
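
For concreteness, I mean something like this hypothetical layout (the
names are made up, just to have something to poke at):

/* Stand-in for GCC's bitmap type.  */
typedef struct bitmap_head *bitmap;

/* Hypothetical: every assignment statement carries one extra pointer,
   NULL for the (very common) assignments to compiler temporaries.  */
struct assign_stmt
{
  void *lhs, *rhs;   /* the assignment proper (stand-ins)            */
  bitmap user_vars;  /* uids of the user variables that conceptually
                        take on this value here, or NULL             */
};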

But this has a number of problems:

1. every single gimple assignment grows by one word, to hold the
pointer to the bitmap.  But most gimple assignments are to temporary
variables, and those don't need annotations.  Creating different
kinds of gimple assignments would make things quite complex, so I'd
rather not go down that path.  So you'd use a lot more memory, even
when the feature is not in use at all, and quite likely more memory
than adding separate notes for user assignments like I do.  And
that's not even counting the actual bitmaps.

2. this can't possibly handle assignments to parts of large
variables.  My current implementation only tracks gimple regs, but
there's no reason why it can't be easily extended to handle component
refs of largish variables that end up in registers rather than
memory, and other SRA-able variables, even when they aren't fully
scalarized.  How'd you handle this with a look-aside bitmap?  (See
the example right after this list.)  I guess you could generate uids
or even decls for other expressions, but it seems to me that this
would waste even more space, and make things even more complicated,
no?

3. the marked assignment doesn't (and can't possibly) denote the
correct point at which each of the variables in its bitmap was
assigned; it only marks the earliest such point for all of them.
Overlapping ranges and incorrect debug info are a direct consequence.
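
Here's the kind of assignment problem 2 is talking about (a
hypothetical example, but typical of what SRA deals with):

struct pt { int x, y, z; };

void g(struct pt p, int a)  /* p may well end up split across registers */
{
  p.y = a;  /* assigns a single component of p; a bitmap keyed on decl
               uids can only say "p changed", not which part of it, and
               inventing uids or decls for every component ref just
               moves the bloat and the bookkeeping elsewhere */
  /* ... */
}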

> On the RTL level we chose to extend the SET insn by a
> bitmap argument (referring to those bitmaps).

Same problems here.  The memory problem becomes even more critical:
even when compiling without debug info, you grow the most common RTX
element (one that, again, most often assigns to temporaries) by 33%,
and that's without counting the space for the actual bitmaps.

> I realize that the GCC development model does not really support
> development in the open or steering of technical approaches.
> Probably due to lack of time and interest.  Still I'd ask people to
> actually look at both approaches and give advice to us implementors.

+1

> (And IMHO, debug insns in the IL are the wrong way to go and I would
> be very unhappy seeing this code get in GCC - no personal offense
> intended)

No offense taken.  I hope I've shown why adding debug annotations to
the IL can't be avoided if we're to have any hope of getting correct
debug information throughout compiler transformations, and that the
design I've chosen to accomplish this is one that stands a reasonable
chance of being validated automatically.

I'm somewhat sick of seeing debug information being treated as a
second-class citizen, just because it doesn't cause the main program
to crash or to produce incorrect results.

The more people use monitoring tools that rely on standards-defined
debug information, the more important it is that such information be
reliable and accurate.  Otherwise, even if the program is compiled
into code that runs perfectly, the systems built out of this program,
its debug information, and the monitoring tools that consume that
information will fail just as severely as if we had generated
incorrect code for the program in the first place.

I believe that adding debug annotations to the instruction stream,
annotations that look like references to the appropriate values but
are subject to the condition that they must never alter the generated
code, is the most viable approach to reliable and accurate debug
information.

That said, I do realize it's a long path, and certainly much longer
than other approaches that could get you, say, 80% of the way there,
but no further than that and, worse, with no way for the compiler to
tell when it hasn't got there so as to inform the user about it.

I just hope having something that gets us 80% there won't become an
impediment to a solution that gets us 95% there while laying
foundations that will enable us to get to 100% over time.

Debug information correctness ought to be treated no differently from
code generation correctness.

Which is not to say that debug information needs to be absolutely
perfect and complete.  Having an optimization pass that improves code
while keeping its behavior correct is a good thing, even if the
optimization could be improved further; likewise, it's a good thing
to have debug information that reports the location of a variable as
unknown at some points and accurately at all others, even if the
debug info generator could be improved further so as to find out
where the variable is at some of the points where it lost track of
it.

But having the debug info generator report an incorrect location for a
variable is as bad as having the optimization pass change the meaning
of the program.

I believe that keeping debug annotations as part of the IL is the
simplest and most effective way to ensure that optimization passes
actually deal with them, rather than disregarding them as something
unimportant.  Besides, the way I've designed them, most passes handle
them in the very same fashion as they handle all other expressions;
only some need minor tweaks to avoid codegen changes when debug info
is being generated.

We could avoid the risk of codegen differences with -g by always
generating the annotations, even when not emitting debug info, and
discarding them only at the very end, when they could no longer
affect the generated code.  But then, if there are codegen changes
(most often missed optimizations), they'd be present with both -g and
-g0, which is undesirable.  Nevertheless, it's an option I've
considered adding, and it would be quite easy and helpful to
introduce for field tests of this new debug info infrastructure, so
that users aren't left out in the cold if their code works with
-O2 -g but not with -O2, or vice-versa.  I haven't added it yet, for
I've had no need for it, but it's a matter of minutes.  It shouldn't
be the default, though, for it would use more memory than needed for
-g0.

-- 
Alexandre Oliva         http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member         http://www.fsfla.org/
Red Hat Compiler Engineer   aoliva@{redhat.com, gcc.gnu.org}
Free Software Evangelist  oliva@{lsd.ic.unicamp.br, gnu.org}

