Some thoughts and questions about the data flow infrastructure
Richard Kenner
kenner@vlsi1.ultra.nyu.edu
Tue Feb 13 14:23:00 GMT 2007
> Regs_ever_live is the poster child of this. In theory regs_ever_live is
> easy, it is just the set of hard registers that are used. In practice
> this is a disaster to keep track of because it was only updated
> occasionally and its values are "randomly" changed by the backends in
> totally undocumented ways. Maintaining regs_ever_live requires a lot of
> special mechanism that slows down the incremental scanning.
The history here, by the way, is that it was originally very simple and just
supposed to provide a "quick and easy" way of having a conservative view of
which registers *weren't* ever used. So it was set when a register might
possibly be used. That was indeed easy.
But then people wanted to be able to know *for sure* which registers were
used, so mechanisms were added to clear it out when we knew a register
*wasn't* used, which added the complexity you mention.
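To make the distinction concrete, here is a minimal sketch of the two
semantics, not the actual GCC code; the names (ever_used, note_reg_use,
insn_uses_reg, N_HARD_REGS) are made up for illustration:

    #include <stdbool.h>
    #include <string.h>

    #define N_HARD_REGS 64            /* placeholder register count */

    static bool ever_used[N_HARD_REGS];

    /* Conservative view: set a bit whenever a register MIGHT be used,
       never clear it.  A clear bit guarantees the register was never
       used; a set bit guarantees nothing.  Maintenance is a one-line,
       monotone update.  */
    static void
    note_reg_use (unsigned regno)
    {
      ever_used[regno] = true;
    }

    /* Exact view: to know FOR SURE which registers are used, bits must
       also be cleared when uses disappear, which in practice means
       rescanning the insns and recomputing the set from scratch.
       insn_uses_reg stands in for whatever scan the pass performs.  */
    extern bool insn_uses_reg (void *insn, unsigned regno);

    static void
    recompute_exact_uses (void **insns, int n_insns)
    {
      memset (ever_used, 0, sizeof ever_used);
      for (int i = 0; i < n_insns; i++)
        for (unsigned r = 0; r < N_HARD_REGS; r++)
          if (insn_uses_reg (insns[i], r))
            ever_used[r] = true;
    }

The first update is cheap and local; the second is what drags in the
extra bookkeeping and rescanning complexity mentioned above.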
This is a problem with a lot of the ad-hoc structures used: they were
originally meant for one specific purpose, often used very locally, and were
reasonably well-designed for that purpose. But then they were used more
globally and/or for other purposes, for which they weren't quite so well
designed anymore, and nobody went to the trouble of changing them.
I strongly support a new, common infrastructure that will allow all of these
older structures to be replaced. But the history is important in my opinion
because it means that we need to think as generally as possible and to ensure
we come up with as broad a structure as possible, both to replace the current
structures and to support many other uses in the future. From what I
understand, the current mechanism does that, but I think it's important to
keep this criterion in mind when evaluating any possible "competitors".