Pass Activity Log
The goal of the initial stages of this project is to provide feedback on how useful individual passes are to the compilation process. We currently run hundreds of passes all the time, and we don't really know how effective or necessary many of them are. We also don't know whether the cleanups they trigger are truly doing much. It would be nice to do some actual analysis and do something with the results.
Nothing is committed yet, but the work will take place on the 'pass-activity-log' branch.
For the moment, I'll just copy the text of my initial note here to give an idea of what this is about. I'll rewrite chunks of it as I work out more details and define it better.
- original note comments
- It would sure be nice to streamline our pass pipeline. One could spend the rest of one's life doing this by hand. The two biggest problems would seem to be:
- we have no idea whether a pass actually does much
- and if it did do something, we have no idea whether it was actually useful
So I was thinking that maybe we could modify passes to report what they did. When CHECKING_ENABLED is on (or something like that), every pass would report what it did, possibly to the pass manager. Initially I was thinking it might report something like:
- number of statements changed
- number of statements added
- number of statements deleted
- number of names added
- number of names deleted
And then a report is issued for the compilation listing every pass that ran and a summary of this data for each occurrence. Each of the TODO cleanups run by each pass should also have its data listed.
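To make the counters above concrete, here is a minimal sketch of what a per-pass activity record and per-compilation report might look like. This is hypothetical code, not anything in the tree; the names `pass_stats` and `activity_report` are invented for illustration.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical per-pass activity record; the fields mirror the
// counters listed above, not any real GCC structure.
struct pass_stats {
  int stmts_changed = 0;
  int stmts_added = 0;
  int stmts_deleted = 0;
  int names_added = 0;
  int names_deleted = 0;

  // A pass "did something" if any counter is nonzero.
  bool did_anything() const {
    return stmts_changed || stmts_added || stmts_deleted
           || names_added || names_deleted;
  }
};

// One report per compilation: every pass (and cleanup) occurrence,
// keyed by pass name, with its accumulated counters.
struct activity_report {
  std::map<std::string, pass_stats> per_pass;

  void record(const std::string &pass, const pass_stats &s) {
    pass_stats &t = per_pass[pass];
    t.stmts_changed += s.stmts_changed;
    t.stmts_added   += s.stmts_added;
    t.stmts_deleted += s.stmts_deleted;
    t.names_added   += s.names_added;
    t.names_deleted += s.names_deleted;
  }
};
```

Each TODO cleanup would simply record under its own key, so its activity shows up separately from the pass that requested it.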
Then we could run the compiler over a whack of testcases, accumulate all these reports, and generate some data on which passes and cleanups are not doing very much and are candidates for closer inspection.
One could then turn off one or more of these passes, run again, and see whether there appeared to be much impact on the other passes. A closer inspection may identify that perhaps the pass should be somewhere else in the pipeline, run only at -O3, completely eliminated, or modified in some way.
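Flagging candidates from the accumulated data could start out as crudely as thresholding total activity. A hypothetical helper, purely illustrative, assuming the per-testcase reports have already been summed into one total per pass:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Given total statement/name changes per pass accumulated over many
// testcases, return the passes whose total activity falls below a
// threshold -- candidates for closer inspection.
std::vector<std::string>
low_activity_passes(const std::map<std::string, long> &total_changes,
                    long threshold) {
  std::vector<std::string> candidates;
  for (const auto &p : total_changes)
    if (p.second < threshold)
      candidates.push_back(p.first);
  return candidates;
}
```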
There are also categories of optimizations:
- optimization performers - the workhorses that actually do useful things.
- optimization enablers - introduce situations which enable a later optimizer.
- cleanups - remove crud or undo enabler work that was not profitable.
And these should probably be treated differently.
Enablers tend to work in concert with optimizations, and sometimes also with cleanups. You need to look at the data for them together to see whether useful work was done. If the cleanup is usually undoing everything the enabler did, then the enabler isn't really enabling anything; it's just chewing cycles (or there is a flaw in the optimization). In any case, the group deserves a closer look if it isn't accomplishing much.
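The "cleanup undoes the enabler" check could be approximated from the counters alone. A hypothetical sketch (the function name and tolerance are made up), assuming we only compare statements the enabler added against statements its paired cleanup later deleted:

```cpp
#include <cassert>

// If a cleanup deletes roughly everything its paired enabler added,
// the enabler/cleanup pair accomplished little net work and deserves
// a closer look.  Purely illustrative.
bool group_accomplished_little(int enabler_stmts_added,
                               int cleanup_stmts_deleted,
                               double tolerance = 0.9) {
  // An enabler that added nothing did no enabling work at all.
  if (enabler_stmts_added == 0)
    return true;
  return (double) cleanup_stmts_deleted / enabler_stmts_added >= tolerance;
}
```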
It is also possible that a pass should only be run on a specific architecture or set of architectures. I see no reason why we shouldn't allow the pass pipeline to be tuned for specific architectures. Not everything that is good for a 32-register machine is good for one with 8 registers, and vice versa.
This could then be further extended into the RTL passes, and there are some other extensions that could be useful. It would be nice, for instance, if we could statically guess whether the runtime was affected, and by how much.
Many modifications that optimizations make aren't really going to be measurable by simply testing execution speed.
- The scheduler could perhaps spit out a summary of what it thinks the number of cycles through the predicted path, the key blocks, or all blocks would be.
- The loop optimizers could submit summary info about loops. Tied in with the scheduler info, we could guess whether an optimization affected the cycles in a loop by generating reports with and without the pass and comparing the cycle estimates of the core loops and main path.
- We can also compare code size based on the object code produced, and could work on the -Os pass pipeline as well.
- These reports would be useful for some of the automated pass-shuffling experiments as well, I would think.
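The with/without cycle comparison above could reduce to something very simple once the estimates exist. A hypothetical helper, just to show the shape of the comparison:

```cpp
#include <cassert>

// Compare the scheduler's cycle estimate for a loop body with and
// without a pass enabled.  A ratio > 1.0 suggests the pass helped,
// < 1.0 that it hurt.  Static guesswork only -- nothing is executed.
double estimated_speedup(long cycles_without_pass, long cycles_with_pass) {
  if (cycles_with_pass == 0)
    return 0.0;  // no estimate available
  return (double) cycles_without_pass / cycles_with_pass;
}
```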
And so on. Once properly set up, you could actually automate quite a bit of this and maybe get some very interesting data.
I think it would also be a good idea to set up a generic logging mechanism for this. There are other tasks within the compiler that a generic logging mechanism would be useful for. I've seen requests for optimizers to generate reports on what they did or didn't do and why, providing hints to the programmer about how to change their code to get better optimizations. I think we even had/have a branch for this sort of thing. It seems like a good opportunity to get something generic in place for future use.
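A generic mechanism could be little more than tagged messages fanned out to registered consumers (a dump-file writer, a report accumulator, a -fopt-info-style emitter, etc.). A minimal sketch, with all names invented for illustration; nothing like this exists in the tree yet:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Minimal generic logging channel: any client (a pass, a cleanup, an
// optimizer explaining a missed optimization) emits tagged messages,
// and any number of consumers register sinks to receive them.
class opt_log {
public:
  using sink = std::function<void (const std::string &origin,
                                   const std::string &message)>;

  void add_sink(sink s) { sinks_.push_back(std::move(s)); }

  void emit(const std::string &origin, const std::string &message) {
    for (auto &s : sinks_)
      s(origin, message);
  }

private:
  std::vector<sink> sinks_;
};
```

The same channel could then carry both the pass activity data and the "why didn't you optimize this" hints mentioned above, with consumers deciding what to keep.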