This is the mail archive of the gcc-regression@gcc.gnu.org mailing list for the GCC project.
> Websphere is several orders of magnitude larger than gcc's back end. It is over 1GB of compiled code.

1GB of memory is still a lot, even when it is much less than what GCC would need. I am pretty sure ICC can get around doing IPA without actually allocating as much memory as all the .o files take together.
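To make that memory argument concrete, here is a minimal sketch of summary-based IPA: inlining decisions are driven by small per-function summaries, so the full bodies from all the .o files never have to be resident at once. The struct layout, thresholds, and function names below are all hypothetical illustrations, not ICC's or GCC's actual data structures.

```c
/* Hypothetical sketch: whole-program inlining decided from small
   per-function summaries instead of full function bodies.  */
#include <stdio.h>
#include <stddef.h>

/* A summary as it might be written into each .o file: a few
   integers rather than the whole IL for the body.  */
struct fn_summary {
  const char *name;
  size_t body_size;   /* estimated size of the function body */
  size_t callee;      /* index of a single callee, for simplicity */
  size_t call_count;  /* static or profile-based call frequency */
};

/* Decide from summaries alone whether CALLEE should be inlined into
   CALLER.  The thresholds are made up; the point is that no body is
   needed to make the decision.  */
static int
should_inline (const struct fn_summary *caller,
               const struct fn_summary *callee)
{
  return callee->body_size < 32 && caller->call_count > 100;
}

int
main (void)
{
  /* In a real compiler these would be read from the summary
     sections of every .o file at link time.  */
  struct fn_summary fns[] = {
    { "main_loop", 400, 1, 1000 },
    { "helper",     16, 1,    0 },
  };

  if (should_inline (&fns[0], &fns[1]))
    printf ("inline %s into %s; stream in its body only now\n",
            fns[1].name, fns[0].name);
  return 0;
}
```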
> Most optimizing compilers are able to do full unit-at-a-time compilation. Furthermore, those compilers that do link-time optimization are clearly doing it. For the most part the link-time optimizers are just the regular optimizer that has been hacked with a front end that reads the special .o information.

Concerning GCC memory, we need about 700MB of memory to compile the GCC backend itself. We then have serious problems because the GGC overhead increases out of reasonable bounds (we burn about 70% of compilation time in GGC, as far as I can recall), and these things get yet worse when the machine goes to swap.

I will take it as homework and look at how other available production C compilers (at least SGIpro) deal with this problem.
> Just to get back to basics. I started looking at the cgraph code to see how to fix this now for this release. I have a few questions. If this is what I have to do for now, this is what I will do. I had just thought that there was more here.
For 4.0 let's not change the basic ordering of the passes - I have very bad experience with this kind of change that should just work but usually breaks everything. We should do more experimenting on the tree-profiling branch. Perhaps I can get it working there as an alternative option (-ftwo-pass-ipa) and we can see how much it pays back?
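For illustration, the sketch below shows the pass ordering such a two-pass option would aim for: analyze every function first, make interprocedural decisions with the whole call graph visible, and only then expand functions one by one. None of these names are GCC's real cgraph interfaces (and the -ftwo-pass-ipa behavior is only being proposed above); the code is a hypothetical outline of the ordering, nothing more.

```c
/* Hypothetical "two-pass IPA" driver: analyze all, decide on the
   whole graph, then expand.  Not GCC's actual cgraph code.  */
#include <stdio.h>

#define MAX_FNS 4

struct cgraph_node {
  const char *name;
  int analyzed;         /* pass 1: local analysis done */
  int inline_everywhere; /* IPA decision recorded on the node */
};

static struct cgraph_node graph[MAX_FNS] = {
  { "main", 0, 0 }, { "parse", 0, 0 }, { "emit", 0, 0 }, { "tiny_helper", 0, 0 },
};

int
main (void)
{
  int i;

  /* Pass 1: analyze every function body, keeping only summaries.  */
  for (i = 0; i < MAX_FNS; i++)
    graph[i].analyzed = 1;

  /* Interprocedural pass: decisions are made with the whole call
     graph visible, before any code has been generated.  */
  for (i = 0; i < MAX_FNS; i++)
    graph[i].inline_everywhere = (i == 3); /* pretend tiny_helper is cheap */

  /* Pass 2: expand functions one at a time, applying the decisions.  */
  for (i = 0; i < MAX_FNS; i++)
    printf ("expanding %s%s\n", graph[i].name,
            graph[i].inline_everywhere ? " (inlined everywhere)" : "");
  return 0;
}
```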