It's certainly possible, given https://blog.mozilla.org/nnethercote/2013/09/12/bleg-for-a-new-machine-2/ (I'm not sure what versions njn is using, but it probably doesn't matter too much). I profiled a build of content/ in the Firefox tree for a --enable-optimize --disable-debug build; you can see the data I gathered at http://people.mozilla.org/~tsaunders/gcc-perf.data . The two things that came up high in the profiles were C++ parsing and memory allocation. I suppose there isn't much to be done about the memory allocation issue in the short term, but I wonder if there are changes to the parser that would help at least somewhat.
I'm going to resolve this as a duplicate of PR24208. We already have a number of more specific bug reports about memory use, performance, etc. If you have something *specific* to report, please open a *specific* bug. *** This bug has been marked as a duplicate of bug 24208 ***
That GCC could be much faster is well known, but how to make it so is less clear, and there are simply not enough people to do the work required. Currently there are as many people working on Clang alone as on the whole of GCC (including all languages, the middle end, and the backends).

Dimitrios Apostolou did some nice work on benchmarking, but his actual patches did not bring impressive improvements: http://gcc.gnu.org/wiki/OptimisingGCC . I don't know how much of this work ended up in GCC (if any).

There are many ideas on what could speed up C++ parsing: http://gcc.gnu.org/wiki/Speedup_areas#C.2B-.2B-_improvements

BTW, reducing memory allocation (or using smarter data structures) is likely to bring significant improvements: http://gcc.gnu.org/wiki/Speedup_areas#Some_general_thoughts_on_memory_usage

But in the end, someone needs to come up with very detailed benchmark data to identify the real compile-time hogs, and patches to address them.