This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Threading the compiler




    * From: Mike Stump <mrs at apple dot com>
    * To: GCC Development <gcc at gcc dot gnu dot org>
    * Date: Fri, 10 Nov 2006 12:38:07 -0800
    * Subject: Threading the compiler

------------------------------------------------------------------------
> We're going to have to think seriously about threading the compiler. Intel predicts 80 cores in the near future (5 years). http://hardware.slashdot.org/article.pl?sid=06/09/26/1937237&from=rss To use this many cores for a single compile, we have to find ways to split the work. The best way, of course, is to have make -j80 do that for us; this usually results in excellent efficiencies and an ability to use as many cores as there are jobs to run. However, for the edit, compile, debug cycle of development, utilizing many cores is harder.

You should give make -j80 a try before you dismiss it as not enough. I wrote a paper in 1991, "GNU & You: Building a Better World" (a play on X11's "make world" invocation), describing the use of massively parallel machines as compile servers and how to write correct parallel Makefiles with GNU make. As you say, you get excellent efficiencies from this.
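Writing a parallel-correct Makefile mostly comes down to declaring every real prerequisite, so that make's dependency graph, rather than rule ordering, serializes the build. A minimal sketch of the idea (the file names are hypothetical, not taken from the paper):

```make
# Hypothetical project: because each rule lists its true prerequisites,
# `make -j80` may compile any subset of the objects concurrently, and
# the link step still waits until every object exists.
CC     = gcc
CFLAGS = -O2
OBJS   = main.o parser.o codegen.o

prog: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)

# Every object depends on its source file and the shared header; an
# undeclared prerequisite here is exactly what makes a Makefile that
# works serially break under -j.
%.o: %.c common.h
	$(CC) $(CFLAGS) -c -o $@ $<

clean:
	rm -f prog $(OBJS)
```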


The edit/compile/debug cycle of development isn't going to benefit appreciably from a multithreaded compiler. You can only edit one file at a time, and the debugging stage isn't going to benefit at all. In general, large programs tend to be split into many source files, so many parallel invocations of gcc work just fine, and the actual time to compile a single source module tends to be small.

Before you launch into this idea, you should obtain profile traces showing that there are idle CPU cycles in a particular compilation, cycles that could be profitably used once a thread scheduler enters the picture. Personally, on my dual-core AMD X2, make -j3 keeps both cores above 98% busy until the build is done, on the projects I currently maintain.

Back in 1991, make -j20 worked well enough to keep an 8-processor Alliant FX busy building X11, in about a twelfth of the time it took to build serially, once I'd excised that abominable imake crap and replaced it with pure GNU make. (One-hour builds down to about 5 minutes, as I recall.)

Of course, if your program has fewer than 80 source files, you may not get 100% utilization out of the machine, but at that point are you really going to care?
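The utilization argument is easy to quantify with a toy scheduler. Here is a sketch (my own illustration, not from the thread) that assigns each independent compile job to the earliest-free worker, which is essentially what make -jN does:

```python
import heapq

def simulate_make_j(durations, jobs):
    """Model a `make -jN` build of independent compile jobs.

    Each worker is represented by the time at which it becomes free;
    every job goes to the earliest-free worker. Returns the total
    wall-clock time (makespan) and average worker utilization.
    """
    workers = [0.0] * jobs
    heapq.heapify(workers)
    for d in durations:
        free_at = heapq.heappop(workers)
        heapq.heappush(workers, free_at + d)
    makespan = max(workers)
    utilization = sum(durations) / (jobs * makespan) if makespan else 1.0
    return makespan, utilization
```

With 40 one-second compiles on an 80-core box, utilization is only 50%, yet the whole build finishes in one second; past that scale the idle cores stop mattering, which is the point above.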

--
 -- Howard Chu
 Chief Architect, Symas Corp.  http://www.symas.com
 Director, Highland Sun        http://highlandsun.com/hyc
 OpenLDAP Core Team            http://www.openldap.org/project/

