


comments on getting the most out of multi-core machines


My 'day job' is at a medium-sized software operation.  We
have between 5 and 50 programmers assigned to a given
project, and a project is usually a couple of thousand
source files (a mix of f77, C, C++, and Ada).  All of this
source gets built into between 50 and 100 libraries, and the
end result is fewer than a dozen executables... one of
which is much larger than the rest.
 
It is a rare single file that takes more than 30
seconds to compile (at least with gcc 3 and later).
Linking the largest executable takes about 3 minutes.

(Sorry to be so long-winded getting to the topic!)

The ordinary case is that I change a file and re-link; that
takes less than 3.5 minutes.  Even if gcc were infinitely
fast, it would still be 3 minutes, since the link time
does not change.

The other case is compiling everything from scratch,
which is done regularly.  Using a tool like SCons,
which can build a total dependency graph, I have
learned that roughly -j100 would be ideal.  Of course I
am stuck with -j4 today.  Even given enough cores to throw
the work at, the best case is still about 3.5 minutes,
since the serial link still has to run after the slowest compile.

(Of course, this is a simplified analysis.)
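
To make that simplified analysis concrete, here is a back-of-the-envelope
sketch in Python, using only the rough numbers above (a worst-case compile
of about 30 seconds and a 3-minute serial link); the figures are
illustrative, not measurements of our tree:

    # Rough numbers from this message, not measurements:
    # slowest single compile ~30 seconds, big link ~3 minutes.

    LONGEST_COMPILE_S = 30          # worst-case compile time for any one file
    LINK_S = 3 * 60                 # the final link is a single serial step

    def best_case_rebuild_s(longest_compile_s=LONGEST_COMPILE_S, link_s=LINK_S):
        # With unlimited parallel jobs every compile can run at once, but the
        # link cannot start until the last object file exists and cannot
        # itself be split, so the critical path is (slowest compile) + (link).
        return longest_compile_s + link_s

    print("best case full rebuild: %.1f minutes" % (best_case_rebuild_s() / 60.0))
    # -> 3.5 minutes, no matter how many cores are available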

My point in all of this is that effort at the higher
policy levels (making the build process
multi-threaded at the file level) pays off today and
for the near future.
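
For what it's worth, a build tool that knows the whole dependency graph gets
this file-level parallelism almost for free.  Below is a minimal SConstruct
sketch (the target and file names are made up, not our actual tree) showing
how the compiles fan out across the -j jobs while the link stays a single
serial node:

    # SConstruct -- hypothetical layout, run as:  scons -j4  (or --jobs=100)

    import multiprocessing

    env = Environment()   # Environment(), Glob(), etc. are provided by scons

    # If no -j/--jobs was given on the command line, default to one job per CPU.
    if GetOption('num_jobs') <= 1:
        SetOption('num_jobs', multiprocessing.cpu_count())

    # One of the many libraries: its object files are independent nodes in the
    # dependency graph, so scons compiles them in parallel up to the job limit.
    libfoo = env.StaticLibrary('foo', Glob('libfoo/*.c'))

    # The big executable: a single node that depends on every library, so the
    # link runs serially after the last compile finishes.
    env.Program('big_app', ['main.cc'], LIBS=[libfoo], LIBPATH=['.'])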

Changing gcc itself to utilize multi-core systems may be a
lot harder, and less beneficial, than moving up the
problem space a notch or two.


regards,
bud davis 