This is the mail archive of the mailing list for the GCC project.
Re: GCC Multi-Threading Ideas
On 1/24/20 1:28 PM, Allan Sandfeld Jensen wrote:
On Freitag, 24. Januar 2020 17:29:06 CET Nicholas Krause wrote:
On 1/24/20 3:18 AM, Allan Sandfeld Jensen wrote:
On Freitag, 24. Januar 2020 04:38:48 CET Nicholas Krause wrote:
On 1/23/20 12:19 PM, Nicholas Krause wrote:
On 1/23/20 3:39 AM, Allan Sandfeld Jensen wrote:
On Montag, 20. Januar 2020 20:26:46 CET Nicholas Krause wrote:
Unfortunately, due to being rather busy with school and other things, I
will not be able to post my article to the wiki for a while. However,
there is a rough draft here:
Oxk/edit that may change a little, for people to read in the meantime.
This comment might not be suited for your project, but now that I think of
it: if we want to improve gcc toolchain build speed with better
multithreading, I think the most sensible thing would be fixing up gold
multithreading and enabling it by default. We already get most of the
benefits of multicore building by running multiple compile jobs in parallel
(yes, I know you are looking at cases where that for some reason doesn't
work, but it is still the norm in most situations). The main bottleneck is
linking. The code is even already there in gold and has been for years; it
just hasn't been deemed ready to be enabled by default.
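For reference, gold's threading really is an off-by-default code path that a build can opt into today. A hedged command-line fragment (the flag names are gold's documented options; the thread count and file names are illustrative):

```shell
# Link through gold and turn on its (off-by-default) multithreading.
# --thread-count is illustrative; gold also has per-phase variants
# (--thread-count-initial/-middle/-final).
gcc -fuse-ld=gold -Wl,--threads -Wl,--thread-count,8 -o app main.o util.o
```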
Is anyone even working on that?
You would need both, depending on the project: some are more
compiler-bottlenecked and others more linker-bottlenecked. I mentioned at
Cauldron that the other side of this issue would be the linker.
Sorry for the second message, Allan, but make -j does not scale well
beyond 4 or 8 threads, and that is on a 4- or 8-core machine.
It doesn't? I generally build with -j100, but then I also use icecream to
distribute builds to multiple machines in the office. That probably also
makes linking time more significant in my case.
I ran a gcc build with make -j32 and -j64 on a machine that had 64 cores.
There was literally only a 4-minute improvement in build time. Good question,
though.
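The scaling behaviour described above, where doubling -j barely moves the total, is what Amdahl's law predicts once a serial step such as configure or linking dominates. A minimal sketch with made-up timings (none of these numbers are measurements of GCC's build):

```python
# Hedged sketch: Amdahl's-law model of `make -jN` build times.
# All timings below are illustrative, not measured.

def build_time(compile_work, serial_work, jobs):
    """Total wall time when compilation parallelises across `jobs`
    but configure/link work stays serial."""
    return compile_work / jobs + serial_work

# Say 120 "minutes" of parallelisable compilation plus 12 minutes
# of serial configure and link steps.
for jobs in (8, 32, 64):
    print(f"-j{jobs}: {build_time(120, 12, jobs):.3f} min")

# Going from -j32 to -j64 saves only 120/32 - 120/64 = 1.875 minutes,
# because the serial fraction now dominates the wall time.
```

This is why speeding up the serial part (the linker, or the compiler's own internals) can matter more than adding make jobs.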
Right. I guess it entirely depends on what you are building. If you are
building gcc, it is probably bound by multiple configure runs, and separate
iterations. What I usually build is Qt and Chromium, where thousands of files
can be compiled from a single configure run (more than 20000 in the case of
Chromium), plus those configure runs are much faster. For Chromium there is
almost a linear speed up with the number of parallel jobs you run up to around
100. With -j100 I can build Chromium in 10 minutes, with 2 minutes being
linking time (5 minutes linking if using bfd linker). With -j8 it takes 2
But I guess that means multithreading the compiler can make sense in your
case, even if it doesn't in mine.
The question I would have is about make -j on one machine, as that's my
point. You can distribute it out with icecream or other tools, but
again that doesn't always help. The paper does mention
interaction with make and build systems to make it scale better
if possible. I'm not sure how Qt or Chromium would be directly
affected. However, it's a considered use case where make scales
fine but might do better coupled with the internal compiler
multi-threading. Again, even 10 minutes should get better
if make interacted with gcc multi-threading well.
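One existing example of such coupling, for what it's worth, is GCC's LTO mode talking to GNU make's jobserver, so the link step's parallel workers share job slots with make instead of oversubscribing the machine. A hypothetical Makefile fragment (target and object names are made up):

```make
# Sketch: let GCC's LTO link step draw worker slots from make's
# jobserver rather than spawning its own fixed worker count.
# The leading '+' marks the recipe as jobserver-aware.
CFLAGS += -O2 -flto

app: main.o util.o
	+$(CC) -flto=jobserver $(CFLAGS) -o $@ $^
```

Something analogous for the proposed in-compiler multi-threading would let `make -jN` stay the single knob controlling total parallelism.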