This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: if-conversion a performance bottleneck
- To: rth at cygnus dot com (Richard Henderson)
- Subject: Re: if-conversion a performance bottleneck
- From: Brad Lucier <lucier at math dot purdue dot edu>
- Date: Fri, 5 May 2000 08:12:34 -0500 (EST)
- Cc: mrs at windriver dot com (Mike Stump), lucier at math dot purdue dot edu, gcc at gcc dot gnu dot org, matzmich at cs dot tu-berlin dot de
> On Thu, May 04, 2000 at 10:31:59PM -0700, Mike Stump wrote:
> > Unfortunately -j3 gives you not three jobs, but an exponential cascade
> > of 3 jobs per recursive make level.
> I know. But 3 or 15 is still a lot better than 1000, which might
> be what you get with just -j.
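The cascade Mike describes arises because, without a shared jobserver, each recursive sub-make started under `-j3` also runs with `-j3`, so the worst-case job count multiplies by 3 per recursion level. A minimal sketch of that growth (the depths shown are illustrative, not measured from a GCC build):

```shell
# Worst-case concurrent jobs when every recursive make level runs
# -j3 with no shared jobserver: jobs multiply by 3 per level.
jobs=1
for depth in 1 2 3 4; do
  jobs=$((jobs * 3))
  echo "depth=$depth jobs=$jobs"
done
```

Four levels of recursion already allow 81 simultaneous jobs, which is how a deeply recursive tree can approach the ~1000 jobs mentioned above.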
> > I found that -j3 -l10 will at least try to limit them from expanding
> > too much, which is useful if you're swap-limited (just how did those 2
> > emacen grow to be 60M each, and netscape inflate to 90M? :-().
> That's what gigabytes of RAM are for. ;-)
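The `-j3 -l10` suggestion combines GNU make's two throttles: `-j3` caps each make at three parallel jobs, and `-l10` stops it from spawning new jobs while the load average exceeds 10. A small sketch against a throwaway makefile (the makefile and targets are hypothetical, just to show the flags in action):

```shell
# Generate a trivial three-target makefile and build it with both a
# job cap (-j3) and a load-average cap (-l10).  printf emits a real
# tab before the recipe, as make requires.
mk=$(mktemp)
printf 'all: a b c\na b c:\n\t@echo built $@\n' > "$mk"
make -f "$mk" -j3 -l10 all
rm -f "$mk"
```

With the load cap in place, a cascade of recursive makes still creates processes, but each make refrains from starting new jobs once the system is already saturated.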
I have gigabytes of RAM; I've watched what happens when I say, e.g.,
make -j9 bootstrap
or whatever, and it doesn't seem to actually set off a bunch of parallel
jobs---definitely, by the time the stage1 compiler starts compiling
things, I'm down to one job at a time. That's with the 2.2.13 kernel
on alpha with make 3.77; is this a known problem? Should I upgrade?
BTW, the maximum load with make -j is < 80 (without libgcc) and
things seem to stay rational because make gets less time as the
load rises and can't spawn jobs at such a high rate.