This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.


Re: [RFA] update ggc_min_heapsize_heuristic()


On 2017.04.10 at 10:56 +0100, Richard Earnshaw (lists) wrote:
> On 09/04/17 21:06, Markus Trippelsdorf wrote:
> > On 2017.04.09 at 21:10 +0200, Markus Trippelsdorf wrote:
> >> On 2017.04.09 at 21:25 +0300, Alexander Monakov wrote:
> >>> On Sun, 9 Apr 2017, Markus Trippelsdorf wrote:
> >>>
> > >>>> The minimum-size heuristic for the garbage collector's heap (the size it
> > >>>> may grow to before collection starts) was last updated over ten years ago.
> >>>> It currently has a hard upper limit of 128MB.
> >>>> This is too low for current machines where 8GB of RAM is normal.
> >>>> So, it seems to me, a new upper bound of 1GB would be appropriate.
> >>>
> > >>> While the amount of available RAM has grown, so has the number of available CPU
> > >>> cores (counteracting the RAM growth for parallel builds). Building in a
> > >>> virtualized environment with less-than-host RAM has also become more common, I think.
> >>>
> > >>> Bumping it all the way up to 1GB seems excessive; how did you arrive at that
> > >>> figure? E.g. my recollection from watching a Firefox build is that most
> > >>> compiler instances need under 0.5GB (RSS).
> >>
> > >> 1GB was just a number I picked to get the discussion going.
> > >> And you are right, 512MB looks like a good compromise.
> >>
> >>>> Compile times of large C++ projects improve by over 10% due to this
> >>>> change.
> >>>
> > >>> Can you explain a bit more about which projects you've tested? 10+% looks
> > >>> surprisingly high to me.
> >>
> > >> I've checked LLVM build times on ppc64le and x86_64.
> > 
> > Here are the ppc64le numbers (llvm+clang+lld Release build):
> > 
> > --param ggc-min-heapsize=131072 :
> >  ninja -j60  15951.08s user 256.68s system 5448% cpu 4:57.46 total
> > 
> > --param ggc-min-heapsize=524288 :
> >  ninja -j60  14192.62s user 253.14s system 5527% cpu 4:21.34 total
> > 
> 
> I think that's still too high.  We regularly see quad-core boards with
> 1GB of RAM, or octa-core boards with 2GB, i.e. 256MB per core.
> 
> So even that would probably be touch and go after you've accounted for
> system memory and other processes on the machine.

Yes, the calculation in ggc_min_heapsize_heuristic() could be adjusted to
take the number of "cores" into account, so that on an 8GB 4-core machine
it would return 512MB (a ggc-min-heapsize of 524288), and less than that
for machines with less memory or higher core counts.
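
A rough sketch of what such a core-aware heuristic could look like (this is
not a patch; the sysconf() call and the divisor/bounds below are just my
assumptions for illustration, and values are in kB, like the param itself):

  #include <unistd.h>

  /* Hypothetical core-aware variant of ggc_min_heapsize_heuristic().
     PHYS_KBYTES is total physical RAM in kB.  */
  static int
  ggc_min_heapsize_heuristic_sketch (double phys_kbytes)
  {
    long cores = sysconf (_SC_NPROCESSORS_ONLN);
    if (cores < 1)
      cores = 1;

    /* Assume roughly half of RAM is usable by parallel compiler
       instances, one per core: 8GB / 2 / 4 cores = 1GB (capped to
       512MB below); 1GB / 2 / 4 cores = 128MB.  */
    double kbytes = phys_kbytes / 2 / cores;

    /* Keep a 4MB floor; raise the ceiling to 512MB instead of the
       current 128MB.  */
    if (kbytes < 4 * 1024)
      kbytes = 4 * 1024;
    if (kbytes > 512 * 1024)
      kbytes = 512 * 1024;

    return (int) kbytes;
  }

The exact divisor and the 512MB cap are of course up for discussion; the
point is only that the returned value would scale down as the core count
goes up.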

> Plus, for big systems it's nice to have beefy RAM disks as scratch
> areas; that can save a lot of disk IO.
> 
> What are the numbers with 256M?

Here are the numbers from a 4-core/8-thread 16GB RAM Skylake machine.
They look less stellar than the ppc64le ones (run-to-run variability is
smaller here, too):

 --param ggc-min-heapsize=131072
11264.89user 311.88system 24:18.69elapsed 793%CPU (0avgtext+0avgdata 1265352maxresident)k

 --param ggc-min-heapsize=393216
10655.42user 347.92system 23:01.17elapsed 796%CPU (0avgtext+0avgdata 1280476maxresident)k

 --param ggc-min-heapsize=524288
10565.33user 352.90system 22:51.33elapsed 796%CPU (0avgtext+0avgdata 1506348maxresident)k
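
In case someone wants to reproduce these numbers, the setup is roughly the
following (the cmake invocation is a reconstruction; only the --param values
and the use of ninja appear in the measurements above):

  # hypothetical reproduction of one Release-build timing run
  cmake -G Ninja -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_C_FLAGS="--param ggc-min-heapsize=524288" \
        -DCMAKE_CXX_FLAGS="--param ggc-min-heapsize=524288" \
        path/to/llvm
  time ninja -j8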

-- 
Markus

