This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: [RFA] update ggc_min_heapsize_heuristic()
- From: Markus Trippelsdorf <markus at trippelsdorf dot de>
- To: "Richard Earnshaw (lists)" <Richard dot Earnshaw at arm dot com>
- Cc: Alexander Monakov <amonakov at ispras dot ru>, gcc at gcc dot gnu dot org
- Date: Mon, 10 Apr 2017 12:15:37 +0200
- Subject: Re: [RFA] update ggc_min_heapsize_heuristic()
- Authentication-results: sourceware.org; auth=none
- References: <20170409144125.GA10606@x4> <alpine.LNX.email@example.com> <20170409191019.GA294@x4> <20170409200621.GC294@x4> <firstname.lastname@example.org>
On 2017.04.10 at 10:56 +0100, Richard Earnshaw (lists) wrote:
> On 09/04/17 21:06, Markus Trippelsdorf wrote:
> > On 2017.04.09 at 21:10 +0200, Markus Trippelsdorf wrote:
> >> On 2017.04.09 at 21:25 +0300, Alexander Monakov wrote:
> >>> On Sun, 9 Apr 2017, Markus Trippelsdorf wrote:
> >>>> The minimum size heuristic for the garbage collector's heap, before it
> >>>> starts collecting, was last updated over ten years ago.
> >>>> It currently has a hard upper limit of 128MB.
> >>>> This is too low for current machines where 8GB of RAM is normal.
> >>>> So, it seems to me, a new upper bound of 1GB would be appropriate.
> >>> While the amount of available RAM has grown, so has the number of
> >>> available CPU cores (counteracting RAM growth for parallel builds).
> >>> Building in a virtualized environment with less-than-host RAM has
> >>> also become more common, I think. Bumping it all the way up to 1GB
> >>> seems excessive; how did you arrive at that figure? E.g. my
> >>> recollection from watching a Firefox build is that most compiler
> >>> instances need under 0.5GB (RSS).
> >> 1GB was just a number I picked to get the discussion going.
> >> And you are right, 512MB looks like a good compromise.
> >>>> Compile times of large C++ projects improve by over 10% due to this
> >>>> change.
> >>> Can you explain a bit more which projects you've tested? 10+% looks
> >>> surprisingly high to me.
> >> I've checked LLVM build times on ppc64le and x86_64.
> > Here are the ppc64le numbers (llvm+clang+lld Release build):
> > --param ggc-min-heapsize=131072 :
> > ninja -j60 15951.08s user 256.68s system 5448% cpu 4:57.46 total
> > --param ggc-min-heapsize=524288 :
> > ninja -j60 14192.62s user 253.14s system 5527% cpu 4:21.34 total
> I think that's still too high. We regularly see quad-core boards with
> 1GB of RAM, or octa-core boards with 2GB, i.e. 256MB per core.
> So even that would probably be touch and go once you've accounted for
> system memory and other processes on the machine.
Yes, the calculation in ggc_min_heapsize_heuristic() could be adjusted
to take the number of "cores" into account, so that on an 8GB 4-core
machine it would return 512MB, and less than that for machines with
less memory or higher core counts.
> Plus, for big systems it's nice to have beefy ram disks as scratch
> areas, it can save a lot of disk IO.
> What are the numbers with 256M?
Here are the numbers from a 4-core/8-thread 16GB RAM Skylake machine.
They look less stellar than the ppc64le ones (though the variability is
smaller):
11264.89user 311.88system 24:18.69elapsed 793%CPU (0avgtext+0avgdata 1265352maxresident)k
10655.42user 347.92system 23:01.17elapsed 796%CPU (0avgtext+0avgdata 1280476maxresident)k
10565.33user 352.90system 22:51.33elapsed 796%CPU (0avgtext+0avgdata 1506348maxresident)k