This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: [RFA] update ggc_min_heapsize_heuristic()
- From: Alexander Monakov <amonakov at ispras dot ru>
- To: Markus Trippelsdorf <markus at trippelsdorf dot de>
- Cc: gcc at gcc dot gnu dot org
- Date: Sun, 9 Apr 2017 21:25:08 +0300 (MSK)
- Subject: Re: [RFA] update ggc_min_heapsize_heuristic()
- References: <20170409144125.GA10606@x4>
On Sun, 9 Apr 2017, Markus Trippelsdorf wrote:
> The minimum size heuristic for the garbage collector's heap, before it
> starts collecting, was last updated over ten years ago.
> It currently has a hard upper limit of 128MB.
> This is too low for current machines where 8GB of RAM is normal.
> So, it seems to me, a new upper bound of 1GB would be appropriate.
While the amount of available RAM has grown, so has the number of available CPU
cores (counteracting the RAM growth for parallel builds). Building in a
virtualized environment with less than the host's RAM has also become more
common, I think.
Bumping the limit all the way up to 1GB seems excessive; how did you arrive at
that figure? E.g. my recollection from watching a Firefox build is that most
compiler instances need under 0.5GB (RSS).
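For readers unfamiliar with the heuristic under discussion: the general shape is to derive the GC trigger size from a fraction of physical RAM and clamp it to a fixed range, with 128MB as the current ceiling. The sketch below is an illustration of that shape, not the exact code from ggc-common.c; the fraction (1/8), the 4MB floor, and the function name are assumptions for the example.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch of a min-heapsize heuristic (NOT the exact GCC
   implementation): take a fraction of physical RAM, then clamp to a
   fixed range.  All sizes are in kilobytes.  The patch under
   discussion would raise the ceiling from 128MB toward 1GB.  */

#define MIN_KB (4 * 1024)     /* assumed 4MB floor */
#define MAX_KB (128 * 1024)   /* the current 128MB hard upper limit */

static size_t
min_heapsize_sketch (size_t phys_kbytes)
{
  size_t kb = phys_kbytes / 8;  /* assumed fraction of physical RAM */
  if (kb < MIN_KB)
    kb = MIN_KB;
  if (kb > MAX_KB)
    kb = MAX_KB;
  return kb;
}
```

With this shape, any machine with 1GB of RAM or more already hits the 128MB ceiling, which is why the ceiling (rather than the fraction) is what the proposed change targets.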
> Compile times of large C++ projects improve by over 10% due to this
Can you explain a bit more which projects you've tested? A 10+% improvement
looks surprisingly high to me.
> What do you think?
I wonder if it's possible to reap most of the compile-time benefit with a
somewhat more modest GC threshold increase?