Re: [PATCH] Avoid ggc_collect () after WPA forking


On Wed, 19 Mar 2014, Martin Liška wrote:

> 
> On 03/19/2014 03:55 PM, Richard Biener wrote:
> > On Wed, 19 Mar 2014, Martin Liška wrote:
> > 
> > > These are stats for Firefox with LTO and -O2. According to the graphs,
> > > memory consumption for the parallel WPA phase looks similar.
> > > When I disable parallel WPA, the WPA footprint is ~4GB, but the ltrans
> > > memory footprint is similar to parallel WPA, which reduces libxul.so
> > > linking by ~10%.
> > Ok, so I suppose this tracks RSS, not virtual memory use (what is
> > "used" and what is "active")?
> 
> Data are given by vmstat, according to:
> http://stackoverflow.com/questions/18529723/what-is-active-memory-and-inactive-memory
> 
> *Active memory* is memory that is being used by a particular process.
> *Inactive memory* is memory that was allocated to a process that is no
> longer running.
>
> So please follow just the 'blue' line, which shows the memory actually in
> use. According to the man page, vmstat reports virtual memory statistics.

But 'blue' is neither active nor inactive ... what is 'used'?  Does
it correspond to 'swpd'?

If it is virtual memory in use, then this is expected to grow when 
fork()ing, as the virtual address space is copied (the pages themselves 
are still shared copy-on-write).

For me, allocating a GB of memory and clearing it increases "active" by
1GB, and then forking doesn't increase any of the metrics vmstat -a
outputs in any significant way.
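
For reference, roughly the experiment I ran (a minimal standalone
sketch, not from any patch; the sleeps just leave time to sample
vmstat -a before and after each step):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int
main (void)
{
  size_t size = 1UL << 30;      /* 1GB */
  char *p = malloc (size);
  if (!p)
    return 1;
  /* Touch every page so it is really backed; "active" grows by ~1GB.  */
  memset (p, 0, size);
  puts ("allocated; sample vmstat -a now");
  sleep (30);
  /* The child shares all pages copy-on-write, so as long as neither
     process writes, vmstat's numbers should stay essentially flat.  */
  if (fork () == 0)
    {
      sleep (30);
      _exit (0);
    }
  puts ("forked; sample vmstat -a again");
  sleep (30);
  free (p);
  return 0;
}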

> > And it is WPA plus LTRANS stages, WPA ends where memory use first goes
> > down to zero?
> > I wonder if you can identify the point where parallel streaming
> > starts and where it ends ... ;)
> 
> Exactly, WPA ends when it goes to zero.

So the difference isn't that big (8GB vs. 7.2GB), and is likely
attributable to heap memory we allocate during the stream-out.  For
example, we need some for the tree-ref-encoders (I remember those can
take a significant amount of memory, though I already improved that as
far as possible...).  So yes, we _do_ allocate memory during stream-out,
and with parallel streaming that is now required N times.

> > Btw, I have another patch in my local tree, limiting the
> > exponential growth of blocks we allocate when outputting sections.
> > But it shouldn't be _that_ bad ... maybe you can try if it has
> > any effect?
> 
> I can apply it.
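
For reference, the idea of that patch is just to cap the doubling of
the output block size at a fixed maximum.  A minimal sketch of the
growth policy (names and constants here are invented, not the actual
streamer code):

#define BLOCK_INITIAL_SIZE  4096
#define BLOCK_MAX_SIZE      (1 << 20)   /* stop doubling at 1MB */

static size_t
next_block_size (size_t current)
{
  if (current == 0)
    return BLOCK_INITIAL_SIZE;
  /* Grow geometrically, but cap the block size so streaming a large
     section does not keep requesting ever-bigger allocations.  */
  if (current >= BLOCK_MAX_SIZE / 2)
    return BLOCK_MAX_SIZE;
  return current * 2;
}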

Thanks,
Richard.
