This is the mail archive of the mailing list for the GCC project.
Re: Larger Compilations / Speed
- From: Colin Douglas Howell <chowell2 at pacbell dot net>
- To: "F. Schaefer" <drfransch at netscape dot net>
- Cc: GCC at GNU dot org
- Date: Wed, 13 Aug 2003 23:29:44 -0700
- Subject: Re: Larger Compilations / Speed
- References: <2D507203.23B11A68.02FAE337@netscape.net>
F. Schaefer wrote:
> Joe Buck <email@example.com> wrote:
>> On Wed, Aug 13, 2003 at 10:14:12AM -0400, Andrew Pinski wrote:
>>> The OS should cache the files for execution (on Darwin it does)
>>> so it looks more like an OS problem rather than a gcc problem.
>> On Linux and every other modern Unix-like OS, the files are also cached.
>
> Well, I admit that it's been some years since I studied UNIX
> internals. But if I remember right, the files are kept in a
> buffer cache in RAM; when requested, they are copied into the
> process's data region. It is this copying, of course, that is a
> little slow, not to mention what copying this large amount of
> data does to the cache.
In modern OSes with efficient memory management, files are not copied
from the buffer cache into the process's address space. Instead,
individual pages of the file are simply mapped into the process's
address space in a demand-paged fashion, just as they are when the
executable is first loaded from disk. In this case, loading the
executable only requires setting up the process's page tables
appropriately; no copying of memory is required.