This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: GCC 4.1: Buildable on GHz machines only?


On Mon, 16 May 2005, DJ Delorie wrote:


We already do that when checking is enabled; the GC heuristics are tuned
such that it does not change, which is why --enable-checking=release is
always faster than building without it.

Right, but it doesn't call ulimit(), so other sources of memory leakage wouldn't be affected. I'm thinking that if the gcc driver set a per-process limit of, say, 128M, developers would learn to care about working-set performance.
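
(Purely as an illustration, not actual driver code: a minimal sketch of what
such a cap could look like if the driver used setrlimit(RLIMIT_AS) rather
than the old ulimit() interface before exec'ing the compiler proper.  The
128M figure is the one suggested above; the "cc1" invocation is only a
placeholder.)

/* Sketch only: fork, cap the child's address space, then exec the
   compiler proper.  Not how the gcc driver actually behaves today.  */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/wait.h>

int
main (int argc, char **argv)
{
  pid_t pid = fork ();
  if (pid == 0)
    {
      struct rlimit rl;
      rl.rlim_cur = rl.rlim_max = 128UL * 1024 * 1024;  /* 128M cap */
      if (setrlimit (RLIMIT_AS, &rl) != 0)
        perror ("setrlimit");
      execvp ("cc1", argv);      /* placeholder child invocation */
      perror ("execvp");
      _exit (127);
    }

  int status;
  waitpid (pid, &status, 0);
  return WEXITSTATUS (status);
}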

I like the idea, but will it really work? While compiling MICO I hardly ever see memory usage below 128MB on 512MB/1GB RAM boxes, perhaps more often on the 512MB box due to the memory-usage heuristic(s) -- so I assume setting a hard ulimit of 128MB would just make the build process crash, rather than slow down and swap, which is what one would get when using mem=128m as a Linux boot parameter. Or am I completely mistaken?
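
(For what it's worth, a quick experiment along those lines, purely my own
sketch: with RLIMIT_AS capped at 128MB, allocation fails outright once the
cap is reached, whereas mem=128m only limits physical RAM, so the same
workload pages to swap instead of failing.)

/* Demonstrates the hard-failure behaviour under an address-space cap:
   allocate and touch 1MB chunks until malloc returns NULL.  A compiler
   hitting this limit would abort with an out-of-memory error, while a
   machine booted with mem=128m would swap instead.  */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/resource.h>

int
main (void)
{
  struct rlimit rl = { 128UL * 1024 * 1024, 128UL * 1024 * 1024 };
  if (setrlimit (RLIMIT_AS, &rl) != 0)
    perror ("setrlimit");

  size_t total = 0;
  for (;;)
    {
      void *p = malloc (1 << 20);        /* 1MB chunks */
      if (p == NULL)
        {
          fprintf (stderr, "malloc failed after %zu MB\n", total >> 20);
          return 1;
        }
      memset (p, 0, 1 << 20);            /* touch the pages */
      total += 1 << 20;
    }
}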


Thanks,
Karel
--
Karel Gardas                  kgardas@objectsecurity.com
ObjectSecurity Ltd.           http://www.objectsecurity.com

