memory consumption

Jay K jay.krell@cornell.edu
Sun Jun 6 17:41:00 GMT 2010


short story: never mind, sorry, ulimit!


long story:

-O1 doesn't help.
41K lines, no optimization, no debugging, file-at-a-time compilation, function-at-a-time codegen: 64MB still seems excessive, no?
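
(For what it's worth, -fmem-report should show where the memory actually goes; "big.c" below is just a stand-in for one of the large files:)

bash-4.1$ # -fmem-report dumps gcc's internal memory statistics after compiling
bash-4.1$ gcc -O0 -c -fmem-report big.c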


gcc cross-compilation is nice, but it is hampered by having to get a "sysroot".
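
(Roughly something like this -- the target triplet and paths here are made up, only --with-sysroot is the real knob:)

bash-4.1$ # point the cross gcc at the target's headers and libraries
bash-4.1$ ../gcc-4.5.0/configure --target=sparc-sun-solaris2.10 \
              --with-sysroot=$HOME/sysroots/sparc-sun-solaris2.10 \
              --prefix=$HOME/cross
bash-4.1$ make && make install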



But anyway, I finally read the ulimit and gcc manuals... since address space should be essentially unlimited, I had thought this was all odd.


bash-4.1$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) 131072
file size               (blocks, -f) unlimited
max memory size         (kbytes, -m) 12338744
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 64
virtual memory          (kbytes, -v) 4194304


bash-4.1$ ulimit -d 1000000
bash-4.1$ ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) 1000000
file size               (blocks, -f) unlimited
max memory size         (kbytes, -m) 12338744
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 64
virtual memory          (kbytes, -v) 4194304


and http://gcc.gnu.org/install/specific.html says:


"Depending on the OS version used, you need a data segment size between
512 MB and 1 GB, so simply use ulimit -Sd unlimited."

Oops!
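
(So, per the manual, just raise the soft data limit before building -- assuming the hard limit allows it:)

bash-4.1$ ulimit -Sd unlimited
bash-4.1$ ulimit -d
unlimited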


I had run into this on AIX, so I should have known better.
Besides that, that page documents a lot of nonobvious useful stuff (e.g. I've also bootstrapped on HP-UX, going via K&R 3.x).
(On the other hand, there were also non-huge files in libjava, at least a few years ago, that used excessive stack; excessive stack seems worse than excessive heap.)
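
(Stack is its own ulimit knob, so the libjava case would need -s raised too; the number here is arbitrary:)

bash-4.1$ ulimit -Ss 32768    # soft stack limit, in kbytes (32MB here)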


Sorry, sorry, mostly never mind, move along...
(It still seems excessive, but I also don't see the point in such OS limits: give me all the address space and let me thrash if the working set is high -- gcc should be more concerned with working set than address space, and I have no data on the working set either way.)
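
(If anyone wants actual working-set numbers, GNU time reports peak RSS; /usr/bin/time with -v is the GNU version, the BSDs use -l instead:)

bash-4.1$ /usr/bin/time -v gcc -O0 -c big.c 2>&1 | grep 'Maximum resident'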


  - Jay

----------------------------------------
> To: jay.krell@cornell.edu
> CC: gcc-help@gcc.gnu.org
> Subject: Re: memory consumption
> From: iant@google.com
> Date: Sat, 5 Jun 2010 22:53:05 -0700
>
> Jay K  writes:
>
>> I hit similar problems building gcc in virtual machines that I think had 256MB. I increased them to 384,
>>
>>
>> Maybe gcc should monitor its maximum memory? And add a switch
>> -Werror-max-memory=64MB, and use that when compiling itself, at
>> least in bootstrap with optimizations and possibly debugging
>> disabled? Or somesuch?
>
> A --param setting the amount of memory required is a good idea for
> testing purposes. However, frankly, it would be very unlikely that we
> would set it to a number as low as 64MB. New computers these days
> routinely ship with 1G RAM. Naturally gcc should continue to run on
> old computers, but gcc is always going to require virtual memory, and
> on a virtual memory system I really don't think 512MB or 1G of virtual
> memory is unreasonable these days.
>
> It would be folly to let old computers constrain gcc's ability to
> optimize on modern computers. A better approach is to use gcc's
> well-tested ability to cross-compile from a modern computer to your
> old computer.
>
>
>> I guess I can just make do with 4.3.5 built with host cc.
>> Maybe I'll try splitting up some of the files. Is that viable to be applied for real?
>
> I think you will make better progress by using -O1 when you compile.
>
> Ian


