Passing around 250 files from a larger software project (about 3 MB of source code) to gcc in a single invocation makes gcc use more than 400 MB of memory, possibly more, as I had to stop the compilation. I created a much simpler test case: 100 identical .c files, each containing 10000 empty functions of the form static void mainX() {}, with X running from 1 to 10000. This gives slightly different behaviour: compiling just one of the files consumes about 60 MB of memory, while compiling all 100 files in one invocation consumes up to 142 MB, at which point gcc crashes. If I understand correctly how gcc compiles multiple files in one invocation, it should release most of the memory used for compiling one file before it begins compiling the next. Is that correct? If so, I would expect the overall memory usage while compiling the 100 identical .c files to be not much more than 60 MB. Reading specs from C:/MinGW/bin/../lib/gcc/mingw32/3.4.2/specs Configured with: ../gcc/configure --with-gcc --with-gnu-ld --with-gnu-as --host=mingw32 --target=mingw32 --prefix=/mingw --enable-threads --disable-nls --enable-languages=c,c++,f77,ada,objc,java --disable-win32-registry --disable-shared --enable-sjlj-exceptions --enable-libgcj --disable-java-awt --without-x --enable-java-gc=boehm --disable-libgcj-debug --enable-interpreter --enable-hash-synchronization --enable-libstdcxx-debug Thread model: win32 gcc version 3.4.2 (mingw-special) Let me know if you need more information.
Do you have a program that generates those files? Also, is this at -O0 or -O2?
Yes, I have, but I was lazy and wrote it in C#. I've put the files up for download here: http://212.242.245.122/100files.tar.gz (2.5 MB). The archive also contains the command used to invoke gcc (run.bat). No -O flag is used.
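For anyone without a C# toolchain, here is a shell sketch of such a generator. It follows the test-case description above (100 identical files, 10000 empty static functions each); the file names file1.c through file100.c are my assumption, since the original generator is only in the tarball:

```shell
# Generate one file with 10000 empty static functions, then copy it:
# all 100 files are identical, matching the reported test case.
awk 'BEGIN { for (i = 1; i <= 10000; i++)
               printf "static void main%d() {}\n", i }' > file1.c

for f in $(seq 2 100); do
  cp file1.c "file$f.c"
done
```

Passing all of them to gcc in one invocation should then reproduce the memory growth.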
On mainline I get a stack overflow in the GC.
(In reply to comment #3) > On mainline I get a stack overflow in the GC. And I have a workaround for that, but it is only a workaround: we are creating too many variants of FUNCTION_TYPE, which seems wrong. We really should be sharing these FUNCTION_TYPEs; that would greatly reduce the overall memory usage in all compilations.
Does this mean that there is a limit to how many files you can compile at a time (due to limited memory)? Can't the garbage collector run between each compilation?
(In reply to comment #5) > Does this mean that there is a limit to how many files you can compile at a > time (due to limited memory)? Can't the garbage collector run between each > compilation? Yes, the number of files is limited by memory, but GCC should not be as much of a memory hog as it currently is; I am looking into how to reduce the memory usage. Yes, the garbage collector runs, but the problem is that we keep too much around after compiling each file, like the FUNCTION_TYPEs, most of which should be shareable. Later today I will attach a tarball with one of the files, since that is really all that is needed to reproduce this bug.
Confirmed, mainly comment #6.
IMA will not be fixed. Actually, it never worked properly to begin with... GCC 4.5 has -flto, but I don't expect it will do much better than IMA, from a memory usage point of view. GCC 4.5 also has -fwhopr, but that is experimental. To the reporter: sorry this took so long, with, I suppose, not a very satisfying end. Keep an eye on GCC 4.6, it may bring you something better than IMA.