Re: setrlimit()/C++ new() seg fault
- To: othman@cs.wustl.edu
- Subject: Re: setrlimit()/C++ new() seg fault
- From: Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
- Date: 10 Nov 1999 11:29:58 -0000
- CC: gcc@gcc.gnu.org
- References: <199911082150.PAA17341@taumsauk.cs.wustl.edu>
Hello,
> I've been experimenting with limiting the amount of memory available to
> a process by setting its data segment resource limit from within the
> process by using setrlimit().
If you really want to prevent excessive memory allocation, the limit
to set is RLIMIT_AS, not RLIMIT_DATA (the latter is easily
circumvented, since it does not cover mmap()-based allocations), but
this has nothing to do with gcc.
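For reference, a minimal sketch of setting that limit (the 32 MB
figure and the function name cap_address_space are just illustrative):

#include <sys/resource.h>

int
cap_address_space (void)
{
  struct rlimit rl;
  rl.rlim_cur = 32 * 1024 * 1024;  /* soft limit: 32 MB (arbitrary) */
  rl.rlim_max = 32 * 1024 * 1024;  /* hard limit */
  /* RLIMIT_AS caps the whole address space, including mmap()ed
     memory, which RLIMIT_DATA does not.  */
  return setrlimit (RLIMIT_AS, &rl);
}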
> I've been testing if this technique
> would work by attempting to allocate more memory on the heap than the
> limit set in setrlimit() call. Unfortunately, I get a seg fault in the
> standard C++ library (gcc 2.95.2 on a Debian GNU/Linux "potato" i686
> box).
I can reproduce this. It seems that the mechanism for throwing
exceptions itself depends on malloc() still working. If you reduce
the data limit as far as you have in your sample, malloc() will
always return NULL.
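Your full test case isn't quoted above, but a minimal program along
these lines should hit the same path (the 1 MB limit and the 16 MB
request are illustrative):

#include <sys/resource.h>
#include <new>

int
main (void)
{
  /* Illustrative 1 MB data-segment limit.  */
  struct rlimit rl;
  rl.rlim_cur = 1024 * 1024;
  rl.rlim_max = 1024 * 1024;
  setrlimit (RLIMIT_DATA, &rl);

  try
    {
      /* This should throw std::bad_alloc once malloc() fails, but
         the exception machinery itself needs malloc() and crashes
         instead.  */
      char *p = new char[16 * 1024 * 1024];
      delete [] p;
    }
  catch (std::bad_alloc &)
    {
      return 1;  /* the behavior one would expect */
    }
  return 0;
}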
There are several places in gcc/frame.c where malloc() is used
without checking for a NULL result; in this case it is
static inline void
start_fde_sort (fde_accumulator *accu, size_t count)
{
  accu->linear.array = (fde **) malloc (sizeof (fde *) * count);
  accu->erratic.array = (fde **) malloc (sizeof (fde *) * count);
  accu->linear.count = 0;
  accu->erratic.count = 0;
}
which silently leaves both arrays set to NULL when malloc() fails,
so you get the crash shortly thereafter.
I don't know whether it's possible or worthwhile to fix this.
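If one did want to, a first step might be an untested sketch like the
following, where start_fde_sort reports failure instead of leaving
NULL arrays behind (its callers would then have to handle the
failure):

/* Untested sketch against gcc/frame.c, which already includes the
   needed headers; fde and fde_accumulator are the types used above.  */
static inline int
start_fde_sort (fde_accumulator *accu, size_t count)
{
  accu->linear.array = (fde **) malloc (sizeof (fde *) * count);
  accu->erratic.array = (fde **) malloc (sizeof (fde *) * count);
  accu->linear.count = 0;
  accu->erratic.count = 0;
  if (accu->linear.array == NULL || accu->erratic.array == NULL)
    {
      /* free (NULL) is harmless; release whichever allocation
         succeeded.  */
      free (accu->linear.array);
      free (accu->erratic.array);
      accu->linear.array = NULL;
      accu->erratic.array = NULL;
      return 0;  /* report failure instead of crashing later */
    }
  return 1;
}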
Regards,
Wolfram.