This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
Re: ctor_copy_dtor test allocates ridiculous amounts of memory
On Sat, Oct 07, 2000 at 12:34:31PM -0700, Benjamin Kosnik wrote:
> > where csz01 has been initialized to str01.max_size(). Flipping
> > through basic_string.h, it looks like that's a constant equal to
> > one-quarter the max virtual address space - 1GB in this case.
>
> right
>
> this is the full test block, in context you can see what the test is
> doing:
...
> this test did end up passing for you, correct?
With resource limits applied, it catches std::bad_alloc and proceeds.
Without, it exhausts swap and gets killed. (Or something else does.)
> > My question to you, then, is - is that a sensible upper limit? And is
> > it sensible to actually create a string of that size during testing?
>
> as a pathological corner case, this is legitimate.
>
> as for your second question: is it sensible to allocate this kind of
> memory during testing? I've always thought it ok to test stuff like this
> (ie pathological cases) but perhaps I'm off base.
I can see why you'd want to test this sort of thing, but I don't think
it's acceptable to allocate that amount of memory during testing. I'd
have to have more than a gigabyte of swap for the test to succeed as
intended. I don't think most people have that much.
You'll only catch bad_alloc if there are resource limits; if there
aren't, the OS will pretend it has infinite memory (overcommit) and
then kill random processes when it actually runs out. And I mean
_random_.
It's depressingly common for it to shoot all the system daemons before
it hits the process causing the problem. Or it might just deadlock
inside the kernel. I don't run with resource limits on by default,
because I can't reliably predict how much memory the compiler will
want for any given run.
Also, I tend to run builds in the background while working. When we
hit that test, everything else on the machine is paged out and the
computer takes ~20sec to accept a single keystroke. Which is mostly
lame Linux paging heuristics, but still.
As a rule of thumb, I'd say that all execution tests should consume no
more than two megabytes of RAM. Yes, two. There are still people out
there with 8- and 16-megabyte systems.
zw