
Re: stl / pthreads leak, patch


In article <1029125671.3589.9.camel@penguin1>,
Kenneth Murray <desertcoder@cox.net> writes:

> Here is a sample program that will always leak memory. It has been
> tested using both GCC-2.9.6 and GCC-3.1.1 on RedHat Linux versions 6.2,
> 7.2, and 7.3. It has been tested using single and multi-processor
> machines:

The programs posted contain no memory leaks, where a memory leak is
defined as allocated memory whose last pointer has been lost forever.
Here is a definitional example of such a memory leak:

void foo (void)
{
  void* p = malloc (128);

  use (p);   // do something with the memory
  // no call to free() within foo(); p goes out of scope on return
}

In this case, the reference to the memory pointed to by p is lost
forever.  As I understand it (I don't have valgrind on my system),
valgrind and other memory allocation checkers will report this as a
memory leak.  If foo() were called X times, the memory leak could be
characterized as:

(128+malloc_overhead1)*X+malloc_overhead2 bytes.
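
For example, assuming (purely for illustration) a 16-byte
per-allocation overhead, 1000 calls to foo() would leak roughly
(128+16)*1000 = 144000 bytes; the actual overhead depends on the
malloc implementation.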

Here is a "memory leak" that can be found by pairing up malloc()
calls with free() calls (i.e. valgrind will also find it):

void bar (void)
{
  static void* p;

  mutex_guard ();             // acquire a lock protecting p

  if (!p) p = malloc (128);   // allocate once, cache for later calls

  use (p);                    // do something with the memory

  mutex_unguard ();           // release the lock
}

Here, a reference to the memory pointed to by p is cached within the
program for later reuse but is never formally "freed".  Many
programmers do not consider this a memory leak, since a steady-state
analysis after a bounded start-up phase would not classify it as one;
the amount of memory held does not grow with the number of calls to
bar().  It appears that valgrind reports this situation distinctly
from the case shown in foo().

Both libstdc++-v2 and libstdc++-v3 cache memory internally to greatly
aid performance, yet, as far as we know, no pointers are ever lost as
in foo().
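
As a small illustration (this is my own sketch, not one of your
posted programs), consider a program that creates and destroys many
strings; with a caching allocator, valgrind would be expected to
report the retained bytes as ``still reachable'' rather than
``definitely lost'':

#include <string>

int main ()
{
  // Create and destroy many small strings.  The library may hand the
  // freed memory to an internal free list instead of returning it to
  // the operating system, so a leak checker sees those bytes as
  // "still reachable" at exit, not as lost.
  for (int i = 0; i < 1000; ++i)
    {
      std::string s (32, 'x');
    }
  return 0;
}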

Regarding your second program, I have a theory but can spend no more
time looking at it.  I believe that as you create a more complex
processClient() and fire more concurrent threads, the odds of them
all running to completion within 10 seconds approach zero.

To test my theory, you would have to retain the pthread_t handles in
an array, start the threads as joinable, and then properly wait for
all of them before main() terminates (a minimal sketch of such a
conversion appears after the valgrind summaries below).  Basically
you are testing a non-deterministic situation.  If you convert your
test as I suggest, then I'd expect a valgrind report of something
like this:

When strMultiplier = 10, Valgrind reports:

==11851== LEAK SUMMARY:
==11851==    definitely lost: 0 bytes in 0 blocks.
==11851==    possibly lost:   0 bytes in 0 blocks.
==11851==    still reachable: 9528 bytes in 4 blocks.


When strMultiplier = 100, Valgrind reports:

==18111== LEAK SUMMARY:
==18111==    definitely lost: 0 bytes in 0 blocks.
==18111==    possibly lost:   0 bytes in 0 blocks.
==18111==    still reachable: [200000?] bytes in [50?] blocks.

All/most of the ``still reachable'' memory would be in the
libstdc++-v3 cache.
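
For reference, here is a minimal sketch of the conversion I have in
mind.  The processClient() name is taken from your posting and is
assumed to have the usual void* (*)(void*) thread-function signature;
the thread count and the lack of error checking are just
illustrative:

#include <pthread.h>

extern void* processClient (void*);   // your thread function

int main ()
{
  const int nthreads = 50;             // illustrative count
  pthread_t threads[nthreads];

  // Create the threads joinable (the default) and keep their handles.
  for (int i = 0; i < nthreads; ++i)
    pthread_create (&threads[i], 0, processClient, 0);

  // Wait for every thread to finish before main() returns, instead
  // of sleeping for a fixed 10 seconds and hoping they are all done.
  for (int i = 0; i < nthreads; ++i)
    pthread_join (threads[i], 0);

  return 0;
}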

If you don't like this caching behavior, then you will have to
reconfigure your library by following the included documentation.
This can radically affect performance on some platforms but to avoid
all charges of libel, I will not name them. ;-)

Regards,
Loren

