
Re: Help: Unwinding the C++ stack...throw, longjmp & threads




Craig wrote:
>Josh wrote:

>>Of course the chance of getting a signal that you want to throw on
>>just as malloc() returns, combined with the situation where you
>>actually care about the memory leak at this juncture, combined
>>with the situation where the last catch that is released isn't
>>going to just free a whole epoch of allocations, is an
>>extremely low probability circumstance.  I understood the
>>example but thought of this case as being lost in the 'noise'.

>Not that I've been paying close attention to this thread or anything,
>but if you're talking about how a *system* should be designed,
>rather than how a specific *application* (say, a game or a kiosk
>display or the guidance system for a nuclear missile), there's no
>such thing as a window like this being lost in the "noise".

Thanks, Craig, for a long and thoughtful essay.  A couple of
points in response.  First, understanding the context of the
text quoted above requires some attention to the rest of the
thread.

Part of that context was contained in the rest of the excerpted
post, which went on to describe a scheme for solving the problem
I understood Joe to have posed: a) preserving the legacy
malloc() interface while b) avoiding even the possibility of a
memory leak in the presence of asynchronous exceptions.
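
For concreteness, here is a minimal sketch of one way such a
scheme could close the window.  This is my illustration, not the
scheme from the thread; leakproof_malloc and live_allocations
are made-up names, and it assumes POSIX signal masking:

    #include <signal.h>   // POSIX masking - an assumption of this sketch
    #include <cstdlib>
    #include <vector>

    // Hypothetical bookkeeping: a list of live allocations that a
    // cleanup pass could walk after an asynchronous throw.
    static std::vector<void *> live_allocations;

    // Mask async signals from just before the allocation until the
    // pointer has been recorded, so no throw can land in the window
    // where the block exists but is not yet reachable by cleanup code.
    void *leakproof_malloc(std::size_t n)
    {
        sigset_t all, old;
        sigfillset(&all);
        sigprocmask(SIG_BLOCK, &all, &old);       // close the window
        void *p = std::malloc(n);
        if (p)
            live_allocations.push_back(p);        // recorded before any
                                                  // signal can throw again
        sigprocmask(SIG_SETMASK, &old, nullptr);  // reopen async delivery
        return p;
    }

(The push_back itself can throw bad_alloc, but that is a
synchronous throw at a point where the pointer is not yet
recorded; a production version would reserve space first.)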

A couple of people, including you, have jumped on the phrase
"lost in the noise", but nobody seemed to object to the argument
that one could, after all, design a malloc that had no
possibility of leaking in the presence of asynchronous
exceptions.  The fact that I included that discussion at all
should have been a good clue that I didn't mean "lost in the
noise" to cover every need and situation.  Also, your response
doesn't really address all of what is quoted above.  Clearly, I
am not saying that the time window is lost in the noise, but
rather that the *combination* is: the time window for a signal
at just that juncture, a signal that is a case to throw on, a
one-time memory leak of small size (otherwise, use the other
interface), and an architecture that isn't reclaiming the memory
through other means after such a throw.  All of this combined is
less likely to cause a visible problem in practice than, say, a
bug due to a compiler error.  Perhaps you disagree with that,
but at least please disagree with my actual opinion.

>I've seen too many projects and products take *huge* hits in perceived
>robustness, utility, and wasted developer/debugging time due to supposedly
>"vanishingly small" windows like this being pried open, by various
>combinations of circumstances, to the point of repeated failure, to
>ever again accept "lost in the noise" as an excuse for designing in such
>windows.

See above.  "Lost in the noise", while perhaps an unfortunate choice
of words, was not meant to refer to the time window itself.

>Further, few people who say things like "lost in the noise" understand
>the fine distinctions between *types* of noise.  In this case, I hope
>you don't consider synchronous bugs (bugs triggered by normal,
>straightline code) as in any way similar to asynchronous bugs.

No, I don't.  However, I think one can enumerate a relatively
small list of circumstances in which it would be conceivably
appropriate to create an asynchronous exception by throwing from
a signal handler, and that

i. designing code that is at least as robust in the presence of
that list of possibilities as it would have been if exceptions
could not be thrown from signal handlers (see the sketch after
this list)

is easier than

ii. designing exception-safe classes and procedures that will be
robust in the face of synchronous throws from any functions they
might call, so long as the definition of when to throw a
synchronous exception is left in terms of statements like "the
function cannot fulfill its contract given its arguments".
  

>Excellent programmers can do a great job squashing the former
>kind 

You've really raised the stakes in your commentary - talking about
missile systems and the like.  Here is my skeptical line in the
sand:  

I don't believe, relative to this level of stakes, that even
excellent programmers are fully dealing with the consequences of
ii.  Even if they could, they are not doing it in practice.  (I
don't have occasion to look at any missile control code, but the
code I do see isn't robust to every circumstance that could
trigger a synchronous throw.)
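
As an everyday example of what I mean (my illustration, not code
from this thread):

    #include <string>

    // Not robust to a synchronous throw: if the second 'new' throws
    // std::bad_alloc, the constructor never completes, the destructor
    // never runs, and the first allocation leaks.
    struct Pair {
        std::string *a;
        std::string *b;
        Pair() : a(new std::string("x")), b(new std::string("y")) {}
        ~Pair() { delete a; delete b; }
    };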
 
What is minimally required to deal with even ii. is that a
program be designed to move between states with the kind of
atomicity of a reliable database, so that an exception rolls all
variables and resources back to a previous safe state.  But of
course it is much worse than this, because many applications
deal with changing environments, so the previous "safe" state
may no longer be safe - e.g. the missile has moved.  The idea
that designing exception-safe classes makes their use safe in
the presence of exceptions, independently of context, is, I
think, an illusion.
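
A minimal sketch of that rollback discipline (my illustration;
Ledger and post are made-up names):

    #include <string>
    #include <utility>
    #include <vector>

    struct Ledger {
        long balance = 0;
        std::vector<std::string> log;
    };

    // Do all the work on a scratch copy, then commit with a swap
    // that does not throw.  A throw anywhere before the commit
    // (say, bad_alloc from push_back) leaves 'l' untouched.
    void post(Ledger &l, long amount, const std::string &note)
    {
        Ledger scratch = l;           // work on a copy
        scratch.balance += amount;
        scratch.log.push_back(note);  // may throw; 'l' still intact
        std::swap(l, scratch);        // commit
    }

Note that this only rolls back program state; as above, it
cannot roll back an environment that has moved on.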

>My guess is that if, indeed, the problem mentioned is "lost in the noise",
>then the overall mechanism must be thoroughly explained as not able to
>cope with those sorts of coding constructs (to wit, pretty much any
>means a programmer might typically use to test how far some chunk of
>imperative code progressed before hitting a signal).

Just because the possibility of throwing from a signal handler
is introduced, that doesn't mean that every signal will result
in a throw, any more than code needs to be protected against the
possibility that any random function it calls might call
_exit().
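
To illustrate (a sketch of mine; it assumes the
throw-from-handler capability this thread is debating, and the
handler names are made up):

    #include <csignal>

    // Most signals can be routed through an ordinary flag...
    static volatile std::sig_atomic_t resize_seen = 0;
    extern "C" void on_sigwinch(int) { resize_seen = 1; }  // no throw

    // ...and throwing reserved for the few cases that warrant it.
    struct Cancelled {};
    extern "C" void on_cancel(int)
    {
        throw Cancelled();  // hypothetical: assumes the implementation
                            // supports unwinding out of a handler
    }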


>P.S. In case anyone's thinking "gee, Craig, if you think about it,
>you're essentially saying most programmers, and therefore most programs,
>don't really work in the presence of exceptions, especially asynchronous
>ones", all I can say is, yes, I've already thought about that, and,
>indeed, tentatively come to that conclusion.  

Well, I agree with you: at the level of stakes you are talking
about, most programs and programmers are broken.  However, I may
have some differences regarding where the greatest risks
originate.


- Josh




