This is the mail archive of the
mailing list for the GCC project.
Re: Minimal GCC/Linux shared lib + EH bug example
----- Original Message -----
From: "Martin v. Loewis" <firstname.lastname@example.org>
> "David Abrahams" <email@example.com> writes:
> > Don't change anything just for me. A change should be made because it's
> > the right thing to do.
> It's not obvious what the right thing to do is, here.
I normally wouldn't say this, but today I think the right answer is obvious <wink>.
> typeinfo comparison will lose performance. It is not clear that
> "traditional" (i.e. most) applications, which just compile and link a
> "program" should be penalized to support more exotic applications.
Speculating, of course: I doubt it would have much impact because users
typically know better than to use RTTI and EH in their inner loops; every
good C++ book I've seen warns that these features are not always fast.
Also, it could be made to cost almost nothing by storing a hash of the
string in an extended area of the type_info record and comparing that
first. However, I don't think this is "the right answer"; it's just a mitigation.
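A minimal sketch of the hash-first comparison I mean, assuming an extended record (`TypeRecord` and its fields are my hypothetical names, not the actual libstdc++ type_info layout):

```cpp
#include <cstddef>
#include <cstring>
#include <functional>
#include <string>

// Hypothetical extended type_info-like record that caches a hash of the
// mangled name, so most inequality checks never reach the strcmp.
struct TypeRecord {
    const char* name;      // mangled type name
    std::size_t name_hash; // computed once, when the record is created

    explicit TypeRecord(const char* n)
        : name(n), name_hash(std::hash<std::string>{}(n)) {}

    bool equals(const TypeRecord& other) const {
        // Cheap rejection: different hashes imply different names.
        if (name_hash != other.name_hash)
            return false;
        // Hashes match: confirm with the full string comparison,
        // so a hash collision can never report a false equality.
        return std::strcmp(name, other.name) == 0;
    }
};
```

The common case (distinct types) is decided by one integer compare; only genuinely equal names, plus the rare collision, pay for the strcmp.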
> That's a difficult judgement, and not one I'm willing to make.
> I believe that it is a useful approach to keep the notion that shared
> libraries are part of a "program", and that the program still ought to
> implement the semantics of the relevant standards.
I totally agree. In that case you'll have to accept my shared linking
semantics, I think. In today's model, depending on the order in which it is
loaded w.r.t. other "programs", a "program" may or may not share symbols with its neighbors.
> In this specific
> case, any Python extension would be a "program" (though free-standing,
> since it has a different entry point).
So, what happens when multiple "programs" link to the same shared library?
> > Yes, we can make shared libs act like static libs when they're
> > linked in the usual way, but other arrangements are quite common,
> > and users have a mental model for those as well. It's not clear that
> > the model corresponds with reality, of course, but it's worth
> > supporting well-defined semantics when possible.
> Unfortunately, apart from the obvious cases, it is pretty difficult to
> give a precise definition; it is much easier to declare problematic cases
> as undefined.
Isn't that always the way? I guess I'll just have to make the right answer
seem more obvious <0.1 wink>
> You probably cannot convince compiler vendors to follow
> what you consider a reasonable semantics unless you specify what that
> semantics is, and contribute code or money to change their implementations.
There's one other way: the pressure of standards. However, that takes a
long time, and seldom works without a reference implementation...
> > 1. For each symbol, there is an undirected graph which determines how it is
> > shared.
> > 2. Nodes of the graph correspond to shared libraries. There is also a node
> > for the sole executable.
> > 3. At each boundary between nodes where there is global symbol sharing,
> > either via explicit linking, or via dlopen with RTLD_GLOBAL, an edge is
> > formed between nodes in a symbol's graph iff the symbol is used (defined or
> > unresolved) in BOTH nodes.
> > 4. A symbol's definition is shared between nodes A and B iff A is reachable
> > from B in that symbol's graph. Note that this is only true if there is a
> > continuous chain of global sharing between A and B, and all intervening
> > nodes use the symbol as well.
> That item 4 is in violation of the ELF spec. If both A and B define
> the same symbol, and if the symbol is weak, and if A is reachable from
> B, then the dynamic linker shall choose the definition in B, not the
> definition in A.
I think you misunderstand me. What you wrote doesn't contradict item 4:
since the graph is undirected, if A is reachable from B then B is
reachable from A. I don't care /which/ definition is chosen - they're
required to be the same anyway - I only care that they're shared.
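To pin down the reachability reading of item 4, here is a small sketch (the names and the adjacency-map representation are illustrative, not a real linker data structure): nodes are the executable and the shared libraries, an edge joins two nodes that both use the symbol across a global-sharing boundary, and two nodes share the definition iff they are connected.

```cpp
#include <map>
#include <queue>
#include <set>
#include <string>

// One graph per symbol: node name -> neighbors across global-sharing
// boundaries where both sides use the symbol (item 3).
using Graph = std::map<std::string, std::set<std::string>>;

void add_edge(Graph& g, const std::string& a, const std::string& b) {
    g[a].insert(b);
    g[b].insert(a); // undirected: reachability is symmetric
}

// Item 4: A and B share the symbol's definition iff B is reachable
// from A, i.e. a continuous chain of sharing nodes connects them.
bool shares_definition(const Graph& g, const std::string& a,
                       const std::string& b) {
    if (a == b) return true;
    std::set<std::string> seen{a};
    std::queue<std::string> todo;
    todo.push(a);
    while (!todo.empty()) {
        std::string cur = todo.front();
        todo.pop();
        auto it = g.find(cur);
        if (it == g.end()) continue;
        for (const auto& next : it->second) {
            if (next == b) return true;
            if (seen.insert(next).second) todo.push(next);
        }
    }
    return false;
}
```

With edges exe–libA and libA–libB, exe shares the definition with libB through the chain, while an isolated libC shares with nobody; and because the graph is undirected, the relation is symmetric, which is why Martin's ELF point about *which* definition wins doesn't contradict the item.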