This is the mail archive of the gcc-testresults@gcc.gnu.org mailing list for the GCC project.
Re: Results for 3.4-bi 20021213 (experimental) testsuite on
- From: Nathan Sidwell <nathan at codesourcery dot com>
- To: John David Anglin <dave at hiauly1 dot hia dot nrc dot ca>
- Cc: zack at codesourcery dot com, gcc-testresults at gcc dot gnu dot org
- Date: Sun, 15 Dec 2002 22:18:45 +0000
- Subject: Re: Results for 3.4-bi 20021213 (experimental) testsuite on
- References: <200212152117.gBFLHMHN003142@hiauly1.hia.nrc.ca>
John David Anglin wrote:
> I think the main point of weak symbols is that they can be defined
> in multiple translation units and overridden. If non-definedness
> or undefinedness is the main feature, then we shouldn't be defining
This test of a weak function's address is in the gthr-*.h headers
(see __gthread_active_p), so I figured it was ok. The gcc docs do not
give a formal description of what supporting weak symbols means, so I
went with the ELF specs, which (IIRC) say:
* a weak definition is overridden by a non-weak definition
* an unresolved weak declaration is resolved to zero.
[stuff about hpux fptrs]
From what you say, it does look like something is wrong with the
hpux stuff. You are correct that !fn is equivalent to 0 == &fn; however,
pedantically, an ordinary function's address can never be null. The only
functions whose address can be null are weak ones - there's some code in
gcc to inhibit the 'function's address cannot be null' optimization for
such weak decls.
> One way to fix this might be to have a "stub" version of __gcov_init and
> a call argument that would allow testing whether the real __gcov_init is
> linked in or not.
That might well work, but IMHO hpux is then not really supporting weak.
I, of course, don't feel strongly about this, but would like to clarify
what weak support means before trying to work around this problem.
Nathan Sidwell :: http://www.codesourcery.com :: CodeSourcery LLC
The voices in my head said this was stupid too
email@example.com : http://www.cs.bris.ac.uk/~nathan/ : firstname.lastname@example.org