This is the mail archive of the mailing list for the GCC project.
Re: Results for 3.4-bi 20021213 (experimental) testsuite on
- From: "John David Anglin" <dave at hiauly1 dot hia dot nrc dot ca>
- To: nathan at codesourcery dot com (Nathan Sidwell)
- Cc: zack at codesourcery dot com, gcc-testresults at gcc dot gnu dot org
- Date: Sun, 15 Dec 2002 16:17:21 -0500 (EST)
- Subject: Re: Results for 3.4-bi 20021213 (experimental) testsuite on
> >>The HP linker doesn't like undefined symbols, weak or otherwise.
> I'm confused, I thought the point of weak symbols was to allow their
> non-definedness. Why is HPUX defining SUPPORTS_WEAK?

I think the main point of weak symbols is that they can be defined
in multiple translation units and overridden.  If undefinedness is
the main feature, then we shouldn't be defining SUPPORTS_WEAK.
However, then we must use explicit template instantiation, and we
lose a huge amount of compatibility with existing C++ code that
relies on implicit template instantiation.  This comes up time after
time on the 32-bit port, and I don't think we want to stop defining
SUPPORTS_WEAK.
The 64-bit port supports all the other attributes of weak symbols
and it passes all the weak C and C++ tests in the testsuite.
However, testing undefinedness in the manner shown below has
problems at a number of levels, and I am not sure that it can be made
to work.  GCC makes a lot of implicit assumptions as to how function
pointers are handled by the linker and dynamic loader in the
undefined weak case.
At the first level, !__gcov_init tests the first word of the function
descriptor. This word is reserved. On the hppa64 port, function
addresses are in the third word of the descriptor and gp in the fourth
word. So, we are testing the wrong word in the function descriptor.
I suspect we need to examine the canonicalization of function pointers
on the hppa64-hpux port as we did recently for hppa-linux.

C99 indicates that !E is equivalent to (0 == E).  If I read correctly,
when one operand is a null pointer constant, it is converted to the
type of the other operand.  Thus, we are comparing two function
pointers in the operation, and function pointer canonicalization
should occur.
I have checked under hppa-linux and !__gcov_init does not force
canonicalization of the function pointer. Thus, GCC is assuming
some rather special behavior on the part of the linker and dynamic
loader. If this is going to work on hppa64, we would need
something equivalent to canonicalize_funcptr_for_compare to extract
the correct data for the function pointer undefinedness test.
However, as I have noted, it looks like we should just be using

At the second level, GNU ld will link a simple test program with an
undefined weak symbol.  However, the function descriptor appears to be
garbage and there is no R_PARISC_IPLT relocation for the weak function.
This may be a bug.  I don't know whether there would be a way to check
for undefinedness if there were an R_PARISC_IPLT relocation, or whether
there is a way to correct the data in the function descriptor.  More
investigation is needed with respect to the GNU linker.  However, we
are still stuck when using the HP linker, as it won't allow undefined
weak symbols.

So, in summary, the !__gcov_init test for undefinedness is much more
involved and OS-dependent than it would appear on the surface.  I'm not
sure it should be present in mainline GCC code.
> #if __GNUC__ && !CROSS_COMPILE && SUPPORTS_WEAK
>       /* If __gcov_init has a value in the compiler, it means we
>          are instrumenting ourselves.  We should not remove the
>          counts file, because we might be recompiling
>          ourselves.  The .da files are all removed during copying
>          the stage1 files.  */
>       extern void __gcov_init (void *)
>         __attribute__ ((weak));
>
>       if (!__gcov_init)
>         unlink (da_file_name);
> #else
>       unlink (da_file_name);
> #endif

One way to fix this might be to have a "stub" version of __gcov_init and
a call argument that would allow testing whether the real __gcov_init is
linked in or not.
J. David Anglin email@example.com
National Research Council of Canada (613) 990-0752 (FAX: 952-6605)