This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
c++/8861: mangling floating point literal in template arg expression
- From: catherin at ca dot ibm dot com
- To: gcc-gnats at gcc dot gnu dot org
- Date: 7 Dec 2002 20:53:06 -0000
- Subject: c++/8861: mangling floating point literal in template arg expression
- Reply-to: catherin at ca dot ibm dot com
>Number: 8861
>Category: c++
>Synopsis: mangling floating point literal in template arg expression
>Confidential: no
>Severity: critical
>Priority: medium
>Responsible: unassigned
>State: open
>Class: sw-bug
>Submitter-Id: net
>Arrival-Date: Sat Dec 07 12:56:00 PST 2002
>Closed-Date:
>Last-Modified:
>Originator: Catherine Morton
>Release: 3.2
>Organization:
>Environment:
>Description:
I have a question about the way gcc mangles a floating point literal used in a template argument.
(This changed from gcc 3.0.4 to 3.2, and I think the 3.2 output is in error.)
(Note that I'm using 3.0.4 on AIX and 3.2 on Linux, both on the same IBM hardware.)
Here is a simple(?) example:
// --------------------- cut -----------------------------------
template <int I> struct A {
    A(int);
};
const int N1 = 19;
enum E {N2 = 27};
template <int I, char J> void f(A<I+int(12.34)>) {}
void g()
{
    f<1,'x'>(37);
}
// ------------------------------ cut ------------------------------
gcc 3.0.4 (on AIX) mangles the name as follows:
_Z1fILi1ELc120EEv1AIXplT_cviLd4028AE147AE147AEEEE
The floating point literal 12.34 is mangled as:
Ld4028AE147AE147AE
This makes sense, since the C++ ABI says:
"If floating-point arguments are accepted as an extension, their values should be encoded using a fixed-length lowercase hexadecimal string corresponding to the internal representation, high-order bytes first, without leading zeroes."
(Note that the ABI says lowercase hex, while gcc always mangles it as uppercase hex.)
As a simple example, I can reproduce the hex number from double a = 12.34 with the following small test, which reads the internal representation:
#include <stdio.h>

int main(void)
{
    double a = 12.34;
    unsigned char *p = (unsigned char *) &a;
    int i;

    for (i = 0; i < 8; i++)
        printf("%02x\n", p[i]);

    return 0;
}
output:
40
28
ae
14
7a
e1
47
ae
This prints the internal representation matching the mangled name generated by gcc 3.0.4.
(Note that this simple program prints the same output whether it's compiled with 3.0.4 on AIX or 3.2 on Linux.)
Here's what I cannot figure out:
Using gcc 3.2 (on Linux, same hardware), the mangled name is:
_Z1fILi1ELc120EEv1AIXplT_cviLd0000000000700A3DA3D7C5704020000EEE
The float literal 12.34 is mangled as: Ld0000000000700A3DA3D7C5704020000E
I cannot figure out where this hex encoding came from; I think it must be a bug.
Information from gcc 3.0.4 on AIX (gcc -v):
Reading specs from /afs/tor/common/progs/gcc-3.0.4/aix43/lib/gcc-lib/powerpc-ibm-aix4.3.3.0/3.0.4/specs
Configured with: ../gcc-3.0.4/configure --prefix=/afs/tor/common/progs/gcc-3.0.4/aix43
Thread model: single
gcc version 3.0.4
Information from gcc 3.2 on Linux (gcc -v):
Reading specs from /usr/lib/gcc-lib/powerpc-suse-linux/3.2/specs
Configured with: ../configure --enable-threads=posix --prefix=/usr --with-local-prefix=/usr/local --infodir=/usr/share/info --mandir=/usr/share/man --libdir=/usr/lib --enable-languages=c,c++,f77,objc,java,ada --enable-libgcj --with-gxx-include-dir=/usr/include/g++ --with-slibdir=/lib --with-system-zlib --enable-shared --enable-__cxa_atexit powerpc-suse-linux
Thread model: posix
gcc version 3.2
>How-To-Repeat:
>Fix:
>Release-Note:
>Audit-Trail:
>Unformatted: