This is the mail archive of the gcc-help@gcc.gnu.org mailing list for the GCC project.
Re: Strange floating point problems on SH4 with gcc 4.1.0
- From: Andrew de Quincey <adq_dvb at lidskialf dot net>
- To: gcc-help at gcc dot gnu dot org
- Date: Thu, 27 Jul 2006 10:16:06 +0100
- Subject: Re: Strange floating point problems on SH4 with gcc 4.1.0
- References: <200607270243.01137.adq_dvb@lidskialf.net>
Sorry this is taking a while; I'm still catching up on where I got to two
weeks ago. Anyway, the original error I am debugging occurs in the following
call in nsComponentManager.cpp, in nsComponentManagerImpl::Init():
PL_DHashTableSetAlphaBounds(&mFactories,
                            0.875,
                            PL_DHASH_MIN_ALPHA(&mFactories, 2));
In fact it's the PL_DHASH_MIN_ALPHA macro that fails, which is defined as:
#define PL_DHASH_MIN_ALPHA(table, k)                                    \
    ((float)((table)->entrySize / sizeof(void *) - 1) /                 \
     ((table)->entrySize / sizeof(void *) + (k)))
As I'm getting a floating point arithmetic exception, I immediately
thought "oh division by 0". However I have verified that none of the values
being used here are 0:
table->entrySize is 8
sizeof(void*) is 4
and k is obviously 2.
The error seems to occur when the int is cast to a float... it somehow becomes
0.
Looking at the ASM in gdb shows the error happens at:
0x297b5030 <_ZN22nsComponentManagerImpl4InitEPK18nsStaticModuleInfoj+472>:
fdiv fr1,fr2
Which would indicate that for some reason fr2 is unexpectedly 0. GDB refuses
to dump out floating point registers on this platform, so I cannot confirm
this directly :(
Originally, I added some debug tracing printfs before that to dump out
variables as follows:
int x = 20;
float k = (float) x;
float y = (float)(mFactories.entrySize / sizeof(void *) - 1);
printf("%f\n", k);
printf("%i %i %i\n", mFactories.entrySize, sizeof(nsFactoryTableEntry), sizeof(void*));
printf("%f\n", y);
printf("%i\n", x);
printf("%f %f\n", k, (float) x);
printf("%f\n", y / (float) x);
fflush(stdout);
The first printout reports the value of 'k' to be 0.007812... I know people
report rounding problems erroneously as gcc errors, but 20 rounded to
0.007812 ain't right :)
I then get a floating point error from the second printf, again in the
__udivsi3_i4 function.
I know it sounds like a bug in glibc's printf for this platform, but the
original error I am trying to fix does not use printf().