This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
gcc 4.6.1 double to long cast on 32bit systems
- From: Karsten Ahnert <karsten dot ahnert at ambrosys dot de>
- To: gcc at gcc dot gnu dot org
- Date: Wed, 01 Aug 2012 22:52:59 +0200
- Subject: gcc 4.6.1 double to long cast on 32bit systems
Hi,
I am new to this list; if this is not the correct place to post this
question, I apologize for any inconvenience.
The following code produces strange results on a 32-bit Linux system
with gcc 4.6.1 (compiled with -m32):
#include <iostream>
using namespace std;

int main()
{
    double val = 52.30582;
    double d = 3600.0 * 1000.0 * val;
    long l = long( d );
    long l2 = long( ( 3600.0 * 1000.0 * val ) );
    long l3 = (long)( 3600.0 * 1000.0 * val );
    long l4 = long( 3600.0 * 1000.0 * val );
    cout.precision( 20 );
    cout << "Original value : " << val << endl;
    cout << "Double with mult : " << d << endl;
    cout << "Casted to long v1 : " << l << endl;
    cout << "Casted to long v2 : " << l2 << endl;
    cout << "Casted to long v3 : " << l3 << endl;
    cout << "Casted to long v4 : " << l4 << endl;
    return 0;
}
On a 64-bit machine the output is, as expected:
Original value : 52.305819999999997094
Double with mult : 188300952
Casted to long v1 : 188300952
Casted to long v2 : 188300952
Casted to long v3 : 188300952
Casted to long v4 : 188300952
However, on a 32-bit machine with gcc 4.6.1 and 4.6.3 I get:
Original value : 52.305819999999997094
Double with mult : 188300952
Casted to long v1 : 188300952
Casted to long v2 : 188300951
Casted to long v3 : 188300951
Casted to long v4 : 188300951
These results were obtained by compiling with -m32. The difference
between v1 and v2/v3/v4 is very strange, and I have no explanation for
it. Furthermore, if I compile with optimization turned on (-O3), the
results are all correct and identical. Is there an explanation for this
behavior, or is this already a known issue? I have also tested with
gcc 4.7, and everything looks correct for both 32 and 64 bit.
Thank you!