This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: O2 optimization differences
- From: Nathan Sidwell <nathan at codesourcery dot com>
- To: "Sujith R." <sujith_r01 at infosys dot com>
- Cc: gcc at gcc dot gnu dot org
- Date: Wed, 29 Sep 2004 11:19:20 +0100
- Subject: Re: O2 optimization differences
- Organization: CodeSourcery LLC
- References: <C48ECCD6B0FD4D4DAF1B0FC6CE00D8F80469FACC@kecmsg17.ad.infosys.com>
Sujith R. wrote:
Hello,
I am seeing output differences between an -O2 optimized build and a -g3 -gdwarf-2 debuggable build of a C++ application that does some very heavy floating-point number crunching.
The differences involve very small numbers (close to 0), such as 1.234E-11 being treated as 0, after which some subsequent calculations are thrown completely off.
The optimized and debuggable binaries built on the same platform generate different outputs.
We are observing this on both of the platforms mentioned below:
Platform A: Linux 2.1 AS, gcc 2.96, RW SourcePro 3.0
Platform B: Linux 3.0 AS, gcc 3.2.3, RW SourcePro 6.1
However, the debuggable binaries generated on these two platforms give the same output.
I need some detailed information on any known issues with the -O2 optimization flag on either of the above platforms. If anybody has faced similar issues or has come across any documentation that discusses this, please share.
What hardware? Are you using -funsafe-math-optimizations? Are you using -ffloat-store?
What library versions? It is unclear from your description what the
behaviour of the optimized debuggable version is with respect to the plain optimized one.
nathan
--
Nathan Sidwell :: http://www.codesourcery.com :: CodeSourcery LLC
nathan@codesourcery.com :: http://www.planetfall.pwp.blueyonder.co.uk