This is the mail archive of the gcc@gcc.gnu.org
mailing list for the GCC project.
Re: -fstrict-fp-semantics (was Re: numerical instability and estimate-probability)
- From: Jan Hubicka <jh at suse dot cz>
- To: Brad Lucier <lucier at math dot purdue dot edu>
- Cc: gcc at gcc dot gnu dot org, jh at suse dot cz, rth at cygnus dot com, mrs at windriver dot com
- Date: Fri, 16 Nov 2001 15:40:18 +0100
- Subject: Re: -fstrict-fp-semantics (was Re: numerical instability and estimate-probability)
- References: <200111160035.fAG0Zax17599@banach.math.purdue.edu>
> OK, now the GCC developers themselves have gotten bitten on the ass (sorry,
> Jan ;-) by the fact that gcc generates different FP code that gives different FP
> results depending on what specific optimizations have been invoked.
> It has been a principle forever that adding -g to the compiler options does
> not change the code *one bit*.
> Can we adopt an option, say -fstrict-fp-semantics, that means that fp
> results are not changed *one bit* by any optimization?
I am not quite sure how to achieve that without a significant performance penalty.
Even if we switch to 80-bit spills (I was trying to implement that, but never got
it working, unfortunately), we still need to disable any load/store propagation
that may eliminate the truncate.
This is quite nasty. It could be achieved by making every FP memory reference
volatile, but that sounds crazy too.
Also it won't solve the problem with bootstrap, as the stage1 compiler may
be a non-gcc one and may still produce different results :(