floating point: weird behaviour of gcc-2.95
juggy@gmx.net
Fri Jan 17 08:22:00 GMT 2003
What do you mean? Declaring all variables as double and using switches
such as -mieee-fp? That didn't help. Or do you mean something else?
I just checked again with several switches:
-mieee-fp
-mno-ieee-fp
-march=[i386|i486|pentium|pentiumpro]
-mno-fancy-math-387
It shows the same behaviour, whatever I use.
I also specifically checked on P3 and P4 now - same results. Oh, btw:
2.96 also shows the same behaviour.
And what is also strange: using -O3 works fine again! Why? From the
manpage: "Optimize yet more. This turns on everything -O2 does, along
with also turning on -finline-functions."
NOTHING is written about a switch like -ffast-math/..., only
-finline-functions (which I don't think should cause this behaviour to
revert), so I am really starting to feel that this is a bug.
I mean, why does -O0, -O1, -O3, heck even -O4! (I didn't check further)
work fine but not -O2?
Btw, if you feel this doesn't belong on this list but on another, please
tell me and I'll post there.
Cheers
P.S.: Also posting to list.
Chris Croswhite wrote:
> Have you thought of setting the precision so that it always uses single- and
> double-precision IEEE arithmetic? That would get rid of the rounding errors
> associated with the conversion from x86 extended-precision fp math.
>
>>OK, it seems I need to make it clearer.
>>1.) I write ANSI C code. I don't care to write any ASM or such, since
>>this code is supposed to work on as many OSes as possible
>>2.) I do NOT intend to "test to see whether x87 operations are carried
>>out in 53bit precision mode"
>>
>>It's just that for debugging and checking an app I use switches like
>>-pg -g etc. (no -Ox, x>0, whatsoever) at compile time. Since the
>>software is computationally expensive I naturally want to improve the
>>speed as much as possible, so I use optimization switches. Now, what
>>did I do?
>>1.) check my software on test cases and real world data without
>>optimization
>>2.) when 1. worked I decided to optimize to be able to run
>>checks on more data
>>3.) the exact same data with the exact same program screwed up on the
>>Linux boxes here. Accidentally I tested it on a BSD system as well and
>>it went fine. I would not expect THAT to happen - "even if I
>>misunderstand the optimizations of GCC, they should render the same
>>results on the same type of processors (all P3 or P4 around here)", I
>>thought :-(
>>
>>I tried switches like -fno-fast-math -fmath-storage.. (or whatever) to
>>check whether an optimization method is turned on that causes this
>>behaviour. I didn't find any definite cause for it. :-/
>>I also tried compiling with gcc-2.95.4 and gcc-3.2.1 on the exact same
>>(Linux 2.4.20, P3) system with these commands:
>>- gcc-3.2.1 -lm -O[0,1,2] -o soft *.c
>>- gcc-2.95.4 -lm -O[0,1,2] -o soft *.c
>>The only thing that gave different (=wrong) results was
>>gcc-2.95.4 -lm -O2 -o soft *.c
>>
>>I hope this is clearer now. If you tell me that gcc-2.95.[3|4] on Linux
>>turns on an optimization when using -O2 that is not used on BSD or with
>>gcc-3.2.1, I am happy. :-) And I'll probably be delighted if you could
>>fill me in on how to get the same behaviour on Linux systems with
>>gcc-2.95.[3|4] -O2 as with gcc-2.95.[3|4] -O1 or gcc-3.2.1 -O[1|2] ^_-
>>
>>Cheers
>>
>>P.S.: Sorry if I sent this twice - at first I just replied and it
>>didn't go to the list
>>
>>Tim Prince wrote:
>>
>>>On Friday 17 January 2003 03:29, juggy@gmx.net wrote:
>>>
>>>>Hi there,
>>>>
>>>>I recently noticed an odd behaviour of gcc. I searched the archives,
>>>>but I didn't find anything that really looks like this.
>>>>I am currently writing software that must be as numerically stable as
>>>>possible. The problem is that on Linux machines with gcc-2.95.3
>>>>and gcc-2.95.4 (I didn't have other versions to check) using the
>>>>switch -O2 results in inaccuracies (i.e. a double variable never
>>>>becomes 0), whilst -O1 is still alright.
>>>>At first I thought I might be using the switches in a wrong way, but
>>>>when I checked on a FreeBSD system (gcc-2.95.3) with the exact same
>>>>program and the same switches it worked fine. I also tried switches
>>>>like -fno-fast-math and such, but none seemed to solve the problem.
>>>>Afterwards I checked with gcc-3.2.1, and everything worked fine with
>>>>-O2. Maybe I am missing something about using the correct switches,
>>>>but I think it is very odd that gcc on BSD and gcc on Linux behave
>>>>this differently. I'd really appreciate it if anyone could enlighten
>>>>me on this issue.
>>>>
>>>
>>>If you're testing to see whether x87 operations are carried out in
>>>53-bit precision mode, why not say so? No, there's no switch to
>>>pass to linux run-time libraries to change precision mode, but you
>>>could use the asm() functions. With gcc-3.2.1, you do have switches
>>>to select x87, SSE, or SSE2 code generation, so you can control
>>>precision of intermediate operations that way. If you've truly come
>>>across something which isn't in the archives, you certainly haven't
>>>bothered to explain it.
More information about the Gcc-bugs mailing list