This is the mail archive of the gcc-help@gcc.gnu.org mailing list for the GCC project.
Re: difference in calculation result when using gcc vs Visual studio and optimisation flag
- From: Mason <slash dot tmp at free dot fr>
- To: Jack Andrews <effbiae at gmail dot com>
- Cc: Игорь Сотниченко <igor dot sotnichenko at gmail dot com>, GCC help <gcc-help at gcc dot gnu dot org>
- Date: Thu, 26 Apr 2018 12:53:01 +0200
- Subject: Re: difference in calculation result when using gcc vs Visual studio and optimisation flag
- References: <CAAo4jWMKpaw7NUFx1+=k6t_sbCvUEOMr19sjy0YB7USKcujSsw@mail.gmail.com> <CAJvHTNybe3XyH6u6QdJNaZaavvx9hygD7xyJqfLg2XGRa3KxxQ@mail.gmail.com>
On 26/04/2018 11:26, Jack Andrews wrote:
> This reduces it to a one-platform problem. gcc -O different to gcc
> Maybe I'm being idealistic, but why should optimization change results?
Because, for example on x86 platforms, the compiler may evaluate
intermediate results in the 80-bit x87 stack registers or in the 64-bit
SSE registers, and how long a value stays in a wider register before
being rounded to 64 bits in memory can change with the optimization level.
It is better to give up the notion that floating-point computations
are exact, and accept the fact that small rounding errors do change
the results on different implementations (and, as pointed out, even on
the same implementation with different options).
It is also worth pointing out that sometimes these small errors
accumulate into huge errors. Floating point is tricky.
cf. https://en.wikipedia.org/wiki/Loss_of_significance
Regards.