This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug rtl-optimization/323] optimized code gives strange floating point results
- From: "pepalogik at seznam dot cz" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: 22 Jun 2008 17:00:04 -0000
- Subject: [Bug rtl-optimization/323] optimized code gives strange floating point results
- References: <bug-323-4315@http.gcc.gnu.org/bugzilla/>
- Reply-to: gcc-bugzilla at gcc dot gnu dot org
------- Comment #114 from pepalogik at seznam dot cz 2008-06-22 16:59 -------
(In reply to comment #113)
> It is available when storing a result to memory.
Yes, but that requires quite a complicated workaround (solution (4) in my
comment #109). By that logic, you could equally say that the IEEE 754 double
precision type is "available" even on a processor without any FPU, because it
can be emulated in integer arithmetic.
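The store-to-memory workaround amounts to rounding every intermediate result to the narrower format. A minimal sketch of that idea (my own illustration, not from the bug report), using double as the "wide" register format and single as the "narrow" memory format, shows how per-operation rounding changes a result:

```python
import struct

def to_single(x: float) -> float:
    """Round a Python float (IEEE double) to single precision and back,
    mimicking the effect of storing a wide intermediate to memory."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

a, b = 1.0, 2.0 ** -24  # b is exactly half an ulp of 1.0 in single precision

# Wide intermediates, one final rounding: (a + b) + b is exact in double,
# and 1 + 2^-23 is exactly representable in single.
wide = to_single((a + b) + b)

# Narrow rounding after every operation: a + b is a tie in single
# precision and rounds to even (1.0); the second addition then ties
# to 1.0 again, so both additions are lost.
narrow = to_single(to_single(a + b) + b)
```

Here `wide` is `1 + 2**-23` while `narrow` is `1.0`, which is exactly the kind of divergence between register-precision and memory-precision results that this bug is about.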
Moreover, if we assess things pedantically, workaround (4) still doesn't fully
conform to the IEEE single/double precision types, because the problem of
double rounding of denormals remains.
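The denormal double-rounding problem can be made concrete with exact rational arithmetic. The following sketch (the helper names and the chosen value are mine) simulates first rounding to a 53-bit significand with an unbounded exponent range, as an x87 register does with precision control set to 53 bits, and then rounding to double's subnormal grid on store; for a value just above half the smallest subnormal, the two-step rounding disagrees with a single direct rounding:

```python
from fractions import Fraction

def round_sig(x: Fraction, p: int) -> Fraction:
    """Round x > 0 to p significant bits, round-to-nearest-even,
    with an unbounded exponent range (register-style rounding)."""
    e = 0
    while x >= 2 ** p:
        x /= 2
        e += 1
    while x < 2 ** (p - 1):
        x *= 2
        e -= 1
    # Python's round() on a Fraction breaks ties to even.
    return Fraction(round(x)) * Fraction(2) ** e

def round_grid(x: Fraction, ulp: Fraction) -> Fraction:
    """Round x to the nearest multiple of ulp, ties to even."""
    return round(x / ulp) * ulp

ulp = Fraction(1, 2 ** 1074)  # spacing of double subnormals

# Just above half the smallest subnormal: 2^-1075 + 2^-1128.
v = Fraction(1, 2 ** 1075) + Fraction(1, 2 ** 1128)

# One direct rounding sees the extra bit and rounds up to 2^-1074.
direct = round_grid(v, ulp)

# Rounding to 53 significant bits first discards that bit (it is a tie,
# broken to even), so the store then sees an exact halfway case and
# rounds to even: zero.
two_step = round_grid(round_sig(v, 53), ulp)
```

So `direct` is the smallest subnormal while `two_step` is zero; storing to memory cannot repair this, because the damage is done by the first, wide-exponent rounding.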
> The IEEE754-1985 allows this. Section 4.3: "Normally, a result is rounded to
> the precision of its destination. However, some systems deliver results only to
> double or extended destinations. On such a system the user, which may be a
> high-level language compiler, shall be able to specify that a result be rounded
> instead to single precision, though it may be stored in the double or extended
> format with its wider exponent range. [...]"
> [...]
> AFAIK, the IEEE754-1985 standard was designed from the x87
> implementation, so it would have been very surprising that x87 didn't conform
> to IEEE754-1985.
So it seems I was wrong, but then the IEEE754-1985 standard is also quite
"wrong" here.
> > Do you mean that on Windows, long double has (by default) no more precision
> > than double? I don't think so (it's confirmed by my experience).
> I don't remember my original reference, but here's a new one:
> http://msdn.microsoft.com/en-us/library/aa289157(vs.71).aspx
> In fact, this depends on the architecture. I quote: "x86. Intermediate
> expressions are computed at the default 53-bit precision with an extended range
> [...]"
I quote, too:
"Applies To
Microsoft® Visual C++®"
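For what the quoted "53-bit precision with an extended range" buys you: the significand is rounded as for double, but intermediates keep x87's wider exponent range, so a computation whose intermediate overflows double can still produce the right final result. A small sketch of my own, using exact rational arithmetic as a stand-in for the extended exponent range:

```python
from fractions import Fraction

x = 1e300

# Pure double intermediates: x * x overflows to infinity, and the
# final division cannot recover the lost magnitude.
overflowed = (x * x) / x

# With a wider exponent range for the intermediate, x * x stays
# finite and (x * x) / x comes back to exactly x.  Exact rational
# arithmetic plays the role of the extended range here.
exact = (Fraction(x) * Fraction(x)) / Fraction(x)
recovered = float(exact)
```

`overflowed` is `inf` while `recovered` equals `x`, which is why Visual C++'s x86 default of 53-bit precision with extended range is not the same thing as computing strictly in double.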
--
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323