[Bug libfortran/47945] REAL(8) output conversion error on MinGW32

thenlich at users dot sourceforge.net gcc-bugzilla@gcc.gnu.org
Wed Mar 2 06:45:00 GMT 2011


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=47945

Thomas Henlich <thenlich at users dot sourceforge.net> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Severity|normal                      |minor

--- Comment #6 from Thomas Henlich <thenlich at users dot sourceforge.net> 2011-03-02 06:45:01 UTC ---
I still think it makes sense to demand that the same binary value always
converts to the same decimal number, regardless of compiler vendor or
platform.

It should even be the same for the same internal value when the variables are
of different real kinds (which it currently isn't on mingw). Consider:

real(8) :: r8
real(16) :: r16

r8 = .14285714285714286d0
r16 = r8
write(*, '(f35.32)') r8
write(*, '(f35.32)') r16
end

output:
 0.14285714285714284921875000000000
 0.14285714285714284921269268124888
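As a cross-check outside Fortran (not part of the original report): Python's
float formatting is correctly rounded, so printing the same double with 32
decimal digits reproduces the r16 line above, confirming that it, not the r8
line, is the correctly rounded result.

```python
# The double nearest to 1/7, as assigned to r8 in the Fortran example above.
x = float("0.14285714285714286")

# Correctly rounded conversion to 32 decimal digits (analogue of f35.32).
print(f"{x:.32f}")  # 0.14285714285714284921269268124888
```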

Sometimes it is necessary to specify more decimal digits in the edit
descriptor than are strictly needed, because the magnitude of the result may
vary. If the same binary value always resulted in the same decimal number, it
would be easier to compare results obtained from the same program with two
different compilers or on different platforms (a method used for program
verification).

The Fortran 2008 standard even demands this behaviour (in my interpretation):

===
10.7.2.3.7 I/O rounding mode

2. In what follows, the term "decimal value" means the exact decimal number as
given by the character string, while the term "internal value" means the number
actually stored in the processor.  For example, in dealing with the decimal
constant 0.1, the decimal value is the mathematical quantity 1/10, which has no
exact representation in binary form.  Formatted output of real data involves
conversion from an internal value to a decimal value; formatted input involves
conversion from a decimal value to an internal value.

3.  When the I/O rounding mode is UP, the value resulting from conversion shall
be the smallest representable value that is greater than or equal to the
original value. When the I/O rounding mode is DOWN, the value resulting
from conversion shall be the largest representable value that is less than or
equal to the original value. [etc]
===

In the example, 0.14285714285714284921269268124888 is the largest representable
(with 32 decimal digits) value that is less than the original value (binary
1.001001001001001001001001001001001001001001001001001 * 2^-3 = decimal
0.1428571428571428492126926812488818...).
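The DOWN (and UP) behaviour described above can be checked mechanically; a
sketch (not from the original report) using Python's decimal module, where
Decimal(x) yields the exact decimal expansion of the binary double:

```python
from decimal import Decimal, ROUND_FLOOR, ROUND_CEILING, getcontext

getcontext().prec = 60  # enough digits to hold the exact binary value

# Exact decimal expansion of the IEEE-754 double nearest to 1/7.
exact = Decimal(float("0.14285714285714286"))

# DOWN: largest 32-digit value <= the internal value.
down = exact.quantize(Decimal("1e-32"), rounding=ROUND_FLOOR)
print(down)  # 0.14285714285714284921269268124888

# UP: smallest 32-digit value >= the internal value.
up = exact.quantize(Decimal("1e-32"), rounding=ROUND_CEILING)
print(up)    # 0.14285714285714284921269268124889
```

The ROUND_FLOOR result matches the value the comment argues the F35.32 edit
descriptor should produce.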

The point is that the standard demands the closest approximation that can be
represented with the specified output precision.

The standard does not say that an implementation may truncate the result
after an arbitrary number of decimal digits (even if that number exceeds the
number of significant digits the internal bit width can actually represent).


