This is the mail archive of the
gcc-patches@gcc.gnu.org
mailing list for the GCC project.
Re: fortran Digest 7 Apr 2006 23:12:21 -0000 Issue 831
- From: Steve Ellcey <sje at cup dot hp dot com>
- To: gcc-patches at gcc dot gnu dot org, fortran at gcc dot gnu dot org, jakub at redhat dot com, sgk at troutmask dot apl dot washington dot edu, jblomqvi at cc dot hut dot fi
- Date: Fri, 7 Apr 2006 17:02:10 -0700 (PDT)
- Subject: Re: fortran Digest 7 Apr 2006 23:12:21 -0000 Issue 831
> 2006-04-07 Jakub Jelinek <jakub@redhat.com>
>
> * io/write.c (MIN_FIELD_WIDTH, STR, STR1): Define.
> (output_float): Increase buffer sizes for IEEE quad and IBM extended
> long double.
> (write_real): Output REAL(16) as 1PG43.34E4 rather than 1PG40.31E4.
I have no problem with the patch, but I know it will not fix the
test failures on ia64-hp-hpux11.23 (and possibly some other REAL*16
platforms). This change should fix the problem writing out LDBL_MAX,
but on ia64-hp-hpux11.23 there is no way to print out LDBL_MIN and then
read it back in and get a good value (regardless of format sizes). The
problem is that LDBL_MIN (in the system header file) is:
#define LDBL_MIN 3.36210314311209350626267781732175261E-4932L
but when you assign that to a REAL*16 variable and then print it out you
get:
3.3621031431120935062626778173217526000000e-4932
due to rounding when the value was stored as a REAL*16. The trailing 1
became a 0 because the LDBL_MIN constant cannot be represented exactly
as a REAL*16 value; it was rounded down to something between the two
values shown above, and when that value was printed the output was in
turn rounded down to ...60 instead of ...61.
Then when you try to read that back in, you get an error because the
value is less than LDBL_MIN. My local math expert says that instead of
writing/reading LDBL_MIN, I should write/read (LDBL_MIN * (1 +
LDBL_EPSILON)), but I don't know how we can do that in Fortran.
Steve Ellcey
sje@cup.hp.com