This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: Irix6 long doubles implemented wrong? (27_io/ostream_inserter_arith)
- From: Alexandre Oliva <aoliva at redhat dot com>
- To: Richard Henderson <rth at redhat dot com>
- Cc: "Kaveh R. Ghazi" <ghazi at caip dot rutgers dot edu>, gcc-bugs at gcc dot gnu dot org, gcc-patches at gcc dot gnu dot org, gcc at gcc dot gnu dot org, libstdc++ at gcc dot gnu dot org, oldham at codesourcery dot com, ro at TechFak dot Uni-Bielefeld dot DE
- Date: 09 Jan 2003 02:20:52 -0200
- Subject: Re: Irix6 long doubles implemented wrong? (27_io/ostream_inserter_arith)
- Organization: GCC Team, Red Hat
- References: <200212170531.AAA15561@caip.rutgers.edu> <or4r97diei.fsf@free.redhat.lsd.ic.unicamp.br> <orisxmn2fv.fsf@free.redhat.lsd.ic.unicamp.br> <oradiw3e9k.fsf@free.redhat.lsd.ic.unicamp.br> <200212241434.JAA22361@caip.rutgers.edu> <orhed32l12.fsf@free.redhat.lsd.ic.unicamp.br> <orwulx34wh.fsf@free.redhat.lsd.ic.unicamp.br> <orptrn4lr0.fsf@free.redhat.lsd.ic.unicamp.br> <20030107221549.GR12992@redhat.com> <orptr7o91e.fsf@free.redhat.lsd.ic.unicamp.br> <20030108220455.GC27635@redhat.com> <orbs2rm1a4.fsf@free.redhat.lsd.ic.unicamp.br>
On Jan 9, 2003, Alexandre Oliva <aoliva@redhat.com> wrote:
> On Jan 8, 2003, Richard Henderson <rth@redhat.com> wrote:
>> On Wed, Jan 08, 2003 at 03:17:49PM -0200, Alexandre Oliva wrote:
>>> I've no idea what LIA-1 is, but it does have as many denormal bits
>>> as normal bits; it's just that the minimum exponent for a denormal is
>>> higher than that of a plain double, since denormals start with the
>>> higher double still being normal.
>> Huh? No it doesn't. The minimum normalized double-double is
>> { DBL_MIN_FLT, 0 }.
> Never mind; I was thinking that having a denormal in the lower double
> would make the whole thing denormal, but in this case the lower double
> definitely isn't denormal.
On third thought :-), it actually is a denormal: not in the double
representation itself, but in the sense that denormals carry less
precision than normals. Even though there is an implicit one in the
representation of the first double, if you laid the mantissa out as a
single sequence of 106 bits, that implicit one would fall within the
denormal range. The fact that you can represent the value as a normal
is just an artifact of the long double representation. Well, not
quite, since the exponent range still lets you represent it as normal;
it really depends on which aspects of denormals matter. My view is
that the loss of mantissa bits matters more than whether there is an
implied one next to the MSB of the mantissa, so that's what I'm now
trying to model. This means LDBL_MIN_FLT should be bumped up to
reflect this fact too, but I suspect that would trigger other sorts of
problems, so... There's no Right Thing (TM) to do...
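The precision cliff described above can be demonstrated numerically. The
following is a minimal sketch (not GCC's implementation) of the
double-double idea: a long double is an unevaluated sum hi + lo of two
doubles, so a comfortably-sized value carries roughly 2*53 = 106 mantissa
bits, but near DBL_MIN the low double would have to be smaller than the
smallest subnormal and rounds to zero, so the pair silently degrades to
53 bits even though hi is still a normal double:

```python
import math
import sys

DBL_MIN = sys.float_info.min          # 2**-1022, smallest normal double

# Large magnitudes: the pair (hi, lo) carries bits a single double cannot.
hi, lo = 1.0, 2.0 ** -60
assert hi + lo == hi                  # a lone double rounds lo away...
assert lo != 0.0                      # ...but the pair still records it.

# Near DBL_MIN: the spacing (ulp) of doubles around DBL_MIN is the
# smallest subnormal, 2**-1074, so any low part with |lo| <= ulp(hi)/2
# is below the smallest representable nonzero double and rounds to 0.0.
hi = DBL_MIN
assert math.ulp(hi) == 2.0 ** -1074
max_lo = math.ulp(hi) / 2             # 2**-1075: not representable
assert max_lo == 0.0                  # the low double contributes nothing
```

This is exactly the point made above: the minimum "normalized"
double-double has a zero low part, so in terms of usable mantissa bits
it behaves like a denormal long before the high double itself becomes
denormal.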
Darn, who came up with this long double representation, and why? :-(
--
Alexandre Oliva Enjoy Guaraná, see http://www.ic.unicamp.br/~oliva/
Red Hat GCC Developer aoliva@{redhat.com, gcc.gnu.org}
CS PhD student at IC-Unicamp oliva@{lsd.ic.unicamp.br, gnu.org}
Free Software Evangelist Professional serial bug killer