


The Alpha and denormalized numbers...


Bear with me for a minute while I explain something interesting I
believe I have noticed.

From what I've read in the Alpha Microprocessor Reference Manual, the
Alpha generates an exception during a computation when one of the
arguments is denormalized.  This is non-negotiable behavior.  If it is
the result that is denormalized, however, then the action depends on
the setting of the floating point control register (FPCR): either an
exception can be generated, or the result can be set to zero.  This
makes it possible to support denormalized calculations in software
(slow), or to just ignore them altogether.  GCC will generate the
required software support if -mieee is specified (slow!).  If it is
not, then the FPCR is set to zero denormalized results (fast but less
accurate).
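
To make the distinction concrete, here is a minimal sketch of the case
the hardware cannot handle on its own.  The constant 2.2e-308 is just
below DBL_MIN, so it is already denormalized; as I read the manual,
the multiply should trap on an Alpha compiled without -mieee, complete
(slowly) in software with -mieee, and simply print a tiny number on
other hardware.  I haven't tried it on every configuration:

    /* denorm.c
     * gcc denorm.c          -- operand trap expected (SIGFPE)
     * gcc -mieee denorm.c   -- completed in software (slow)
     */
    #include <stdio.h>

    int main(void)
    {
        volatile double tiny = 2.2e-308;  /* below DBL_MIN: denormal */
        volatile double r = tiny * 2.0;   /* denormalized operand    */
        printf("%g\n", r);
        return 0;
    }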

The key point here is that there is no way to avoid an exception if one
of the source arguments is denormalized.  The assumption is presumably
that if the FPU never generates a denormalized number (i.e. sets them
all to zero), then it will never have a denormalized number as an
argument.  This is all fine and dandy until erfc() is used.  It
returns denormalized numbers even though the hardware never generates
them (there must be some bit twiddling going on -- try erfc(27) and
you will get 5.237046e-319!).  As a result, the program then generates
an unavoidable (in hardware) exception if the result is used in any
subsequent calculation.
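
Here is a self-contained way to see it, assuming glibc's erfc()
behaves as described above (compile with something like
"gcc erfc_trap.c -lm"):

    /* erfc_trap.c -- printing the denormalized value is how the
     * 5.237046e-319 above was obtained; feeding it back into the FPU
     * is what triggers the unavoidable trap without -mieee. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x, y;

        x = erfc(27.0);                  /* ~5.2e-319: denormalized */
        printf("erfc(27)   = %e\n", x);
        y = x * 2.0;                     /* denormalized operand    */
        printf("2*erfc(27) = %e\n", y);  /* not reached w/o -mieee  */
        return 0;
    }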

I sent a note to the people who maintain glibc, and they claimed that:

>The math library is written assuming IEEE floating point handling.  If
>this is not supported it might still work but strange effects like
>this are the result.  There is nothing wrong.

So, so much for getting them to modify glibc.  I took a look at it,
and as near as I can tell the problem actually comes from a function
called __ieee754_exp(), which manually calculates the exponential bit
by bit.  My question, then, is this: in the absence of -mieee, should
GCC add code that automatically zeros arguments upon an underflow
exception?  This would round out the default hardware behavior: the
hardware zeros denormalized numbers in results, and the software would
zero them in arguments.
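
By "zero them in arguments" I mean something along these lines.  This
is purely an illustration -- the helper name ftz() and the idea that
the compiler would emit calls to it are mine, and it assumes IEEE
double layout with a 64-bit unsigned long long:

    /* Flush a denormalized double to (signed) zero using only integer
     * operations, so the FPU never sees the denormalized value. */
    static inline double ftz(double x)
    {
        union { double d; unsigned long long u; } v;

        v.d = x;
        /* exponent bits all zero + nonzero mantissa => denormalized */
        if ((v.u & 0x7ff0000000000000ULL) == 0 &&
            (v.u & 0x000fffffffffffffULL) != 0)
            v.u &= 0x8000000000000000ULL;  /* keep only the sign bit */
        return v.d;
    }

    /* i.e. where z = x*y could see a denormalized operand, the
     * compiler would emit z = ftz(x)*ftz(y) instead. */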

This would have little impact on execution time, as denormalized
numbers are only generated by the compiler (try double Temp=2.2e-308
followed by any FPU computation on it) and by software functions like
erfc() that play with the bits themselves.  Furthermore, these numbers
only need to be zeroed once; the hardware will look after the rest
with its result-zeroing behavior (i.e. unlike full -mieee support, in
which a calculation on a denormalized number usually leads to another
denormalized number, which requires more denormalized support, etc.).

Personally I find it a bit problematic when the standard math library
won't even work without specifying -mieee...

-T

PS:  For anyone using Alphas in the meantime, a quick workaround is to
specify -lcpml to your linker in order to use the Compaq math library
(if you don't want the slowdown of having to use -mieee).  Its erfc()
function does not generate denormalized numbers.
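
In other words, something like the following (myprog.c is just a
placeholder; putting -lcpml before -lm should make the linker resolve
erfc() from the Compaq library first):

    gcc myprog.c -lcpml -lm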

--
 Tyson Whitehead  (-twhitehe@uwo.ca -- WSC 140-)
 Computer Engineer                          Dept. of Applied Mathematics,
 Graduate Student- Applied Mathematics      University of Western Ontario,
 GnuPG Key ID# 0x8A2AB5D8                   London, Ontario, Canada



