This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: denormals/subnormals are heading for extinction
- To: crosby at qwes dot math dot cmu dot edu, dewar at gnat dot com
- Subject: Re: denormals/subnormals are heading for extinction
- From: dewar at gnat dot com
- Date: Thu, 16 Aug 2001 19:36:29 -0400 (EDT)
- Cc: gcc at gcc dot gnu dot org, trt at cs dot duke dot edu
<<Then why are hardware folks so annoyed, duking it out and trying to get rid
of them? If it was as easy as you seem to proclaim, then tell me why the
hardware people don't like denorms?
Because they tend to require an extra stage in the pipeline in some cases.
Hardware people are notorious for wanting to give you fast floating-point
with sloppy results (some truly appalling floating-point schemes have
been perpetrated by hardware people, who are often on the most extreme
side of the -ffast-math debate :-)). For some examples, see
chapter 5 of my book. One quick example is the original floating-point
from IBM: 32-bit hexadecimal normalization (itself a major menace),
truncation rather than rounding, and no guard digit. The lack of a guard digit
was so horrible that IBM finally agreed to install a guard digit in
the field on all 360's everywhere.
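For anyone who has not seen why a missing guard digit is so damaging, here is
a quick worked illustration (in 3-digit decimal arithmetic for readability,
not IBM's actual hexadecimal format):

```
  Compute 1.00 - 0.999.

  Without a guard digit, the smaller operand is truncated to three
  digits when its exponent is aligned:

      1.00
    - 0.99      (0.999 loses its last digit during alignment)
    ------
      0.01      true answer is 0.001; relative error 900%

  With one guard digit, the shifted operand keeps an extra digit:

      1.000
    - 0.999
    -------
      0.001     exact
```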
There is no reason to think of hardware designers as necessarily
knowledgeable in floating-point; in fact, in my experience, the
contrary is often true.
<<I am unqualified to judge one way or another. Nor did I.
That sounds like you do not understand the important advantages of denormals.
Again for a simple explanation, see chapter 5 of my book.
<<Thus, the decision isn't whether they slow down a program or not, but are
they worth the effort to implement.
Most certainly yes, and that is why all major architectures, with only very
few exceptions, do provide for handling of denormals.
<<You seemed to be pretending that there is no cost to denormals. There is.
Of course there is a cost. What the IEEE committee decided after much debate
(you might want to read the debate; there are several papers from the time
that outline it -- again, references can be found in my book --
though you probably have to go to a library, since I doubt you can find this
stuff online) is that yes, denormals are worth the cost.
No one ever said there is no cost to the implementation (having written
several full IEEE software implementations, I know very well what the cost
is), where did you get that idea?