


Re: Change definition of complex::norm


Some comments on various parts of this thread, with no attempt
to get the attributions right.

Re:

> In fact, it would be better to do it the other way around -
> define abs(x) as norm_factor*norm(z/norm_factor)^2.

Surely, this formula is a typo/thinko.
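With norm(w) = |w|^2, that expression works out to s * (|z|/s)^4 = |z|^4 / s^3;
presumably the intended scaled forms are

    norm(z) = s^2 * norm(z/s)    and    abs(z) = s * sqrt(norm(z/s)).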

> |  Dividing
> | by __s reduces accuracy.  Computing __s can overflow when abs(__z)
> | wouldn't, 
> 
> Really?

Yes: if __x and __y both lie between DBL_MAX/2.0 and sqrt(2.)*DBL_MAX/2.0, then
abs(__z) does not overflow, but __s = abs(__x) + abs(__y) does.
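
As a minimal sketch (not from the thread; the 0.6 factor and the use of
std::hypot as the non-overflowing reference are my own choices):

    #include <cfloat>
    #include <cmath>
    #include <cstdio>

    int main()
    {
        /* Both parts lie between DBL_MAX/2.0 and sqrt(2.)*DBL_MAX/2.0,
           like the __x and __y above.  */
        double x = 0.6 * DBL_MAX;
        double y = 0.6 * DBL_MAX;

        double s = std::fabs(x) + std::fabs(y); /* 1.2*DBL_MAX, overflows to +inf */
        double a = std::hypot(x, y);            /* about 0.85*DBL_MAX, stays finite */

        std::printf("s = %g, abs(z) = %g\n", s, a);
        return 0;
    }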

> G.W Stewart reported an interesting real world experience (Matrix
> Algorithms, vol 1, page 144):
> 
>    If the scale factor in the algorithm Euclid [the above mentioned
>    algorithm] is replaced by max{|a|, |b|}, the results may be less
>    accurate on a hexadecimal machine.  The reason is that the number
> 
>        \sqrt{(a/s)^2 + (b/s)^2}
> 
>    is a little bit greater than one so that the leading three bits in its
>    representation are zero.  I discovered this fact after two days of
>    trying to figure out why an algorithm I had coded consistently
>    returned answers about a decimal digit less accurate than the
>    algorithm it was meant to replace.  Such are the minutiae of
>    computer arithmetic.
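
For reference, here is a sketch of the scaled computation under discussion (my
own illustration, not code from the thread or from Stewart's book).  With
s = |a| + |b|, as in the __s above, the radicand (a/s)^2 + (b/s)^2 lies in
[1/2, 1]; with s = max(|a|, |b|) it lies in [1, 2] and sits just above one when
|b| << |a|, which is exactly where the hexadecimal leading-zero-bit loss that
Stewart describes bites.  (And, as noted above, s = |a| + |b| can itself
overflow near DBL_MAX.)

    #include <cmath>

    /* Scaled Euclidean norm: sqrt(a^2 + b^2) computed as
       s * sqrt((a/s)^2 + (b/s)^2) so that the squared terms
       stay in [0, 1] and cannot overflow.  */
    double scaled_abs(double a, double b)
    {
        double s = std::fabs(a) + std::fabs(b);
        if (s == 0.0)
            return 0.0;               /* avoid 0/0 when a == b == 0 */
        double x = a / s;
        double y = b / s;
        return s * std::sqrt(x * x + y * y);
    }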

Yes, non-base-2 arithmetic is fun.  20 years ago (how time flies)
I wrote an elementary function library for UCSD Pascal on a base-100
machine, the TI 990.  The floating-point format was 7 base-100 digits
for the fraction part in 7 bytes, plus one byte for the sign and the
biased exponent.  Thank god for IEEE 754.

Brad Lucier

