
Re: Change definition of complex::norm


HJSTEIN@bloomberg.com (Harvey J. Stein) writes:

[...]

| Aside from doing that, the above abs() can be improved.  Dividing by
| __s = abs(__x) + abs(__y) isn't the best thing in the world.

Oh, we could certainly write a PhD dissertation about what the best
thing in the world is.

|  Dividing
| by __s reduces accuracy.  Computing __s can overflow when abs(__z)
| wouldn't, 

Really?

| and dividing by it can cause the computation of __x*__x to
| underflow.

But that is harmless (see, for example, G. W. Stewart, Matrix Algorithms, vol. 1).
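
For concreteness, here is a minimal sketch of the scaled computation
being discussed (this is not the actual libstdc++ code; the helper name
is mine, it is written for double only, and x, y, s stand for the __x,
__y, __s above):

    #include <cmath>

    // Magnitude of x + iy, scaled by s = |x| + |y|.
    double scaled_abs(double x, double y)
    {
        double s = std::fabs(x) + std::fabs(y);
        if (s == 0.0)            // both components zero
            return 0.0;
        x /= s;
        y /= s;
        // If one of the squares underflows here, the other term is close
        // to 1, so the final result is essentially unaffected -- that is
        // the sense in which the underflow is harmless.
        return s * std::sqrt(x * x + y * y);
    }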

[...]

| Even just using the larger of abs(__x) & abs(__y) as the normalizer
| (as is done in the xlispstat code) instead of the sum of the two would
| be an improvement.

G. W. Stewart reported an interesting real-world experience (Matrix
Algorithms, vol. 1, page 144):

   If the scale factor in the algorithm Euclid [the above mentioned
   algorithm] is replaced by max{|a|, |b|}, the results may be less
   accurate on a hexadecimal machine.  The reason is that the number

       \sqrt{(a/s)^2 + (b/s)^2}

   is a little bit greater than one so that the leading three bits in its
   representation are zero.  I discovered this fact after two days of
   trying to figure out why an algorithm I had coded consistently
   returned answers about a decimal digit less accurate than the
   algorithm it was meant to replace.  Such are the minutiae of
   computer arithmetic.
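
To make the contrast concrete, here is the max-based variant suggested
above, again only a sketch under my own naming and for double.  With
s = max(|x|, |y|) the quantity sqrt((x/s)^2 + (y/s)^2) lies in
[1, sqrt(2)], i.e. just above one -- exactly the range Stewart observed
losing leading bits on a hexadecimal machine -- whereas with
s = |x| + |y| it lies in [1/sqrt(2), 1]:

    #include <algorithm>
    #include <cmath>

    // Magnitude of x + iy, scaled by s = max(|x|, |y|).
    double max_scaled_abs(double x, double y)
    {
        double s = std::max(std::fabs(x), std::fabs(y));
        if (s == 0.0)
            return 0.0;
        x /= s;
        y /= s;
        // The argument of sqrt is in [1, 2], so the result is >= 1.
        return s * std::sqrt(x * x + y * y);
    }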


[...]

| Using frexp, ldexp & modf

Those are not acceptable in the generic algorithms abs() or norm().
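
For what it is worth, here is what a frexp/ldexp-based scaling looks
like (a sketch only; the name is mine and it is tied to double).  The
rescaling by a power of two is exact, but frexp and ldexp simply do not
exist for an arbitrary value type T of a generic complex<T>, which is
why this route is closed for the generic algorithm:

    #include <algorithm>
    #include <cmath>

    double frexp_scaled_abs(double x, double y)
    {
        x = std::fabs(x);
        y = std::fabs(y);
        int e;
        std::frexp(std::max(x, y), &e);   // exponent of the larger component
        // Rescale both components by an exact power of two: no rounding error.
        x = std::ldexp(x, -e);
        y = std::ldexp(y, -e);
        return std::ldexp(std::sqrt(x * x + y * y), e);
    }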

-- Gaby

