This is the mail archive of the mailing list for the GCC project.
Re: Change definition of complex::norm
- To: Brad Lucier <lucier at math dot purdue dot edu>
- Subject: Re: Change definition of complex::norm
- From: Benjamin Kosnik <bkoz at redhat dot com>
- Date: Wed, 31 Oct 2001 19:15:33 -0800 (PST)
- cc: gcc at gcc dot gnu dot org, hjstein at bloomberg dot com, lucier at math dot purdue dot edu, nbecker at fred dot net, gdr at codesourcery dot com
Thanks for this mail: it was very clear and detailed.
> > 2 The effect of instantiating the template complex for any type other than
> > float, double or long double is unspecified.
> Point (2) seems to turn the issue of implementation of <complex> templates
> and operations for, e.g., int or long, into a QOI issue.
More than that. The standard specifies float, double, and long double
specializations, so there is definite room for optimizations for floating
point types, which will be the most used anyway.
For user-defined types, I think the generic complex template will fall
down. In this case, I think the smart thing to do is allow the code to
compile, but have the library stay out of the way: the member functions
used in a given translation unit would be left undefined at
link time. This allows users to define their own specializations, if they
really want to do this.
(Think about what happens if generic definitions are provided, and are
wrong for the user-defined type. The user has to resort to link trickery.)
This is related to my point about facets, from my checkin earlier today.
I'm curious as to what other C++ users think is the best policy. Should
these be treated on a case-by-case basis, or should the entire library
try to conform to some general "instantiation policy?"
> Here's a C test program compiled on x86 and sparc with
Great. The libstdc++-v3 numerics testsuite is pretty anemic at this
point: perhaps this could be added?
> The results are the same on sparc-sun-solaris28 and i686-pc-linux-gnu:
> [lucier@curie ~]$ ./test
> abs_1: 1.694881e+308
> abs_2: nan
> norm_1: 0.000000e+00
> norm_2: 4.940656e-324
> abs_2 uses the algorithm in std_complex.h; abs_1 uses the max of
> the absolute values of x and y as the divisor. Getting a NaN
> when the proper answer is finite is not a "Good Thing", so I think
> that the definition in std_complex.h should be changed.
Seems like pretty concrete proof to me.
> norm_2 uses the definition in std_complex.h (with the fixed abs, i.e.,
> abs_1). norm_1 uses the simpler, faster, algorithm for norm proposed
> by nbecker. Here, the simpler algorithm gives an answer that loses
> all precision. On the other hand, I can't judge how important it
> is that a simpler, faster, algorithm gives 0.0 as the answer instead
> of 4.940656e-324.
Good question. I doubt there is any precision in this number, but who
knows. Physicists? Gaby?