This is the mail archive of the gcc@gcc.gnu.org
mailing list for the GCC project.
Re: Change definition of complex::norm
- To: Benjamin Kosnik <bkoz at redhat dot com>
- Subject: Re: Change definition of complex::norm
- From: Gabriel Dos Reis <gdr at codesourcery dot com>
- Date: 01 Nov 2001 07:05:34 +0100
- Cc: Brad Lucier <lucier at math dot purdue dot edu>, gcc at gcc dot gnu dot org, hjstein at bloomberg dot com, nbecker at fred dot net, gdr at codesourcery dot com
- Organization: CodeSourcery, LLC
- References: <Pine.SOL.3.91.1011031190000.3241Afirstname.lastname@example.org>
Benjamin Kosnik <email@example.com> writes:
| Thanks for this mail: it was very clear and detailed.
| > > 2 The effect of instantiating the template complex for any type other than
| > > float, double or long double is unspecified.
| > Point (2) seems to turn the issue of implementation of <complex> templates
| > and operations for, e.g., int or long, into a QOI issue.
| More than that. The standard specifies float, double, and long double
| specializations, so there is definite room for optimizations for floating
| point types, which will be the most used anyway.
| For user-defined types, I think the generic complex template will fall
| down. In this case, I think the smart thing to do is allow the code to
| compile, but have the library get out of the way by leaving the member
| functions used in a given translation unit undefined at link time. This
| allows users to define their own specializations, if they really want
| to do this.
They can already do so in the current situation.
| Great. The libstdc++-v3 numerics testsuite is pretty anemic at this
| point: perhaps this could be added?
Yes, but the answer may vary from one machine to another. We'll need
to add more machinery, which is the main reason why the
numerics testsuite is so anemic.
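One piece of such machinery could be a tolerance-based comparison instead of exact equality, so that a test passes on machines whose floating-point results differ in the last few ulps. The `approx` helper below is a hypothetical sketch, not code from the libstdc++-v3 testsuite:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>

// Compare two doubles within a tolerance of a few ulps, scaled by
// the magnitudes involved, rather than demanding bitwise equality.
bool approx(double got, double want, double ulps = 4.0) {
    double eps = std::numeric_limits<double>::epsilon();
    double tol = ulps * eps * std::max(std::fabs(got), std::fabs(want));
    return std::fabs(got - want) <= tol;
}
```

A testsuite built on something like this can state expected values once and still tolerate machine-to-machine variation.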
| > norm_2 uses the definition in std_complex.h (with the fixed abs, i.e.,
| > abs_1). norm_1 uses the simpler, faster, algorithm for norm proposed
| > by nbecker. Here, the simpler algorithm gives an answer that loses
| > all precision. On the other hand, I can't judge how important it
| > is that a simpler, faster, algorithm gives 0.0 as the answer instead
| > of 4.940656e-324.
| Good question. I doubt there is any precision in this number, but who
| knows?
How can one know?
| Physicists? Gaby?
When in doubt, I prefer to be conservative and deliver the best that can
be computed within the machine precision, i.e. retain the current