This is the mail archive of the
gcc@gcc.gnu.org
mailing list for the GCC project.
complex arithmetic in gcc (and various standards)
- From: Jan van Dijk <jan at etpmod dot phys dot tue dot nl>
- To: gcc at gcc dot gnu dot org
- Date: Mon, 2 Oct 2006 10:54:56 +0200
- Subject: complex arithmetic in gcc (and various standards)
- Reply-to: jan at etpmod dot phys dot tue dot nl
Hello,
I was recently bitten by gcc's handling of complex multiplication. My program
is in C++, but since std::complex<T> uses C99's complex types, my questions
below apply to C as well.
In my program, I have r*c, where r is an fp type and c a complex<fp>. In my
calculation I sometimes encounter degenerate complex numbers, like c=(Inf,0).
After multiplication by a (finite) fp value r, I expected (Inf,0). Alas,
the result was (inf,nan). This problem frustrates my algorithms for
calculating complex Bessel functions for small |z|, but that aside.
I found various bug reports that deal with complex arithmetic, like
PR20758
PR24581
PR28408
I then found out that the result may even depend on the optimisation level.
Depending on that, and other factors, the evaluation is influenced by:
* tree_lower_complex_O0
* tree_lower_complex (has special code for real/imag-only numbers)
* the builtin mul-routine in libgcc2.c
* the DCE pass
-O2 gives the 'expected' results: tree_lower_complex modifies the tree in such
a way that the DCE pass removes the 'non-existent' multiplications that would
result in NaNs. The results are:
(1) 1*(Inf,0) ==(Inf,0)
(2) 1*(Inf,Inf)==(Inf,Inf)
For -O0 (the builtin is used) I get:
(3) 1*(Inf,0) ==(Inf,NaN)
(4) 1*(Inf,Inf)==(NaN,NaN)
Especially the last one is bad... An Inf is changed into a NaN for no good
reason.
IMHO, the fact that there _are_ differences is most problematic. Without
-ffast-math results should not depend (that strongly) on the optimisation
level.
I started reading some of the relevant standards, only to find out that:
* the C99 and C++ standards say *nothing* about the details of complex
multiplication
* the (non-normative) annex G of the C99 standard hints that r*(x,y) should
_not_ contain terms due to an implicit 0*i in r (see the first table in
G.5.1). That is: r*(x,y) \equiv (rx,ry), so (3) and (4) above are incorrect.
* C99 (and G.5.1, clause 2) is vague about whether (r,0) should be considered
a complex number, or not. The answer to that question determines whether the
gcc approach, where a real factor in a complex multiplication is promoted to
the complex number (r,0), is correct. As a result, it is also not clear
whether (1,0)*(Inf,Inf) \equiv (Inf,Inf) or not.
Now here are my questions:
* What standard(s) is gcc trying to implement?
- c99 (tc2)?
- its annex G?
- LIA (what version)?
* If neither of these standards says explicitly: does gcc choose to consider
any number (r,0) \equiv r in complex ops?
* If that's the case, should the builtin mul in libgcc2.c be modified to 'fix'
the -O0 case above by skipping multiplications by zero? That is, should a
0-component of a complex number be treated as a 'strong zero', as
mentioned in the LIA? (I have made that change locally.)
Note that the routine in libgcc2.c is basically a copy of the algorithm
suggested in Annex G of the C99 ISO standard document, so it should not be
changed lightly. (Warning: I am reading the May 2005 committee draft of TC2.)
Some of the bug reports mentioned above contain complaints about the relevant
standards. It seems that work on these bugs is stalled for lack of an
authoritative statement about _what_ gcc is supposed to implement. Once a
choice has been made, I, an otherwise happy user, will know what to expect and
may be able to help solve this problem.
I hope to hear,
with kind regards,
Jan van Dijk.
--
If I haven't seen further, it is by standing in the footprints of giants