Test program:

// Copyright 2006, Google Inc.  All rights reserved.
// Author: mec@google.com  (Michael Chastain)
//
// Compute (1.0, 0.0) *= -1.0

#include <complex>
#include <iostream>

std::complex<double> d_1(1.0, 0.0);

int main() {
  std::complex<double> d_m1(1.0, 0.0);
  d_m1 *= -1.0;
  std::cout << "d_m1: " << d_m1 << std::endl;
  std::cout << std::endl;
  return 0;
}

===

This gives different results with the same version of gcc at different optimization options. All of these are with glibc 2.3.5.

hollerith:~/exp-i$ /home/mec/gcc-3.4.6/install/bin/g++ z4.cc && a.out
d_m1: (-1,0)
hollerith:~/exp-i$ /home/mec/gcc-3.4.6/install/bin/g++ -O2 z4.cc && a.out
d_m1: (-1,0)
hollerith:~/exp-i$ /fast/mec/gcc-4.0.3/install/bin/g++ z4.cc && a.out
d_m1: (-1,0)
hollerith:~/exp-i$ /fast/mec/gcc-4.0.3/install/bin/g++ -O2 z4.cc && a.out
d_m1: (-1,0)
hollerith:~/exp-i$ /home/mec/gcc-4.1.1/install/bin/g++ z4.cc && a.out
d_m1: (-1,0)
hollerith:~/exp-i$ /home/mec/gcc-4.1.1/install/bin/g++ -O2 z4.cc && a.out
d_m1: (-1,-0)
hollerith:~/exp-i$ /home/mec/gcc-4.2-20060624/install/bin/g++ z4.cc && a.out
d_m1: (-1,0)
hollerith:~/exp-i$ /home/mec/gcc-4.2-20060624/install/bin/g++ -O2 z4.cc && a.out
d_m1: (-1,-0)

===

One might argue that (-1,0) is equal to (-1,-0). In that case, consider this code:

std::complex<double> d_m1(1.0, 0.0);
d_m1 *= -1.0;
std::complex<double> d_i(sqrt(d_m1));

See PR 28406, which shows that sqrt((-1,0)) != sqrt((-1,-0)). So code like the above produces materially different results with gcc 4.1.1 -O0 and gcc 4.1.1 -O2.
Is this related to libstdc++ PR 20758? Or is this unrelated since it apparently isn't in the implementation of libstdc++ but in the optimizers? W.
I think this difference is ultimately due to the existence of a separate *_O0 version of tree_lower_complex, in tree-complex.c. Rth added it (as part of fixing 20610), I believe the generic version is right (-0), and I'm hoping he wants to have a look at this issue...
I think we can confirm it.
But this issue should be recategorized: it is about lowering and optimization of complex numbers. Maybe Andrew can help with that?
There are most likely a couple of different issues here: a front-end one, with (1.0+0.0i)*(-1) being expanded incorrectly (there is a bug about a case like that too; see PR 24581). There might even be a libstdc++ one as well, all depending on how the complex class is implemented. I have not looked into libstdc++ to check yet.
(In reply to comment #2)
> I think this difference is ultimately due to the existence of a separate *_O0
> version of tree_lower_complex, in tree-complex.c. Rth added it (as part of
> fixing 20610), I believe the generic version is right (-0), and I'm hoping
> he wants to have a look at this issue...

Mathematically, (x+yi)*u = xu + yui. We have x=1, y=0, u=-1, so the result is -1 + (0*-1)i. So, how does C++ define 0*-1?
Both front ends deal with 0 * -1 in the same way: the result is -0 (just try). Anyway, the issue is crazy; a reduced pure C testcase (in principle identical to what the complex<double> class does) behaves exactly the other way 'round about -O0 vs -O1:

#include <stdio.h>

int main() {
  double __complex__ z;
  __real__ z = 1.0;
  __imag__ z = 0.0;
  z *= -1.0;
  printf("%e\n", __imag__(z));
}

I can't believe that both 0 and -0 are correct...
(In reply to comment #7)
> Both front ends deal with 0 * -1 in the same way: the result is -0 (just
> try). Anyway, the issue is crazy; a reduced pure C testcase (in principle
> identical to what the complex<double> class does) behaves exactly the other
> way 'round about -O0 vs -O1:

This is PR 24581 after all then.
(In reply to comment #8)
> This is PR 24581 after all then.

I don't know, I'm afraid there is even more to it :(
I'm re-reading the various floating-point standards and annexes and I think this issue may turn out to be a not-a-bug. Whether those standards make sense is another matter ;) So, what I'm reading: C99, F.8.2 says that 0 * x and 0.0 are equivalent, i.e., transforming 0 * x to 0.0 is allowed (insofar as x is not a NaN, -0, or infinite). Therefore, it seems, the optimizers of a C / C++ compiler conforming to IEC 60559 (~ IEC 559) are allowed to do that replacement at any moment and "discard" the info about the signedness of x.

As the submitter maintains, correctly in my opinion, when the same compiler implements complex functions sensitive to the signedness of zero, as in C99 Annex G, for example, and supposedly compatible with the same IEC 60559, how that signedness info is kept or moved around should be a serious concern...

In general, I'm under the impression that signed zero was invented to cope with specific discontinuities of specific functions and to improve the consistency of floating-point computations involving infinities (interesting discussion, as usual, in Goldberg, 2.2.3), not as something which is in every case kept conceptually distinct from 0 during a computation: as a matter of fact, in such standards 0 and -0 compare equal...
Subject: Re: What should be value of complex<double>(1.0,0.0) *= -1?

On Thu, 7 Sep 2006, pcarlini at suse dot de wrote:

> I'm re-reading the various floating-point standards and annexes and I think
> this issue may turn out to be a not-a-bug. Whether those standards make sense
> is another matter ;) So, what I'm reading: C99, F.8.2 says that 0 * x and 0.0
> are equivalent, i.e., transforming 0 * x to 0.0 is allowed (insofar as x is
> not a NaN, -0, or infinite). Therefore, it seems, the optimizers of a C / C++

F.8 is *illustrative* of transformations that are *not* permitted. It doesn't permit anything.
(In reply to comment #11)
> F.8 is *illustrative* of transformations that are *not* permitted. It
> doesn't permit anything.

Where do you read that in F.8.2?!? I read:

  0 * x -> 0.0    The expressions 0 * x and 0.0 are not equivalent if
                  x is NaN, infinite, or -0

Where do you read that in all the other cases (e.g., when x is a "normal" number) 0 * x and 0.0 are not equivalent?!?
And, by the way, it's also generally untrue that F.8 is only illustrative of non-permitted transformations. For example, a few lines above:

  1 * x and x / 1 -> x    The expressions 1 * x, x / 1 and x are equivalent
                          (on IEC 60559 machines, among others)

Also, there is a general statement about that in F.8/1.
Subject: Re: What should be value of complex<double>(1.0,0.0) *= -1?

On Thu, 7 Sep 2006, pcarlini at suse dot de wrote:

> > F.8 is *illustrative* of transformations that are *not* permitted. It
> > doesn't permit anything.
>
> Where do you read that in F.8.2?!? I read:
>
>   0 * x -> 0.0    The expressions 0 * x and 0.0 are not equivalent if
>                   x is NaN, infinite, or -0
>
> Where do you read that in all the other cases (e.g., when x is a "normal"
> number) 0 * x and 0.0 are not equivalent?!?

Those are *examples* of when they are not equivalent. It so happens that they are not an exhaustive list of examples, since 0 * -finite is -0 as well. As examples, they are informative, not normative. The examples are listed in F.8.2 as a warning to implementors of what not to do. The normative requirements in F.3 apply regardless; F.8.2 gives some transformations that are not valid because implementing them would cause the implementation to fail to implement F.3.
Subject: Re: What should be value of complex<double>(1.0,0.0) *= -1?

On Thu, 7 Sep 2006, pcarlini at suse dot de wrote:

> And, by the way, it's also generally untrue that F.8 is only illustrative of
> non-permitted transformations. For example, a few lines above:
>
>   1 * x and x / 1 -> x    The expressions 1 * x, x / 1 and x are equivalent
>                           (on IEC 60559 machines, among others)

Such statements also are informative, not normative. The normative requirements come from F.3 (the operations shall be the IEC 60559 operations) and IEC 60559. A transformation is permitted iff it conforms to the normative requirements under the as-if rule of 5.1.2.3. F.8.2 gives hints, but they are just hints; you can make a transformation iff you can show that it does not change the semantics required by 5.1.2.3, F.3 and the other normative requirements.
(In reply to comment #15)
> Such statements also are informative, not normative. The normative
> requirements come from F.3 (the operations shall be the IEC 60559
> operations) and IEC 60559.

If you have IEC 60559 at hand, and it explicitly says, as normative, that 0 * -finite = -0, then I agree that this is a bug. However, I have yet to understand why F.8.2, in particular the positive statements, can be considered only illustrative, when the entire F is normative and there are no indications of that.
Subject: Re: What should be value of complex<double>(1.0,0.0) *= -1?

> If you have IEC 60559 at hand, and it explicitly says, as normative, that 0 *
> -finite = -0, then I agree that this is a bug. However, I have yet to
> understand why F.8.2, in particular the positive statements, can be considered
> only illustrative, when the entire F is normative and there are no indications
> of that.

It is true that Appendix F has "normative" in the section title, but F.8 starts out with

  This section identifies code transformations that might subvert
  IEC 60559-specified behavior, and others that do not.

I read that as "this section is illustrative". I pretty much read F.8.2 as a list of things to watch out for. The right-hand side of the table appears to me to be cases where, for example, the transformation on the left is not valid, but I don't think it is meant as an exhaustive list of these cases.

W.

-------------------------------------------------------------------------
Wolfgang Bangerth              email:            bangerth@math.tamu.edu
                               www: http://www.math.tamu.edu/~bangerth/
(In reply to comment #17)
> It is true that Appendix F has "normative" in the section title, but
> F.8 starts out with ...

In any case, the IEC 60559 entry in C99status reads "Broken" ;) ;)
A side note, maybe not completely obvious to everyone, clarifying the relationship to 24581. I understand that:

  (+0) + (-0) = +0

Therefore, when in the expansion of the complex product one of the two terms of the imaginary part is +0 and the other -0, the result doesn't show the sign. The "same" product computed as complex * real (like in this PR) must instead lead to -0. Thus, barring additional evidence, it may well be that the mess is limited to complex * real and real * complex, as stated in 24581, but it definitely affects both C and C++, in different, inconsistent ways :(
First of all, the problem is so bad that even 1*z != z when *no* optimisation is requested. Consider:

#include <iostream>
#include <limits>
#include <complex>

int main() {
  std::complex<double> z(std::numeric_limits<double>::infinity(), 0.0);
  z *= 1;
  std::cout << z << std::endl;
  return 0;
}

Using gcc-4.1.0, with -O0 this gives (inf,nan); with optimisation levels > 0 it yields the expected (inf,0).

Secondly, could somebody clarify how the patch http://gcc.gnu.org/ml/gcc-patches/2005-02/msg00560.html is related to these issues? Part of that patch seems to deal with fixing NaNs that should be Infs. Was it ever applied?

Greetings from an otherwise happy gcc user (whose complex Bessel functions Y_n(z) and K_n(z) erroneously return NaN for |z| -> 0, instead of -/+Inf).
(In reply to comment #20)
> First of all, the problem is so bad that even 1*z != z when *no*
> optimisation is requested.

Yes :(

> Secondly, could somebody clarify how the patch
> http://gcc.gnu.org/ml/gcc-patches/2005-02/msg00560.html is related to these
> issues? Part of that patch seems to deal with fixing NaNs that should be
> Infs. Was it ever applied?

Certainly, on 2005-02-11, and much more afterwards.