Bug 28408 - What should be value of complex<double>(1.0,0.0) *= -1?
Summary: What should be value of complex<double>(1.0,0.0) *= -1?
Status: NEW
Alias: None
Product: gcc
Classification: Unclassified
Component: c++
Version: 4.1.1
Importance: P3 normal
Target Milestone: ---
Assignee: Not yet assigned to anyone
URL:
Keywords:
Depends on:
Blocks: 49949
Reported: 2006-07-17 10:21 UTC by Michael Elizabeth Chastain
Modified: 2021-08-09 18:28 UTC (History)
10 users

See Also:
Host: i686-pc-linux-gnu
Target: i686-pc-linux-gnu
Build: i686-pc-linux-gnu
Known to work:
Known to fail:
Last reconfirmed: 2006-09-06 17:11:56


Attachments

Description Michael Elizabeth Chastain 2006-07-17 10:21:08 UTC
Test program:

// Copyright 2006, Google Inc.  All rights reserved.
// Author: mec@google.com  (Michael Chastain)
//
// Compute (1.0, 0.0) *= -1.0

#include <complex>
#include <iostream>

std::complex<double> d_1(1.0, 0.0);

int main() {
  std::complex<double> d_m1(1.0, 0.0);
  d_m1 *= -1.0;
  std::cout << "d_m1: " << d_m1 << std::endl;
  std::cout << std::endl;
  return 0;
}

===

This gives different results with the same version of gcc at different optimization options.  All of these are with glibc 2.3.5.

hollerith:~/exp-i$ /home/mec/gcc-3.4.6/install/bin/g++ z4.cc && a.out
d_m1: (-1,0)

hollerith:~/exp-i$ /home/mec/gcc-3.4.6/install/bin/g++ -O2 z4.cc && a.out
d_m1: (-1,0)

hollerith:~/exp-i$ /fast/mec/gcc-4.0.3/install/bin/g++ z4.cc && a.out
d_m1: (-1,0)

hollerith:~/exp-i$ /fast/mec/gcc-4.0.3/install/bin/g++ -O2 z4.cc && a.out
d_m1: (-1,0)

hollerith:~/exp-i$ /home/mec/gcc-4.1.1/install/bin/g++ z4.cc && a.out
d_m1: (-1,0)

hollerith:~/exp-i$ /home/mec/gcc-4.1.1/install/bin/g++ -O2 z4.cc && a.out
d_m1: (-1,-0)

hollerith:~/exp-i$ /home/mec/gcc-4.2-20060624/install/bin/g++ z4.cc && a.out
d_m1: (-1,0)

hollerith:~/exp-i$ /home/mec/gcc-4.2-20060624/install/bin/g++ -O2 z4.cc && a.out
d_m1: (-1,-0)

===

One might argue that (-1,0) is equal to (-1,-0).  In that case, consider this code:

  std::complex<double> d_m1(1.0, 0.0);
  d_m1 *= -1.0;
  std::complex<double> d_i(std::sqrt(d_m1));

See PR 28406, which shows that sqrt((-1,0)) != sqrt((-1,-0)).  So code like the above produces materially different results with gcc 4.1.1 -O0 and gcc 4.1.1 -O2.
Comment 1 Wolfgang Bangerth 2006-07-18 13:34:19 UTC
Is this related to libstdc++ PR 20758? Or is this unrelated since it
apparently isn't in the implementation of libstdc++ but in the
optimizers?

W.
Comment 2 Paolo Carlini 2006-09-06 17:11:37 UTC
I think this difference is ultimately due to the existence of a separate *_O0 version of tree_lower_complex, in tree-complex.c. Rth added it (as part of fixing PR 20610). I believe the generic version is right (-0), and I'm hoping he wants to have a look at this issue...
Comment 3 Paolo Carlini 2006-09-06 17:11:56 UTC
I think we can confirm it.
Comment 4 Paolo Carlini 2006-09-06 18:43:33 UTC
But this issue should be recategorized: it is about lowering and optimization of complex numbers. Maybe Andrew can help with that?
Comment 5 Andrew Pinski 2006-09-06 18:50:11 UTC
There are most likely a couple of different issues here.  A front-end one, with (1.0+0.0i)*(-1) being expanded incorrectly; there is a bug about a case like that too, see PR 24581.  There might even be a libstdc++ one, depending on how the complex class is implemented.  I have not looked into libstdc++ to check yet.
Comment 6 Pawel Sikora 2006-09-06 19:18:56 UTC
(In reply to comment #2)
> I think this difference is ultimately due to the existence of a separate *_O0
> version of tree_lower_complex, in tree-complex.c. Rth added it (as part of
> fixing PR 20610). I believe the generic version is right (-0), and I'm hoping
> he wants to have a look at this issue...

Mathematically, (x+yi)u = xu + yui.
We have x=1, y=0, u=-1, so the result is -1 + (0*-1)i.
So, how does C++ define 0*-1?
Comment 7 Paolo Carlini 2006-09-06 20:23:26 UTC
Both front ends deal with 0 * -1 in the same way: the result is -0 (just try). Anyway, the issue is strange: a reduced pure C testcase (in principle identical to what the complex<double> class does) behaves exactly the other way around for -O0 vs. -O1:

#include <stdio.h>

int main()
{
  double __complex__ z;

  __real__ z = 1.0;
  __imag__ z = 0.0;

  z *= -1.0;

  printf("%e\n", __imag__(z));
}

I can't believe that both 0 and -0 are correct...
Comment 8 Andrew Pinski 2006-09-06 20:27:40 UTC
(In reply to comment #7)
> Both the front-ends deal with 0 * -1 in the same way, the result is -0 (just
> try). Anyway, the issue is crazy, a reduced pure C testcase (in principle
> identical to what the complex<double> class does) behaves exactly the other way
> 'round about -O0 vs -O1:
This is PR 24581 after all then.
Comment 9 Paolo Carlini 2006-09-06 20:59:22 UTC
(In reply to comment #8)
> This is PR 24581 after all then.

I don't know, I'm afraid there is even more to it :( 

Comment 10 Paolo Carlini 2006-09-07 00:44:42 UTC
I'm re-reading the various floating-point standards and Annexes, and I think this issue may turn out to be not-a-bug. Whether those standards make sense is another matter ;)

So, what I'm reading: C99, F.8.2 says that 0 * x and 0.0 are equivalent, i.e., transforming 0 * x to 0.0 is allowed (insofar as x is not a NaN, -0, or infinite). Therefore, it seems, the optimizers of a C / C++ compiler conforming to IEC 60559 (~ IEC 559) are allowed to do that replacement at any moment and "discard" the info about the signedness of x.

As the submitter maintains - correctly, in my opinion - when the same compiler implements complex functions sensitive to the signedness of zero, as in C99 Annex G, for example, and supposedly compatible with the same IEC 60559, how that signedness info is kept or moved around should be a serious concern...

In general, I'm under the impression that signed zero was invented to cope with specific discontinuities of specific functions and to improve the consistency of floating-point computations involving infinities (interesting discussion, as usual, in Goldberg, 2.2.3), not as something which is in every case kept conceptually distinct from 0 during a computation: as a matter of fact, in such standards 0 and -0 compare equal...
Comment 11 jsm-csl@polyomino.org.uk 2006-09-07 01:23:59 UTC
Subject: Re:  What should be value of complex<double>(1.0,0.0)
 *= -1?

On Thu, 7 Sep 2006, pcarlini at suse dot de wrote:

> I'm re-reading the various floating-point standards and Annexes and I think
> this issue may turn out to be a not-a-bug. Whether those standards make sense
> it's another matter ;) So, what I'm reading: C99, F.8.2 says that 0 * x and 0.0
> are equivalent, i.e., tranforming 0 * x to 0.0 is allowed (insofar x is not a
> NaN, -0, or infinite). Therefore, it seems, the optimizers of a C / C++

F.8 is *illustrative* of transformations that are *not* permitted.  It 
doesn't permit anything.

Comment 12 Paolo Carlini 2006-09-07 01:33:13 UTC
(In reply to comment #11)
> F.8 is *illustrative* of transformations that are *not* permitted.  It 
> doesn't permit anything.

Where do you read that in F.8.2 ?!? I read:

0 * x -> 0.0        The expressions 0 * x and 0.0 are not equivalent if
                    x is NaN, infinite, or -0

Where do you read that in all the other cases (e.g., when x is a "normal" number) 0 * x and 0.0 are not equivalent?!?
Comment 13 Paolo Carlini 2006-09-07 01:51:09 UTC
And, by the way, it's also generally untrue that F.8 is only illustrative of non-permitted transformations.  For example, a few lines above:

1 * x and x / 1 -> x         The expressions 1 * x, x / 1 and x are equivalent
                             (on IEC 60559 machines, among others)

Also, there is a general statement about that in F.8/1.
Comment 14 jsm-csl@polyomino.org.uk 2006-09-07 01:52:16 UTC
Subject: Re:  What should be value of complex<double>(1.0,0.0)
 *= -1?

On Thu, 7 Sep 2006, pcarlini at suse dot de wrote:

> > F.8 is *illustrative* of transformations that are *not* permitted.  It 
> > doesn't permit anything.
> 
> Where do you read that in F.8.2 ?!? I read:
> 
> 0 * x -> 0.0        The expressions 0 * x and 0.0 are not equivalent if
>                     x is NaN, infinite, or -0
> 
> Where do you read that in all the other cases (e.g., when x is a "normal"
> number) 0 * x and 0.0 are not equivalent?!?

Those are *examples* of when they are not equivalent.  It so happens that 
they are not an exhaustive list of examples, since 0 * -finite is -0 as 
well.  As examples, they are informative not normative.  The examples are 
listed in F.8.2 as a warning to implementors of what not to do.  The 
normative requirements in F.3 apply regardless; F.8.2 gives some 
transformations that are not valid because implementing them would cause 
the implementation to fail to implement F.3.

Comment 15 jsm-csl@polyomino.org.uk 2006-09-07 01:57:50 UTC
Subject: Re:  What should be value of complex<double>(1.0,0.0)
 *= -1?

On Thu, 7 Sep 2006, pcarlini at suse dot de wrote:

> And, by the way, it's also generally untrue that F8 is only illustrative of not
> permitted transformations.  For example, a few lines above:
> 
> 1 * x and x / 1 -> x         The expressions 1 * x, x / 1 and x are equivalent
>                              (on IEC 60559 machines, among others)

Such statements also are informative, not normative.  The normative 
requirements come from F.3 (the operations shall be the IEC 60559 
operations) and IEC 60559.  A transformation is permitted iff it conforms 
to the normative requirements under the as-if rule of 5.1.2.3.  F.8.2 
gives hints, but they are just hints; you can make a transformation iff 
you can show that it does not change the semantics required by 5.1.2.3, 
F.3 and the other normative requirements.

Comment 16 Paolo Carlini 2006-09-07 02:04:28 UTC
(In reply to comment #15)
> Such statements also are informative, not normative.  The normative 
> requirements come from F.3 (the operations shall be the IEC 60559 
> operations) and IEC 60559.

If you have IEC 60559 at hand, and it explicitly says, as normative, that 0 * -finite = -0, then I agree that this is a bug. However, I have yet to understand why F.8.2, in particular its positive statements, can be considered only illustrative, when the entire Annex F is normative and there is no indication of that.
Comment 17 bangerth@math.tamu.edu 2006-09-07 02:29:56 UTC
Subject: Re:  What should be value of complex<double>(1.0,0.0)
 *= -1?


> If you have IEC 60559 at hand, and it explicitely says, as normative, that 0 *
> -finite = -0 then, I agree that this is a bug. However, I have yet to
> understand why F.8.2, in particular the positive statements, can be considered
> only illustrative, when the entire F is normative and there are no indications
> of that.

It is true that Appendix F has "normative" in the section title, but
F.8 starts out with
  This section identifies code transformations that might subvert IEC
  60559-specified behavior, and others that do not.
I read that as "this section is illustrative". I pretty much read F.8.2 as
a list of things to watch out for. The right hand side of the table
appears to me to be cases of where for example the transformation on the
left is not valid, but I don't think it is meant as an exhaustive list of
these cases.

W.

-------------------------------------------------------------------------
Wolfgang Bangerth                email:            bangerth@math.tamu.edu
                                 www: http://www.math.tamu.edu/~bangerth/

Comment 18 Paolo Carlini 2006-09-07 02:47:19 UTC
(In reply to comment #17)
> It is true that Appendix F has "normative" in the section title, but
> F.8 starts out with

... in any case, the IEC 60559 entry in C99status reads "Broken" ;) ;)
Comment 19 Paolo Carlini 2006-09-07 09:11:02 UTC
A side note, maybe not completely obvious to everyone, clarifying the relationship to PR 24581. I understand that:

 (+0) + (-0) = +0

therefore, when in the expansion of the complex product one of the two terms of the imaginary part is +0 and the other is -0, the result doesn't show the sign. The "same" product computed as complex * real (as in this PR) must instead lead to -0. Thus, barring additional evidence, it may well be that the mess is limited to complex * real and real * complex, as stated in PR 24581, but it definitely affects both C and C++ (in different, inconsistent ways) :(
Comment 20 Jan van Dijk 2006-09-27 07:51:40 UTC
First of all, the problem is so bad that even 1*z != z when *no* optimisation is requested. Consider:

#include <iostream>
#include <limits>
#include <complex>

int main()
{
        std::complex<double> z(std::numeric_limits<double>::infinity(), 0.0);
        z*=1;
        std::cout << z << std::endl;
        return 0;
}

Using gcc-4.1.0, with -O0 this gives (inf,nan), with optimisation levels >0 it yields the expected (inf,0).

Secondly, could somebody clarify how the patch
http://gcc.gnu.org/ml/gcc-patches/2005-02/msg00560.html is related to these issues? Part of that patch seems to deal with fixing NaNs that should be Infs. Was it ever applied?

Greetings from an otherwise happy gcc-user (whose complex Bessel functions Y_n(z) and K_n(z) erroneously return NaN for |z|->0, instead of -/+Inf).
Comment 21 Paolo Carlini 2006-09-27 08:07:56 UTC
(In reply to comment #20)
> First of all, the problem is so bad that even 1*z != z when *no*
> optimisation is requested.

Yes :(

> Secondly, could somebody clarify how patch 
> http://gcc.gnu.org/ml/gcc-patches/2005-02/msg00560.html is related to these
> issues? Part of that patch seems to deal with fixing NaN's that should be 
> Infs. Was it ever applied?

Certainly, on 2005-02-11, and much more afterwards.