Bug 15437

Summary: int vs const int computation: different answers
Product: gcc
Component: c
Reporter: Lani Bateman <lani>
Assignee: Not yet assigned to anyone <unassigned>
Status: RESOLVED DUPLICATE
Severity: critical
Priority: P2
Version: 3.2
Target Milestone: ---
CC: gcc-bugs
Host:
Target:
Build:
Known to work:
Known to fail:
Last reconfirmed:

Description Lani Bateman 2004-05-14 16:22:47 UTC
#include <iostream>

int main() {
  int x = 1000;
  const int y = 1000;

  std::cerr << "f(x) = " << (int)(x * 0.3) << std::endl;
  std::cerr << "f(y) = " << (int)(y * 0.3) << std::endl;
}

/*
Produces Output:

f(x) = 299
f(y) = 300

??????????????????????
*/

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- System type: Red Hat Linux on an Intel machine
- No special compile options (e.g. g++ main.cpp)
Comment 1 Andrew Pinski 2004-05-14 16:54:37 UTC
Not a bug. The difference comes from constant propagation (which is applied in the
const int case): on x86 the floating-point registers carry more precision than a
double, so the run-time computation can round differently from the compile-time one.
See PR 323 for more information.

*** This bug has been marked as a duplicate of 323 ***
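
A minimal sketch of the mechanism, assuming an x86 target where long double is
the 80-bit x87 format: 0.3 has no exact double representation, so the
compile-time folded constant and the extended-precision run-time product land
on opposite sides of the truncation:

#include <iostream>

int main() {
  // What constant folding computes: the product rounded to a 64-bit double.
  double folded = (double)(0.3 * 1000);            // exactly 300.0
  // What the x87 run-time path keeps: the error in double(0.3) survives
  // in the 80-bit registers, so the product stays just below 300.
  long double extended = (long double)0.3 * 1000;  // 299.999999999999988...
  std::cerr << (int)folded   << std::endl;         // 300
  std::cerr << (int)extended << std::endl;         // 299
}
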
Comment 2 Lani Bateman 2004-05-14 17:30:06 UTC
Subject: Re:  int vs const int computation: different answers

Okay. It may or may not be intended, but if one gives a different answer
than the other, which is totally unexpected as far as I'm concerned, then it's
still a bug. This should not happen. Our company is doing computations where
this kind of thing really matters. However, we have something of a workaround,
so I won't pursue the issue any further.

Thanks for responding so promptly though.

Lani

Comment 3 Wolfgang Bangerth 2004-05-14 18:24:03 UTC
I think the general answer here is that converting doubles to integers
is not a stable operation in the vicinity of the integer you expect. I
agree that this leads to a surprising result in your particular case, but
the same effect would be just as easy to trigger in slightly more complex
cases, and it would be very hard to make the compiler stable against these
kinds of things.
 
W. 
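
To see how sharp this instability is, consider a value a single ulp below the
integer you expect; truncation flips the answer even though the two values are
indistinguishable for most purposes (a small illustration, not code from the
report):

#include <iostream>
#include <cmath>

int main() {
  double almost = std::nextafter(300.0, 0.0);  // largest double below 300
  std::cerr << (int)300.0  << std::endl;       // 300
  std::cerr << (int)almost << std::endl;       // 299: one ulp flips the result
}
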
Comment 4 Lani Bateman 2004-05-14 20:05:48 UTC
Subject: Re:  int vs const int computation: different answers

I guess the only point we're trying to make is that using a different data
structure for const int as opposed to int is a bad idea. Whatever the rounding
issues, they would at least be consistent had things been done the same way in
both cases. I agree with you, though, that the double case is flaky anyway. The
issue, in that particular example, had to do with consistency.

Example:

We originally used a const int to compute the height of a 2D array (by
multiplying by a double that represents a percentage of the height). Now we
want to read the height in as a command-line argument (as an int) and compare
the two to determine whether the read-in height is compatible with previously
stored results. Whether or not the height computation is rounded to the "most
correct" numeric value doesn't matter to us, since it doesn't affect what we
are doing with the 2D array (i.e. it's just a general ballpark
value). However, since the computations were done inconsistently, they are now
incomparable. It doesn't make sense, then, that we should have to play all
sorts of games to fake one computation into producing the results of the
other when, in theory, there should be no difference between the two
representations. They are both "int" and should be treated the same way.

Comment 5 Wolfgang Bangerth 2004-05-14 20:28:20 UTC
Well, if they are integers that happen to be represented as doubles,
then you should use round(), not the truncating conversion to int.
The general approach of comparing doubles by converting them to integers
first is wrong. Comparing floating-point values always has to be done
using floating-point arithmetic, making sure that the difference is less
than some epsilon.
 
W. 
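
A minimal sketch of both suggestions (the helper names are illustrative, not
from the thread):

#include <cmath>

// Round to the nearest integer instead of truncating, for doubles that
// are meant to hold integer values.
int as_int(double d) { return (int)std::lround(d); }

// Compare floating-point values against a tolerance instead of
// converting them to int first.
bool nearly_equal(double a, double b, double eps = 1e-9) {
  return std::fabs(a - b) <= eps;
}

With these, as_int(1000 * 0.3) yields 300 whether the run-time product comes
out as 299.999... or exactly 300.0.
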
Comment 6 Lani Bateman 2004-05-14 20:36:00 UTC
Subject: Re:  int vs const int computation: different answers

Hi,
   If a compiler uses a banana to store an 'int',
   it should use a banana to store a 'const int'

   The const-ness should not affect the data, just the semantics.

I understand computers can't store floating point numbers perfectly.
I understand there are rounding issues.  I understand comparisons have to be 
done with care.  My issue is not about any of those things.  I understand on
one computer I might get '299', on another, get '300'.  If I'm lazy, and don't
do it carefully, it's my fault.

But here we are not talking about doubles.  We are talking about 'INT' vs
'CONST INT'.   That's all.  

Replacing an INT with a const int should ALWAYS give back the same answer,
in any computation. (The same answer, that is, if compiled with the same
compiler on the same architecture.) The rounding issues are beside the point.

I am not insisting doubles behave like ints or doubles behave identically
between compilers/architectures etc.

I am insisting 'INT' behaves like 'CONST INT'

Comment 7 Andreas Schwab 2004-05-14 22:11:22 UTC
If you want precise results, you shouldn't start with imprecise numbers.
Comment 8 Wolfgang Bangerth 2004-05-14 22:13:20 UTC
I see your point, but I don't think you are entirely right. If you
store something in a 'const int', you tell the compiler that the value
of the variable won't change. The compiler can then do some simple
transformations based on this knowledge, without having to resort
to doing these things at run-time. Note that, within the margin of
floating-point error, compiler-internal transformations and run-time
behavior might differ.

If you just store it in an 'int', the compiler has to do some serious
investigation to figure out whether a variable is set only once or
not. Only if it is sure the variable can't change can it do these
transformations. For gcc, this investigation is only performed if you
ask for optimization. Thus, you _do_ get the same result if you ask for it:
 
g/x> c++ x.cc ; ./a.out  
f(x) = 299 
f(y) = 300 
 
g/x> c++ x.cc -O2 ; ./a.out  
f(x) = 300 
f(y) = 300 
 
In the first case, one computation is performed at compile-time, the other
at run-time. I understand that this seems weird and maybe frustrating,
but there are good reasons for it.
 
W. 
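
For builds that cannot use -O2, GCC's documented -ffloat-store option forces
floating-point values out of the x87 registers into memory, discarding the
excess precision; routing the intermediate result through a volatile double
has a similar effect in source. A workaround sketch, assuming the x87
behavior described above:

#include <iostream>

int main() {
  int x = 1000;
  // Storing the product as a 64-bit double rounds away the x87 excess
  // precision before truncation, so this prints 300 even without -O2.
  volatile double product = x * 0.3;
  std::cerr << "f(x) = " << (int)product << std::endl;
}
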
Comment 9 Lani Bateman 2004-05-14 22:35:51 UTC
Subject: Re:  int vs const int computation: different answers

Thank you for understanding what we were trying to say. This is the response we
were trying to find.

> In the first case, one computation is performed at compile- the other at 
> run-time. I understand that this seems weird here and maybe frustrating, 
> but there are good reasons for this. 

I appreciate that there are good reasons for this, and understand the
reasons. However, in principle no optimization should cause int and
const int to behave differently. It's far too counter-intuitive.
Compile-time effects must be consistent with run-time behavior;
otherwise the optimization breaks rule #1 of optimization: an
optimization should not change the result of the computation. This is
the only point we have been trying to make all along.

Thanks for the -O2 trick, we'll use that.


Sorry for the confusion, and thanks for a good discussion.
