gfortran.dg/do_3.F90 started FAILing somewhere between rev. 128091 and rev. 128123 (see testresults for that period) on i386-linux, i686-linux and x86_64-linux. Reduced testcase is:

  program test
    integer(kind=1) :: i
    do i = -128, 127
    end do
    if (i /= -128) call abort
  end program test

It works at -O1 but fails at -O2. It's hard to tell which revision introduced the failure, but as Jan committed most of these revs, I'm adding him to the CC list. Jan, if you have any idea what's going on here, I'd be glad to have your insight.
It's also failing on s390x-ibm-linux-gnu and powerpc-ibm-aix.
Isn't this what http://gcc.gnu.org/ml/gcc/2007-09/msg00087.html (plus/minus a few emails in the thread) is about? Using -fno-strict-overflow they pass. I think we can simply add this option to the test case; or do you think -fstrict-overflow is an over-optimization for -O2?

From the manual:

  -fstrict-overflow
    Allow the compiler to assume strict signed overflow rules, depending on
    the language being compiled. For C (and C++) this means that overflow
    when doing arithmetic with signed numbers is undefined, which means that
    the compiler may assume that it will not happen. This permits various
    optimizations. For example, the compiler will assume that an expression
    like "i + 10 > i" will always be true for signed "i". This assumption is
    only valid if signed overflow is undefined, as the expression is false if
    "i + 10" overflows when using twos complement arithmetic. When this
    option is in effect any attempt to determine whether an operation on
    signed numbers will overflow must be written carefully to not actually
    involve overflow.

    The -fstrict-overflow option is enabled at levels -O2, -O3, -Os.
(In reply to comment #2)
> Isn't this what http://gcc.gnu.org/ml/gcc/2007-09/msg00087.html (plus/minus a
> few emails in the thread) is about?

Yes, you're right.

> -fstrict-overflow
> Allow the compiler to assume strict signed overflow rules, depending on the
> language being compiled.

Well, I think the "depending on the language being compiled" part is important. I think the testcase is valid Fortran, and shouldn't fail whatever optimization level you use.
> > -fstrict-overflow
> > Allow the compiler to assume strict signed overflow rules, depending on the
> > language being compiled.
>
> Well, I think the "depending on the language being compiled" is important. I
> think the testcase is valid Fortran, and shouldn't fail whatever the
> optimization level you use.

I'm not sure; for -O2 maybe not, but for -O3? If one takes overflows (of integer and floating-point variables), +/-Inf, and NaN (floating point only) fully into account, many optimizations are no longer possible. Example: should "if (i + 10 > 20)" be optimized to "if (i > 10)" or not? And if yes, starting from which optimization level?

See also (for FP math): http://gcc.gnu.org/wiki/GeertBosch
> Well, I think the "depending on the language being compiled" is important. I
> think the testcase is valid Fortran, and shouldn't fail whatever the
> optimization level you use.

FX, may I recall what you wrote in PR33296:

> "A program is prohibited from invoking an intrinsic procedure under
> circumstances where a value to be returned in a subroutine argument or
> function result is outside the range of values representable by objects of
> the specified type and type parameters, unless the intrinsic module
> IEEE_ARITHMETIC (section 14) is accessible and there is support for an
> infinite or a NaN result, as appropriate."

Although there is no intrinsic involved in the test case, I don't see the logic in considering (abuse of) overflows valid for arithmetic operations but invalid for intrinsics.

Now it would probably make sense to link the -fno-range-check option to -fno-strict-overflow (or to replace the former with the latter): if one violation is allowed, I do not see why the other should be forbidden. Note that without -fno-range-check the code gives a ton of hard errors. I stick to my opinion that, from a numerical point of view, the only valid option for exceptions is ABORT().

Last point: the two recent failures are the result of inlining that was not performed before (the other one was due to a bug that has since been fixed). Unless it is shown that such inlining causes a performance regression, I do not see the point of reverting it based on the behavior of (in)valid corner-case tests.

To see the amount of traffic generated by the gcc choice, you can follow the threads starting at http://gcc.gnu.org/ml/gcc/2006-12/msg00459.html
(In reply to comment #5)
> Although there is no intrinsic involved in the test case, I don't see the
> logic to consider (abuse of) overflows valid for arithmetic operations and
> invalid for intrinsics.

I'd be more than happy to consider the testcase invalid, so that we can optimize as we wish. I thought I remembered from a c.l.f thread that this behaviour was prohibited, but I've never been too good at reading the fine print myself.
> this behaviour was prohibited

Which behavior: folding huge(0)+1 as -huge(0)-1? Or considering huge(0)+1 as invalid (out of range) and doing an optimization based on the fact that any valid integer is smaller and never equal?
(In reply to comment #7)
>> this behaviour was prohibited
>
> considering huge(0)+1 as invalid (out of range)

The second one, in the context of a loop index. But the more I think about it, the more dubious it seems, so I'll keep my mouth shut from now on.
Subject: Bug 33391

Author: fxcoudert
Date: Wed Oct 10 13:40:50 2007
New Revision: 129209

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=129209
Log:
	PR testsuite/33391
	* gfortran.dg/do_3.F90: Run with -fwrapv.

Modified:
    trunk/gcc/testsuite/ChangeLog
    trunk/gcc/testsuite/gfortran.dg/do_3.F90
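The fix adds -fwrapv to the testcase's options, which makes signed wraparound well-defined at any optimization level. The dejagnu header would look along these lines (a sketch; the exact directives in the committed do_3.F90 may differ):

```fortran
! { dg-do run }
! { dg-options "-fwrapv" }
```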
Testcase fixed.