Created attachment 49688 [details]
bash shell script that creates the source code, compiles, and runs the test

The attached bash script creates a C++ file called int_output.cpp, then compiles and runs the program. If you run the script with no arguments you will get the following usage message:

usage: ./int_output.sh print optimize
print: is either 0 or 1
optimize: is -O2 or -O3

If print is 0 and optimize is -O3, the test will fail. Otherwise, the test will pass.

The program starts with the maximum integer, uses a stringstream to convert it to a string, and then converts it back to an integer. It then checks that the result is the maximum integer, i.e., the integer it started with. If print is 1, the values of the starting and ending integers are printed. The optimize argument determines the optimization level during the g++ compilation.

The only case that fails is when print is 0 and optimize is -O3. In addition, if one uses clang++ instead of g++, all the cases give the correct result.
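For reference, a minimal sketch of what the round trip in int_output.cpp does. This is a reconstruction, not the script's actual source; the name parse_int, the pass/fail output, and the exact I/O are illustrative. The loop body uses the expression quoted in the next comment.

#include <climits>
#include <iostream>
#include <sstream>
#include <string>

// Hand-written string-to-int conversion, as quoted below.
int parse_int(const std::string& s)
{
    int result = 0;
    std::size_t index = 0;
    while (index < s.size())
        result = 10 * result + s[index++] - '0';
    return result;
}

int main()
{
    std::stringstream ss;
    ss << INT_MAX;                    // maximum integer -> string
    int back = parse_int(ss.str());   // string -> integer
    std::cout << (back == INT_MAX ? "pass" : "fail") << '\n';
    return back == INT_MAX ? 0 : 1;
}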
I think you have a signed integer overflow happening. Take a look at:

10 * result + s[index++] - '0'

Since + and - associate left to right, 10 * result + s[index++] is evaluated first, and that intermediate sum can exceed INT_MAX before '0' is subtracted. To avoid the overflow, do instead:

10 * result + (s[index++] - '0')
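To make the overflow concrete, here is a standalone snippet (assuming 32-bit int; the variable names are illustrative) showing the last digit of INT_MAX being folded in both ways:

#include <climits>
#include <iostream>

int main()
{
    // State just before the last digit of "2147483647" (INT_MAX when
    // int is 32 bits) is folded in:
    int result = 214748364;   // accumulated from the first nine digits
    char digit = '7';         // 55 in ASCII

    // Without parentheses, 10 * result + digit is evaluated first:
    // 2147483640 + 55 exceeds INT_MAX before '0' is subtracted.
    // int bad = 10 * result + digit - '0';   // signed overflow, UB

    // With parentheses the digit value (7) is computed first, and
    // 2147483640 + 7 == INT_MAX, so nothing overflows.
    int good = 10 * result + (digit - '0');
    std::cout << (good == INT_MAX) << '\n';   // prints 1
}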
That fixed my test result. Sorry I missed that. Thanks.
There is another UB in it: if you try to parse the INT_MIN value, there is another signed integer overflow, because 0x80000000 is not representable in int, while -0x7fffffff-1 is. Better to compute the result in an unsigned type and only cast to int at the end.
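A sketch of that approach, assuming the same hand-written parser shape; the sign handling and the parse_int name are illustrative, and a well-formed in-range decimal numeral is assumed:

#include <climits>
#include <iostream>
#include <string>

// Accumulate in unsigned, so no intermediate value can trigger
// signed overflow; unsigned arithmetic is defined to wrap.
int parse_int(const std::string& s)
{
    std::size_t index = 0;
    bool negative = (s[index] == '-');
    if (negative)
        ++index;

    // INT_MIN's magnitude (one more than INT_MAX) fits in a
    // 32-bit unsigned, so the loop itself is always safe.
    unsigned result = 0;
    while (index < s.size())
        result = 10u * result + unsigned(s[index++] - '0');

    // Cast to int only at the end. Computing -(result - 1) - 1
    // mirrors the -0x7fffffff-1 trick above: result - 1 fits in
    // int even when result holds INT_MIN's magnitude.
    if (negative)
        return result == 0 ? 0 : -int(result - 1) - 1;
    return int(result);
}

int main()
{
    std::cout << (parse_int("2147483647") == INT_MAX) << '\n';   // 1
    std::cout << (parse_int("-2147483648") == INT_MIN) << '\n';  // 1
}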
Compiling with -fsanitize=undefined (as suggested by the bug reporting guidelines) would have shown the undefined behaviour.
Compiling with

g++ -fsanitize=undefined -O2 int_output.cpp -o int_output

does in fact give a very useful message. I scanned the page https://www.gnu.org/software/gcc/bugs/ and looked for things that might be related to my bug. Sorry I missed the most important flag (for my case) at the top. It might help others to have the individual compiler flags in bold or in a list. Thanks again, Brad.