The following code, compiled with -m32 (alternatively, on a 32-bit system), shows wrong output with gcc 8.3.0 and gcc 9.2.1. gcc 7.4.0 is not affected. This leads to a possible RCE (remote code execution) in at least one real-world executable.

    #include <stdio.h>

    void main(void)
    {
        char *a = 0xf3e0080c;
        size_t n = 235429897;
        char *b = a + n;
        printf("%p %p %d %d\n", a, a + n, a > a + n, a > b);
    }

output with gcc 8.3.0 and 9.2.1:
    0xf3e0080c 0x1e86815 0 1
output with gcc 7.4.0:
    0xf3e0080c 0x1e86815 1 1
output with clang 8.0.1:
    0xf3e0080c 0x1e86815 1 1
expected output:
    0xf3e0080c 0x1e86815 1 1
32-bit ABIs have a 32-bit ssize_t, which means you can really only describe 31 bits. This makes wrapping the pointer around undefined.
Is ssize_t C99? Could you point to the spec so that any reader can verify that? And by UB you mean that gcc sometimes gives 0 and sometimes 1? Even if it's UB, the behavior should be consistent. Since this is a real-world issue: how can I reliably detect whether 'p + n' would overflow, if the checks sometimes work and sometimes don't?
Sorry, I meant ptrdiff_t.
Also, this was a broken-ABI mistake a long time ago. Comparing, other than for equality, outside an array's bounds (besides one past the end) is also undefined behavior. Undefined behavior does NOT need to be consistent, even at runtime.
I mean: comparisons other than equals or not-equals on pointers outside of array bounds are undefined.
You can use -fwrapv-pointer to make pointer wrapping defined. In C you are only allowed to use relational comparisons on pointers into the same object. On x86 Linux there is no valid object at address zero, so an object cannot start at 0xf3e0080c, be of your size, and wrap around (and including) zero. The optimization being performed rewrites a > a + n as 0 > n, which is quite useful in other contexts.
Thanks for the explanations :-)
Here is a good blog post about pointer overflow: https://blog.regehr.org/archives/1395