This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.



[Bug c/67999] Wrong optimization of pointer comparisons


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999

--- Comment #10 from Daniel Micay <danielmicay at gmail dot com> ---
(In reply to Florian Weimer from comment #7)
> If this is not a GCC bug and it is the responsibility of allocators not to
> produce huge objects, do we also have to make sure that no object crosses
> the boundary between 0x7fff_ffff and 0x8000_0000?  If pointers are treated
> as de-facto signed, this is where signed overflow would occur.

No, that's fine. It's the offsets that are treated as ptrdiff_t; Clang/LLVM
handles it the same way. There's a very important assumption for optimizations
that pointer arithmetic cannot wrap (per the standard), and all offsets are
treated as signed integers. AFAIK, `ptr + size` is equivalent to `ptr +
(ptrdiff_t)size` in both Clang and GCC.
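
A minimal sketch of what that assumption implies (hypothetical function, not
from the bug report): once the offset is treated as signed, a size_t value
above PTRDIFF_MAX behaves like a negative offset, which is exactly what the
miscompiled pointer comparisons here run into.

#include <stddef.h>

/* Sketch: the optimizer may fold this comparison to 1, reasoning that
 * in-bounds pointer arithmetic cannot wrap, so p + n >= p for any n
 * that stays within a valid object. If n > PTRDIFF_MAX, the offset is
 * effectively negative and the folded result no longer matches the
 * machine-level comparison. */
int no_wrap(char *p, size_t n)
{
    return p + n >= p;
}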

There's documentation on how this is handled in LLVM IR here, specifically
the `inbounds` marker, which is added to all standard C pointer arithmetic:

http://llvm.org/docs/LangRef.html#getelementptr-instruction
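
To make the correspondence concrete, here's a rough sketch (assumption: the
exact IR spelling varies across Clang versions, so treat the comment as
illustrative rather than exact output):

#include <stddef.h>

int *advance(int *p, size_t n)
{
    /* Clang lowers this to something like:
     *   %q = getelementptr inbounds i32, i32* %p, i64 %n
     * The inbounds flag tells the optimizer the result stays within
     * the same object and the arithmetic does not wrap. */
    return p + n;
}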

I expect GCC works very similarly, but I'm not familiar with the GCC internals.

It's not really a compiler bug, because the standard allows implementations
to limit object sizes, but the compiler and the standard C library both need
to be aware of those limits and enforce them. So it's a bug in GCC + glibc or
Clang + glibc, not in either of them alone. I think dealing with it in libc
is the only full solution, though, due to issues like `p - q` (subtracting
two pointers into an object larger than PTRDIFF_MAX overflows ptrdiff_t,
which is undefined) and the use of ssize_t for sizes in functions like
read/write.
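
A hypothetical sketch of what that libc-side enforcement could look like
(the wrapper name and exact policy are mine, not glibc's):

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical: refuse any allocation larger than PTRDIFF_MAX so that
 * no object can make `p - q` overflow ptrdiff_t or push read/write
 * past the range representable in ssize_t. */
void *checked_malloc(size_t size)
{
    if (size > PTRDIFF_MAX) {
        errno = ENOMEM;
        return NULL;
    }
    return malloc(size);
}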

