This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.


[Bug c/67999] Wrong optimization of pointer comparisons


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999

--- Comment #9 from Daniel Micay <danielmicay at gmail dot com> ---
(In reply to Florian Weimer from comment #8)
> (In reply to Alexander Cherepanov from comment #4)
> 
> > Am I right that the C standards do not allow for such a limitation (and
> > hence this should not be reported to glibc as a bug) and gcc is not
> standards-compliant in this regard? Or am I missing something?
> 
> The standard explicitly acknowledges the possibility of arrays that have
> more than PTRDIFF_MAX elements (it says that the difference of two pointers
> within the same array is not necessarily representable in ptrdiff_t).
> 
> I'm hesitant to put artificial limits into glibc because in the past,
> there was significant demand for huge mappings in 32-bit programs (to the
> degree that Red Hat even shipped special kernels for this purpose).
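
For concreteness, here is a minimal sketch of the case the standard is
describing, assuming a 32-bit target where malloc will hand out a block
larger than PTRDIFF_MAX bytes (the size and variable names are purely
illustrative):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    int main(void)
    {
        /* More than PTRDIFF_MAX array elements; assumes the
           allocation succeeds, which glibc does not prevent. */
        size_t n = (size_t)PTRDIFF_MAX + 2;
        char *p = malloc(n);
        if (!p)
            return 1;
        char *q = p + (n - 1);  /* points at the last element */
        /* q - p is n - 1 > PTRDIFF_MAX: the difference is not
           representable in ptrdiff_t, so the subtraction is
           undefined behavior (C11 6.5.6p9), which is the latitude
           GCC's pointer optimizations rely on. */
        printf("%td\n", q - p);
        free(p);
        return 0;
    }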

I don't think there's much of a use case for a single >2GB allocation in a
3GB or 4GB address space. Such a request has a high chance of failure simply
due to virtual memory fragmentation, especially since the kernel's mmap
allocation algorithm is so naive (it keeps moving downwards and ignores holes
until it runs out of space, rather than using first-best-fit).
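
To illustrate the fragmentation point, a probe along these lines (sizes are
illustrative; assumes a 32-bit Linux process) can fail with ENOMEM even when
more than 2 GiB of address space is free in total, because no single
contiguous hole is that large:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = (size_t)2 << 30;  /* a single 2 GiB reservation */
        /* PROT_NONE: reserve address space only, commit no memory. */
        void *p = mmap(NULL, len, PROT_NONE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");  /* commonly ENOMEM: no hole big enough */
            return 1;
        }
        munmap(p, len);
        return 0;
    }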

Was the demand for a larger address space, or was it really for the ability
to allocate all that memory in one go?

