This is the mail archive of the libstdc++ at gcc dot gnu dot org mailing list for the libstdc++ project.
Re: std::vector : integer overflow in size()
- From: Bo Persson <bop at gmb dot dk>
- To: libstdc++ at gcc dot gnu dot org
- Date: Sun, 15 Feb 2015 12:12:20 +0100
- Subject: Re: std::vector : integer overflow in size()
- Authentication-results: sourceware.org; auth=none
- References: <62FC950D-2B41-49C7-8CF2-06F5345C7954 at icloud dot com>
On 2015-02-15 10:07, Francois Fayard wrote:
I am puzzled by the result of std::vector<char>::max_size() on the n = 32 and n = 64 bit systems I have tested. The result is 2^n - 1. To me, libstdc++ can't handle vectors whose size is bigger than 2^(n-1) - 1. Let me explain why.
Every implementation of std::vector<T> that I know of, libstdc++ included, has three members of type T*: begin_, end_, capacity_. begin_ points to the first value of the vector and end_ points to the one after the last. The size() method takes the difference of those pointers, which is of type std::ptrdiff_t (and is then cast to std::size_t). This type is a signed integer of n bits, so it cannot store the integer 2^n - 1, only values up to 2^(n-1) - 1. That's why I would expect max_size() to return this last number. Is it a bug, or something I have overlooked?
The max_size() function doesn't tell you the largest size you can use,
it just tells you that going beyond max_size() definitely doesn't work.
In practice you will not even get close: memory fragmentation, the size of the program code, and current x64 chips not implementing all 64 address bits are just some of the limitations.
The standards committee is aware of this, but didn't find it meaningful
to try to improve the function.