operator new[] sometimes returns pointers to heap blocks which are too small. When a new array is allocated, the C++ run-time has to calculate its size. The product may exceed the maximum value which can be stored in a machine register. This error is ignored, and the truncated value is used for the heap allocation. This may lead to heap overflows and therefore security bugs. (See http://cert.uni-stuttgart.de/advisories/calloc.php for further references.)

The test case below uses a user-defined operator new[] to test for the presence of this problem. However, the problem itself also occurs with the default operator new[]; it is just harder to write a portable test case for that.

#include <testsuite_hooks.h>

struct foo {
  char data[16];

  void* operator new[] (size_t size)
  {
    VERIFY (size != sizeof(foo));
    VERIFY (false);
    return malloc(size);
  }
};

int main()
{
  size_t size = size_t (-1) / sizeof(foo) + 2;
  try {
    foo* f = new foo[size];
    VERIFY (f == 0);
    VERIFY (false);
  } catch (std::bad_alloc&) {
    return 0;
  }
}
This is undefined behavior. There is really nothing we can do here; think about it again: the programmer should catch this when they read in the length.
Why is this undefined behavior? Would you quote chapter and verse, please? GCC's behavior violates 5.3.4(10): "A new-expression passes the amount of space requested to the allocation function as the first argument of type std::size_t. That argument shall be no less than the size of the object being created; [...]" In this case, the passed value is 16, which is much smaller than the size of the array.
But the C++ standard does not say anything about this case.
I would get a clarification from the standards committee if I were you. Multiplying a large unsigned number by 16 and getting an overflow is a weird case, but again, the developer should be sanity-checking the size; if they don't, it can cause other problems such as a seg fault, since malloc on Linux does not return NULL when running out of memory.
I also wonder what semantics you are expecting? You are trying to allocate an array that is so large that you can't address its individual bytes using a size_t, in other words one that is larger than the address space the OS provides to your program. That clearly doesn't make any sense.

That being said, I understand that this behavior is security relevant, and any attempt to allocate more memory than is available will necessarily have to fail. Thus, our present implementation isn't standards conforming, since it returns a seemingly reasonable pointer even in cases where the allocation should have failed. I would therefore agree that this is a problem, and given that memory allocation is an expensive operation, one overflow check isn't really time critical.

Unfortunately, the situation cannot be addressed in libstdc++'s implementation of operator new[], since that operator only gets the total size of the memory to be allocated, not the size per element and the number of elements. By the time we get into the implementation of this operator, it is already too late. In other words, the overflow check has to happen in compiler-generated code, not in the libstdc++ implementation.

I would support the introduction of such code, if necessary guarded by some flag, or unconditionally, as a matter of quality of implementation.

W.
There's no multiplication in the source code. The multiplication is an implementation detail; you can hardly use it to justify the semantics of the operation. I would expect std::bad_alloc to be thrown. But I agree that the C++ standard isn't very clear in this area: the implementation must ensure that the postcondition in 5.3.4(10) holds, but the standard doesn't provide a means to signal failure. I'm going to post a note to comp.std.c++ on this matter, but hopefully this will be fixed in GCC as a quality-of-implementation issue. The necessary overflow check should be very cheap because the multiplication is always by a constant.
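To illustrate how cheap that check can be, here is a minimal sketch of the test that compiler-generated code for new T[n] would have to perform. The helper name is made up for illustration, this is not GCC's actual code generation, and the per-array cookie that array new may need is ignored:

#include <cstddef>
#include <new>

// Hypothetical helper, for illustration only.
template <typename T>
std::size_t checked_array_allocation_size(std::size_t n)
{
  // sizeof(T) is a compile-time constant, so this division typically folds
  // into a single comparison of n against a constant.
  if (n > static_cast<std::size_t>(-1) / sizeof(T))
    throw std::bad_alloc();   // refuse instead of passing a truncated size
  return n * sizeof(T);       // cannot wrap after the check above
}

Conceptually, new T[n] would then call ::operator new[](checked_array_allocation_size<T>(n)) before running the constructors.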
(In reply to comment #6)
> There's no multiplication in the source code. The multiplication is an
> implementation detail; you can hardly use it to justify the semantics of
> the operation.

Actually the multiplication is not an implementation detail. The standard says what operator new[] should be passed, and the multiplication is included in that. sizeof(int[i]) (well, if it were valid C++; it is valid C99) is defined as a multiplication, so it is not an implementation detail.
Isn't this handled by -ftrapv?
(In reply to comment #8)
> Isn't this handled by -ftrapv?

No, because sizeof is unsigned and -ftrapv only deals with signed types.

-- Pinski
What does the C standard say about calloc? That's a similar case; the multiplication is in calloc. Does it have to report an error?
"The calloc function allocates space for an array of nmemb objects, each of whose size is size." There is no mentioning of overflow, but the allocated space must surely be big enough to hold the array, and calloc shall fail if it cannot fulfill the request.
* mmitchel at gcc dot gnu dot org:
> What does the C standard say about calloc? That's a similar case; the
> multiplication is in calloc. Does it have to report an error?

My interpretation is that it must return NULL. (This was fixed in GNU libc years ago.)
*** Bug 35790 has been marked as a duplicate of this bug. ***
Also note that unsigned types don't overflow, they wrap. So as far as I can tell, C++ defines this as returning a size that is too small.
I think we can all agree that it does not matter what we call this problem. Real-world programs have security problems because of this. -fstack-protector carries a much larger run-time cost, and there are even fewer grounds in the C or C++ standards to argue that the problems it guards against are not the programmer's fault, yet GCC still offers it.

As mentioned in the other bug, Microsoft Visual C++ already does this check. They do it like this: after the multiplication they check whether the overflow flag is set, which on x86 indicates that the result does not fit in the lower 32 bits. If so, instead of the truncated value it passes (size_t)-1 to operator new, which causes that operator new to fail (in the default case at least; a user may define their own operator new, and that one might still return something).

My favorite solution would be for the code to fail immediately: throw an exception or return NULL, depending on which operator new the program called.
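For illustration, a sketch of what the call site for new foo[n] could look like under that clamping strategy. This is our own pseudo-expansion, not the actual MSVC or GCC code; constructor calls and any array cookie are omitted:

#include <cstddef>
#include <new>

struct foo { char data[16]; };

// Hypothetical expansion of new foo[n] with a clamp-on-overflow check.
foo* allocate_foo_array(std::size_t n)
{
  std::size_t bytes = n * sizeof(foo);
  if (n != 0 && bytes / n != sizeof(foo))   // the multiplication wrapped
    bytes = static_cast<std::size_t>(-1);   // pass an impossible size instead
  // The default operator new[] cannot satisfy a request of (size_t)-1 bytes
  // and throws std::bad_alloc.
  return static_cast<foo*>(::operator new[](bytes));
}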
I agree. Patches welcome.
I agree, Apple would like this as well... radr://5739832
Perhaps someone could turn this into a landable patch:
https://bugzilla.mozilla.org/attachment.cgi?id=352646&action=edit

This seemed to fix the problem for us.
(In reply to comment #18)
> Perhaps someone could turn this into a landable patch:
> https://bugzilla.mozilla.org/attachment.cgi?id=352646&action=edit
>
> This seemed to fix the problem for us.

You just need a testcase, bootstrap and regression testing.
(In reply to comment #19)
> You just need a testcase, bootstrap and regression testing.

Test case (and a different patch) is here:
<http://gcc.gnu.org/ml/gcc-patches/2010-02/msg00275.html>
It turns out that the C++ committee did address this in C++0x, after rejecting a previous DR. See section 18.6.2.2 in N3090. This seems to require an ABI change, because the size calculation can no longer happen at the call site without much code bloat. See <http://gcc.gnu.org/ml/gcc/2010-05/msg00437.html>.
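For reference, a small self-contained example of the behaviour the C++0x wording requires in this case. std::bad_array_new_length (18.6.2.2) derives from std::bad_alloc, so catching std::bad_alloc covers both the new exception type and implementations that report the overflow as a plain allocation failure. This only illustrates the required semantics; it is not a test from the GCC testsuite:

#include <cstddef>
#include <cstdio>
#include <new>

struct foo { char data[16]; };

int main()
{
  try {
    // The size computation for this new-expression overflows size_t, so a
    // conforming implementation must report failure instead of calling
    // operator new[] with a truncated size.
    foo* f = new foo[static_cast<std::size_t>(-1) / sizeof(foo) + 2];
    (void) f;
  } catch (const std::bad_alloc&) {
    std::puts("overflow detected");
  }
  return 0;
}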
New patch posted: http://gcc.gnu.org/ml/gcc-patches/2011-01/msg01593.html
Florian, your patch seems to have gone unreviewed, could you ping it? GCC is getting (fairly) criticised on the LLVM blog about this ;)
(In reply to comment #23)
> Florian, your patch seems to have gone unreviewed, could you ping it?

Jason reviewed it and Ian, too (off-list). I haven't yet gotten around to incorporating their feedback. I'm also not sure if I'm qualified to tackle the optimization requests.
Since an exception is now required, this is not an enhancement.
Current proposed patch: http://gcc.gnu.org/ml/gcc-patches/2012-06/msg01689.html
Author: fw
Date: Mon Aug 20 21:13:23 2012
New Revision: 190546

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=190546
Log:
Fix PR C++/19351: integer overflow in operator new[]

2012-08-20  Florian Weimer  <fweimer@redhat.com>

        PR c++/19351
        * call.c (build_operator_new_call): Add size_check argument
        and evaluate it.
        * cp-tree.h (build_operator_new_call): Adjust declaration.
        * init.c (build_new_1): Compute array size check and apply it.

2012-08-10  Florian Weimer  <fweimer@redhat.com>

        PR c++/19351
        * g++.dg/init/new38.C: New test.
        * g++.dg/init/new39.C: New test.

Added:
    trunk/gcc/testsuite/g++.dg/init/new38.C
    trunk/gcc/testsuite/g++.dg/init/new39.C
Modified:
    trunk/gcc/ChangeLog
    trunk/gcc/cp/call.c
    trunk/gcc/cp/cp-tree.h
    trunk/gcc/cp/init.c
This is the best we can do without an ABI change. (Expanding the code to throw std::bad_array_new_length inline, as required by C++11, is a bit too messy.)
Author: jakub
Date: Mon Oct 1 08:12:01 2012
New Revision: 191891

URL: http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=191891
Log:
Move PR c++/19351 ChangeLog entry to correct ChangeLog.

Modified:
    trunk/gcc/ChangeLog
    trunk/gcc/cp/ChangeLog
*** Bug 260998 has been marked as a duplicate of this bug. ***