Created attachment 41146 [details]
source file that triggers warning

The attached code (from QEMU's test suite) triggers the following warning - I wonder if this is the same as PR 79095?

In file included from /usr/include/string.h:639:0,
                 from cut-down.c:4:
In function ‘memcpy’,
    inlined from ‘iov_from_buf.constprop’ at cut-down.c:49:9,
    inlined from ‘test_to_from_buf_1’ at cut-down.c:128:14,
    inlined from ‘test_to_from_buf’ at cut-down.c:143:9:
/usr/include/bits/string3.h:53:10: error: ‘__builtin_memcpy’: specified size 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Werror=stringop-overflow=]
   return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In function ‘memcpy’,
    inlined from ‘iov_to_buf.constprop’ at cut-down.c:62:9,
    inlined from ‘test_to_from_buf_1’ at cut-down.c:134:14,
    inlined from ‘test_to_from_buf’ at cut-down.c:143:9:
/usr/include/bits/string3.h:53:10: error: ‘__builtin_memcpy’: specified size 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Werror=stringop-overflow=]
   return __builtin___memcpy_chk (__dest, __src, __len, __bos0 (__dest));

from Fedora 26's:

Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,objc,obj-c++,fortran,ada,go,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-gcc-major-version-only --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --with-isl --enable-libmpx --enable-offload-targets=nvptx-none --without-cuda-driver --enable-gnu-indirect-function --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 7.0.1 20170309 (Red Hat 7.0.1-0.12) (GCC)

I have some sympathy for the -1 size_t cast, and it's worrying that (size_t)-1 <= iov_len could be true???
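A minimal, self-contained sketch (not part of the attachment; the iov_len values below are hypothetical) of that last point: because the cast wraps around, (size_t)-1 is SIZE_MAX, so (size_t)-1 <= iov_len is true exactly when iov_len is itself SIZE_MAX.

#include <stdint.h>
#include <stdio.h>

int main (void)
{
  size_t iov_len = SIZE_MAX;               /* hypothetical: all bits set */
  printf ("%d\n", (size_t)-1 <= iov_len);  /* prints 1 */

  iov_len = 100;                           /* any smaller length */
  printf ("%d\n", (size_t)-1 <= iov_len);  /* prints 0 */
  return 0;
}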
Confirmed with the top of trunk.  The __builtin_constant_p call makes the difference.  The following is a small test case showing that the invalid memcpy call is, in fact, emitted by GCC.  This isn't the same issue as bug 79095.  Beyond the warning, GCC doesn't "know" that memcpy(d, s, -1) is unavoidably invalid.

$ cat b.c && gcc -O2 -S -Wall -Wextra -Wpedantic -fdump-tree-optimized=/dev/stdout b.c
typedef __SIZE_TYPE__ size_t;

void f (void *d, const char *s, size_t a, size_t b)
{
  if (__builtin_constant_p (a) && a <= b)
    __builtin_memcpy (d, s, a);
}

void g (void *d, const char *s, size_t b)
{
  f (d, s, -1, b);
}

;; Function f (f, funcdef_no=0, decl_uid=1799, cgraph_uid=0, symbol_order=0)

f (void * d, const char * s, size_t a, size_t b)
{
  <bb 2> [100.00%]:
  return;

}

;; Function g (g, funcdef_no=1, decl_uid=1804, cgraph_uid=1, symbol_order=1)

Removing basic block 5
g (void * d, const char * s, size_t b)
{
  <bb 2> [100.00%]:
  if (b_4(D) == 18446744073709551615)
    goto <bb 3>; [22.95%]
  else
    goto <bb 4>; [77.05%]

  <bb 3> [22.95%]:
  __builtin_memcpy (d_2(D), s_3(D), 18446744073709551615); [tail call]

  <bb 4> [100.00%]:
  return;

}

In function ‘f’,
    inlined from ‘g’ at b.c:11:3:
b.c:6:5: warning: ‘__builtin_memcpy’: specified size 18446744073709551615 exceeds maximum object size 9223372036854775807 [-Wstringop-overflow=]
     __builtin_memcpy (d, s, a);
     ^~~~~~~~~~~~~~~~~~~~~~~~~~
Created attachment 41147 [details]
a different signed/size case

Here's another case (law said to attach it to the same bug); it gives:

In function ‘test_acpi_rsdt_table’,
    inlined from ‘test_acpi_one.constprop’ at bug2a.c:19334:5,
    inlined from ‘test_acpi_piix4_tcg’ at bug2a.c:19346:5:
bug2a.c:19319:59: error: argument 1 range [18446744071562067968, 18446744073709551615] exceeds maximum object size 9223372036854775807 [-Werror=alloc-size-larger-than=]
  / __s)) __p = g_malloc0 (__n * __s); else __p = g_malloc0_n (__n, __s); __p; }));
                                                  ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
bug2a.c: In function ‘test_acpi_piix4_tcg’:
bug2a.c:6004:10: note: in a call to allocation function ‘g_malloc0_n’ declared here
 gpointer g_malloc0_n (gsize n_blocks,
          ^~~~~~~~~~~

This is the preprocessed output of the original, and the error there was:

In file included from /usr/include/glib-2.0/glib/glist.h:32:0,
                 from /usr/include/glib-2.0/glib/ghash.h:33,
                 from /usr/include/glib-2.0/glib.h:50,
                 from /home/dgilbert/git/qemu/include/glib-compat.h:19,
                 from /home/dgilbert/git/qemu/include/qemu/osdep.h:107,
                 from bug2.c:13:
In function ‘test_acpi_rsdt_table’,
    inlined from ‘test_acpi_one.constprop’ at bug2.c:80:5,
    inlined from ‘test_acpi_piix4_tcg’ at bug2.c:98:5:
/usr/include/glib-2.0/glib/gmem.h:216:10: error: argument 1 range [18446744071562067968, 18446744073709551615] exceeds maximum object size 9223372036854775807 [-Werror=alloc-size-larger-than=]
     __p = g_##func##_n (__n, __s); \
     ~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
/usr/include/glib-2.0/glib/gmem.h:278:42: note: in expansion of macro ‘_G_NEW’
 #define g_new0(struct_type, n_structs)  _G_NEW (struct_type, n_structs, malloc0)
                                          ^~~~~~
bug2.c:62:14: note: in expansion of macro ‘g_new0’
     tables = g_new0(uint32_t, tables_nr);
              ^~~~~~
bug2.c: In function ‘test_acpi_piix4_tcg’:
/usr/include/glib-2.0/glib/gmem.h:96:10: note: in a call to allocation function ‘g_malloc0_n’ declared here
 gpointer g_malloc0_n (gsize n_blocks,
          ^~~~~~~~~~~

But immediately before that g_new0 we have:

    g_assert_cmpint(tables_nr, >, 0);

and the range it's complaining about is FFFFFFFF80000000 to FFFFFFFFFFFFFFFF, which is very odd.

For reference, this is tests/bios-tables-test.c from QEMU.
The warning for attachment 41147 [details] looks valid.  It points to the g_malloc0_n(__n, __s) call in the else statement in the following block.  If I'm reading it right, there __s is non-zero and __n is equal to SIZE_MAX:

  do {
    gint64 __n1 = (tables_nr), __n2 = (0);
    if (__n1 > __n2)
      ;
    else
      g_assertion_message_cmpnum (((gchar*) 0), "bug2.c", 59,
                                  ((const char*) (__func__)),
                                  "tables_nr" " " ">" " " "0",
                                  __n1, ">", __n2, 'i');
  } while (0);

  tables = (uint32_t *) (__extension__ ({
    gsize __n = (gsize) (tables_nr);
    gsize __s = sizeof (uint32_t);
    gpointer __p;
    if (__s == 1)
      __p = g_malloc0 (__n);
    else if (__builtin_constant_p (__n)
             && (__s == 0 || __n <= (0x7fffffffffffffffL * 2UL + 1UL) / __s))
      __p = g_malloc0 (__n * __s);
    else
      __p = g_malloc0_n (__n, __s);
    __p;
  }));
The beautified code should be something like this:

#include <stddef.h>

void *g_malloc0 (size_t n) __attribute__((__alloc_size__(1)));
void *g_malloc0_n (size_t n, size_t s) __attribute__((__alloc_size__(1, 2)));

unsigned *f (int length)
{
  int tables_nr = (length - 24) / sizeof(unsigned);
  if (tables_nr <= 0)
    return NULL;

  return (unsigned *) (__extension__ ({
    size_t n = tables_nr;
    size_t s = sizeof (unsigned);
    void *p;
    if (s == 1)
      p = g_malloc0 (n);
    else if (__builtin_constant_p (n) && (s == 0 || n <= ((size_t)-1) / s))
      p = g_malloc0 (n * s);
    else
      p = g_malloc0_n (n, s);
    p;
  }));
}

> __s is non-zero, and __n is equal to SIZE_MAX

n is not constant at the g_malloc0_n call site, so I'm not sure what deduction can be made.
My very quick analysis from the IRC chat yesterday was that the first testcase has a path that should have been detected as unexecutable. I'd work from the full testcase rather than the reduced one.
The reduced test case from comment #4 doesn't trigger a warning because in it the value of n is unknown.  The original test case does trigger it because in it n's range is known.  This is evident from the VRP dump, which shows:

Value ranges after VRP:

__n_5: ~[1, 18446744071562067967]
...
test_acpi_piix4_tcg ()
{
  ...
  g_assertion_message_cmpnum (0B, "bug2.c", 59, &__func__, "tables_nr > 0",
                              _71, ">", 0.0, 105);
  __n_5 = (gsize) tables_nr_69;
  __p_58 = g_malloc0_n (__n_5, 4);

The anti-range implies that the variable's value is either zero or greater than 18446744071562067967 (0xffffffff7fffffff), so the allocation request is either for zero bytes or for some positive multiple of the excessive size.  Since zero is not considered a valid size (unless it's constant) and 18446744071562067967 is too big, the warning triggers.

The inlining context of the call is shown in the full output of the warning.  It might help shed light on where the range comes from.

In function ‘test_acpi_rsdt_table’,
    inlined from ‘test_acpi_one.constprop’ at pr80346-2.c:19334:5,
    inlined from ‘test_acpi_piix4_tcg’ at pr80346-2.c:19346:5:
pr80346-2.c:19319:59: warning: argument 1 range [18446744071562067968, 18446744073709551615] exceeds maximum object size 9223372036854775807 [-Walloc-size-larger-than=]
  / __s)) __p = g_malloc0 (__n * __s); else __p = g_malloc0_n (__n, __s); __p; }));
                                                  ~~~~^~~~~~~~~~~~~~~~~~~~~~~~
(In reply to Martin Sebor from comment #6)
> The reduced test case from comment #4 doesn't trigger a warning because in
> it the value of n is unknown.  The original test case does trigger it
> because in it n's range is known.  This is evident from the VRP dump, which
> shows:
>
> Value ranges after VRP:
>
> __n_5: ~[1, 18446744071562067967]
> ...
> test_acpi_piix4_tcg ()
> {
>   ...
>   g_assertion_message_cmpnum (0B, "bug2.c", 59, &__func__, "tables_nr > 0",
>                               _71, ">", 0.0, 105);
>   __n_5 = (gsize) tables_nr_69;
>   __p_58 = g_malloc0_n (__n_5, 4);
>
> The anti-range implies that the variable's value is either zero or greater
> than 18446744071562067967 (0xffffffff7fffffff), so the allocation request is
> either for zero bytes or for some positive multiple of the excessive size.
> Since zero is not considered a valid size (unless it's constant) and
> 18446744071562067967 is too big, the warning triggers.
>
> The inlining context of the call is shown in the full output of the warning.
> It might help shed light on where the range comes from.
>
> In function ‘test_acpi_rsdt_table’,
>     inlined from ‘test_acpi_one.constprop’ at pr80346-2.c:19334:5,
>     inlined from ‘test_acpi_piix4_tcg’ at pr80346-2.c:19346:5:
> pr80346-2.c:19319:59: warning: argument 1 range [18446744071562067968,
> 18446744073709551615] exceeds maximum object size 9223372036854775807
> [-Walloc-size-larger-than=]
>   / __s)) __p = g_malloc0 (__n * __s); else __p = g_malloc0_n (__n, __s); __p; }));
>                                                   ~~~~^~~~~~~~~~~~~~~~~~~~~~~~

The original code is here:
http://git.qemu-project.org/?p=qemu.git;a=blob;f=tests/bios-tables-test.c;h=88dbf9785353d6ac82a7000357d4bd4c658319e7;hb=HEAD#l96
(I had boiled it down a bit for this.)

But what confuses me is that we start off with 'tables_nr', which is an int.  This is x86, so it's 32-bit.  We then do an assert to guarantee it's positive, so the possible range is now 0..0x7fffffff.  Then I expect it gets multiplied by sizeof(uint32_t), so the range is now 0..0x1fffffffc.  The range GCC is warning us about looks like it sign-extended a negative 32-bit integer - but our assert makes sure that can't happen.
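A minimal, self-contained sketch (hypothetical values, assuming an LP64 target like the x86-64 build above; not taken from the QEMU source) of where a range like FFFFFFFF80000000..FFFFFFFFFFFFFFFF can come from: if the compiler still has to allow tables_nr to be negative at the cast, converting the 32-bit int to the 64-bit gsize lands exactly in that interval.

#include <stdint.h>
#include <stdio.h>

int main (void)
{
  int tables_nr = -5;                      /* hypothetical negative count */
  size_t n = (size_t) tables_nr;           /* sign-extends on LP64 */
  printf ("%#zx\n", n);                    /* prints 0xfffffffffffffffb */

  tables_nr = INT32_MIN;                   /* most negative 32-bit value */
  printf ("%#zx\n", (size_t) tables_nr);   /* prints 0xffffffff80000000 */
  return 0;
}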
g_assertion_message_cmpnum is no longer declared noreturn, as of glib 2.38.

https://bugzilla.gnome.org/show_bug.cgi?id=692125

:-O
(In reply to Paolo Bonzini from comment #8)
> g_assertion_message_cmpnum is no longer declared noreturn, as of glib 2.38.
> https://bugzilla.gnome.org/show_bug.cgi?id=692125 :-O

!!!?  Oh, in that case that one isn't a gcc bug, I guess.
 108     /* compute the table entries in rsdt */
 109     tables_nr = (rsdt_table->length - sizeof(AcpiRsdtDescriptorRev1)) /
 110         sizeof(uint32_t);
 111     g_assert_cmpint(tables_nr, >, 0);

For GCC to "understand" the assertion it would need to see the condition and be able to prove that the allocation that follows cannot be reached (e.g., because the assertion exits the program by calling a noreturn function like abort or exit when the condition is false).  From the preprocessed translation unit in attachment 41147 [details] it looks as though the whole g_assert_cmpint expands to a call to a function (g_assertion_message_cmpnum) that's not decorated with attribute noreturn, and so the condition doesn't affect the range of the variable.

Also, when working with sizes it's best to deal with unsigned types.  Storing a size in an int and using it in mixed-type expressions involving size_t (like sizeof) can easily introduce the possibility of overflow (as far as GCC can see) and turn a non-negative range into a negative-positive one.  In the attachment, changing the type of the tables_nr local variable from int to size_t eliminates the warning.
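A minimal sketch of both points, with hypothetical names (assert_fail, alloc_tables) rather than glib's real macros: once the failure handler is declared noreturn, the allocation is only reachable with a positive count, so the cast can no longer produce the huge sign-extended values the warning is about.

#include <stddef.h>

void assert_fail (const char *expr) __attribute__ ((noreturn));
void *g_malloc0_n (size_t n, size_t s) __attribute__ ((alloc_size (1, 2)));

unsigned *alloc_tables (int tables_nr)
{
  /* Because assert_fail is noreturn, execution only continues past this
     point when tables_nr > 0, so (size_t) tables_nr is at most 0x7fffffff.  */
  if (!(tables_nr > 0))
    assert_fail ("tables_nr > 0");

  return g_malloc0_n ((size_t) tables_nr, sizeof (unsigned));
}

Alternatively, declaring tables_nr as a size_t from the start, as suggested above, avoids the sign extension without relying on the assertion at all.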
The warning for Martin's reduced testcase is clearly warranted.  Consider the case where a == -1 and b == -1.

Now if we go back to the original testcase we have this:

static inline size_t
iov_from_buf(const struct iovec *iov, unsigned int iov_cnt,
             size_t offset, const void *buf, size_t bytes)
{
    if (__builtin_constant_p(bytes) && iov_cnt &&
        offset <= iov[0].iov_len &&
        bytes <= iov[0].iov_len - offset) {
        memcpy(iov[0].iov_base + offset, buf, bytes);
        return bytes;
    } else {
        return iov_from_buf_full(iov, iov_cnt, offset, buf, bytes);
    }
}

In the case where bytes == -1 (constant), iov_cnt != 0, offset == 0 and iov_len == -1, we can clearly call memcpy with -1 as the length.  This corresponds to a call like

    n = iov_from_buf(iov, niov, i, ibuf + i, -1);

where niov != 0, i == 0, ibuf (don't care) and iov.iov_len == -1.

I don't see anything that would inherently prevent that from occurring.  So AFAICT, the warning for the first testcase is valid as well.
> So AFAICT, the warning for the first testcase is valid as well.

True, but isn't the maximum object size (2^63-1, aka PTRDIFF_MAX) as bogus as 2^64-1?  We are using -1, which is a bit ugly, but SIZE_MAX would also warn, and then the warning makes less sense.  In other words, I'm not sure it's particularly useful to have a warning if GCC cannot determine the object size at all.
Given an unknown object size and a byte count of -1 we ought to be warning IMHO.
And also treat it as undefined behavior and go straight to the else... (kidding, but not entirely!)

The main issue is that here we _are_ testing the overflow behavior of the function, so we cannot pass sz for the last argument.  I guess we can add a pragma or compiler flag to hide the warning.  Thanks!
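A minimal, self-contained sketch of the pragma route (the function below is a placeholder, not the QEMU test, and whether the pragma reaches a diagnostic that GCC attributes to an inlined callee is something to verify on the compiler version in use):

#include <string.h>

/* Exercise memcpy with a size the test deliberately wants to pass,
   with -Wstringop-overflow silenced just for this call.  */
void copy_overflow_case (char *d, const char *s, size_t n)
{
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wstringop-overflow"
  memcpy (d, s, n);
#pragma GCC diagnostic pop
}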
I was looking pretty hard for something the compiler could use to avoid the problematical paths.  That's always my first approach since doing so removes the warning and generates better code.  I just couldn't find anything useful.

For the undefined behavior path that can't be removed, my preference is to insert a trap rather than going into the else clause -- hitting the trap stops the program cold, which is far safer from a security standpoint.  That's what we do with things like dereferencing a NULL pointer or division by zero.  If our out-of-bounds array bounds analysis were better, we'd be doing it there too.
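A sketch of what inserting such a trap would mean for Martin's reduced test case (my illustration of the idea, not actual GCC output): the call that can only execute with a size of (size_t)-1 is replaced by a trap instead of being emitted.

#include <stddef.h>

void g (void *d, const char *s, size_t b)
{
  (void) d;
  (void) s;
  if (b == (size_t) -1)
    __builtin_trap ();   /* stop cold instead of calling memcpy (d, s, -1) */
}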
Interestingly, GCC manages to eliminate the memcpy at -O1 (and thus avoid the warning) but not at -O2:

$ gcc -O1 -S -Wall -Wextra -Wpedantic -fdump-tree-optimized=/dev/stdout ../b.c

;; Function f (f, funcdef_no=0, decl_uid=1799, cgraph_uid=0, symbol_order=0)

f (void * d, const char * s, size_t a, size_t b)
{
  <bb 2> [100.00%]:
  return;

}

;; Function g (g, funcdef_no=1, decl_uid=1804, cgraph_uid=1, symbol_order=1)

g (void * d, const char * s, size_t b)
{
  <bb 2> [100.00%]:
  return;

}