This is the mail archive of the mailing list for the GCC project.
Re: Signed int overflow behaviour in the security context
Andreas Bogk <firstname.lastname@example.org> writes:
> Then maybe it shouldn't be the default in autoconf. But wasn't -O3 the
> set of optimizations considered potentially unsafe?
No. -O3 is a set of optimizations which are useful for many programs
but which will cause a substantial number of programs to run slower.
-O2 is a set of optimizations which we believe will make (almost) all
programs run faster. -O1 also makes (almost) all programs run faster.
The difference between -O1 and -O2 is that it takes longer to run the
compiler with -O2.
gcc never enables unsafe optimizations except by explicit request via
-f options (e.g., -ffast-math), where "unsafe" is defined as
"violating the language standard."
I want to note, as others have done, that gcc has exploited the
undefinedness of signed overflow for a long time, even at -O0. For
example, gcc has long folded "((a * C1) / C2)" to "a * (C1 / C2)"
where C1 % C2 == 0; this was done even in gcc 2.95.3, and even at
-O0. The transformation is only valid if signed overflow is
undefined.
So, as a concept, gcc relying on undefined signed overflow is not
new. And yet people are still able to write programs.
Rather than prohibiting reliance on undefined signed overflow
wholesale, it would be more useful to talk about the specific
optimizations which seem risky.