[Bug middle-end/99797] accessing uninitialized automatic variables

vanyacpp at gmail dot com gcc-bugzilla@gcc.gnu.org
Mon Apr 19 10:43:20 GMT 2021


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99797

Ivan Sorokin <vanyacpp at gmail dot com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |vanyacpp at gmail dot com

--- Comment #10 from Ivan Sorokin <vanyacpp at gmail dot com> ---
(Disclaimer: I'm not a GCC developer, just a random guy who reads Bugzilla
and has tried making some simple changes to GCC a few times.)

(In reply to Martin Uecker from comment #9)
> The behavior of GCC is dangerous as the example in comment #1 show. You can
> not reason at all about the generated code.

My reasoning normally boils down to this: since the program invokes UB, the
exact behavior depends on the compiler, the compiler version, the OS and
other factors.

I would like to note that the optimizations performed by the compiler are not
designed to break users' code. They were designed to remove typical
redundancies in programs. It just happens that their combination
unpredictably breaks code that invokes UB.

Normally it is difficult or impossible to avoid breaking code that invokes
UB without regressing some optimizations.

Also, the optimizations a compiler performs change over time, so the exact
result of the breakage inevitably depends on the specific compiler
version.

In theory GCC already has an option that limits the effects of UB: -O0. I
believe this is the only forward-compatible option for that. To be more
targeted, one can disable just this pass with -fno-tree-ccp, but such
fine-grained optimization options change from one compiler version to another.

> The "optimize based on the assumption that UB can not happen" philosophy
> amplifies even minor programming errors into something dangerous.

Unfortunately this is easier said than done. As far as I know, all major
compilers perform optimizations based on UB. Consider this:

const int PI = 3;

int tau()
{
   return 2 * PI; // can this be folded into 6?
}

GCC folds 2 * PI into 6 even with -O0. This optimization is based on UB,
because in some other function one could write:

void evil()
{
    const_cast<int&>(PI) = 4;
}

Now some usages of PI may be folded and some may not. The ones that were
folded would see PI = 3; the ones that were not would see PI = 4.

One can argue that constant folding is fundamentally an optimization
based on UB. I believe few optimizations would be left if we disabled all
that rely on UB.

> This, of  course, also applies to other UB (in varying degrees). For signed
> overflow we have -fsanitize=signed-integer-overflow which can help detect and
> mitigate such errors, e.g. by trapping at run-time. And also this is allowed
> by UB. 

> In case of UB the choice of what to do lies with the compiler, but I think it
> is a bug if this choice is unreasonable and does not serve its users well.

Do you have some specific proposal in mind?

Currently a user has these 5 options:
1. Using -O0 suppressing optimizations.
2. Using -fno-tree-ccp suppressing this specific optimization.
3. Using -Wall and relying on warnings.
4. (In theory) using the static analyzer, -fanalyzer. It doesn't detect this
   error at the moment, but I believe it could be taught to detect it.
5. Using a dynamic analyzer like Valgrind.

It seems that you find existing options insufficient and want another one.
