This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: -funsafe-loop-optimizations


Gabriel Dos Reis wrote:
Robert Dewar <dewar@adacore.com> writes:

| My view is that it is just fine to have command line options to modify
| standard behavior. I don't even think it is terrible to have the default
| be non-standard behavior if there are well defined options to get standard
| behavior. I am a little uneasy about the optimization level changing the
| semantics, although you can get by this by saying that something is
| undefined, in which case it is fair game for optimization levels to
| change the behavior.

Yes.  However the proposed transformation is not based on undefined
behaviour.  There is no overflow for unsigned integer types in C and
C++; they are modular.  The proposed transformation is true mutilation
of standard semantics.
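Gaby's point can be made concrete in a few lines of C. The function name below is illustrative only; the defined-wraparound behaviour itself is what the C standard guarantees (C99 6.2.5p9):

```c
#include <limits.h>

/* Unsigned arithmetic in C is defined modulo 2^N: there is no
   "overflow" for unsigned types, only modular reduction.  Every
   conforming implementation must wrap UINT_MAX + 1 to 0. */
unsigned int wrap_increment(unsigned int u)
{
    return u + 1;   /* well-defined even when u == UINT_MAX */
}
```

So `wrap_increment(UINT_MAX)` yields 0 on every conforming compiler; a transformation that assumes otherwise is changing defined behaviour, not exploiting undefined behaviour.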

I think you missed what I meant. Let me try to be clearer. To me it is a bad idea to have formal semantics depend on the optimization level. However, it is normal for behavior of undefined constructs to vary with optimization level.

Everyone understands what the C semantics are, that's not an issue. The
question is what should the default semantics of gcc be, and what options
should be available to modify these semantics.

One possibility is to say that in default mode, wrapping of unsigned integer
types in the context of loop variables has undefined behavior. Of course this
is inconsistent with the C standard; everyone understands that. But if this
were the default semantics, then from a formal point of view, optimization
would not be changing these semantics, merely giving different results for
undefined constructs. Of course, in this case you would have to have a switch
that enforced the standard semantics (in the spirit of -pedantic).
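The kind of loop at stake can be sketched in C (function name and the safety valve are my own, for illustration):

```c
#include <limits.h>

/* Under standard C semantics this loop must run forever when
   n == UINT_MAX: the condition i <= UINT_MAX always holds, and i
   wrapping from UINT_MAX back to 0 is well-defined modular
   arithmetic.  An optimizer licensed to treat loop-variable wrap
   as undefined may instead assume the loop runs exactly n+1 times.
   The `limit` parameter is only a safety valve so the example
   terminates when exercised. */
unsigned long count_iterations(unsigned int n, unsigned long limit)
{
    unsigned long count = 0;
    for (unsigned int i = 0; i <= n; i++) {
        if (++count >= limit)
            break;
    }
    return count;
}
```

With n = 10 the trip count is 11 under either reading; with n = UINT_MAX the two readings diverge, which is exactly why the transformation needs an explicit flag rather than being on by default.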

That at least is consistent, but my own view is that you should only have
non-standard semantics in the default case if you are really really sure
that the efficiency issues involved are significant.

I very much doubt that this is the case here, so I would favor having the
default mode strictly respect C semantics, and then, if it is really shown
that the optimization is worthwhile, introducing a special "optimization"
flag to allow it. Of course the use of the word "optimization" here is
contentious, since as Gaby says, this is not an optimization, but a
mutilation of standard semantics.

Such mutilation definitely requires a burden of proof that the transformation
is worthwhile in real programs (it is one thing to discuss whether a correct
optimization is or is not worthwhile, quite another to discuss whether a
transformation that is patently incorrect is worthwhile!).

For comparison, in the Ada world, we have two cases where the default behavior
is non-standard:

-gnato turns on overflow checking, required by the standard. Normally arithmetic
overflow is ignored, as in C. Formally a program with arithmetic overflow has
undefined semantics in default mode. Practically, it behaves just as C would.
The motivation here is that arithmetic overflow checking is expensive since it
is done in a kludgy way using double length arithmetic. It has been on our list
for over a decade to figure out how to get the gcc back end to do efficient
arithmetic overflow checking in the GNAT context. In practice, it is a VFAQ
from Ada programmers to complain that GNAT is wrong in not doing checking, and
although we give a (polite) RTFM response, it is not really satisfactory. We
have discussed changing this default, since perhaps on modern machines, most
customers won't notice the extra space and time, but we have not done anything.
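A minimal sketch of the double-length scheme described above, written in C rather than in GNAT's internals (the function name is hypothetical, and real GNAT raises Constraint_Error rather than aborting):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Overflow checking via double-length arithmetic: perform the
   32-bit addition in 64 bits, where it cannot overflow, then
   verify the result still fits the 32-bit range.  This is the
   "kludgy" approach the text describes; the extra widening and
   comparison on every checked operation is where the cost goes. */
int32_t checked_add(int32_t a, int32_t b)
{
    int64_t wide = (int64_t)a + (int64_t)b;
    if (wide < INT32_MIN || wide > INT32_MAX) {
        fprintf(stderr, "constraint error: integer overflow\n");
        abort();   /* stand-in for raising Constraint_Error */
    }
    return (int32_t)wide;
}
```

The cost is visible even in this sketch: each checked addition becomes a widening, an add, two comparisons, and a branch, which is why doing it efficiently in the back end has stayed on the wish list.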

-gnatE turns on the Ada RM dynamic elaboration checking model. We think this
model is fundamentally a bad idea, and GNAT implements an alternative model
of elaboration checking that is more restrictive, but fully static, so that the
inefficiency of run-time checks is avoided, and, more importantly, you know that
your program is free of run-time elaboration errors, which are an endless source
of difficulty in large programs, especially when they are ported from one compiler
to another. We therefore prefer to present this more restricted version of static
elaboration checking as the default, so that programmers who don't understand
elaboration issues (most don't) will by default not get into trouble. It is a
moderately common FAQ to ask why some legacy code being ported results in errors
at bind time complaining about bad elaboration. We tell people in this case that
they can either use -gnatE, and hope for the best (i.e. hope that their code is
properly written to be portable), or they can restructure code to meet the more
restrictive model, thus eliminating elaboration problems once and for all. The
best approach depends on how involved the legacy code is, and whether there are
people around who understand the code.

We are thinking of one additional default behavior, which is to turn on
-fno-strict-aliasing by default. We have not done this yet, since there is
a lot of internal argument (e.g. Robert likes the idea, Richard does not!).

One final point is that in consideration of defaults, it is tempting to take
benchmarking into account. We find many people always benchmarking with the
default options. We have even discussed making at least -O1 the default
setting, since so often customers benchmark with default options, not
specifying optimization, since other compilers have taught them to be afraid
of optimizing. In such a circumstance, gcc often looks bad, since its
unoptimized code is often horrible compared to the code output by other
compilers in "unoptimized" mode.

-- Gaby


