On Tue, 24 Aug 2004, Joe Buck wrote:
[ issues connected with limited range of C++ enums ]
On Tue, Aug 24, 2004 at 10:32:39AM -0600, Roger Sayle wrote:
I'm happy with the explanations so far. The next question is: if
the middle-end is supposed to treat loads, stores and comparisons
of these types identically to the underlying integer type for the
enumeration, is there a benefit to setting TYPE_PRECISION lower
than GET_MODE_BITSIZE for C++'s enumerated types?
I think that the middle end should, at least by default, try to generate
the best code possible for conformant programs that use C++ enums. In
most cases, that probably means treating them as belonging to the
underlying integer type; the exception would be where the limited range
allows for, say, more efficient switch statements, or makes it possible
to determine that some code is unreachable.
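
As a rough illustration of that benefit (the names below are mine, not
from this thread): if the middle end trusts the declared 1-bit range of
an enum, it can prove the code after the switch unreachable and skip any
range test before a table jump.

    enum Flag { off = 0, on = 1 };

    int classify(Flag f)
    {
        switch (f)
        {
        case off: return 0;
        case on:  return 1;
        }
        // If the compiler trusts Flag's 1-bit range, this point is
        // provably unreachable and no default range test is needed.
        return -1;
    }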
It's possible to have the compiler generate enum range-checking code, but
doing so would seem inconsistent with how we do everything else. It would
only make sense in the context of a bounds-checking compiler that would
also handle array bounds and the like.
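
For concreteness, here is a hand-written sketch of what compiler-inserted
range checking might look like at an int-to-enum conversion; this is
purely hypothetical and not something GCC emits today:

    #include <cstdlib>

    enum E { zero = 0, one = 1 };

    E checked_cast(int n)
    {
        if (n < 0 || n > 1)        // outside the range implied by the enumerators
            std::abort();          // trap, as a bounds-checking compiler might
        return static_cast<E>(n);
    }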
The problem here is that a C++ enum can hold values outside the range
of its type.
void foo();
void bar();

enum E { zero = 0x00, one = 0x01 };

void test(int n)
{
    enum E x;
    x = (E)n;       /* n may be outside E's nominal 0..1 range */
    switch (x)
    {
    case zero: foo(); break;
    case one:  bar(); break;
    }
}

int main()
{
    test(255);
    return 0;
}
In the above code, the enumeration makes "x" effectively a single bit
wide, and all possible values of that width are handled in the switch.
If we were to omit the range tests and use a table jump, the above
code would probably core dump for values of x such as 255.
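
For concreteness, a hand-written C++ sketch of the two possible lowerings
of that switch; the table and function names are invented for illustration.
Without the range test, test(255) indexes a two-entry dispatch table far
out of bounds.

    #include <cstdio>

    void foo() { std::puts("foo"); }
    void bar() { std::puts("bar"); }

    // Stand-in for the dispatch table the compiler would emit for the switch.
    void (*const dispatch[2])() = { foo, bar };

    // Lowering that trusts the 1-bit precision: no range test before the jump.
    // For n == 255 this reads far past the end of the two-entry table.
    void test_trusting(int n)
    {
        dispatch[n]();
    }

    // Conservative lowering: keep the range test, as for a switch on plain int.
    void test_guarded(int n)
    {
        if ((unsigned)n <= 1u)     // bounds check before the table jump
            dispatch[n]();
        // values outside 0..1 fall through, matching the original switch
    }

    int main()
    {
        test_guarded(255);         // safe: falls through
        // test_trusting(255);     // out-of-bounds table access, likely crash
        return 0;
    }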