This is the mail archive of the mailing list for the GCC project.
Re: [patch] for PRs 27639 and 26719
email@example.com (Richard Kenner) writes:
> > It would be nice if somebody could write down the rules for subtypes
> > of INTEGER_TYPE. As far as I know they are undocumented.
> Can you say what type of information you're looking for? My view has always
> been that they follow the standard rules of all types, with nothing special
> going on. Some languages have rules regarding in which types arithmetic
> is done, but those are necessarily language-specific.
Right now we have no documentation at all for the TREE_TYPE field of
an INTEGER_TYPE. At least, none that I can find. So anything would
be nice. For example, INTEGER_TYPE is documented in c-tree.texi.
That documentation does not mention the TREE_TYPE field.
Some specific questions which should be documented somewhere: what
does it mean for TREE_TYPE of an INTEGER_TYPE to be non-NULL? When
does language independent code need to look at the TREE_TYPE field?
What guarantees do we have for the TREE_TYPE field: e.g., will it
always be INTEGER_TYPE? What is the relationship between the
TYPE_PRECISION, TYPE_SIZE, and TYPE_MODE of the two types?
Your statement suggests that we don't need to care about subtypes in
the middle-end, which implies that we never need to look at the
TREE_TYPE field (except for debug info, of course). Yet I know that
is not true, because we have code that does look at that field, e.g.,
build_range_check, int_fits_type_p. And there is apparently an issue
in the loop optimizers. So when does the middle-end need to care?
That is the kind of documentation I am looking for.