This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: wide-int branch now up for public comment and review
- From: Mike Stump <mikestump at comcast dot net>
- To: Richard Sandiford <rdsandiford at googlemail dot com>
- Cc: Kenneth Zadeck <zadeck at naturalbridge dot com>, rguenther at suse dot de, gcc-patches <gcc-patches at gcc dot gnu dot org>, r dot sandiford at uk dot ibm dot com
- Date: Sun, 25 Aug 2013 12:49:32 -0700
- Subject: Re: wide-int branch now up for public comment and review
- References: <520A9DCC dot 6080609 at naturalbridge dot com> <87ppt4e9hg dot fsf at talisman dot default> <B2FB5C39-EAA7-48FF-A063-FC496FF10E03 at comcast dot net> <87li3pd3p6 dot fsf at talisman dot default>
On Aug 25, 2013, at 11:29 AM, Richard Sandiford <rdsandiford at googlemail dot com> wrote:
> Mike Stump <mikestump at comcast dot net> writes:
>> On Aug 23, 2013, at 8:02 AM, Richard Sandiford
>> <rdsandiford at googlemail dot com> wrote:
>>> We really need to get rid of the #include "tm.h" in wide-int.h.
>>> MAX_BITSIZE_MODE_ANY_INT should be the only partially-target-dependent
>>> thing in there. If that comes from tm.h then perhaps we should put it
>>> into a new header file instead.
>> BITS_PER_UNIT comes from there as well, and I'd need both. Grabbing the
>> #defines we generate is easy enough, but BITS_PER_UNIT would be more
>> annoying. No port in the tree uses any value other than 8. So,
>> do we just assume BITS_PER_UNIT is 8?
> Looks like wide-int is just using BITS_PER_UNIT to get the number of
> bits in "char". That's a host thing, so it should be CHAR_BIT instead.
? What? No. BITS_PER_UNIT is a feature of the target machine, so it is absolutely wrong to use a property of the host machine or the build machine. We don't use sizeof(int) to set the size of int on the target, for exactly the same reason.