This is the mail archive of the
gcc@gcc.gnu.org
mailing list for the GCC project.
Re: MAX_FIXED_MODE_SIZE vs. __attribute__(mode) vs. ABIs
- From: Paul Schlie <schlie at comcast dot net>
- To: <gcc at gcc dot gnu dot org>
- Date: Mon, 03 Jan 2005 13:20:10 -0500
- Subject: Re: MAX_FIXED_MODE_SIZE vs. __attribute__(mode) vs. ABIs
> Giovanni Bajo wrote:
>> Roger Sayle <roger@eyesopen.com> wrote:
>> ...
>> I'm not clear whether it is the front ends that should check the
>> GET_MODE_BITSIZE of their __attribute__((mode)) specifications against
>> MAX_FIXED_MODE_SIZE and either issue a warning or use BLKmode instead,
>> or whether it's the back end's fault for not explicitly disqualifying
>> DImode in HARD_REGNO_MODE_OK (and other macros), or some other interaction.
>
> It would seem to me that it is best to catch this kind of error early. If
> DImode is disallowed on such a target, we should not even give the user a way
> to create such variables. handle_mode_attribute uses scalar_mode_supported_p.
> The default implementation does not seem to care about MAX_FIXED_MODE_SIZE,
> so I guess that is the bug.
>
> You could also add some sanity check at startup that scalar_mode_supported_p
> obeys MAX_FIXED_MODE_SIZE (that is, for any mode > MAX_FIXED_MODE_SIZE,
> scalar_mode_supported_p returns false).
As a general question: what is the relationship supposed to be between
MAX_FIXED_MODE_SIZE, the individual md-defined type-sizes, and the target
word-size?
- is MAX_FIXED_MODE_SIZE meant simply to define the largest type-size
supported by the target's md description (which may be larger than its
native word-size, and/or smaller than its largest defined type-size),
which would then presume that operations larger than MAX_FIXED_MODE_SIZE
would need to be defined elsewhere by default (libgcc2)?
- or is MAX_FIXED_MODE_SIZE meant to define the target's word type-size?
- or should MAX_FIXED_MODE_SIZE not be explicitly defined, but be computed
as the largest XXXX_MODE_SIZE defined by the target's md?
- why is there no corresponding MAX_FLOAT_MODE_SIZE for targets which may
support SF (float), for example, but need DF (double) emulated?
- and, as Roger indirectly raised, what about other ABI questions, such as
EXCEPTION_MODE_SIZE, or whether enums are passed as optimally sized
integer equivalents or promoted by default to int size, etc.?
(I apologize if the question is dumb, but it's never been really clear to me.)