This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug ada/26797] [4.3 regression] ACATS cxh1001 fails
- From: "kenner at vlsi1 dot ultra dot nyu dot edu" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: 8 Mar 2007 16:52:12 -0000
- Subject: [Bug ada/26797] [4.3 regression] ACATS cxh1001 fails
- References: <bug-26797-7210@http.gcc.gnu.org/bugzilla/>
- Reply-to: gcc-bugzilla at gcc dot gnu dot org
------- Comment #28 from kenner at vlsi1 dot ultra dot nyu dot edu 2007-03-08 16:52 -------
Subject: Re: [4.3 regression] ACATS cxh1001 fails
> I don't see what the problem is - you don't have to convert to the base
> type, you can always convert to some standard type of that precision,
> eg int32, before calling the builtin.
Sure, but that means extra tree nodes and more work. See below.
> Sure, it's just that overloading V_C_E like this feels somehow wrong to me.
Why? It's not "overloading". V_C_E of an expression E of type X to
type Y means "interpret the bits of E as if it were type Y and not type X".
If Y is X'Base, then interpreting E as being Y means that it can now have
all the values of Y. In other words, we could only change a V_C_E to a
NOP_EXPR if we can prove that the value of E is in range of *both* X
and Y.
Of course, we still have a bit of a mess here, in that the real issue is
a confusion between what Ada calls a Bounded Error and C's notion of
"undefined" (Ada's "erroneous"). But I think we can do better in this
area: we just haven't yet taken a really good look at it.
> However I haven't been able to put my finger on any technical obstacle to
> this use of V_C_E.
Nor can I ...
--
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26797