patch to fix constant math - 5th patch, rtl

Kenneth Zadeck zadeck@naturalbridge.com
Wed Apr 24 14:57:00 GMT 2013


On 04/24/2013 09:36 AM, Richard Biener wrote:
> On Wed, Apr 24, 2013 at 2:44 PM, Richard Sandiford
> <rdsandiford@googlemail.com> wrote:
>> Richard Biener <richard.guenther@gmail.com> writes:
>>> Can we in such cases please do a preparatory patch and change the
>>> CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to
>>> mode precision first?
>> I'm not sure what you mean here.  CONST_INT HWIs are already sign-extended
>> from mode precision to HWI precision.  The 8-bit value 0b10000000 must be
>> represented as (const_int -128); nothing else is allowed.  E.g. (const_int 128)
>> is not a valid QImode value on BITS_PER_UNIT==8 targets.
> Yes, that's what I understand.  But consider that you get a CONST_INT that is
> _not_ a valid QImode value.  Current code simply trusts that it is, given
> the context from ...
And the fact that we have to trust but cannot verify is a severe
problem at the rtl level, and it is not going to go away.  What I have
been strongly objecting to is your idea that just because we cannot
verify it, we can go change it in some completely different way
(i.e. the infinite-precision nonsense that you keep hitting us with)
and it will all be OK.
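
For reference, here is a minimal, self-contained C model of the
canonicalization Richard Sandiford describes (this is not GCC's actual
source, and the helper name sext_hwi is mine for illustration): a
CONST_INT stores its value sign-extended from the mode's precision to
the full host word, which is what GCC's trunc_int_for_mode arranges.

#include <stdio.h>

/* Model of CONST_INT canonicalization: sign-extend VAL from
   PRECISION bits to the full host word, as trunc_int_for_mode
   arranges.  Assumes 0 < precision < 64.  */
static long long
sext_hwi (long long val, int precision)
{
  long long mask = 1LL << (precision - 1);
  val &= (1LL << precision) - 1;  /* keep only the mode's low bits */
  return (val ^ mask) - mask;     /* sign-extend from the top bit */
}

int
main (void)
{
  /* QImode 0b10000000 (i.e. 128) must be stored as (const_int -128);
     (const_int 128) is not a valid QImode value.  */
  printf ("%lld\n", sext_hwi (128, 8));  /* prints -128 */
  return 0;
}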

I have three problems with this.

1) Even if we could do this, it gives us answers that are not what the
programmer expects!  Understand this: programmers expect the code to
behave the same way whether or not they optimize it.  If you do
infinite-precision arithmetic, you get different answers than the
machine may give you.  While the C and C++ standards allow this, it is
NOT desirable.  While there are some optimizations that must make
visible changes to be effective, this is certainly not the case with
infinite-precision math.  Making the change to infinite-precision math
only because you think it is pretty is NOT best practice and will only
give GCC a bad reputation in the community.

Each programming language defines what it means to do constant
arithmetic, and by and large our front ends do it the way the language
says.  But once you go beyond that, you are squarely in the realm where
an optimizer is expected to make the program run fast without changing
the results.  Infinite-precision math in the optimizations is visible
in that A * B / C may get different answers between an infinite-precision
evaluation and the finite-precision one specified by the types (see
the sketch below).  And all of this with no possible upside to the
programmer.  Why would we want to drive people to use llvm?  This is
my primary objection.  If you ever gave any reason for infinite
precision aside from the fact that you consider it pretty, then I
would consider it.  BUT THIS IS NOT WHAT PROGRAMMERS WANT!
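
To make the A * B / C point concrete, here is a small stand-alone C
sketch (the values are mine, chosen only to trigger overflow):
evaluating the product in the 32-bit precision the types specify wraps
before the division, while an infinite-precision evaluation (simulated
here with 64-bit arithmetic, which is wide enough for these operands)
does not, so the two give different answers.

#include <stdio.h>
#include <stdint.h>

int
main (void)
{
  int32_t a = 100000, b = 100000, c = 100000;

  /* Finite 32-bit evaluation: a * b wraps modulo 2^32 before the
     division, which is how the machine would compute it.  (The
     multiply is done unsigned to get well-defined wrapping in C.)  */
  int32_t wrapped = (int32_t) ((uint32_t) a * (uint32_t) b) / c;

  /* "Infinite precision": no wrap occurs before the division.  */
  int64_t exact = ((int64_t) a * b) / c;

  printf ("finite:   %d\n", (int) wrapped);       /* prints 14100 */
  printf ("infinite: %lld\n", (long long) exact); /* prints 100000 */
  return 0;
}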

2) The rtl level of GCC does not follow best practices by today's
standards.  It is quite fragile.  At this point, the best that can be
said is that it generally seems to work.  What you are asking is for
us to assume that the code is in better shape than it actually is.  I
understand that, in your mind, you are objecting to letting the back
ends hold back something that you believe the middle ends should do,
but the truth is that this is a bad idea for the middle ends too.

3) If I am on a 32-bit machine and I say GEN_INT (0xffffffff), I get a
32-bit word with 32 1s in it.  There is no other information.  In
particular, there is no information that tells me whether that was a
-1 or the largest unsigned integer.  We do not have a GEN_INTS and a
GEN_INTU; we just have GEN_INT.  Your desire is that we can take those
32 bits and apply not the signedness-aware ltu_p or lts_p functions
but a single lt_p function, and use that to compare those 32 bits to
something (see the illustration below).  At the rtl level there is
simply not enough information to sign-extend this value.  This will
never work without a major rewrite of the back ends.
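
A tiny C illustration of that last point (mine, not from the patch):
the same 32-bit pattern that GEN_INT (0xffffffff) carries orders
differently under a signed comparison (what lts_p would do) than under
an unsigned one (ltu_p), and nothing in the bits themselves says which
answer is right.

#include <stdio.h>
#include <stdint.h>

int
main (void)
{
  uint32_t bits = 0xffffffffu;  /* the 32 ones from GEN_INT (0xffffffff) */

  /* Signed view: the pattern means -1, which is less than 5.
     Unsigned view: it means 4294967295, which is not.  The bits
     alone cannot choose between the two answers.  */
  printf ("signed   < 5: %d\n", (int32_t) bits < 5);  /* prints 1 */
  printf ("unsigned < 5: %d\n", bits < 5u);           /* prints 0 */
  return 0;
}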


