Comments on the suggestion to use infinite precision math for wide int.

Kenneth Zadeck zadeck@naturalbridge.com
Mon Apr 8 05:28:00 GMT 2013


Richard,

You advocate that I should be using an infinite precision
representation and I advocate a finite precision representation where
the precision is taken from the context.  I would like to make the
case for my position here, in a separate thread, because the other
thread is just getting too messy.

At both the tree level and the rtl level you have a type (a mode is
just a poor representation of a type), and both of those explicitly
have precisions.  The semantics of the programming languages that we
implement specify, or at least recommend, that most operations be done
in a precision that is implementation dependent (or, as in Java, a
particular machine-independent precision).  Each hardware platform
specifies exactly how every operation is done.  I will admit that
infinite precision is more aesthetically pleasing than what I have
done, but exact precision matches the needs of these clients.  The
problem is that the results of infinite precision arithmetic differ in
many significant ways from those of finite precision math.  And the
number of places where you have to inject a precision to get the
expected answer ultimately makes the infinite precision representation
unattractive.

As I said on Thursday, whenever you perform operations that do not
satisfy the requirements of a mathematical ring (add, sub, and mul are
in a ring; divide, shift, and comparisons are not), you run the risk
of getting a result that differs from what either a strict
interpretation of the language semantics or the machine itself would
have produced.  Intuitively, any operation that looks at the bits
above the precision does not qualify as an operation that works in a
ring.

The poster child for operations that do not belong to a ring is
division.  For my examples I am using 4-bit integers because they keep
the arithmetic small, but similar examples exist for any fixed
precision.

Consider 8 * 10 / 4.

In an infinite precision world the result is 20, but in a 4-bit
precision world the answer is 0: the intermediate product 80 wraps to
0 before the division happens.
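To make that concrete, here is a small self-contained C++ sketch (an
illustration only, not wide-int code) that simulates 4-bit unsigned
arithmetic by masking each intermediate result:

  #include <cstdio>

  /* Keep only the low 4 bits, as a 4-bit unsigned machine would.  */
  static unsigned trunc4 (unsigned x) { return x & 0xf; }

  int main ()
  {
    unsigned infinite = 8 * 10 / 4;                  /* 80 / 4 = 20 */
    unsigned finite = trunc4 (trunc4 (8 * 10) / 4);  /* 80 wraps to 0;
                                                        0 / 4 = 0 */
    printf ("infinite: %u, 4-bit: %u\n", infinite, finite);
    return 0;
  }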

Another example is to ask whether

-4 * 4 is less than 0?

Again you get a different answer with infinite precision: the true
product, -16, is negative, but in 4-bit precision it wraps to 0, which
is not.  I would argue that if I declare a variable of a fixed-width
type such as int32 or uint32 and scale these examples up, I have the
right to expect the compiler to produce the same result as the machine
would.
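The same thing as a self-contained sketch, this time simulating a
signed 4-bit wrap (again just an illustration):

  #include <cstdio>

  /* Interpret the low 4 bits of X as a signed 4-bit value (-8 .. 7).  */
  static int wrap4 (int x)
  {
    x &= 0xf;
    return x >= 8 ? x - 16 : x;
  }

  int main ()
  {
    printf ("infinite: %d\n", -4 * 4 < 0);          /* -16 < 0 is true */
    printf ("4-bit:    %d\n", wrap4 (-4 * 4) < 0);  /* 0 < 0 is false  */
    return 0;
  }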

While C and C++ may have enough wiggle room in their standards that
this is merely an unexpected, but legal, result rather than an
outright wrong one, everyone will hate you (us) if we do this.
Furthermore, Java explicitly does not allow this (not that anyone
actually uses gcj).  I do not know enough about Go, Ada, and Fortran
to say how it would affect them.

Looking at the double-int class, the only operation that does not fit
in a ring and is handled properly is shifting.  There we explicitly
pass in the precision.
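The shape of such an interface, sketched generically here rather than
quoting the actual double-int functions, is roughly:

  #include <cstdio>
  #include <cstdint>

  /* Sketch only: a left shift cannot be precision-blind.  Bits shifted
     above PREC must be discarded, and what happens for counts >= PREC
     is a target-level decision, so the precision has to be passed in.  */
  static uint64_t
  lshift_in_prec (uint64_t val, unsigned count, unsigned prec)
  {
    uint64_t mask = prec >= 64 ? ~UINT64_C (0) : (UINT64_C (1) << prec) - 1;
    if (count >= prec)
      return 0;                     /* one possible convention */
    return (val << count) & mask;   /* drop bits above the precision */
  }

  int main ()
  {
    /* 0b1010 << 2 in 4 bits: 0b101000, low 4 bits = 0b1000 = 8.  */
    printf ("%llu\n", (unsigned long long) lshift_in_prec (10, 2, 4));
    return 0;
  }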

The reason that we rarely see this kind of problem, even though
double-int effectively implements 128-bit "infinite" precision, is
that currently very little of the compiler actually uses infinite
precision in a robust way.  In a large number of places, the code
looks like:

if (TYPE_PRECISION (TREE_TYPE (...)) < HOST_BITS_PER_WIDE_INT)
  do something using inline operators.
else
  either do not do something, or use const-double.

Such code clears out most of these issues before the two passes that
embrace infinite precision get a chance to do much damage.  However,
my patch at the RTL level gets rid of most of this kind of code and
replaces it with calls to wide-int, which currently uses only
operations within the precision.  I assume that if I went down the
infinite precision road at the tree level, all of this would come to
the surface very quickly.  I would prefer not to change my
representation and then have to deal with this later.

Add, subtract, multiply, and the logical operations are all safe.  But
divide, remainder, and all of the comparisons need explicit
precisions, and so do operations like clz, ctz, and clrsb.  In total,
about half of the functions would need a precision passed in.  My
point is that once you have to start passing in the precision for all
of those operations, it is cleaner to get the precision from the
leaves of the tree, as I currently do.
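To show why even a comparison needs the precision in an infinite
precision scheme, here is a sketch (hypothetical helpers, not the
patch's API): the sign bit sits at position prec - 1, so a signed
less-than cannot be computed without knowing prec.

  #include <cstdio>
  #include <cstdint>

  /* Sign-extend the low PREC bits of X.  */
  static int64_t
  sext (uint64_t x, unsigned prec)
  {
    uint64_t sign = UINT64_C (1) << (prec - 1);
    return (int64_t) ((x ^ sign) - sign);
  }

  /* Signed less-than in a given precision.  */
  static bool
  lts_p (uint64_t a, uint64_t b, unsigned prec)
  {
    return sext (a, prec) < sext (b, prec);
  }

  int main ()
  {
    /* The bit pattern 12 is -4 in 4-bit precision but +12 in 8-bit.  */
    printf ("%d %d\n", lts_p (12, 0, 4), lts_p (12, 0, 8));  /* 1 0 */
    return 0;
  }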

Once you buy into doing the math in a particular precision, a lot of
the other issues that you raise are simply settled.  Asking how to
extend a value beyond its precision is like asking what the universe
was like before the big bang.  It is just something you do not need to
know.

I understand that you would like expressions like x + 1 to work, and
so do I.  I just could not figure out how to give them unsurprising
semantics.  In particular, g++ did not seem to be happy with me
defining two plus operators, one each for signed and unsigned HWIs.
It seems that if someone explicitly adds a wide_int and an unsigned
HWI, they have a right to expect the unsigned HWI not to be sign
extended.  But if you can show me how to do this, I am happy to go
down that road.
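For the record, a minimal reproduction of the overload problem I ran
into (a stand-in class, with the HWI types assumed to be long and
unsigned long here):

  /* Stand-in for wide_int; HWI assumed to be long for this sketch.  */
  struct wi { };
  wi operator+ (wi a, long)          { return a; }  /* signed HWI   */
  wi operator+ (wi a, unsigned long) { return a; }  /* unsigned HWI */

  int main ()
  {
    wi x;
    // x + 1;   error: ambiguous -- the int literal 1 converts to long
    //          and to unsigned long with equal rank, so g++ rejects it
    x + 1L;     /* OK: exact match for the signed overload */
    x + 1UL;    /* OK: exact match for the unsigned overload */
    return 0;
  }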

Kenny


