This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: patch to fix constant math - 4th patch - the wide-int class - patch ping for the next stage 1


On 04/19/2013 09:31 AM, Richard Biener wrote:

Richi,

As you can see, I have done almost all of what you recommended here. One thing is different and was not addressed in this email: I changed the template that is used for the second operand for rtl so that rather than taking an rtl, it takes a pair <rtl, mode>. This means that I was able to add back the assertion checking that the second operand of binary operations has the same precision as the first. I also moved the fragment for the rtl templates into this patch set rather than the rtl patch set.
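
For example (a sketch only; rtx_mode_t and its accessors get_rtx/get_mode appear in the rtl.h hunk at the end of this patch, x0/x1 stand for whatever constant rtxes are at hand, and it assumes the arithmetic operators take the same kind of templated second operand as the operator & quoted further down):

  rtx_mode_t op1 (x1, SImode);   /* the constant rtx bundled with its mode */
  wide_int a = wide_int::from_rtx (x0, SImode);
  /* Inside the binary-op templates, to_shwi2 (see the rtl.h hunk below)
     extracts both the value and GET_MODE_PRECISION (get_mode (op1)), so
     the precisions of the two operands can be checked against each other.  */
  wide_int sum = a + op1;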

+   number of elements of the vector that are in use.  When LEN *
+   HOST_BITS_PER_WIDE_INT < the precision, the value has been
+   compressed.  The values of the elements of the vector greater than
+   LEN - 1. are all equal to the highest order bit of LEN.

equal to the highest order bit of element LEN - 1. ?
Fixed, you are correct.

I have gone through the entire wide-int patch to clean this up. The bottom line is that if the precision is not a multiple of the size of a HWI, then everything above that precision is assumed to be identical to the sign bit.
Especially _not_ equal to the precision - 1 bit of the value, correct?
I do not understand your question here, because in the case discussed above, the bit at precision - 1 would not have been explicitly represented.
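
Concretely, the rule is that the highest explicitly stored word is kept sign extended from the last bit in use. A stand-alone sketch of that rule (this mirrors what sext_hwi in hwint.h does; the helper name is made up and it is not code from the patch):

  /* Keep the bits of WORD above BITS_IN_USE equal to bit BITS_IN_USE - 1,
     as the encoding described above requires.  Assumes
     0 < BITS_IN_USE <= HOST_BITS_PER_WIDE_INT.  */
  static inline HOST_WIDE_INT
  sign_extend_top_word (HOST_WIDE_INT word, unsigned int bits_in_use)
  {
    unsigned int shift = HOST_BITS_PER_WIDE_INT - bits_in_use;
    return (word << shift) >> shift;   /* arithmetic shift smears the sign bit */
  }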
+   The representation does not contain any inherent information about
+   signedness of the represented value, so it can be used to represent
+   both signed and unsigned numbers.   For operations where the results
+   depend on signedness (division, comparisons), the signedness must
+   be specified separately.  For operations where the signedness
+   matters, one of the operands to the operation specifies either
+   wide_int::SIGNED or wide_int::UNSIGNED.

The last sentence is somehow duplicated.
fixed
+   The numbers are stored as sign extended numbers as a means of
+   compression.  Leading HOST_WIDE_INTS that contain strings of either
+   -1 or 0 are removed as long as they can be reconstructed from the
+   top bit that is being represented.

I'd put this paragraph before the one that talks about signedness, next
to the one that already talks about encoding.
done
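
To make the compression concrete, here is roughly what a few values look like with 64-bit HWIs and a 128-bit precision (illustrative values, not taken from the patch):

  /* -1   ->  len = 1, val = { -1 }    the missing upper word is a copy of
                                       the sign bit of val[0]
      1   ->  len = 1, val = { 1 }
      0x8000000000000000 (2^63)
          ->  len = 2, val = { 0x8000000000000000, 0 }
                                       the extra zero word is needed because
                                       val[0] alone would sign extend to a
                                       negative value  */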
+   All constructors for wide_int take either a precision, an enum
+   machine_mode or tree_type.  */

That's probably no longer true (I'll now check).
Yes, you are correct.
+class wide_int {
+  /* Internal representation.  */
+
+  /* VAL is set to a size that is capable of computing a full
+     multiplication on the largest mode that is represented on the
+     target.  The full multiplication is used by tree-vrp.  Tree-vrp
+     currently does a 2x largest mode by 2x largest mode yielding a 4x
+     largest mode result.  If operations are added that require larger
+     buffers, then VAL needs to be changed.  */
+  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
+  unsigned short len;
+  unsigned int precision;

I wonder if there is a technical reason to stick to HOST_WIDE_INTs?
I'd say for efficiency HOST_WIDEST_FAST_INT would be more appropriate
(to get a 32bit value on 32bit x86 for example).  I of course see that
conversion to/from HOST_WIDE_INT is an important operation
that would get slightly more complicated.

Maybe just quickly checking the code generated on 32bit x86 for
HOST_WIDE_INT vs. HOST_WIDEST_FAST_INT tells us whether
it's worth considering (it would be bad if each add/multiply would
end up calling to libgcc for example - I know that doesn't happen
for x86, but maybe it would happen for an arm hosted gcc
targeting x86_64?)
This is an interesting point. My guess is that it is unlikely to be worth the work. Consider add: most machines have add with carry, and a well-written 32-bit port would use an add-with-carry sequence rather than making the libcall. If I rewrote wide-int in terms of HOST_WIDEST_FAST_INT, I would have to write some messy code to compute the carry, which is unlikely to translate into the proper carry instructions. Not to mention the cost overhead of converting to and from HOST_WIDEST_FAST_INT, given that GCC is written almost entirely using HWIs.

I thought about the idea of converting just the mul and div functions. This would be easy because I already reblock them into half-width HWIs to do the math, so I could just do a different reblocking. However, I think it is unlikely that doing this would ever show up on anyone's performance counts. Either way you do the same number of multiply instructions; it is just the subroutine wrapper that could possibly go away.
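
For reference, the half-width reblocking works like schoolbook multiplication on 32-bit digits; a minimal stand-alone sketch (plain C++ with fixed-width types, not the patch's multiply routine):

  #include <cstdint>

  /* Multiply two 64-bit values via 32-bit half-words, producing the low
     and high halves of the exact 128-bit product.  */
  static void
  umul_64 (uint64_t a, uint64_t b, uint64_t *lo, uint64_t *hi)
  {
    uint64_t a0 = a & 0xffffffffu, a1 = a >> 32;
    uint64_t b0 = b & 0xffffffffu, b1 = b >> 32;

    uint64_t p00 = a0 * b0;
    uint64_t t = a1 * b0 + (p00 >> 32);        /* cannot overflow 64 bits */
    uint64_t u = a0 * b1 + (t & 0xffffffffu);  /* cannot overflow 64 bits */

    *lo = (p00 & 0xffffffffu) | (u << 32);
    *hi = a1 * b1 + (t >> 32) + (u >> 32);
  }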
+  enum ShiftOp {
+    NONE,
+    /* There are two uses for the wide-int shifting functions.  The
+       first use is as an emulation of the target hardware.  The
+       second use is as service routines for other optimizations.  The
+       first case needs to be identified by passing TRUNC as the value
+       of ShiftOp so that shift amount is properly handled according to the
+       SHIFT_COUNT_TRUNCATED flag.  For the second case, the shift
+       amount is always truncated by the bytesize of the mode of
+       THIS.  */
+    TRUNC
+  };

double-int simply honors SHIFT_COUNT_TRUNCATED.  Why differ
from that (and thus change behavior in existing code - not sure if you
do that with introducing wide-int)?
I believe that GCC is supposed to be a little schizophrenic here, at least according to the documentation. When it is doing its own math it is defined to use SHIFT_COUNT_TRUNCATED, but when it is doing math for the program it is supposed to honor what the port does. The idea is that you pass in TRUNC when you want the shift op to do what the port does. Again, this goes back to an insistence that the optimizations are just doing what the machine would do, only earlier.
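
In other words, the distinction is roughly the following (a sketch of the control flow only; the helper name is made up, and the exact truncation width for the port-emulation case is glossed over):

  static unsigned int
  adjust_shift_count (unsigned int cnt, unsigned int prec, ShiftOp op)
  {
    if (op == TRUNC)
      /* Emulating the target: truncate only if the port's hardware does
         (the SHIFT_COUNT_TRUNCATED target macro).  */
      return SHIFT_COUNT_TRUNCATED ? cnt % prec : cnt;

    /* Internal "service routine" use: always reduce the count.  */
    return cnt % prec;
  }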
+  enum SignOp {
+    /* Many of the math functions produce different results depending
+       on if they are SIGNED or UNSIGNED.  In general, there are two
+       different functions, whose names are prefixed with an 'S'
+       or a 'U'.  However, for some math functions there is also a
+       routine that does not have the prefix and takes a SignOp
+       parameter of SIGNED or UNSIGNED.  */
+    SIGNED,
+    UNSIGNED
+  };

You seem to insist on that.  It should propagate to the various parts
of the compiler that have settled for the 'uns' integer argument.
Having one piece behave different is just weird.  I suppose I will
find code like

     wi.ext (prec, uns ? UNSIGNED : SIGNED)

in your conversion?
this has been handled in another thread.
+  static wide_int from_shwi (HOST_WIDE_INT op0,
+                            unsigned int precision, bool *overflow = 0);

+  if (precision < HOST_BITS_PER_WIDE_INT)
+    {
+      HOST_WIDE_INT t = sext_hwi (op0, precision);
+      if (t != op0 && overflow)
+       *overflow = true;
+      op0 = t;
+    }

Hm.  I'd say input values should be properly extended already (we certainly
expect that from RTL or tree constants).  So we might want to assert the
above instead of tracking it via *overflow.
See my discussion of overflow below, but you are basically correct.
+  static wide_int from_array (const HOST_WIDE_INT* op0,
+                             unsigned int len,
+                             unsigned int precision, bool need_canon = true);
+  inline static wide_int from_array (const HOST_WIDE_INT* op0,
+                                    unsigned int len,
+                                    enum machine_mode mode);
+  inline static wide_int from_array (const HOST_WIDE_INT* op0,
+                                    unsigned int len,
+                                    const_tree type);

I still don't like the overloads precision vs. machine_mode vs. tree type.
It's much more explanative what you specify here if you write

   from_array (&x, len, GET_MODE_PRECISION (mode))

instead of

   from_array (&x, len, mode)

and it's not that much more typing either.
done
+  static wide_int from_double_int (double_int,
+                                  unsigned int precision);
+  inline static wide_int from_double_int (double_int, enum machine_mode);
this one would lack the tree type overload (I of course think it has an
excessive overload for machine_mode).
done
+  inline HOST_WIDE_INT to_shwi (unsigned int prec = 0) const;
+  inline unsigned HOST_WIDE_INT to_uhwi (unsigned int prec = 0) const;

this is

    wi.sext (prec);
    wi.to_shwi ();

which is less odd than handling prec == 0 specially.
No, it is not: wi.sext returns a wide_int, while these return a HWI.
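
To make the distinction concrete (illustrative only, using the signatures quoted above; the exact extension semantics of to_shwi's PREC argument are assumed here):

  wide_int w = wide_int::from_shwi (-1, 128);   /* a 128-bit -1 */
  wide_int x = w.sext (32);                     /* still a wide_int, precision 128 */
  HOST_WIDE_INT h = w.to_shwi (32);             /* a host integer, taken from the
                                                   low 32 bits and sign extended */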
+  static wide_int max_value (unsigned int prec,
+                            unsigned int precision, SignOp sgn);

two precisions - ugh ;)  In some places having to have a precision is ugly :/
I cleaned this up some. There are a few places where you want the min or max value to be represented in a larger precision than the one you are asking about. I reversed the last two args and gave the last one a default value, so most of the time you never see the second precision.
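
With that reordering, the common case and the wider-result case look roughly like this (a sketch; the argument order matches wide_int::max_value in this patch, and the default for the last argument is as described above):

  wide_int m  = wide_int::max_value (32, wide_int::SIGNED);       /* 0x7fffffff, precision 32 */
  wide_int mw = wide_int::max_value (32, wide_int::SIGNED, 128);  /* same value, held at precision 128 */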
+  inline static wide_int minus_one (unsigned int prec);
+  inline static wide_int zero (unsigned int prec);
+  inline static wide_int one (unsigned int prec);
+  inline static wide_int two (unsigned int prec);

here as well.  I see two (1) properly truncates the value to zero ;)
It just comes to my mind that these could have "arbitrary" precision.
"arbitrary" == MAX_SIZE * HOST_BITS_PER_WIDE_INT.  Which
would get us back to mixed precision operations ...

You would end up with a wide_int that has been properly canonized, but it would not have a proper precision in it, so it would not work to remove the precision. These are now rarely used since they are not needed as the second argument of binary functions.
+  /* Printing functions.  */
+
+  void print_dec (char *buf, SignOp sgn) const;
+  void print_dec (FILE *file, SignOp sgn) const;
+  void print_decs (char *buf) const;
+  void print_decs (FILE *file) const;
+  void print_decu (char *buf) const;
+  void print_decu (FILE *file) const;
+  void print_hex (char *buf) const;
+  void print_hex (FILE *file) const;

making those non-member functions would allow getting rid of stdio.h
from wide-int.h (similar making the tree / rtx / mode argument methods
either standalone or making them templates and externalizing the
implementation to a separate template class would get rid of the
rtl / tree dependency)
done
+  inline bool neg_p () const;

how can you test that?  wide-int doesn't have sign information.
If you are asking this question, then you are assuming that the number is signed.
+bool
+wide_int::neg_p () const
+{
+  return sign_mask () != 0;

not exactly efficient either ...
It is likely to be good enough.
+HOST_WIDE_INT
+wide_int::sign_mask () const
+{
+  int i = len - 1;
+  if (precision < HOST_BITS_PER_WIDE_INT)
+    return ((val[0] << (HOST_BITS_PER_WIDE_INT - precision))
+           >> (HOST_BITS_PER_WIDE_INT - 1));
+
+  /* VRP appears to be badly broken and this is a very ugly fix.  */
+  if (i >= 0)
+    return val[i] >> (HOST_BITS_PER_WIDE_INT - 1);
+
+  gcc_unreachable ();
+#if 0
+  return val[len - 1] >> (HOST_BITS_PER_WIDE_INT - 1);
+#endif
+}

I can't see what's the "fix".  It seems to be equivalent to the commented out
code.  Apart from not handling len == 0 for which you ICE then.
The problem is that tree-vrp is broken in a way that keeps this function from compiling cleanly. It has nothing to do with using this function, except that we cannot bootstrap the compiler with the function written in the straightforward manner.

If I write this function in the obvious way:
/* Produce 0 or -1 that is the smear of the sign bit. */

HOST_WIDE_INT
wide_int::sign_mask () const
{
  if (precision < HOST_BITS_PER_WIDE_INT)
    return ((val[0] << (HOST_BITS_PER_WIDE_INT - precision))
	    >> (HOST_BITS_PER_WIDE_INT - 1));

  return val[len - 1] >> (HOST_BITS_PER_WIDE_INT - 1);
}

The function fails to bootstrap because there is a bug in VRP:
libbacktrace -DCLOOG_INT_GMP ../../gccWide/gcc/emit-rtl.c -o emit-rtl.o
In file included from ../../gccWide/gcc/rtl.h:31:0,
from ../../gccWide/gcc/emit-rtl.c:39:
../../gccWide/gcc/wide-int.h: In function ‘rtx_def* immed_wide_int_const(const wide_int&, machine_mode)’: ../../gccWide/gcc/wide-int.h:649:21: error: array subscript is below array bounds [-Werror=array-bounds]
return val[len - 1] >> (HOST_BITS_PER_WIDE_INT - 1);

So I hacked the code to fool VRP into shutting up. I was planning to file a bugzilla once the code was in, but your eagle eye caught me on it.


Back to neg_p ... it computes wrong information.
from_uhwi (-1, HOST_BITS_PER_WIDE_INT).neg_p () returns true.
No, it produces exactly the correct answer. x.neg_p is shorthand for x.lts_p (0). The sign is not part of the number; the sign is an attribute of the operation which says how to interpret a particular bit pattern.



I suppose you want to rename it to msb () (implementing that efficiently
and using it from sign_mask, too)
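
For reference, that suggestion amounts to something like the following sketch (it relies on the encoding guarantee discussed further down, which does not always hold for values taken directly from rtl constants, so treat it as illustrative):

  bool
  wide_int::msb () const
  {
    /* Under the compressed rep the top explicit element carries the sign.  */
    return val[len - 1] < 0;
  }

  HOST_WIDE_INT
  wide_int::sign_mask () const
  {
    return msb () ? (HOST_WIDE_INT) -1 : 0;
  }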

+  template <typename T>
+    inline bool gt_p (T c, SignOp sgn) const;
+  template <typename T>
+    inline bool gts_p (T c) const;
+  template <typename T>
+    inline bool gtu_p (T c) const;

it's bad that we can't use the sign information we have available in almost
all cases ... (where precision is not an exact multiple of
HOST_BITS_PER_WIDE_INT
and len == precision / HOST_BITS_PER_WIDE_INT).  It isn't hard to encode
a sign - you just have to possibly waste a word of zeroes for positive
values where at the moment precision is an exact multiple of
HOST_BITS_PER_WIDE_INT and len == precision / HOST_BITS_PER_WIDE_INT.
Which of course means that the encoding can be one word larger than
maximally required by 'precision'.
discussed elsewhere.
+  wide_int force_to_size (unsigned int precision, SignOp sgn) const;
+  inline wide_int force_to_size (enum machine_mode mode, SignOp sgn) const;
+  inline wide_int force_to_size (const_tree type) const;
+  inline wide_int force_to_size (const_tree type, SignOp sgn) const;
+
+  inline wide_int sforce_to_size (enum machine_mode mode) const;
+  inline wide_int sforce_to_size (const_tree type) const;
+  inline wide_int zforce_to_size (enum machine_mode mode) const;
+  inline wide_int zforce_to_size (const_tree type) const;

too many overloads again
I got rid of the type and mode overloads since you insist.
Looking at

+template <typename T>
+wide_int
+wide_int::operator & (T c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  int cl;
+
+  s = to_shwi2 (ws, &cl, c);
+

I see

+  template <typename T>
+  static inline const HOST_WIDE_INT* to_shwi2 (HOST_WIDE_INT *s, int *l, T x)
+  {
+    s[0] = x;
+    if (~(T)0 < (T)0
...

+  static inline const HOST_WIDE_INT* to_shwi2 (HOST_WIDE_INT *s
ATTRIBUTE_UNUSED,
+                                       int *l, const wide_int &y)
+  {
+    *l = y.len;
...

I think you want to use template specialization here, not overloading.  Thus,

   template <>
   static inline const HOST_WIDE_INT* to_shwi2 (HOST_WIDE_INT *s, int
*l, const wide_int &y)

and adjust the HWI case to look like

   template <typename T>
   static inline const HOST_WIDE_INT* to_shwi2 (HOST_WIDE_INT *s, int
*l, const T& x)

you want to move that ~(T)0 < (T)0 check up to template substitution level
to allow SFINAE to disable the template if that's not computable.

+    s[0] = x;
+    if (~(T)0 < (T)0
+       || sizeof (T) < sizeof (HOST_WIDE_INT))
+      {
+       *l = 1;
+      }
+    else
+      {
+       s[1] = 0;
+       *l = 2;

that's only required if the MSB is set, otherwise it's not properly compressed

+      }
+    return s;
done
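
Here is a stand-alone illustration of the shape being suggested (plain C++, not the patch's to_shwi2; hwi, wide_value and to_hwi_vec are made-up stand-ins, and the zero-word rule follows the comment just above):

  typedef long long hwi;                       /* stand-in for HOST_WIDE_INT */
  struct wide_value { const hwi *val; int len; };

  /* Primary template: handles the host integer types.  */
  template <typename T>
  const hwi *
  to_hwi_vec (hwi *scratch, int *len, const T &x)
  {
    scratch[0] = (hwi) x;
    if (~(T) 0 < (T) 0                         /* T is signed */
        || sizeof (T) < sizeof (hwi)
        || scratch[0] >= 0)
      *len = 1;
    else
      {
        /* Unsigned, full-width value with its top bit set: add a zero
           word so the encoding stays non-negative.  */
        scratch[1] = 0;
        *len = 2;
      }
    return scratch;
  }

  /* Explicit specialization for the wide type itself: no copy needed.  */
  template <>
  const hwi *
  to_hwi_vec<wide_value> (hwi *, int *len, const wide_value &y)
  {
    *len = y.len;
    return y.val;
  }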
hmm, looking at from_uhwi I see that you are adding the extra zero words
as I suggested ... which means you _do_ have reliable sign information
available (well, not for the case where len would end up bigger than MAX_LEN).
So - can we simplify some things with that?  I can think of the ordered
comparisons for example.  Also the encoding description should be
clarified that there _is_ sign information available (and that len can be
bigger than precision / HOST_BITS_PER_WIDE_INT).
I actually got rid of the code below and one other place where I was adding the extra zeros, at least when they are beyond the precision. First, we should never be looking beyond the precision. Second, you cannot do this with rtl: for values stored in CONST_INTs you get 64 bits and that is it.
Returning to to_shwi2 (odd name, btw):

+       /* Copy the result because it does not fit in two HWIs.  */
+       s[0] = TREE_INT_CST_LOW (tcst);
+       s[1] = TREE_INT_CST_HIGH (tcst);
+       s[2] = 0;
+       *l = 3;
+       return s;

ah, this is why you need the extra argument ;)  Btw, what happens
when a proper encoding does not fit in MAX_LEN words?  That is,
we'd need an extra zero word?
That never happens. This is what the patch (which has already gone in) that looked at genmodes on the machine was all about: it computes the largest mode that can appear in a port and then multiplies that by 4 to compute the amount of space to allocate on the stack. So WIDE_INT_MAX_ELTS is really always going to be large enough. Given that it is only very short-lived stack space, this should not be an issue. However, this code is now gone because you never see past the precision, so there is no reason to ever put in that word.
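
For the record, the sizing works out to roughly the following (illustrative only; the real constant comes from the already-committed genmodes patch, so the exact expression may differ):

  /* Enough HWIs to hold four times the largest integer mode the target
     supports, rounded up to whole HWIs.  */
  #define WIDE_INT_MAX_ELTS \
    ((4 * MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT - 1) \
     / HOST_BITS_PER_WIDE_INT)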
+  /* There should logically be an overload for rtl here, but it cannot
+     be here because of circular include issues.  It is in rtl.h.  */
+  static inline const HOST_WIDE_INT* to_shwi2
+    (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, int *l, rtx rcst);

in rtl.c I suppose.  Btw, this is why I'm suggesting to defer implementation
to a template - you can implement that outside of wide-int.h even without
declaring the specialization here.
It is in rtl.h so that it can be inlined and mostly go away. That code is in the second of the three patches that were submitted together.

Is there an accessible branch with the wide-int work?  I may be tempted
to propose some changes in form of patches.  If not then I think the
work is in a state that is suitable to put on a merge-branch.
At this point many things are still broken. I have all of the rtl changes and the first of the tree patches, which I am keeping up to date as you have me make these changes, but the changes are too big to just keep pushing through the whole mess. However, you can see how all of this works at the tree level by looking at the third patch.

+wide_int
+wide_int::add (const wide_int &op1, SignOp sgn, bool *overflow) const
+{
...
+  /* Uncompress the rest.  */
+  for (i = len; i < op1.len; i++)
+    {
+      o0 = val[i];
+      o1 = mask1;
+      x = o0 + o1 + carry;
+      result.val[i] = x;
+      old_carry = carry;
+      carry = x < o0;
+    }
+  for (i = len; i < op1.len; i++)
+    {
+      o0 = mask0;
+      o1 = op1.val[i];
+      x = o0 + o1 + carry;
+      result.val[i] = x;
+      old_carry = carry;
+      carry = x < o0;
+    }

Cut & paste error here I think.  at least one should run up to this->len, no?
And the first loop should run to min (len, op1.len) only.  How much
testing coverage does the code have for "interesting" values?
A standalone testing program will probably reveal such issues (due to
the rich-ness of the current API covering everything will be hard :/)
When I started this, my model was double-int (yes, I did start with that), and there are several functions in it that do math and report overflow. So I added corresponding functions to wide-int. As it turns out, I never needed them, because they only detect overflow when the value does not fit in two HWIs rather than, as I had assumed, when the value does not fit in the precision that the math is being done in. So they never got tested. They are now gone.
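
For reference, the loop structure being asked for would look roughly like this (a sketch of the shape only, not the deleted function; MIN/MAX are the usual GCC macros):

  unsigned int i;
  unsigned int small = MIN (len, op1.len);
  unsigned int large = MAX (len, op1.len);

  /* Add the words both operands store explicitly.  */
  for (i = 0; i < small; i++)
    { /* result.val[i] = val[i] + op1.val[i] + carry; update carry.  */ }

  /* Uncompress whichever operand is shorter using its sign mask.  */
  for (i = small; i < large; i++)
    {
      HOST_WIDE_INT o0 = i < len ? val[i] : mask0;
      HOST_WIDE_INT o1 = i < op1.len ? op1.val[i] : mask1;
      /* result.val[i] = o0 + o1 + carry; update carry.  */
    }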

Looking at

+HOST_WIDE_INT
+wide_int::sign_mask () const
+{
+  int i = len - 1;
+  if (precision < HOST_BITS_PER_WIDE_INT)
+    return ((val[0] << (HOST_BITS_PER_WIDE_INT - precision))
+           >> (HOST_BITS_PER_WIDE_INT - 1));
+
+  /* VRP appears to be badly broken and this is a very ugly fix.  */
+  if (i >= 0)
+    return val[i] >> (HOST_BITS_PER_WIDE_INT - 1);
+
+  gcc_unreachable ();
+#if 0
+  return val[len - 1] >> (HOST_BITS_PER_WIDE_INT - 1);
+#endif
+}

again - this looks like it does overly complicated work.  Our encoding
of values guarantees that the following is correct:

HOST_WIDE_INT
wide_int::sign_mask () const
{
   if (val[len - 1] < 0)
     return -1;
   else
     return 0;
}

I suppose quite some code can be simplified that calls sign_mask
currently (thanks to the changed encoding).
I think guarantee is too strong a word; it is generally true, but not always. The problem comes from the rtl constants: I cannot do the trick with the rtl constants, because I only get to look at the bits under the precision.


back to ::add ():

+  small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  if (small_prec == 0)
+    {
+      if (sgn == wide_int::SIGNED)
+       {
+         if (((!(o0 ^ o1)) & (x ^ o0)) >> (HOST_BITS_PER_WIDE_INT - 1))
+           *overflow = true;

shouldn't this share code with the non-overflow add_large and simply
do overflow detection conditional on overlflow being non-NULL?
Because the add_large code seems to be correct with respect to
the issues noted above ...
Not since I deleted the one that did the overflow.
+ ex:
+  result.canonize ();
+  return result;
+}

canonizing is not strictly necessary - in fact it is unlikely to do
something useful most of the time.  I'd say we should only
canonize before building a tree or RTL with that representation.
It is your "most of the time" that I have a problem with. Either the invariant that values are canonized is maintained or it is not; I have decided we consistently maintain that invariant.
Note that canonize is wrong, too:

+wide_int::canonize ()
+{
+  int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  int blocks_needed = BLOCKS_NEEDED (precision);
+  HOST_WIDE_INT top;
+  int i;
+
+  if (len > blocks_needed)
+    len = blocks_needed;

that will set a UHWI -1 to have len == 1, ignoring the extra needed
zero word.  Thus, either BLOCKS_NEEDED is broken or its
uses need to be audited.
I have removed the case from to_shwi2 for trees where I add the block with the extra 0 on top. The only invariant that is maintained is that there is nothing that can be looked at above the precision, and with that one exception I am consistent.
+
+  /* Clean up the top bits for any mode that is not a multiple of a HWI.  */
+  if (len == blocks_needed && small_prec)
+    val[len - 1] = sext_hwi (val[len - 1], small_prec);

That's not necessary.  By construction all arithmetic operations will
preserve proper extension.  And you unconditionally sign-extend
small precision "large" positive constants here which is even wrong
(-1U, precision == 32, HWI == 64, original rep { 0x00..00fffff }, broken
{ 0xfff...fff } ).
That is not true. If you do a multiply, you get garbage in the top bits. This makes it easy to go to the other reps without canonizing. We only call canonize for certain operations that do not preserve the top bits properly.
+  if (len == 1)
+    return;

in fact for len == 1 we should do nothing in this function from the start.
Callers that also want to properly sign or zero extend the result value
should do so using ext (), not abuse another implementation inside
canonize () (which should only optimize the encoding).

Ok, I'll stop reviewing here.  It's a lot better than before - thanks for
doing the changes.

I still like you to cut down the interface somewhat and _test_ what
is remaining.  Put the code on a branch off trunk so I and others can
produce patches against it.
Richard, this is going to be very hard to do. Let me explain why, so we can figure out a path forward. I can do about 95% of the stuff at the tree level by submitting small, easy-to-review, self-contained patches where I convert one pass at a time. All of this leaves tree-cst in its current representation. I can get almost to the end, until I hit tree-vrp and the constant propagation passes, since they really do need a wider representation to do the infinite-precision stuff that they do.

My plan was to convert everything: converting, testing, submitting and committing one pass at a time until I get near the end, and then commit the last two patches and the change in rep at one time.

If I go the other route and make the big branch, then it will become very hard to tear off small patches for easy review and testing. There are a lot of things in the current code base that expect tree-cst to be exactly two HWIs. I am not trying to hide anything from you, but you have made it clear that you want to see this as a series of bite-sized chunks, not as one big blob, and so I had planned to give it to you like that.


Thanks,
Richard.

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 903125e..25ed7bc 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -857,7 +857,7 @@ RTL_BASE_H = coretypes.h rtl.h rtl.def $(MACHMODE_H) reg-notes.def \
   insn-notes.def $(INPUT_H) $(REAL_H) statistics.h $(VEC_H) \
   $(FIXED_VALUE_H) alias.h $(HASHTAB_H)
 FIXED_VALUE_H = fixed-value.h $(MACHMODE_H) double-int.h
-RTL_H = $(RTL_BASE_H) $(FLAGS_H) genrtl.h
+RTL_H = $(RTL_BASE_H) $(FLAGS_H) genrtl.h wide-int.h
 RTL_ERROR_H = rtl-error.h $(RTL_H) $(DIAGNOSTIC_CORE_H)
 READ_MD_H = $(OBSTACK_H) $(HASHTAB_H) read-md.h
 PARAMS_H = params.h params.def
@@ -946,7 +946,7 @@ TREE_PRETTY_PRINT_H = tree-pretty-print.h $(PRETTY_PRINT_H)
 GIMPLE_PRETTY_PRINT_H = gimple-pretty-print.h $(TREE_PRETTY_PRINT_H)
 DIAGNOSTIC_CORE_H = diagnostic-core.h $(INPUT_H) bversion.h diagnostic.def
 DIAGNOSTIC_H = diagnostic.h $(DIAGNOSTIC_CORE_H) $(PRETTY_PRINT_H)
-DWARF2OUT_H = dwarf2out.h $(DWARF2_H)
+DWARF2OUT_H = dwarf2out.h $(DWARF2_H) wide-int.h
 C_PRETTY_PRINT_H = c-family/c-pretty-print.h $(PRETTY_PRINT_H) \
 	$(C_COMMON_H) $(TREE_H)
 SCEV_H = tree-scalar-evolution.h $(GGC_H) tree-chrec.h $(PARAMS_H)
@@ -1461,6 +1461,8 @@ OBJS = \
 	varpool.o \
 	vmsdbgout.o \
 	web.o \
+	wide-int.o \
+	wide-int-print.o \
 	xcoffout.o \
 	$(out_object_file) \
 	$(EXTRA_OBJS) \
@@ -2693,6 +2695,8 @@ targhooks.o : targhooks.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TREE_H) \
    tree-ssa-alias.h $(TREE_FLOW_H)
 common/common-targhooks.o : common/common-targhooks.c $(CONFIG_H) $(SYSTEM_H) \
    coretypes.h $(INPUT_H) $(TM_H) $(COMMON_TARGET_H) common/common-targhooks.h
+wide-int.o: wide-int.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H)
+wide-int-print.o: wide-int-print.c wide-int-print.h wide-int.h
 
 bversion.h: s-bversion; @true
 s-bversion: BASE-VER
@@ -2861,10 +2865,10 @@ emit-rtl.o : emit-rtl.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
    $(HASHTAB_H) $(TM_P_H) debug.h langhooks.h $(TREE_PASS_H) gt-emit-rtl.h \
    $(DF_H) $(PARAMS_H) $(TARGET_H)
 real.o : real.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(TREE_H) \
-   $(DIAGNOSTIC_CORE_H) $(TM_P_H) $(REAL_H) dfp.h realmpfr.h
+   $(DIAGNOSTIC_CORE_H) $(TM_P_H) $(REAL_H) dfp.h realmpfr.h wide-int.h
 realmpfr.o : realmpfr.c realmpfr.h $(CONFIG_H) $(SYSTEM_H) coretypes.h $(REAL_H) $(TREE_H)
 dfp.o : dfp.c dfp.h $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H)	$(TREE_H) \
-   $(TM_P_H) $(REAL_H) $(DECNUM_H)
+   $(TM_P_H) $(REAL_H) $(DECNUM_H) wide-int.h
 fixed-value.o: fixed-value.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \
    $(TREE_H) $(REAL_H) $(DIAGNOSTIC_CORE_H)
 jump.o : jump.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) $(RTL_H) \
@@ -3752,6 +3756,7 @@ GTFILES = $(CPP_ID_DATA_H) $(srcdir)/input.h $(srcdir)/coretypes.h \
   $(srcdir)/alias.c $(srcdir)/bitmap.c $(srcdir)/cselib.c $(srcdir)/cgraph.c \
   $(srcdir)/ipa-prop.c $(srcdir)/ipa-cp.c \
   $(srcdir)/dbxout.c \
+  $(srcdir)/wide-int.h \
   $(srcdir)/dwarf2out.h \
   $(srcdir)/dwarf2asm.c \
   $(srcdir)/dwarf2cfi.c \
@@ -3949,15 +3954,16 @@ CFLAGS-gengtype-parse.o += -DGENERATOR_FILE
 build/gengtype-parse.o: $(BCONFIG_H)
 
 gengtype-state.o build/gengtype-state.o: gengtype-state.c $(SYSTEM_H) \
-  gengtype.h errors.h double-int.h version.h $(HASHTAB_H) $(OBSTACK_H) \
-  $(XREGEX_H)
+  gengtype.h errors.h double-int.h version.h $(HASHTAB_H)    \
+  $(OBSTACK_H) $(XREGEX_H)
 gengtype-state.o: $(CONFIG_H)
 CFLAGS-gengtype-state.o += -DGENERATOR_FILE
 build/gengtype-state.o: $(BCONFIG_H)
-
+wide-int.h: $(GTM_H) $(TREE_H) hwint.h $(OPTIONS_H)			\
+  $(MACHMODE_H) double-int.h dumpfile.h $(REAL_H)
 gengtype.o build/gengtype.o : gengtype.c $(SYSTEM_H) gengtype.h 	\
-  rtl.def insn-notes.def errors.h double-int.h version.h $(HASHTAB_H) \
-  $(OBSTACK_H) $(XREGEX_H)
+  rtl.def insn-notes.def errors.h double-int.h version.h     		\
+  $(HASHTAB_H) $(OBSTACK_H) $(XREGEX_H)
 gengtype.o: $(CONFIG_H)
 CFLAGS-gengtype.o += -DGENERATOR_FILE
 build/gengtype.o: $(BCONFIG_H)
diff --git a/gcc/rtl.h b/gcc/rtl.h
index e9013ec..d019d9f 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -20,6 +20,7 @@ along with GCC; see the file COPYING3.  If not see
 #ifndef GCC_RTL_H
 #define GCC_RTL_H
 
+#include <utility>
 #include "statistics.h"
 #include "machmode.h"
 #include "input.h"
@@ -28,6 +29,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "fixed-value.h"
 #include "alias.h"
 #include "hashtab.h"
+#include "wide-int.h"
 #include "flags.h"
 
 /* Value used by some passes to "recognize" noop moves as valid
@@ -1309,6 +1385,55 @@ struct address_info {
   bool autoinc_p;
 };
 
+#ifndef GENERATOR_FILE
+
+/* Accessors for rtx_mode. */
+static inline rtx
+get_rtx (const rtx_mode_t p)
+{
+  return p.first;
+}
+
+static inline enum machine_mode
+get_mode (const rtx_mode_t p)
+{
+  return p.second;
+}
+
+/* Specialization of to_shwi2 function in wide-int.h for rtl.  This
+   cannot be in wide-int.h because of circular includes.  */
+template<>
+inline const HOST_WIDE_INT*
+wide_int::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED, 
+		    unsigned int *l, unsigned int *p,
+		    const rtx_mode_t& rp)
+{
+  const rtx rcst = get_rtx (rp);
+  enum machine_mode mode = get_mode (rp);
+  *p = GET_MODE_PRECISION (mode);
+
+  switch (GET_CODE (rcst))
+    {
+    case CONST_INT:
+      *l = 1;
+      return &INTVAL (rcst);
+      
+    case CONST_WIDE_INT:
+      *l = CONST_WIDE_INT_NUNITS (rcst);
+      return &CONST_WIDE_INT_ELT (rcst, 0);
+      
+    case CONST_DOUBLE:
+      *l = 2;
+      return &CONST_DOUBLE_LOW (rcst);
+      
+    default:
+      gcc_unreachable ();
+    }
+}
+
+#endif
+
+
 extern void init_rtlanal (void);
 extern int rtx_cost (rtx, enum rtx_code, int, bool);
 extern int address_cost (rtx, enum machine_mode, addr_space_t, bool);
diff --git a/gcc/wide-int-print.c b/gcc/wide-int-print.c
new file mode 100644
index 0000000..124b17a
--- /dev/null
+++ b/gcc/wide-int-print.c
@@ -0,0 +1,138 @@
+/* Printing operations with very long integers.
+   Copyright (C) 2012-2013 Free Software Foundation, Inc.
+   Contributed by Kenneth Zadeck <zadeck@naturalbridge.com>
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the
+Free Software Foundation; either version 3, or (at your option) any
+later version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "hwint.h"
+#include "wide-int.h"
+#include "wide-int-print.h"
+
+/*
+ * public printing routines.
+ */
+
+#define BLOCKS_NEEDED(PREC) \
+  (((PREC) + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT)
+
+void 
+print_dec (const wide_int &wi, char *buf, SignOp sgn)
+{
+  if (sgn == SIGNED)
+    print_decs (wi, buf);
+  else
+    print_decu (wi, buf);
+}
+
+void 
+print_dec (const wide_int &wi, FILE *file, SignOp sgn)
+{
+  if (sgn == SIGNED)
+    print_decs (wi, file);
+  else
+    print_decu (wi, file);
+}
+
+
+/* Try to print the signed self in decimal to BUF if the number fits
+   in a HWI.  Otherwise print in hex.  */
+
+void 
+print_decs (const wide_int &wi, char *buf)
+{
+  if ((wi.get_precision () <= HOST_BITS_PER_WIDE_INT)
+      || (wi.get_len () == 1 && !wi.neg_p ()))
+    sprintf (buf, HOST_WIDE_INT_PRINT_DEC, wi.elt (0));
+  else
+    print_hex (wi, buf);
+}
+
+/* Try to print the signed self in decimal to FILE if the number fits
+   in a HWI.  Otherwise print in hex.  */
+
+void 
+print_decs (const wide_int &wi, FILE *file)
+{
+  char buf[(2 * MAX_BITSIZE_MODE_ANY_INT / BITS_PER_UNIT) + 4];
+  print_decs (wi, buf);
+  fputs (buf, file);
+}
+
+/* Try to print the unsigned self in decimal to BUF if the number fits
+   in a HWI.  Otherwise print in hex.  */
+
+void 
+print_decu (const wide_int &wi, char *buf)
+{
+  if ((wi.get_precision () <= HOST_BITS_PER_WIDE_INT)
+      || (wi.get_len () == 1 && !wi.neg_p ()))
+    sprintf (buf, HOST_WIDE_INT_PRINT_UNSIGNED, wi.elt (0));
+  else
+    print_hex (wi, buf);
+}
+
+/* Try to print the unsigned self in decimal to FILE if the number fits
+   in a HWI.  Otherwise print in hex.  */
+
+void 
+print_decu (const wide_int &wi, FILE *file)
+{
+  char buf[(2 * MAX_BITSIZE_MODE_ANY_INT / BITS_PER_UNIT) + 4];
+  print_decu (wi, buf);
+  fputs (buf, file);
+}
+
+void 
+print_hex (const wide_int &wi, char *buf)
+{
+  int i = wi.get_len ();
+
+  if (wi.zero_p ())
+    sprintf (buf, "0x");
+  else
+    {
+      if (wi.neg_p ())
+	{
+	  int j;
+	  /* If the number is negative, we may need to pad value with
+	     0xFFF...  because the leading elements may be missing and
+	     we do not print a '-' with hex.  */
+	  for (j = BLOCKS_NEEDED (wi.get_precision ()); j > i; j--)
+	    buf += sprintf (buf, HOST_WIDE_INT_PRINT_PADDED_HEX, (HOST_WIDE_INT) -1);
+	    
+	}
+      else
+	buf += sprintf (buf, HOST_WIDE_INT_PRINT_HEX, wi.elt (--i));
+      while (--i >= 0)
+	buf += sprintf (buf, HOST_WIDE_INT_PRINT_PADDED_HEX, wi.elt (i));
+    }
+}
+
+/* Print one big hex number to FILE.  Note that some assemblers may not
+   accept this for large modes.  */
+void 
+print_hex (const wide_int &wi, FILE *file)
+{
+  char buf[(2 * MAX_BITSIZE_MODE_ANY_INT / BITS_PER_UNIT) + 4];
+  print_hex (wi, buf);
+  fputs (buf, file);
+}
+
diff --git a/gcc/wide-int-print.h b/gcc/wide-int-print.h
new file mode 100644
index 0000000..f285c57
--- /dev/null
+++ b/gcc/wide-int-print.h
@@ -0,0 +1,37 @@
+/* Print wide integers.
+   Copyright (C) 2013 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the
+Free Software Foundation; either version 3, or (at your option) any
+later version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef WIDE_INT_PRINT_H
+#define WIDE_INT_PRINT_H
+
+#include <stdio.h>
+#include "wide-int.h"
+
+/* Printing functions.  */
+
+extern void print_dec (const wide_int &wi, char *buf, SignOp sgn);
+extern void print_dec (const wide_int &wi, FILE *file, SignOp sgn);
+extern void print_decs (const wide_int &wi, char *buf);
+extern void print_decs (const wide_int &wi, FILE *file);
+extern void print_decu (const wide_int &wi, char *buf);
+extern void print_decu (const wide_int &wi, FILE *file);
+extern void print_hex (const wide_int &wi, char *buf);
+extern void print_hex (const wide_int &wi, FILE *file);
+
+#endif /* WIDE_INT_PRINT_H */
diff --git a/gcc/wide-int.c b/gcc/wide-int.c
new file mode 100644
index 0000000..4853093
--- /dev/null
+++ b/gcc/wide-int.c
@@ -0,0 +1,2394 @@
+/* Operations with very long integers.  -*- C++ -*-
+   Copyright (C) 2012-2013 Free Software Foundation, Inc.
+   Contributed by Kenneth Zadeck <zadeck@naturalbridge.com>
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the
+Free Software Foundation; either version 3, or (at your option) any
+later version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "hwint.h"
+#include "wide-int.h"
+#include "rtl.h"
+#include "tree.h"
+#include "dumpfile.h"
+
+/* This is the maximal size of the buffer needed for dump.  */
+const int MAX_SIZE = 4 * (MAX_BITSIZE_MODE_ANY_INT / 4
+		     + MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT + 32);
+
+/*
+ * Internal utilities.
+ */
+
+/* Quantities to deal with values that hold half of a wide int.  Used
+   in multiply and divide.  */
+#define HALF_INT_MASK (((HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT) - 1)
+
+#define BLOCK_OF(TARGET) ((TARGET) / HOST_BITS_PER_WIDE_INT)
+#define BLOCKS_NEEDED(PREC) \
+  (((PREC) + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT)
+#define SIGN_MASK(X) (((HOST_WIDE_INT)X) >> (HOST_BITS_PER_WIDE_INT - 1))
+/*
+ * Conversion routines in and out of wide-int.
+ */
+
+/* Convert OP0 into a wide int of PRECISION.  If the precision is less
+   than HOST_BITS_PER_WIDE_INT, zero extend the value of the word.
+   The overflow bit is set if the number was too large to fit in the
+   mode.  */
+
+wide_int
+wide_int::from_shwi (HOST_WIDE_INT op0,
+		     unsigned int precision)
+{
+  wide_int result;
+
+  result.precision = precision;
+  result.val[0] = op0;
+  result.len = 1;
+
+  return result;
+}
+
+/* Convert OP0 into a wide int of PRECISION.  If the precision is less
+   than HOST_BITS_PER_WIDE_INT, zero extend the value of the word.
+   The overflow bit is set if the number was too large to fit in the
+   mode.  */
+
+wide_int
+wide_int::from_uhwi (unsigned HOST_WIDE_INT op0, 
+		     unsigned int precision)
+{
+  wide_int result;
+
+  result.precision = precision;
+  result.val[0] = op0;
+
+  /* If the top bit is a 1, we need to add another word of 0s, since
+     the value would otherwise sign extend to the wrong value: the
+     infinite expansion of any unsigned number must have 0s at the top.  */
+  if ((HOST_WIDE_INT)op0 < 0 && precision > HOST_BITS_PER_WIDE_INT)
+    {
+      result.val[1] = 0;
+      result.len = 2;
+    }
+  else
+    result.len = 1;
+
+  return result;
+}
+
+/* Create a wide_int from an array of host_wide_ints in OP1 of LEN.
+   The result has PRECISION.  */
+
+wide_int
+wide_int::from_array (const HOST_WIDE_INT *op1, unsigned int len, 
+		      unsigned int precision, bool need_canon)
+{
+  unsigned int i;
+  wide_int result;
+  
+  result.len = len;
+  result.precision = precision;
+
+  for (i=0; i < len; i++)
+    result.val[i] = op1[i];
+
+  if (need_canon)
+    result.canonize ();
+  return result;
+}
+
+/* Convert a double int into a wide int with precision PREC.  */
+
+wide_int
+wide_int::from_double_int (double_int di, unsigned int prec)
+{
+  HOST_WIDE_INT op = di.low;
+  wide_int result;
+
+  result.precision = prec;
+  result.len = (prec <= HOST_BITS_PER_WIDE_INT) ? 1 : 2;
+
+  if (prec < HOST_BITS_PER_WIDE_INT)
+    result.val[0] = sext_hwi (op, prec);
+  else
+    {
+      result.val[0] = op;
+      if (prec > HOST_BITS_PER_WIDE_INT)
+	{
+	  if (prec < HOST_BITS_PER_DOUBLE_INT)
+	    result.val[1] = sext_hwi (di.high, prec);
+	  else
+	    result.val[1] = di.high;
+	}
+    }
+
+  if (result.len == 2)
+    result.canonize ();
+
+  return result;
+}
+
+/* Convert an integer cst into a wide int.  */
+
+wide_int
+wide_int::from_tree (const_tree tcst)
+{
+  wide_int result;
+  tree type = TREE_TYPE (tcst);
+  unsigned int prec = TYPE_PRECISION (type);
+
+  result.precision = prec;
+
+  /* This will simplify out to just an array copy and a len copy.  */
+  result.val[0] = TREE_INT_CST_LOW (tcst);
+  result.val[1] = TREE_INT_CST_HIGH (tcst);
+  if (prec > HOST_BITS_PER_WIDE_INT)
+    result.len = 2;
+  else if (prec == HOST_BITS_PER_WIDE_INT
+	   && TYPE_UNSIGNED (type)
+	   && (HOST_WIDE_INT)TREE_INT_CST_LOW (tcst) < 0)
+    result.len = 2;
+  else
+    result.len = 1;
+
+  return result;
+}
+
+/* Extract a constant integer from X, which has mode MODE.  The bits of
+   the integer are returned.  */
+
+wide_int
+wide_int::from_rtx (const_rtx x, enum machine_mode mode)
+{
+  wide_int result;
+  unsigned int prec = GET_MODE_PRECISION (mode);
+
+  gcc_assert (mode != VOIDmode);
+
+  result.precision = prec;
+
+  switch (GET_CODE (x))
+    {
+    case CONST_INT:
+      result.val[0] = INTVAL (x);
+      result.len = 1;
+      break;
+
+#if TARGET_SUPPORTS_WIDE_INT
+    case CONST_WIDE_INT:
+      {
+	int i;
+	result.len = CONST_WIDE_INT_NUNITS (x);
+	
+	for (i = 0; i < result.len; i++)
+	  result.val[i] = CONST_WIDE_INT_ELT (x, i);
+      }
+      break;
+#else
+    case CONST_DOUBLE:
+      result.len = 2;
+      result.val[0] = CONST_DOUBLE_LOW (x);
+      result.val[1] = CONST_DOUBLE_HIGH (x);
+      break;
+#endif
+
+    default:
+      gcc_unreachable ();
+    }
+
+  return result;
+}
+
+/* Construct from a buffer of length LEN.  BUFFER will be read according
+   to byte endianness and word endianness.  Only the lower LEN bytes
+   of the result are set; the remaining high bytes are cleared.  */
+
+wide_int
+wide_int::from_buffer (const unsigned char *buffer, int len)
+{
+  wide_int result = wide_int::zero (len * BITS_PER_UNIT);
+  int words = len / UNITS_PER_WORD;
+
+  for (int byte = 0; byte < len; byte++)
+    {
+      int offset;
+      int index;
+      int bitpos = byte * BITS_PER_UNIT;
+      unsigned HOST_WIDE_INT value;
+
+      if (len > UNITS_PER_WORD)
+	{
+	  int word = byte / UNITS_PER_WORD;
+
+	  if (WORDS_BIG_ENDIAN)
+	    word = (words - 1) - word;
+
+	  offset = word * UNITS_PER_WORD;
+
+	  if (BYTES_BIG_ENDIAN)
+	    offset += (UNITS_PER_WORD - 1) - (byte % UNITS_PER_WORD);
+	  else
+	    offset += byte % UNITS_PER_WORD;
+	}
+      else
+	offset = BYTES_BIG_ENDIAN ? (len - 1) - byte : byte;
+
+      value = (unsigned HOST_WIDE_INT) buffer[offset];
+
+      index = bitpos / HOST_BITS_PER_WIDE_INT;
+      result.val[index] |= value << bitpos;
+    }
+
+  result.canonize ();
+  return result;
+}
+
+
+/*
+ * Largest and smallest values in a mode.
+ */
+
+/* Produce the largest SGNed number that can be represented in TYPE_PREC.
+   The resulting number is placed in a wide int of size RESULT_PREC.  A
+   RESULT_PREC of 0 means return the answer with TYPE_PREC precision.  */
+
+wide_int
+wide_int::max_value (unsigned int type_prec, SignOp sgn, 
+		     unsigned int result_prec)
+{
+  wide_int result;
+  
+  result.precision = result_prec ? result_prec : type_prec;
+
+  if (sgn == UNSIGNED)
+    {
+      /* The unsigned max is just all ones, for which the compressed
+	 rep is just a single HWI.  */ 
+      result.len = 1;
+      result.val[0] = (HOST_WIDE_INT)-1;
+    }
+  else
+    {
+      /* The signed max is all ones except the top bit.  This must be
+	 explicitly represented.  */
+      int i;
+      int small_prec = type_prec & (HOST_BITS_PER_WIDE_INT - 1);
+      int shift = (small_prec == 0) 
+	? HOST_BITS_PER_WIDE_INT - 1 : small_prec - 1;
+
+      result.len = BLOCKS_NEEDED (type_prec);
+      for (i = 0; i < result.len - 1; i++)
+	result.val[i] = (HOST_WIDE_INT)-1;
+
+      result.val[result.len - 1] = ((HOST_WIDE_INT)1 << shift) - 1;
+    }
+  
+  return result;
+}
+
+/* Produce the smallest SGNed number that can be represented in TYPE_PREC.  The
+   resulting number is placed in a wide int of size RESULT_PREC.  */
+
+wide_int
+wide_int::min_value (unsigned int type_prec, SignOp sgn, 
+		     unsigned int result_prec)
+{
+  if (result_prec == 0)
+    result_prec = type_prec;
+
+  if (sgn == UNSIGNED)
+    {
+      /* The unsigned min is just all zeros, for which the compressed
+	 rep is just a single HWI.  */ 
+      wide_int result;
+      result.len = 1;
+      result.precision = result_prec;
+      result.val[0] = 0;
+      return result;
+    }
+  else
+    {
+      /* The signed min is all zeros except the top bit.  This must be
+	 explicitly represented.  */
+      return set_bit_in_zero (type_prec - 1, result_prec);
+    }
+}
+
+/*
+ * Public utilities.
+ */
+
+/* Check the upper HOST_WIDE_INTs of src to see if the length can be
+   shortened.  An upper HOST_WIDE_INT is unnecessary if it is all ones
+   or zeros and the top bit of the next lower word matches.
+
+   This function may change the representation of THIS, but does not
+   change the value that THIS represents.  It does not sign extend in
+   the case that the size of the mode is less than
+   HOST_BITS_PER_WIDE_INT.  */
+
+void
+wide_int::canonize ()
+{
+  int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  int blocks_needed = BLOCKS_NEEDED (precision);
+  HOST_WIDE_INT top;
+  int i;
+
+  if (len > blocks_needed)
+    len = blocks_needed;
+
+  /* Clean up the top bits for any mode that is not a multiple of a HWI.  */
+  if (len == blocks_needed && small_prec)
+    val[len - 1] = sext_hwi (val[len - 1], small_prec);
+
+  if (len == 1)
+    return;
+
+  top = val[len - 1];
+  if (top != 0 && top != (HOST_WIDE_INT)-1)
+    return;
+
+  /* At this point we know that the top is either 0 or -1.  Find the
+     first block that is not a copy of this.  */
+  for (i = len - 2; i >= 0; i--)
+    {
+      HOST_WIDE_INT x = val[i];
+      if (x != top)
+	{
+	  if (SIGN_MASK (x) == top)
+	    {
+	      len = i + 1;
+	      return;
+	    }
+
+	  /* We need an extra block because the top bit block i does
+	     not match the extension.  */
+	  len = i + 2;
+	  return;
+	}
+    }
+
+  /* The number is 0 or -1.  */
+  len = 1;
+}
+
+
+/* Make a copy of this.  */
+
+wide_int
+wide_int::copy () const
+{
+  wide_int result;
+  int i;
+
+  result.precision = precision;
+  result.len = len;
+
+  for (i = 0; i < result.len; i++)
+    result.val[i] = val[i];
+  return result;
+}
+
+
+/* Copy THIS replacing the precision with PREC.
+   It can do any of truncation, extension or copying.  */
+
+wide_int
+wide_int::force_to_size (unsigned int prec, SignOp sgn) const
+{
+  wide_int result;
+  int blocks_needed = BLOCKS_NEEDED (prec);
+  int i;
+
+  result.precision = prec;
+  result.len = blocks_needed < len ? blocks_needed : len;
+  for (i = 0; i < result.len; i++)
+    result.val[i] = val[i];
+
+  if (prec >= precision) 
+    {
+      /* Expanding */
+      int small_precision = precision & (HOST_BITS_PER_WIDE_INT - 1);
+
+      /* The only case we care about is unsigned because the rep is
+	 inherently signed.  */
+      if (sgn == UNSIGNED)
+	{
+	  /* The top block in the existing rep must be zero extended,
+	     but this is all the work we need to do.  */
+	  if (small_precision 
+	      && (len == BLOCKS_NEEDED (precision)
+		  || len == blocks_needed))
+	    result.val[len-1] = zext_hwi (result.val[len-1], small_precision);
+	  else if (len == BLOCKS_NEEDED (precision) 
+		   && len < blocks_needed
+		   && small_precision == 0
+		   && result.val[result.len - 1] < 0)
+		    /* We need to put the 0 block on top to keep the value
+		       from being sign extended.  */ 
+		    result.val[result.len++] = 0;
+	}
+    }
+  else
+    {
+      /* Truncating.  */
+      int small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1);
+      /* The only weird case we need to look at here is when we are
+         truncating within the top block.  We need to make sure that
+         everything in the block above the new precision is sign
+         extended.  Note that this is independent of the SGN.  This is
+         just to stay canonical.  */
+      if (small_prec && (blocks_needed == len))
+	result.val[blocks_needed-1]
+	  = sext_hwi (result.val[blocks_needed-1], small_prec);
+    }
+
+  return result;
+}
+
+/*
+ * Comparisons, note that only equality is an operator.  The other
+ * comparisons cannot be operators since they are inherently signed or
+ * unsigned and C++ has no such operators.
+ */
+
+/* Return true if THIS == OP1.  */
+
+bool
+wide_int::eq_p_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+  HOST_WIDE_INT op0mask = sign_mask ();
+  HOST_WIDE_INT op1mask = SIGN_MASK (op1[op1len - 1]);
+
+  while (l0 > l1)
+    if (val[l0--] != op1mask)
+      return false;
+
+  while (l1 > l0)
+    if (op1[l1--] != op0mask)
+      return false;
+
+  while (l0 >= 0)
+    if (val[l0--] != op1[l1--])
+      return false;
+
+  return true;
+}
+
+/* Return true if THIS < OP1 using signed comparisons.  */
+
+bool
+wide_int::lts_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, 
+		       const HOST_WIDE_INT *op1, unsigned int op1len, 
+		       unsigned int precision)
+{
+  int l;
+  HOST_WIDE_INT s0, s1;
+  unsigned HOST_WIDE_INT u0, u1;
+  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
+  HOST_WIDE_INT op0mask = SIGN_MASK (op0[op0len - 1]);
+  HOST_WIDE_INT op1mask = SIGN_MASK (op1[op1len - 1]);
+
+  /* Only the top block is compared as signed.  The rest are unsigned
+     comparisons.  */
+  s0 = blocks_needed == op0len ? op0[blocks_needed - 1] : op0mask;
+  s1 = blocks_needed == op1len ? op1[blocks_needed - 1] : op1mask;
+  if (s0 < s1)
+    return true;
+  if (s0 > s1)
+    return false;
+
+  l = (signed int)MAX (op0len, op1len);
+  /* We have already checked the top block so skip down.  */
+  l = (l == (signed int)blocks_needed) ? l - 2 : l - 1;
+
+  while (l >= 0)
+    {
+      u0 = l < (signed int)op0len ? op0[l] : op0mask;
+      u1 = l < (signed int)op1len ? op1[l] : op1mask;
+
+      if (u0 < u1)
+	return true;
+      if (u0 > u1)
+	return false;
+      l--;
+    }
+
+  return false;
+}
+
+/* Returns -1 if THIS < OP1, 0 if THIS == OP1 and 1 if THIS > OP1 using
+   signed compares.  */
+
+int
+wide_int::cmps_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  int l;
+  HOST_WIDE_INT s0, s1;
+  unsigned HOST_WIDE_INT u0, u1;
+  unsigned int blocks_needed = BLOCKS_NEEDED (precision);
+  HOST_WIDE_INT op0mask = sign_mask ();
+  HOST_WIDE_INT op1mask = SIGN_MASK (op1[op1len - 1]);
+
+  /* Only the top block is compared as signed.  The rest are unsigned
+     comparisons.  */
+  s0 = blocks_needed == (unsigned int)len ? val[blocks_needed - 1] : op0mask;
+  s1 = blocks_needed == op1len ? op1[blocks_needed - 1] : op1mask;
+  if (s0 < s1)
+    return -1;
+  if (s0 > s1)
+    return 1;
+
+  l = MAX (len, (signed int)op1len);
+  /* We have already checked the top block so skip down.  */
+  l = (l == (signed int)blocks_needed) ? l - 2 : l - 1;
+
+  while (l >= 0)
+    {
+      u0 = l < len ? val [l] : op0mask;
+      u1 = l < (signed int)op1len ? op1[l] : op1mask;
+
+      if (u0 < u1)
+	return -1;
+      if (u0 > u1)
+	return 1;
+      l--;
+    }
+
+  return 0;
+}
+
+/* Return true if OP0 < OP1 using unsigned comparisons.  */
+
+bool
+wide_int::ltu_p_large (const HOST_WIDE_INT *op0, unsigned int op0len, 
+		       const HOST_WIDE_INT *op1, unsigned int op1len)
+{
+  unsigned HOST_WIDE_INT x0;
+  unsigned HOST_WIDE_INT x1;
+  int l0 = op0len - 1;
+  int l1 = op1len - 1;
+  HOST_WIDE_INT op0mask = SIGN_MASK (op0[op0len - 1]);
+  HOST_WIDE_INT op1mask = SIGN_MASK (op1[op1len - 1]);
+
+  while (l0 > l1)
+    {
+      x0 = op0[l0--];
+      x1 = op1mask;
+      if (x0 < x1)
+	return true;
+      if (x0 > x1)
+	return false;
+    }
+
+  while (l1 > l0)
+    {
+      x0 = op0mask;
+      x1 = op1[l1--];
+      if (x0 < x1)
+	return true;
+      if (x0 > x1)
+	return false;
+    }
+
+  while (l0 >= 0)
+    {
+      x0 = op0[l0--];
+      x1 = op1[l1--];
+      if (x0 < x1)
+	return true;
+      if (x0 > x1)
+	return false;
+    }
+
+  return false;
+}
+
+/* Returns -1 if THIS < OP1, 0 if THIS == OP1 and 1 if THIS > OP1 using
+   unsigned compares.  */
+
+int
+wide_int::cmpu_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  unsigned HOST_WIDE_INT x0;
+  unsigned HOST_WIDE_INT x1;
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+  HOST_WIDE_INT op0mask = sign_mask ();
+  HOST_WIDE_INT op1mask = SIGN_MASK (op1[op1len - 1]);
+
+  while (l0 > l1)
+    {
+      x0 = val[l0--];
+      x1 = op1mask;
+      if (x0 < x1)
+	return -1;
+      else if (x0 > x1)
+	return 1;
+    }
+
+  while (l1 > l0)
+    {
+      x0 = op0mask;
+      x1 = op1[l1--];
+      if (x0 < x1)
+	return -1;
+      if (x0 > x1)
+	return 1;
+    }
+
+  while (l0 >= 0)
+    {
+      x0 = val[l0--];
+      x1 = op1[l1--];
+      if (x0 < x1)
+	return -1;
+      if (x0 > x1)
+	return 1;
+    }
+
+  return 0;
+}
+
+/* Return true if THIS has the sign bit set to 1 and all other bits are
+   zero.  */
+
+bool
+wide_int::only_sign_bit_p (unsigned int prec) const
+{
+  int i;
+  HOST_WIDE_INT x;
+  int small_prec;
+  bool result;
+
+  if (BLOCKS_NEEDED (prec) != len)
+    {
+      result = false;
+      goto ex;
+    }
+
+  for (i=0; i < len - 1; i++)
+    if (val[i] != 0)
+      {
+	result = false;
+	goto ex;
+      }
+
+  x = val[len - 1];
+  small_prec = prec & (HOST_BITS_PER_WIDE_INT - 1);
+  if (small_prec)
+    x = x << (HOST_BITS_PER_WIDE_INT - small_prec);
+
+  result = x == ((HOST_WIDE_INT)1) << (HOST_BITS_PER_WIDE_INT - 1);
+
+ ex:
+  return result;
+}
+
+/* Returns true if THIS fits into range of TYPE.  Signedness of OP0 is
+   assumed to be the same as the signedness of TYPE.  */
+
+bool
+wide_int::fits_to_tree_p (const_tree type) const
+{
+  int type_prec = TYPE_PRECISION (type);
+
+  if (TYPE_UNSIGNED (type))
+    return fits_u_p (type_prec);
+  else
+    return fits_s_p (type_prec);
+}
+
+/* Returns true if THIS fits in the signed range of PREC.  */
+
+bool
+wide_int::fits_s_p (unsigned int prec) const
+{
+  if (len < BLOCKS_NEEDED (prec))
+    return true;
+
+  if (precision <= prec)
+    return true;
+
+  return *this == sext (prec);
+}
+
+
+/* Returns true if THIS fits in the unsigned range of PREC.  */
+
+bool
+wide_int::fits_u_p (unsigned int prec) const
+{
+  if (len < BLOCKS_NEEDED (prec))
+    return true;
+
+  if (precision <= prec)
+    return true;
+
+  return *this == zext (prec);
+}
+
+/*
+ * Extension.
+ */
+
+/* Sign extend THIS starting at OFFSET.  The precision of the result
+   is the same as THIS.  */
+
+wide_int
+wide_int::sext (unsigned int offset) const
+{
+  wide_int result;
+  int off;
+
+  gcc_assert (precision >= offset);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.precision = precision;
+      if (offset < precision)
+	result.val[0] = sext_hwi (val[0], offset);
+      else
+	/* If offset is greater or equal to precision there is nothing
+	   to do since the internal rep is already sign extended.  */
+	result.val[0] = val[0];
+
+      result.len = 1;
+    }
+  else if (precision == offset)
+    result = *this;
+  else
+    {
+      result = decompress (offset, precision);
+      
+      /* Now we can do the real sign extension.  */
+      off = offset & (HOST_BITS_PER_WIDE_INT - 1);
+      if (off)
+	{
+	  int block = BLOCK_OF (offset);
+	  result.val[block] = sext_hwi (val[block], off);
+	  result.len = block + 1;
+	}
+      /* We never need an extra element for sign extended values.  */
+    }    
+
+  return result;
+}
+
+/* Zero extend THIS starting at OFFSET.  The precision of the result
+   is the same as THIS.  */
+
+wide_int
+wide_int::zext (unsigned int offset) const
+{
+  wide_int result;
+  int off;
+  int block;
+
+  gcc_assert (precision >= offset);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.precision = precision;
+      if (offset < precision)
+	result.val[0] = zext_hwi (val[0], offset);
+      else if (offset == precision)
+	result.val[0] = val[0];
+	/* If offset was greater than the precision we need to zero
+	   extend from the old precision since the internal rep was
+	   equivalent to sign extended.  */
+      else
+	result.val[0] = zext_hwi (val[0], precision);
+	
+      result.len = 1;
+    }
+  else if (precision == offset)
+    result = *this;
+  else
+    {
+      result = decompress (offset, precision);
+
+      /* Now we can do the real zero extension.  */
+      off = offset & (HOST_BITS_PER_WIDE_INT - 1);
+      block = BLOCK_OF (offset);
+      if (off)
+	{
+	  result.val[block] = zext_hwi (result.val[block], off);
+	  result.len = block + 1;
+	}
+      else
+	/* See if we need an extra zero element to satisfy the
+	   compression rule.  */
+	if (result.val[block - 1] < 0 && offset < precision)
+	  {
+	    result.val[block] = 0;
+	    result.len += 1;
+	  }
+    }
+  return result;
+}
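A quick illustration of the two extension routines (illustration only,
assuming a 64-bit HOST_WIDE_INT):

    wide_int x = wide_int::from_shwi (0xff, 16);
    x.sext (8);   /* the low 16 bits become 0xffff, i.e. -1 in 16 bits  */
    x.zext (8);   /* stays 0x00ff  */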
+
+/*
+ * Masking, inserting, shifting, rotating.
+ */
+
+/* Return a value with a one bit inserted in THIS at BITPOS.  */
+
+wide_int
+wide_int::set_bit (unsigned int bitpos) const
+{
+  wide_int result;
+  int i, j;
+
+  if (bitpos >= precision)
+    result = copy ();
+  else
+    {
+      result = decompress (bitpos, precision);
+      j = bitpos / HOST_BITS_PER_WIDE_INT;
+      i = bitpos & (HOST_BITS_PER_WIDE_INT - 1);
+      result.val[j] |= ((HOST_WIDE_INT)1) << i;
+    }
+
+  return result;
+}
+
+/* Insert a 1 bit into 0 at BITPOS producing a number with precision PREC.  */
+
+wide_int
+wide_int::set_bit_in_zero (unsigned int bitpos, unsigned int prec)
+{
+  wide_int result;
+  int blocks_needed = BLOCKS_NEEDED (bitpos + 1);
+  int i, j;
+
+  result.precision = prec;
+  if (bitpos >= prec)
+    {
+      result.len = 1;
+      result.val[0] = 0;
+    }
+  else
+    {
+      result.len = blocks_needed;
+      for (i = 0; i < blocks_needed; i++)
+	result.val[i] = 0;
+      
+      j = bitpos / HOST_BITS_PER_WIDE_INT;
+      i = bitpos & (HOST_BITS_PER_WIDE_INT - 1);
+      result.val[j] |= ((HOST_WIDE_INT)1) << i;
+    }
+
+  return result;
+}
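For example (illustration only, 64-bit HOST_WIDE_INT assumed),
wide_int::set_bit_in_zero (70, 128) produces len == 2 with val[0] == 0 and
val[1] == (HOST_WIDE_INT) 1 << 6.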
+
+/* Insert WIDTH bits from OP0 into THIS starting at START.  */
+
+wide_int
+wide_int::insert (const wide_int &op0, unsigned int start, 
+		  unsigned int width) const
+{
+  wide_int result;
+  wide_int mask;
+  wide_int tmp;
+
+  gcc_checking_assert (op0.precision >= width);
+
+  if (start + width >= precision) 
+    width = precision - start;
+
+  mask = shifted_mask (start, width, false, precision);
+  tmp = op0.lshift (start, 0, precision, NONE);
+  result = tmp & mask;
+
+  tmp = and_not (mask);
+  result = result | tmp;
+
+  return result;
+}
+
+/* bswap THIS.  */
+
+wide_int
+wide_int::bswap () const
+{
+  wide_int result;
+  int i, s;
+  int end;
+  int len = BLOCKS_NEEDED (precision);
+
+  /* This is not a well defined operation if the precision is not a
+     multiple of 8.  */
+  gcc_assert ((precision & 0x7) == 0);
+
+  result.precision = precision;
+  result.len = len;
+
+  for (i = 0; i < len; i++)
+    result.val[i] = 0;
+
+  /* Only swap the bytes that are not the padding.  */
+  if ((precision & (HOST_BITS_PER_WIDE_INT - 1))
+      && (this->len == len))
+    end = precision;
+  else
+    end = this->len * HOST_BITS_PER_WIDE_INT;
+
+  for (s = 0; s < end; s += 8)
+    {
+      unsigned int d = precision - s - 8;
+      unsigned HOST_WIDE_INT byte;
+
+      int block = s / HOST_BITS_PER_WIDE_INT;
+      int offset = s & (HOST_BITS_PER_WIDE_INT - 1);
+
+      byte = (val[block] >> offset) & 0xff;
+
+      block = d / HOST_BITS_PER_WIDE_INT;
+      offset = d & (HOST_BITS_PER_WIDE_INT - 1);
+
+      result.val[block] |= byte << offset;
+    }
+
+  result.canonize ();
+  return result;
+}
+
+/* Return a mask where the lower WIDTH bits are ones and the bits
+   above that up to the precision are zeros.  The result is inverted
+   if NEGATE is true.  The result has precision PREC.  */
+
+wide_int
+wide_int::mask (unsigned int width, bool negate, unsigned int prec)
+{
+  wide_int result;
+  unsigned int i = 0;
+  int shift;
+
+  gcc_assert (width < 2 * MAX_BITSIZE_MODE_ANY_INT);
+  gcc_assert (prec <= 2 * MAX_BITSIZE_MODE_ANY_INT);
+
+  if (width == 0)
+    {
+      if (negate)
+	result = wide_int::minus_one (prec);
+      else
+	result = wide_int::zero (prec);
+    }
+  else
+    {
+      result.precision = prec;
+      
+      while (i < width / HOST_BITS_PER_WIDE_INT)
+	result.val[i++] = negate ? 0 : (HOST_WIDE_INT)-1;
+      
+      shift = width & (HOST_BITS_PER_WIDE_INT - 1);
+      if (shift != 0)
+	{
+	  HOST_WIDE_INT last = (((HOST_WIDE_INT)1) << shift) - 1;
+	  result.val[i++] = negate ? ~last : last;
+	}
+      result.len = i;
+    }
+
+  return result;
+}
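For instance (illustration only), with an 8 bit precision:

    wide_int::mask (4, false, 8);   /* low 8 bits are 0x0f  */
    wide_int::mask (4, true, 8);    /* low 8 bits are 0xf0  */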
+
+/* Return a mask with WIDTH one bits starting at START; the bits
+   above that up to the precision are zeros.  The result is inverted
+   if NEGATE is true.  */
+
+wide_int
+wide_int::shifted_mask (unsigned int start, unsigned int width, 
+			bool negate, unsigned int prec)
+{
+  wide_int result;
+  unsigned int i = 0;
+  unsigned int shift;
+  unsigned int end;
+  HOST_WIDE_INT block;
+
+  if (start + width > prec)
+    width = prec - start;
+  end = start + width;
+ 
+  if (width == 0)
+    {
+      if (negate)
+	result = wide_int::minus_one (prec);
+      else
+	result = wide_int::zero (prec);
+      return result;
+    }
+
+  result.precision = prec;
+
+  while (i < start / HOST_BITS_PER_WIDE_INT)
+    result.val[i++] = negate ? (HOST_WIDE_INT)-1 : 0;
+
+  shift = start & (HOST_BITS_PER_WIDE_INT - 1);
+  if (shift)
+    {
+      block = (((HOST_WIDE_INT)1) << shift) - 1;
+      shift = (end) & (HOST_BITS_PER_WIDE_INT - 1);
+      if (shift)
+	{
+	  /* case 000111000 */
+	  block = (((HOST_WIDE_INT)1) << shift) - block - 1;
+	  result.val[i++] = negate ? ~block : block;
+	  result.len = i;
+	  return result;
+	}
+      else
+	/* ...111000 */
+	result.val[i++] = negate ? block : ~block;
+    }
+
+  while (i < end / HOST_BITS_PER_WIDE_INT)
+    /* 1111111 */
+    result.val[i++] = negate ? 0 : (HOST_WIDE_INT)-1;
+
+  shift = end & (HOST_BITS_PER_WIDE_INT - 1);
+  if (shift != 0)
+    {
+      /* 000011111 */
+      block = (((HOST_WIDE_INT)1) << shift) - 1;
+      result.val[i++] = negate ? ~block : block;
+    }
+
+  result.len = i;
+  return result;
+}
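Likewise (illustration only), with a 16 bit precision:

    wide_int::shifted_mask (4, 8, false, 16);   /* 0x0ff0  */
    wide_int::shifted_mask (4, 8, true, 16);    /* 0xf00f  */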
+
+
+/*
+ * logical operations.
+ */
+
+/* Return THIS & OP1.  */
+
+wide_int
+wide_int::and_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+  bool need_canon = true;
+
+  result.len = MAX (len, op1len);
+  result.precision = precision;
+
+  if (l0 > l1)
+    {
+      if (SIGN_MASK (op1[op1len - 1]) == 0)
+	{
+	  l0 = l1;
+	  result.len = l1 + 1;
+	}
+      else
+	{
+	  need_canon = false;
+	  while (l0 > l1)
+	    {
+	      result.val[l0] = val[l0];
+	      l0--;
+	    }
+	}
+    }
+  else if (l1 > l0)
+    {
+      if (sign_mask () == 0)
+	result.len = l0 + 1;
+      else
+	{
+	  need_canon = false;
+	  while (l1 > l0)
+	    {
+	      result.val[l1] = op1[l1];
+	      l1--;
+	    }
+	}
+    }
+
+  while (l0 >= 0)
+    {
+      result.val[l0] = val[l0] & op1[l0];
+      l0--;
+    }
+
+  if (need_canon)
+    result.canonize ();
+  return result;
+}
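The subtle part of this routine (and of the other logical routines below) is
how the compressed-away high blocks are handled: a block that the shorter
operand does not represent explicitly is read as a copy of its sign block.
Roughly (illustration only):

    wide_int a = wide_int::from_shwi (-1, 128);          /* len 1, all ones  */
    wide_int b = wide_int::set_bit_in_zero (100, 128);   /* len 2  */
    wide_int c = a & b;   /* equals b; the missing high block of A reads as all ones  */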
+
+/* Return THIS & ~OP1.  */
+
+wide_int
+wide_int::and_not_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+  bool need_canon = true;
+
+  result.len = MAX (len, op1len);
+  result.precision = precision;
+
+  if (l0 > l1)
+    {
+      if (SIGN_MASK (op1[op1len - 1]) != 0)
+	{
+	  l0 = l1;
+	  result.len = l1 + 1;
+	}
+      else
+	{
+	  need_canon = false;
+	  while (l0 > l1)
+	    {
+	      result.val[l0] = val[l0];
+	      l0--;
+	    }
+	}
+    }
+  else if (l1 > l0)
+    {
+      if (sign_mask () == 0)
+	result.len = l0 + 1;
+      else
+	{
+	  need_canon = false;
+	  while (l1 > l0)
+	    {
+	      result.val[l1] = ~op1[l1];
+	      l1--;
+	    }
+	}
+    }
+
+  while (l0 >= 0)
+    {
+      result.val[l0] = val[l0] & ~op1[l0];
+      l0--;
+    }
+
+  if (need_canon)
+    result.canonize ();
+
+  return result;
+}
+
+/* Return THIS | OP1.  */
+
+wide_int
+wide_int::or_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+  bool need_canon = true;
+
+  result.len = MAX (len, op1len);
+  result.precision = precision;
+
+  if (l0 > l1)
+    {
+      if (SIGN_MASK (op1[op1len - 1]) != 0)
+	{
+	  l0 = l1;
+	  result.len = l1 + 1;
+	}
+      else
+	{
+	  need_canon = false;
+	  while (l0 > l1)
+	    {
+	      result.val[l0] = val[l0];
+	      l0--;
+	    }
+	}
+    }
+  else if (l1 > l0)
+    {
+      if (sign_mask () != 0)
+	result.len = l0 + 1;
+      else
+	{
+	  need_canon = false;
+	  while (l1 > l0)
+	    {
+	      result.val[l1] = op1[l1];
+	      l1--;
+	    }
+	}
+    }
+
+  while (l0 >= 0)
+    {
+      result.val[l0] = val[l0] | op1[l0];
+      l0--;
+    }
+
+  if (need_canon)
+    result.canonize ();
+
+  return result;
+}
+
+/* Return THIS | ~OP1.  */
+
+wide_int
+wide_int::or_not_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+  bool need_canon = true;
+
+  result.len = MAX (len, op1len);
+  result.precision = precision;
+
+  if (l0 > l1)
+    {
+      if (SIGN_MASK (op1[op1len - 1]) == 0)
+	{
+	  l0 = l1;
+	  result.len = l1 + 1;
+	}
+      else
+	{
+	  need_canon = false;
+	  while (l0 > l1)
+	    {
+	      result.val[l0] = val[l0];
+	      l0--;
+	    }
+	}
+    }
+  else if (l1 > l0)
+    {
+      if (sign_mask () != 0)
+	result.len = l0 + 1;
+      else
+	{
+	  need_canon = false;
+	  while (l1 > l0)
+	    {
+	      result.val[l1] = ~op1[l1];
+	      l1--;
+	    }
+	}
+    }
+
+  while (l0 >= 0)
+    {
+      result.val[l0] = val[l0] | ~op1[l0];
+      l0--;
+    }
+
+  if (need_canon)
+    result.canonize ();
+  return result;
+}
+
+/* Return the exclusive ior (xor) of THIS and OP1.  */
+
+wide_int
+wide_int::xor_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  int l0 = len - 1;
+  int l1 = op1len - 1;
+
+  result.len = MAX (len, op1len);
+  result.precision = precision;
+
+  while (l0 > l1)
+    {
+      result.val[l0] = val[l0] ^ SIGN_MASK (op1[op1len - 1]);
+      l0--;
+    }
+
+  while (l1 > l0)
+    {
+      result.val[l1] = sign_mask () ^ op1[l1];
+      l1--;
+    }
+
+  while (l0 >= 0)
+    {
+      result.val[l0] = val[l0] ^ op1[l0];
+      l0--;
+    }
+
+  result.canonize ();
+  return result;
+}
+
+/*
+ * math
+ */
+
+/* Absolute value of THIS.  */
+
+wide_int
+wide_int::abs () const
+{
+  if (sign_mask ())
+    return neg ();
+
+  wide_int result = copy ();
+  return result;
+}
+
+/* Add THIS and OP1.  No overflow is detected.  */
+
+wide_int
+wide_int::add_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  unsigned HOST_WIDE_INT o0, o1;
+  unsigned HOST_WIDE_INT x = 0;
+  unsigned HOST_WIDE_INT carry = 0;
+  unsigned HOST_WIDE_INT mask0, mask1;
+  unsigned int i, small_prec;
+
+  result.precision = precision;
+  result.len = MAX (len, op1len);
+  mask0 = sign_mask ();
+  mask1 = SIGN_MASK (op1[op1len - 1]);
+  /* Add all of the explicitly defined elements.  */
+  for (i = 0; i < result.len; i++)
+    {
+      o0 = i < len ? (unsigned HOST_WIDE_INT)val[i] : mask0;
+      o1 = i < op1len ? (unsigned HOST_WIDE_INT)op1[i] : mask1;
+      x = o0 + o1 + carry;
+      result.val[i] = x;
+      carry = carry == 0 ? x < o0 : x <= o0;
+    }
+
+  if (len * HOST_BITS_PER_WIDE_INT < precision)
+    {
+      result.val[result.len] = mask0 + mask1 + carry;
+      result.len++;
+    }
+
+  small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  if (small_prec != 0 && BLOCKS_NEEDED (precision) == result.len)
+    {
+      /* Modes with weird precisions.  */
+      i = result.len - 1;
+      result.val[i] = sext_hwi (result.val[i], small_prec);
+    }
+
+  result.canonize ();
+  return result;
+}
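A carry can push the result into a block that neither input represented
explicitly.  Illustration only, assuming from_uhwi stores 2^64 - 1 as the
two blocks {-1, 0}:

    wide_int x = wide_int::from_uhwi ((unsigned HOST_WIDE_INT) -1, 128);
    wide_int y = x + 1;   /* 2^64: val = {0, 1}, len = 2  */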
+
+
+/* Count leading zeros of THIS within the precision.  */
+
+wide_int
+wide_int::clz () const
+{
+  int i;
+  int start;
+  int count;
+  HOST_WIDE_INT v;
+  int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+
+  if (zero_p ())
+    {
+      enum machine_mode mode = mode_for_size (precision, MODE_INT, 0);
+      if (mode == BLKmode)
+	mode = mode_for_size (precision, MODE_PARTIAL_INT, 0);
+
+      /* Even if the value at zero is undefined, we have to come up
+	 with some replacement.  Seems good enough.  */
+      if (mode == BLKmode)
+	count = precision;
+      else if (!CLZ_DEFINED_VALUE_AT_ZERO (mode, count))
+	count = precision;
+    }
+  else
+    {
+      /* The high order block is special if it is the last block and the
+	 precision is not an even multiple of HOST_BITS_PER_WIDE_INT.  We
+	 have to clear out any ones above the precision before doing clz
+	 on this block.  */
+      if (BLOCKS_NEEDED (precision) == len && small_prec)
+	{
+	  v = zext_hwi (val[len - 1], small_prec);
+	  count = clz_hwi (v) - (HOST_BITS_PER_WIDE_INT - small_prec);
+	  start = len - 2;
+	  if (v != 0)
+	    return from_shwi (count, precision);
+	}
+      else
+	{
+	  count = HOST_BITS_PER_WIDE_INT * (BLOCKS_NEEDED (precision) - len);
+	  start = len - 1;
+	}
+      
+      for (i = start; i >= 0; i--)
+	{
+	  v = elt (i);
+	  count += clz_hwi (v);
+	  if (v != 0)
+	    break;
+	}
+    }
+  return from_shwi (count, precision);
+}
+
+/* Count the number of redundant leading sign bits of THIS.  */
+
+wide_int
+wide_int::clrsb () const
+{
+  if (neg_p ())
+    return operator ~ ().clz () - 1;
+
+  return clz () - 1;
+}
+
+/* Count trailing zeros of THIS.  */
+
+wide_int
+wide_int::ctz () const
+{
+  int i;
+  unsigned int count = 0;
+  HOST_WIDE_INT v;
+  int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  int end;
+  bool more_to_do;
+
+  if (zero_p ())
+    {
+      enum machine_mode mode = mode_for_size (precision, MODE_INT, 0);
+      if (mode == BLKmode)
+	mode = mode_for_size (precision, MODE_PARTIAL_INT, 0);
+
+      /* Even if the value at zero is undefined, we have to come up
+	 with some replacement.  Seems good enough.  */
+      if (mode == BLKmode)
+	count = precision;
+      else if (!CTZ_DEFINED_VALUE_AT_ZERO (mode, count))
+	count = precision;
+    }
+  else
+    {
+      /* The high order block is special if it is the last block and the
+	 precision is not an even multiple of HOST_BITS_PER_WIDE_INT.  We
+	 have to clear out any ones above the precision before doing ctz
+	 on this block.  */
+      if (BLOCKS_NEEDED (precision) == len && small_prec)
+	{
+	  end = len - 1;
+	  more_to_do = true;
+	}
+      else
+	{
+	  end = len;
+	  more_to_do = false;
+	}
+      
+      for (i = 0; i < end; i++)
+	{
+	  v = val[i];
+	  count += ctz_hwi (v);
+	  if (v != 0)
+	    return wide_int::from_shwi (count, precision);
+	}
+      
+      if (more_to_do)
+	{
+	  v = zext_hwi (val[len - 1], small_prec);
+	  count += ctz_hwi (v);
+	  /* If the top word was also all zeros we have counted some of
+	     the zeros above the interesting part; cut the count back
+	     to prec.  */
+	  if (count > precision)
+	    count = precision;
+	}
+      else
+	/* Skip over the blocks that are not represented.  They must be
+	   all zeros at this point.  */
+	count = precision;
+    }
+
+  return wide_int::from_shwi (count, precision);
+}
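For instance (illustration only), a 128-bit value with only bit 100 set
gives clz () == 27 and ctz () == 100.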
+
+/* ffs of THIS.  */
+
+wide_int
+wide_int::ffs () const
+{
+  HOST_WIDE_INT count = ctz ().to_shwi ();
+  if (count == precision)
+    count = 0;
+  else
+    count += 1;
+  return wide_int::from_shwi (count, precision);
+}
+
+/* Subroutines of the multiplication and division operations.  Unpack
+   the first IN_LEN HOST_WIDE_INTs in INPUT into 2 * IN_LEN
+   HOST_HALF_WIDE_INTs of RESULT.  The rest of RESULT is filled by
+   uncompressing the top bit of INPUT[IN_LEN - 1].  */
+
+static void
+wi_unpack (unsigned HOST_HALF_WIDE_INT *result, 
+	   const unsigned HOST_WIDE_INT *input,
+	   int in_len, int out_len)
+{
+  int i;
+  int j = 0;
+  HOST_WIDE_INT mask;
+
+  for (i = 0; i < in_len; i++)
+    {
+      result[j++] = input[i];
+      result[j++] = input[i] >> HOST_BITS_PER_HALF_WIDE_INT;
+    }
+  mask = SIGN_MASK (input[in_len - 1]);
+  mask &= HALF_INT_MASK;
+
+  /* Smear the sign bit.  */
+  while (j < out_len)
+    result[j++] = mask;
+}
+
+/* The inverse of wi_unpack.  IN_LEN is the number of input
+   blocks.  The number of output blocks will be half this amount.  */
+
+static void
+wi_pack (unsigned HOST_WIDE_INT *result, 
+	 const unsigned HOST_HALF_WIDE_INT *input, 
+	 int in_len)
+{
+  int i = 0;
+  int j = 0;
+
+  while (i < in_len - 2)
+    {
+      result[j++] = (unsigned HOST_WIDE_INT)input[i] 
+	| ((unsigned HOST_WIDE_INT)input[i + 1] << HOST_BITS_PER_HALF_WIDE_INT);
+      i += 2;
+    }
+
+  /* Handle the case where in_len is odd.   For this we zero extend.  */
+  if (in_len & 1)
+    result[j++] = (unsigned HOST_WIDE_INT)input[i];
+  else
+    result[j++] = (unsigned HOST_WIDE_INT)input[i] 
+      | ((unsigned HOST_WIDE_INT)input[i + 1] << HOST_BITS_PER_HALF_WIDE_INT);
+}
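On a 64-bit host the half-blocks are 32-bit digits, so (illustration only)
the single block 0x0123456789abcdef unpacks into the digits 0x89abcdef and
0x01234567, low digit first, and wi_pack reassembles them into the original
block.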
+
+/* Return an integer that is the exact log2 of THIS.  */
+
+wide_int
+wide_int::exact_log2 () const
+{
+  int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  wide_int count;
+  wide_int result;
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      HOST_WIDE_INT v;
+      if (small_prec)
+	v = zext_hwi (val[0], small_prec);
+      else
+	v = val[0];
+      result = wide_int::from_shwi (::exact_log2 (v), precision);
+    }
+  else
+    {
+      count = ctz ();
+      if (clz () + count + 1 == precision)
+	result = count;
+      else
+	result = wide_int::from_shwi (-1, precision);
+    }
+  return result;
+}
+
+/* Return an integer that is the floor log2 of THIS.  */
+
+wide_int
+wide_int::floor_log2 () const
+{
+  int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  wide_int result;
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      HOST_WIDE_INT v;
+      if (small_prec)
+	v = zext_hwi (val[0], small_prec);
+      else
+	v = val[0];
+      result = wide_int::from_shwi (::floor_log2 (v), precision);
+    }
+  else
+    result = wide_int::from_shwi (precision, precision) - 1 - clz ();
+  return result;
+}
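Illustration only: for a 128-bit value equal to 2^70, exact_log2 () and
floor_log2 () both return 70; for 2^70 + 1, exact_log2 () returns -1 while
floor_log2 () still returns 70.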
+
+
+/* Multiply Op1 by Op2.  If HIGH is set, only the upper half of the
+   result is returned.  If FULL is set, the entire result is returned
+   in a mode that is twice the width of the inputs.  However, that
+   mode needs to exist if the value is to be usable.  Clients that use
+   FULL need to check for this.
+
+   If neither HIGH nor FULL is set, throw away the upper half after
+   the check is made to see if it overflows.  Unfortunately there is
+   no better way to check for overflow than to do this.  OVERFLOW is
+   assumed to be sticky so it should be initialized.  SGN controls
+   the signedness and is used to check overflow or if HIGH or FULL is
+   set.  */
+
+wide_int
+wide_int::mul_internal (bool high, bool full, 
+			const wide_int *op1, 
+			const HOST_WIDE_INT *op2, unsigned int op2len,
+			SignOp sgn,  bool *overflow, 
+			bool needs_overflow)
+{
+  wide_int result;
+  unsigned HOST_WIDE_INT o0, o1, k, t;
+  unsigned int i;
+  unsigned int j;
+  unsigned int prec = op1->get_precision ();
+  unsigned int blocks_needed = BLOCKS_NEEDED (prec);
+  unsigned int half_blocks_needed = blocks_needed * 2;
+  /* The sizes here are scaled to support a 2x largest mode by 2x
+     largest mode yielding a 4x largest mode result.  This is what is
+     needed by tree-vrp.  */
+
+  unsigned HOST_HALF_WIDE_INT 
+    u[2 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT 
+    v[2 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
+  /* The '2' in 'R' is because we are internally doing a full
+     multiply.  */
+  unsigned HOST_HALF_WIDE_INT 
+    r[2 * 2 * MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
+  HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT) - 1;
+
+  /* If the top level routine did not really pass in an overflow, then
+     just make sure that we never attempt to set it.  */
+  if (overflow == 0)
+    needs_overflow = false;
+  result.precision = op1->precision;
+
+  if (high || full || needs_overflow)
+    {
+      /* If we need to check for overflow, we can only do half wide
+	 multiplies quickly because we need to look at the top bits to
+	 check for the overflow.  */
+      if (prec <= HOST_BITS_PER_HALF_WIDE_INT)
+	{
+	  HOST_WIDE_INT t, r;
+	  result.len = 1;
+	  o0 = op1->elt (0);
+	  o1 = op2[0];
+	  r = o0 * o1;
+	  /* Signed shift down will leave 0 or -1 if there was no
+	     overflow for signed or 0 for unsigned.  */
+	  t = SIGN_MASK (r);
+	  if (needs_overflow)
+	    {
+	      if (sgn == SIGNED)
+		{
+		  if (t != (HOST_WIDE_INT)-1 && t != 0)
+		    *overflow = true;
+		}
+	      else
+		{
+		  if (t != 0)
+		    *overflow = true;
+		}
+	    }
+	  if (full)
+	    {
+	      result.val[0] = sext_hwi (r, prec * 2);
+	      result.precision = op1->precision * 2;
+	    }
+	  else if (high)
+	    result.val[0] = r >> prec;
+	  else
+	    result.val[0] = sext_hwi (r, prec);
+	  return result;
+	}
+    }
+
+  wi_unpack (u, (const unsigned HOST_WIDE_INT*)op1->val, op1->len,
+	     half_blocks_needed);
+  wi_unpack (v, (const unsigned HOST_WIDE_INT*)op2, op2len,
+	     half_blocks_needed);
+
+  /* The 2 is for a full mult.  */
+  memset (r, 0, half_blocks_needed * 2 
+	  * HOST_BITS_PER_HALF_WIDE_INT / BITS_PER_UNIT);
+
+  for (j = 0; j < half_blocks_needed; j++)
+    {
+      k = 0;
+      for (i = 0; i < half_blocks_needed; i++)
+	{
+	  t = ((unsigned HOST_WIDE_INT)u[i] * (unsigned HOST_WIDE_INT)v[j]
+	       + r[i + j] + k);
+	  r[i + j] = t & HALF_INT_MASK;
+	  k = t >> HOST_BITS_PER_HALF_WIDE_INT;
+	}
+      r[j + half_blocks_needed] = k;
+    }
+
+  /* We did unsigned math above.  For signed we must adjust the
+     product (assuming we need to see that).  */
+  if (sgn == SIGNED && (full || high || needs_overflow))
+    {
+      unsigned HOST_WIDE_INT b;
+      if ((*op1).neg_p ())
+	{
+	  b = 0;
+	  for (i = 0; i < half_blocks_needed; i++)
+	    {
+	      t = (unsigned HOST_WIDE_INT)r[i + half_blocks_needed]
+		- (unsigned HOST_WIDE_INT)v[i] - b;
+	      r[i + half_blocks_needed] = t & HALF_INT_MASK;
+	      b = t >> (HOST_BITS_PER_WIDE_INT - 1);
+	    }
+	}
+      if (op2[op2len-1] < 0)
+	{
+	  b = 0;
+	  for (i = 0; i < half_blocks_needed; i++)
+	    {
+	      t = (unsigned HOST_WIDE_INT)r[i + half_blocks_needed]
+		- (unsigned HOST_WIDE_INT)u[i] - b;
+	      r[i + half_blocks_needed] = t & HALF_INT_MASK;
+	      b = t >> (HOST_BITS_PER_WIDE_INT - 1);
+	    }
+	}
+    }
+
+  if (needs_overflow)
+    {
+      HOST_WIDE_INT top;
+
+      /* For unsigned, overflow is true if any of the top bits are set.
+	 For signed, overflow is true if any of the top bits are not equal
+	 to the sign bit.  */
+      if (sgn == UNSIGNED)
+	top = 0;
+      else
+	{
+	  top = r[(half_blocks_needed) - 1];
+	  top = SIGN_MASK (top << (HOST_BITS_PER_WIDE_INT / 2));
+	  top &= mask;
+	}
+      
+      for (i = half_blocks_needed; i < half_blocks_needed * 2; i++)
+	if (((HOST_WIDE_INT)(r[i] & mask)) != top)
+	  *overflow = true; 
+    }
+
+  if (full)
+    {
+      /* compute [2prec] <- [prec] * [prec] */
+      wi_pack ((unsigned HOST_WIDE_INT*)result.val, r, 2 * half_blocks_needed);
+      result.len = blocks_needed * 2;
+      result.precision = op1->precision * 2;
+    }
+  else if (high)
+    {
+      /* compute [prec] <- ([prec] * [prec]) >> [prec] */
+      wi_pack ((unsigned HOST_WIDE_INT*)&result.val [blocks_needed >> 1],
+	       r, half_blocks_needed);
+      result.len = blocks_needed;
+    }
+  else
+    {
+      /* compute [prec] <- ([prec] * [prec]) && ((1 << [prec]) - 1) */
+      wi_pack ((unsigned HOST_WIDE_INT*)result.val, r, half_blocks_needed);
+      result.len = blocks_needed;
+    }
+      
+  result.canonize ();
+  return result;
+}
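The loop above is ordinary schoolbook multiplication on half-width digits.
Illustration only, on a 64-bit host with 128-bit operands: each operand
becomes four 32-bit digits, so two operands equal to 2^32 both unpack to
{0, 1, 0, 0}, the only non-zero product digit is r[2] == 1, and packing the
digits back gives 2^64.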
+
+/* Compute the parity of THIS.  */
+
+wide_int
+wide_int::parity () const
+{
+  wide_int count = popcount ();
+  return count & 1;
+}
+
+/* Compute the population count of THIS.  */
+
+wide_int
+wide_int::popcount () const
+{
+  int i;
+  int start;
+  int count;
+  HOST_WIDE_INT v;
+  int small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+
+  /* The high order block is special if it is the last block and the
+     precision is not an even multiple of HOST_BITS_PER_WIDE_INT.  We
+     have to clear out any ones above the precision before counting
+     the bits in this block.  */
+  if (BLOCKS_NEEDED (precision) == len && small_prec)
+    {
+      v = zext_hwi (val[len - 1], small_prec);
+      count = popcount_hwi (v);
+      start = len - 2;
+    }
+  else
+    {
+      if (sign_mask ())
+	count = HOST_BITS_PER_WIDE_INT * (BLOCKS_NEEDED (precision) - len);
+      else
+	count = 0;
+      start = len - 1;
+    }
+
+  for (i = start; i >= 0; i--)
+    {
+      v = val[i];
+      count += popcount_hwi (v);
+    }
+  return wide_int::from_shwi (count, precision);
+}
+
+/* Subtract OP1 from THIS.  No overflow is detected.  */
+
+wide_int
+wide_int::sub_large (const HOST_WIDE_INT *op1, unsigned int op1len) const
+{
+  wide_int result;
+  unsigned HOST_WIDE_INT o0, o1;
+  unsigned HOST_WIDE_INT x = 0;
+  /* Subtract element by element, propagating the borrow from one
+     element to the next.  */
+  unsigned HOST_WIDE_INT borrow = 0;
+  unsigned HOST_WIDE_INT mask0, mask1;
+  unsigned int i, small_prec;
+
+  result.precision = precision;
+  result.len = MAX (len, op1len);
+  mask0 = sign_mask ();
+  mask1 = SIGN_MASK (op1[op1len - 1]);
+
+  /* Subtract all of the explicitly defined elements.  */
+  for (i = 0; i < result.len; i++)
+    {
+      o0 = i < len ? (unsigned HOST_WIDE_INT)val[i] : mask0;
+      o1 = i < op1len ? (unsigned HOST_WIDE_INT)op1[i] : mask1;
+      x = o0 - o1 - borrow;
+      result.val[i] = x;
+      borrow = borrow == 0 ? o0 < o1 : o0 <= o1;
+    }
+
+  if (len * HOST_BITS_PER_WIDE_INT < precision)
+    {
+      result.val[result.len] = mask0 - mask1 - borrow;
+      result.len++;
+    }
+
+  small_prec = precision & (HOST_BITS_PER_WIDE_INT - 1);
+  if (small_prec != 0 && BLOCKS_NEEDED (precision) == result.len)
+    {
+      /* Modes with weird precisions.  */
+      i = result.len - 1;
+      result.val[i] = sext_hwi (result.val[i], small_prec);
+    }
+
+  result.canonize ();
+  return result;
+}
+
+
+/*
+ * Division and Mod
+ */
+
+/* Compute B_QUOTIENT and B_REMAINDER from B_DIVIDEND/B_DIVISOR.  The
+   algorithm is a small modification of the algorithm in Hacker's
+   Delight by Warren, which itself is a small modification of Knuth's
+   algorithm.  M is the number of significant elements of B_DIVIDEND
+   (at least one extra element of B_DIVIDEND must be allocated), and
+   N is the number of elements of B_DIVISOR.  */
+
+void
+wide_int::divmod_internal_2 (unsigned HOST_HALF_WIDE_INT *b_quotient, 
+			     unsigned HOST_HALF_WIDE_INT *b_remainder,
+			     unsigned HOST_HALF_WIDE_INT *b_dividend, 
+			     unsigned HOST_HALF_WIDE_INT *b_divisor, 
+			     int m, int n)
+{
+  /* The "digits" are HOST_HALF_WIDE_INTs, which are half the size of
+     a HOST_WIDE_INT and are stored in the lower bits of each word.
+     This algorithm should work properly on both 32 and 64 bit
+     machines.  */
+  unsigned HOST_WIDE_INT b
+    = (unsigned HOST_WIDE_INT)1 << HOST_BITS_PER_HALF_WIDE_INT;
+  unsigned HOST_WIDE_INT qhat;   /* Estimate of quotient digit.  */
+  unsigned HOST_WIDE_INT rhat;   /* A remainder.  */
+  unsigned HOST_WIDE_INT p;      /* Product of two digits.  */
+  HOST_WIDE_INT s, i, j, t, k;
+
+  /* Single digit divisor.  */
+  if (n == 1)
+    {
+      k = 0;
+      for (j = m - 1; j >= 0; j--)
+	{
+	  b_quotient[j] = (k * b + b_dividend[j])/b_divisor[0];
+	  k = ((k * b + b_dividend[j])
+	       - ((unsigned HOST_WIDE_INT)b_quotient[j]
+		  * (unsigned HOST_WIDE_INT)b_divisor[0]));
+	}
+      b_remainder[0] = k;
+      return;
+    }
+
+  s = clz_hwi (b_divisor[n-1]) - HOST_BITS_PER_HALF_WIDE_INT; /* CHECK clz */
+
+  /* Normalize B_DIVIDEND and B_DIVISOR.  Unlike the published
+     algorithm, we can overwrite b_dividend and b_divisor, so we do
+     that.  */
+  for (i = n - 1; i > 0; i--)
+    b_divisor[i] = (b_divisor[i] << s)
+      | (b_divisor[i-1] >> (HOST_BITS_PER_HALF_WIDE_INT - s));
+  b_divisor[0] = b_divisor[0] << s;
+
+  b_dividend[m] = b_dividend[m-1] >> (HOST_BITS_PER_HALF_WIDE_INT - s);
+  for (i = m - 1; i > 0; i--)
+    b_dividend[i] = (b_dividend[i] << s)
+      | (b_dividend[i-1] >> (HOST_BITS_PER_HALF_WIDE_INT - s));
+  b_dividend[0] = b_dividend[0] << s;
+
+  /* Main loop.  */
+  for (j = m - n; j >= 0; j--)
+    {
+      qhat = (b_dividend[j+n] * b + b_dividend[j+n-1]) / b_divisor[n-1];
+      rhat = (b_dividend[j+n] * b + b_dividend[j+n-1]) - qhat * b_divisor[n-1];
+    again:
+      if (qhat >= b || qhat * b_divisor[n-2] > b * rhat + b_dividend[j+n-2])
+	{
+	  qhat -= 1;
+	  rhat += b_divisor[n-1];
+	  if (rhat < b)
+	    goto again;
+	}
+
+      /* Multiply and subtract.  */
+      k = 0;
+      for (i = 0; i < n; i++)
+	{
+	  p = qhat * b_divisor[i];
+	  t = b_dividend[i+j] - k - (p & HALF_INT_MASK);
+	  b_dividend[i + j] = t;
+	  k = ((p >> HOST_BITS_PER_HALF_WIDE_INT)
+	       - (t >> HOST_BITS_PER_HALF_WIDE_INT));
+	}
+      t = b_dividend[j+n] - k;
+      b_dividend[j+n] = t;
+
+      b_quotient[j] = qhat;
+      if (t < 0)
+	{
+	  b_quotient[j] -= 1;
+	  k = 0;
+	  for (i = 0; i < n; i++)
+	    {
+	      t = (HOST_WIDE_INT)b_dividend[i+j] + b_divisor[i] + k;
+	      b_dividend[i+j] = t;
+	      k = t >> HOST_BITS_PER_HALF_WIDE_INT;
+	    }
+	  b_dividend[j+n] += k;
+	}
+    }
+  for (i = 0; i < n; i++)
+    b_remainder[i] = (b_dividend[i] >> s) 
+      | (b_dividend[i+1] << (HOST_BITS_PER_HALF_WIDE_INT - s));
+}
+
+
+/* Divide DIVIDEND by DIVISOR, truncating the result.  The quotient
+   and remainder are the same size as the operands.  SGN is either
+   wide_int::SIGNED or wide_int::UNSIGNED.  */
+
+wide_int
+wide_int::divmod_internal (bool compute_quotient, 
+			   const wide_int *dividend, 
+			   const HOST_WIDE_INT *divisor,
+			   unsigned int divisorlen,
+			   SignOp sgn, wide_int *remainder,
+			   bool compute_remainder, 
+			   bool *oflow)
+{
+  wide_int quotient, u0, u1;
+  unsigned int prec = dividend->get_precision();
+  int blocks_needed = 2 * BLOCKS_NEEDED (prec);
+  /* The '2' in the next 4 vars are because they are built on half
+     sized wide ints.  */
+  unsigned HOST_HALF_WIDE_INT 
+    b_quotient[MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    b_remainder[MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
+  unsigned HOST_HALF_WIDE_INT
+    b_dividend[(MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT) + 1];
+  unsigned HOST_HALF_WIDE_INT
+    b_divisor[MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_HALF_WIDE_INT];
+  int m, n;
+  bool dividend_neg = false;
+  bool divisor_neg = false;
+  bool overflow = false;
+
+  if (divisor[0] == 0 && divisorlen == 1)
+    overflow = true;
+
+  /* The smallest signed number / -1 causes overflow.  */
+  if (sgn == SIGNED)
+    {
+      wide_int t = wide_int::set_bit_in_zero (prec - 1, prec);
+      if (*dividend == t && divisor[0] == -1 && divisorlen == 1)
+	overflow = true;
+    }
+
+  quotient.precision = prec;
+  remainder->precision = prec;
+
+  /* If overflow is set, just get out.  There will only be grief by
+     continuing.  */
+  if (overflow)
+    {
+      if (compute_remainder)
+	{
+	  remainder->len = 1;
+	  remainder->val[0] = 0;
+	}
+      if (oflow != 0)
+	*oflow = true;
+      return wide_int::zero (prec);
+    }
+
+  /* Do it on the host if you can.  */
+  if (prec <= HOST_BITS_PER_WIDE_INT)
+    {
+      quotient.len = 1;
+      remainder->len = 1;
+      if (sgn == SIGNED)
+	{
+	  quotient.val[0] 
+	    = sext_hwi (dividend->val[0] / divisor[0], prec);
+	  remainder->val[0] 
+	    = sext_hwi (dividend->val[0] % divisor[0], prec);
+	}
+      else
+	{
+	  unsigned HOST_WIDE_INT o0 = dividend->val[0];
+	  unsigned HOST_WIDE_INT o1 = divisor[0];
+
+	  if (prec < HOST_BITS_PER_WIDE_INT)
+	    {
+	      o0 = zext_hwi (o0, prec);
+	      o1 = zext_hwi (o1, prec);
+	    }
+	  quotient.val[0] = sext_hwi (o0 / o1, prec);
+	  remainder->val[0] = sext_hwi (o0 % o1, prec);
+	}
+      return quotient;
+    }
+
+  /* Make the divisor and dividend positive and remember what we
+     did.  */
+  if (sgn == SIGNED)
+    {
+      if (dividend->sign_mask ())
+	{
+	  u0 = dividend->neg ();
+	  dividend = &u0;
+	  dividend_neg = true;
+	}
+      if (divisor[divisorlen - 1] < 0)
+	{
+	  u1 = wide_int::zero (dividend->precision).sub_large (divisor, divisorlen);
+	  divisor = u1.val;
+	  divisorlen = u1.len;
+	  divisor_neg = true;
+	}
+    }
+
+  wi_unpack (b_dividend, (const unsigned HOST_WIDE_INT*)dividend->val,
+	     dividend->len, blocks_needed);
+  wi_unpack (b_divisor, (const unsigned HOST_WIDE_INT*)divisor, 
+	     divisorlen, blocks_needed);
+
+  if (dividend->sign_mask ())
+    m = blocks_needed;
+  else
+    m = 2 * dividend->get_len ();
+
+  if (SIGN_MASK (divisor[divisorlen - 1]))
+    n = blocks_needed;
+  else
+    n = 2 * divisorlen;
+
+  /* It is known that the top input block to the divisor is non zero,
+     but when this block is split into two half blocks, it may be that
+     the top half block is zero.  Skip over this half block.  */
+  if (b_divisor[n - 1] == 0)
+    n--;
+
+  divmod_internal_2 (b_quotient, b_remainder, b_dividend, b_divisor, m, n);
+
+  if (compute_quotient)
+    {
+      wi_pack ((unsigned HOST_WIDE_INT*)quotient.val, b_quotient, m);
+      quotient.len = m / 2;
+      quotient.canonize ();
+      /* The quotient is neg if exactly one of the divisor or dividend is
+	 neg.  */
+      if (dividend_neg != divisor_neg)
+	quotient = quotient.neg ();
+    }
+  else
+    quotient = wide_int::zero (dividend->precision);
+
+  if (compute_remainder)
+    {
+      wi_pack ((unsigned HOST_WIDE_INT*)remainder->val, b_remainder, n);
+      if (n & 1)
+	n++;
+      remainder->len = n / 2;
+      (*remainder).canonize ();
+      /* The remainder is always the same sign as the dividend.  */
+      if (dividend_neg)
+	*remainder = (*remainder).neg ();
+    }
+  else
+    *remainder = wide_int::zero (dividend->precision);
+  return quotient;
+}
+
+
+/*
+ * Shifting, rotating and extraction.
+ */
+
+/* Extract WIDTH bits from THIS starting at OFFSET.  The result is
+   assumed to fit in a HOST_WIDE_INT.  This function is safe in that
+   it can properly access elements that may not be explicitly
+   represented.  */
+
+HOST_WIDE_INT
+wide_int::extract_to_hwi (int offset, int width) const
+{
+  int start_elt, end_elt, shift;
+  HOST_WIDE_INT x;
+
+  /* Get rid of the easy cases first.   */
+  if (offset >= len * HOST_BITS_PER_WIDE_INT)
+    return sign_mask ();
+  if (offset + width <= 0)
+    return 0;
+
+  shift = offset & (HOST_BITS_PER_WIDE_INT - 1);
+  if (offset < 0)
+    {
+      start_elt = -1;
+      end_elt = 0;
+      x = 0;
+    }
+  else
+    {
+      start_elt = offset / HOST_BITS_PER_WIDE_INT;
+      end_elt = (offset + width - 1) / HOST_BITS_PER_WIDE_INT;
+      x = start_elt >= len 
+	? sign_mask ()
+	: (unsigned HOST_WIDE_INT)val[start_elt] >> shift;
+    }
+
+  if (start_elt != end_elt)
+    {
+      HOST_WIDE_INT y = end_elt == len
+	? sign_mask () : val[end_elt];
+
+      x |= y << (HOST_BITS_PER_WIDE_INT - shift);
+    }
+
+  if (width != HOST_BITS_PER_WIDE_INT)
+    x &= ((HOST_WIDE_INT)1 << width) - 1;
+
+  return x;
+}
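Illustration only, 64-bit HOST_WIDE_INT assumed: for the 128-bit value 2^70
(val == {0, 1 << 6}, len == 2), extract_to_hwi (64, 8) returns 0x40, and
extract_to_hwi (128, 8) falls back to the sign mask (0 here) even though no
third block is stored.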
+
+
+/* Left shift THIS by CNT bits, producing a result with precision
+   RES_PREC.  Since this is used internally, the result precision can
+   differ from the precision of THIS, which is useful when inserting
+   a small value into a larger one.  */
+
+wide_int
+wide_int::lshift_large (unsigned int cnt, unsigned int res_prec) const
+{
+  wide_int result;
+  unsigned int i;
+
+  result.precision = res_prec;
+
+  if (cnt >= res_prec)
+    {
+      result.val[0] = 0;
+      result.len = 1;
+      return result;
+    }
+
+  for (i = 0; i < res_prec; i += HOST_BITS_PER_WIDE_INT)
+    result.val[i / HOST_BITS_PER_WIDE_INT]
+      = extract_to_hwi (i - cnt, HOST_BITS_PER_WIDE_INT);
+
+  result.len = BLOCKS_NEEDED (res_prec);
+  result.canonize ();
+
+  return result;
+}
+
+/* Unsigned (logical) right shift THIS by CNT bits.  */
+
+wide_int
+wide_int::rshiftu_large (unsigned int cnt) const
+{
+  wide_int result;
+  int stop_block, offset, i;
+
+  result.precision = precision;
+
+  if (cnt >= precision)
+    {
+      result.val[0] = 0;
+      result.len = 1;
+      return result;
+    }
+
+  stop_block = BLOCKS_NEEDED (precision - cnt);
+  for (i = 0; i < stop_block; i++)
+    result.val[i]
+      = extract_to_hwi ((i * HOST_BITS_PER_WIDE_INT) + cnt,
+			HOST_BITS_PER_WIDE_INT);
+
+  result.len = stop_block;
+
+  offset = (precision - cnt) & (HOST_BITS_PER_WIDE_INT - 1);
+  if (offset)
+    result.val[stop_block - 1] = zext_hwi (result.val[stop_block - 1], offset);
+  else
+    /* The top block had a 1 at the top position so it will decompress
+       wrong unless a zero block is added.  This only works because we
+       know the shift was greater than 0.  */
+    if (result.val[stop_block - 1] < 0)
+      result.val[result.len++] = 0;
+  return result;
+}
+
+/* Signed (arithmetic) right shift THIS by CNT bits.  */
+
+wide_int
+wide_int::rshifts_large (unsigned int cnt) const
+{
+  wide_int result;
+  int stop_block, i;
+
+  result.precision = precision;
+
+  if (cnt >= precision)
+    {
+      HOST_WIDE_INT m = sign_mask ();
+      result.val[0] = m;
+      result.len = 1;
+      return result;
+    }
+
+  stop_block = BLOCKS_NEEDED (precision - cnt);
+  for (i = 0; i < stop_block; i++)
+    result.val[i]
+      = extract_to_hwi ((i * HOST_BITS_PER_WIDE_INT) + cnt,
+			HOST_BITS_PER_WIDE_INT);
+
+  result.len = stop_block;
+
+  /* No need to sign extend the last block, since extract_to_hwi
+     already did that.  */
+  return result;
+}
+
+/*
+ * Private utilities.
+ */
+/* Decompress THIS for at least TARGET bits into a result with
+   precision PREC.  */
+
+wide_int
+wide_int::decompress (unsigned int target, unsigned int prec) const
+{
+  wide_int result;
+  int blocks_needed = BLOCKS_NEEDED (target);
+  HOST_WIDE_INT mask;
+  int len, i;
+
+  result.precision = prec;
+  result.len = blocks_needed;
+
+  for (i = 0; i < this->len; i++)
+    result.val[i] = val[i];
+
+  len = this->len;
+
+  if (target > result.precision)
+    return result;
+
+  /* The extension that we are doing here is not sign extension, it is
+     decompression.  */
+  mask = sign_mask ();
+  while (len < blocks_needed)
+    result.val[len++] = mask;
+
+  return result;
+}
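Illustration only: a wide_int holding -1 with precision 128 is stored as
len == 1, val[0] == -1; decompress (128, 128) returns the same value with
len == 2 and both blocks explicitly -1, so the caller can then safely write
into block 1.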
diff --git a/gcc/wide-int.h b/gcc/wide-int.h
new file mode 100644
index 0000000..ed9f831
--- /dev/null
+++ b/gcc/wide-int.h
@@ -0,0 +1,2411 @@
+/* Operations with very long integers.  -*- C++ -*-
+   Copyright (C) 2012-2013 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the
+Free Software Foundation; either version 3, or (at your option) any
+later version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef WIDE_INT_H
+#define WIDE_INT_H
+
+/* Wide-int.[ch] implements a class that efficiently performs
+   mathematical operations on finite precision integers.  Wide-ints
+   are designed to be transient - they are not for long term storage
+   of values.  There is tight integration between wide-ints and the
+   other longer storage GCC representations (rtl and tree).
+
+   A wide integer is represented as a vector of HOST_WIDE_INTs.  The
+   vector contains enough elements to hold a value of
+   MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT, which is
+   derived for each host/target combination.  The values are stored in
+   the vector with the least significant HOST_BITS_PER_WIDE_INT bits of
+   the value stored in element 0.  
+
+   A wide_int contains three fields: the vector (VAL), the precision
+   (PRECISION) and a length (LEN).  The length is the number of HWIs needed to
+   represent the value.
+
+   Wide-ints are write-once.
+
+   Since most integers used in a compiler are small values, it is
+   generally profitable to use a representation of the value that is
+   as small as possible.  LEN is used to indicate the number of
+   elements of the vector that are in use.  The numbers are stored as
+   sign extended numbers as a means of compression.  Leading
+   HOST_WIDE_INTs that contain strings of either -1 or 0 are removed
+   as long as they can be reconstructed from the top bit that is being
+   represented.
+
+   The representation does not contain any inherent information about
+   the signedness of the represented value, so it can be used to represent
+   both signed and unsigned numbers.  For operations where the results
+   depend on signedness (division, comparisons), the signedness must
+   be specified separately.
+
+   Note that the bits above the precision are not defined and the
+   algorithms used here are careful not to depend on their value.  In
+   particular, values that come in from rtx constants may have random
+   bits.  */
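A concrete picture of the encoding (illustration only, assuming a 64-bit
HOST_WIDE_INT), all with precision 128:

    value       len   val[]
    1           1     { 1 }
    -1          1     { -1 }        (high block implied by the sign)
    2^64        2     { 0, 1 }
    2^64 - 1    2     { -1, 0 }     (explicit zero block keeps it positive)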
+
+
+#ifndef GENERATOR_FILE
+#include <utility>
+#include "tree.h"
+#include "system.h"
+#include "hwint.h"
+#include "options.h"
+#include "tm.h"
+#include "insn-modes.h"
+#include "machmode.h"
+#include "double-int.h"
+#include <gmp.h>
+#include "dumpfile.h"
+#include "real.h"
+
+#define WIDE_INT_MAX_ELTS \
+  ((4 * MAX_BITSIZE_MODE_ANY_INT + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT)
+
+enum ShiftOp {
+  NONE,
+  /* There are two uses for the wide-int shifting functions.  The
+     first use is as an emulation of the target hardware.  The
+     second use is as service routines for other optimizations.  The
+     first case needs to be identified by passing TRUNC as the value
+     of ShiftOp so that shift amount is properly handled according to the
+     SHIFT_COUNT_TRUNCATED flag.  For the second case, the shift
+     amount is always truncated by the bytesize of the mode of
+     THIS.  */
+  TRUNC
+};
+
+enum SignOp {
+  /* Many of the math functions produce different results depending
+     on if they are SIGNED or UNSIGNED.  In general, there are two
+     different functions, whose names are prefixed with an 'S' or a
+     'U'.  However, for some math functions there is also a routine
+     that does not have the prefix and takes a SignOp parameter of
+     SIGNED or UNSIGNED.  */
+  SIGNED,
+  UNSIGNED
+};
+
+static const int DUMP_MAX = (2 * (MAX_BITSIZE_MODE_ANY_INT / 4
+				  + MAX_BITSIZE_MODE_ANY_INT 
+				  / HOST_BITS_PER_WIDE_INT + 32));
+
+class GTY(()) wide_int {
+  /* Internal representation.  */
+  
+  /* VAL is set to a size that is capable of computing a full
+     multiplication on the largest mode that is represented on the
+     target.  The full multiplication is used by tree-vrp, which
+     currently does a 2x largest mode by 2x largest mode yielding a 4x
+     largest mode result.  If operations are added that require larger
+     buffers, then VAL needs to be changed.  */
+  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];
+  unsigned short len;
+  unsigned int precision;
+
+ public:
+  /* Conversions.  */
+
+  static wide_int from_shwi (HOST_WIDE_INT op0, 
+			     unsigned int precision);
+  static wide_int from_uhwi (unsigned HOST_WIDE_INT op0, 
+			     unsigned int precision);
+
+  inline static wide_int from_hwi (HOST_WIDE_INT op0, const_tree type);
+  inline static wide_int from_shwi (HOST_WIDE_INT op0, enum machine_mode mode);
+  inline static wide_int from_uhwi (unsigned HOST_WIDE_INT op0, 
+				    enum machine_mode mode);
+  static wide_int from_array (const HOST_WIDE_INT* op0,
+			      unsigned int len, 
+			      unsigned int precision, bool need_canon = true); 
+
+  static wide_int from_double_int (double_int, 
+				   unsigned int precision);
+  static wide_int from_tree (const_tree);
+  static wide_int from_rtx (const_rtx, enum machine_mode);
+  static wide_int from_buffer (const unsigned char*, int);
+
+  inline HOST_WIDE_INT to_shwi (unsigned int prec = 0) const;
+  inline unsigned HOST_WIDE_INT to_uhwi (unsigned int prec = 0) const;
+
+  /* Largest and smallest values that are represented in a TYPE_PREC.
+     RESULT_PREC is the precision of the value that the answer is
+     returned within.  The default value of 0 says return the answer
+     with TYPE_PREC precision.  */
+  static wide_int max_value (unsigned int type_prec, 
+			     SignOp sgn, unsigned int result_prec = 0);
+  inline static wide_int max_value (const_tree type);
+  inline static wide_int max_value (enum machine_mode mode, SignOp sgn);
+  
+  static wide_int min_value (unsigned int type_prec, 
+			     SignOp sgn, unsigned int result_prec = 0);
+  inline static wide_int min_value (unsigned int precision, SignOp sgn);
+  inline static wide_int min_value (const_tree type);
+  inline static wide_int min_value (enum machine_mode mode, SignOp sgn);
+  
+  /* Small constants */
+
+  inline static wide_int minus_one (unsigned int prec);
+  inline static wide_int zero (unsigned int prec);
+  inline static wide_int one (unsigned int prec);
+  inline static wide_int two (unsigned int prec);
+
+  /* Accessors.  */
+
+  inline unsigned short get_len () const;
+  inline unsigned int get_precision () const;
+  inline HOST_WIDE_INT elt (unsigned int i) const;
+
+  /* Comparative functions.  */
+
+  inline bool minus_one_p () const;
+  inline bool zero_p () const;
+  inline bool one_p () const;
+  inline bool neg_p () const;
+
+  template <typename T>
+  inline bool operator == (const T &c) const;
+  
+  template <typename T>
+  inline bool operator != (const T &c) const;
+  
+  template <typename T>
+  inline bool gt_p (const T &c, SignOp sgn) const;
+  template <typename T>
+  inline bool gts_p (const T &c) const;
+  template <typename T>
+  inline bool gtu_p (const T &c) const;
+  
+  template <typename T>
+  inline bool lt_p (const T &c, SignOp sgn) const;
+  template <typename T>
+  inline bool lts_p (const T &c) const;
+  template <typename T>
+  inline bool ltu_p (const T &c) const;
+  
+  template <typename T>
+  inline int cmp (const T &c, SignOp sgn) const;
+  template <typename T>
+  inline int cmps (const T &c) const;
+  template <typename T>
+  inline int cmpu (const T &c) const;
+  
+  bool only_sign_bit_p (unsigned int prec) const;
+  inline bool only_sign_bit_p () const;
+  inline bool fits_uhwi_p () const;
+  inline bool fits_shwi_p () const;
+  bool fits_to_tree_p (const_tree type) const;
+  bool fits_u_p (unsigned int prec) const;
+  bool fits_s_p (unsigned int prec) const;
+
+  /* Min and max */
+
+  template <typename T>
+  inline wide_int min (const T &c, SignOp sgn) const;
+  inline wide_int min (const wide_int &op1, SignOp sgn) const;
+  template <typename T>
+  inline wide_int max (const T &c, SignOp sgn) const;
+  inline wide_int max (const wide_int &op1, SignOp sgn) const;
+  template <typename T>
+  inline wide_int smin (const T &c) const;
+  inline wide_int smin (const wide_int &op1) const;
+  template <typename T>
+  inline wide_int smax (const T &c) const;
+  inline wide_int smax (const wide_int &op1) const;
+  template <typename T>
+  inline wide_int umin (const T &c) const;
+  inline wide_int umin (const wide_int &op1) const;
+  template <typename T>
+  inline wide_int umax (const T &c) const;
+  inline wide_int umax (const wide_int &op1) const;
+
+  /* Extension, these do not change the precision.  */
+
+  inline wide_int ext (unsigned int offset, SignOp sgn) const;
+  wide_int sext (unsigned int offset) const;
+  wide_int zext (unsigned int offset) const;
+
+  /* Make a fast copy.  */
+
+  wide_int copy () const;
+
+  /* These change the underlying precision.  */
+  
+  wide_int force_to_size (unsigned int precision, SignOp sgn) const;
+  inline wide_int sforce_to_size (unsigned int precision) const;
+  inline wide_int zforce_to_size (unsigned int precision) const;
+
+  /* Masking, and Insertion  */
+
+  wide_int set_bit (unsigned int bitpos) const;
+  static wide_int set_bit_in_zero (unsigned int bitpos, unsigned int prec);
+  inline static wide_int set_bit_in_zero (unsigned int, 
+					  enum machine_mode mode);
+  inline static wide_int set_bit_in_zero (unsigned int, const_tree type);
+  wide_int insert (const wide_int &op0, unsigned int offset,
+		   unsigned int width) const;
+
+  wide_int bswap () const;
+
+  static wide_int mask (unsigned int start, bool negate, 
+			unsigned int prec);
+  inline static wide_int mask (unsigned int start, bool negate, 
+			       enum machine_mode mode);
+  inline static wide_int mask (unsigned int start, bool negate,
+			       const_tree type);
+
+  static wide_int shifted_mask (unsigned int start, unsigned int width,
+				bool negate, unsigned int prec);
+  inline static wide_int shifted_mask (unsigned int start, unsigned int width, 
+				       bool negate, enum machine_mode mode);
+  inline static wide_int shifted_mask (unsigned int start, unsigned int width, 
+				       bool negate, const_tree type);
+
+  inline HOST_WIDE_INT sign_mask () const;
+
+  /* Logicals */
+
+  template <typename T>
+  inline wide_int operator & (const T &c) const;
+  template <typename T>
+  inline wide_int and_not (const T &c) const;
+  inline wide_int operator ~ () const;
+  template <typename T>
+  inline wide_int operator | (const T &c) const;
+  template <typename T>
+  inline wide_int or_not (const T &c) const;
+  template <typename T>
+  inline wide_int operator ^ (const T &c) const;
+  
+  /* Arithmetic operation functions, alpha sorted.  */
+  
+  wide_int abs () const;
+  template <typename T>
+  inline wide_int operator + (const T &c) const;
+  
+  wide_int clz () const;
+  wide_int clrsb () const;
+  wide_int ctz () const;
+  wide_int exact_log2 () const;
+  wide_int floor_log2 () const;
+  wide_int ffs () const;
+  
+  template <typename T>
+  inline wide_int operator * (const T &c) const;
+  template <typename T>
+  inline wide_int mul (const T &c, SignOp sgn, bool *overflow) const;
+  template <typename T>
+  inline wide_int smul (const T &c, bool *overflow) const;
+  template <typename T>
+  inline wide_int umul (const T &c, bool *overflow) const;
+  template <typename T>
+  inline wide_int mul_full (const T &c, SignOp sgn) const;
+  template <typename T>
+  inline wide_int smul_full (const T &c) const;
+  template <typename T>
+  inline wide_int umul_full (const T &c) const;
+  template <typename T>
+  inline wide_int mul_high (const T &c, SignOp sgn) const;
+  
+  inline wide_int neg () const;
+  inline wide_int neg (bool *overflow) const;
+  
+  wide_int parity () const;
+  wide_int popcount () const;
+  
+  template <typename T>
+  inline wide_int operator - (const T& c) const;
+  
+  /* Division and mod.  These are the ones that are actually used, but
+     there are a lot of them.  */
+  
+  template <typename T>
+  inline wide_int div_trunc (const T &c, SignOp sgn, bool *overflow = 0) const;
+  template <typename T>
+  inline wide_int sdiv_trunc (const T &c) const;
+  template <typename T>
+  inline wide_int udiv_trunc (const T &c) const;
+  
+  template <typename T>
+  inline wide_int div_floor (const T &c, SignOp sgn, bool *overflow = 0) const;
+  template <typename T>
+  inline wide_int udiv_floor (const T &c) const;
+  template <typename T>
+  inline wide_int sdiv_floor (const T &c) const;
+  template <typename T>
+  inline wide_int div_ceil (const T &c, SignOp sgn, bool *overflow = 0) const;
+  template <typename T>
+  inline wide_int div_round (const T &c, SignOp sgn, bool *overflow = 0) const;
+  
+  template <typename T>
+  inline wide_int divmod_trunc (const T &c, wide_int *mod, SignOp sgn) const;
+  template <typename T>
+  inline wide_int sdivmod_trunc (const T &c, wide_int *mod) const;
+  template <typename T>
+  inline wide_int udivmod_trunc (const T &c, wide_int *mod) const;
+  
+  template <typename T>
+  inline wide_int divmod_floor (const T &c, wide_int *mod, SignOp sgn) const;
+  template <typename T>
+  inline wide_int sdivmod_floor (const T &c, wide_int *mod) const;
+  
+  template <typename T>
+  inline wide_int mod_trunc (const T &c, SignOp sgn, bool *overflow = 0) const;
+  template <typename T>
+  inline wide_int smod_trunc (const T &c) const;
+  template <typename T>
+  inline wide_int umod_trunc (const T &c) const;
+  
+  template <typename T>
+  inline wide_int mod_floor (const T &c, SignOp sgn, bool *overflow = 0) const;
+  template <typename T>
+  inline wide_int umod_floor (const T &c) const;
+  template <typename T>
+  inline wide_int mod_ceil (const T &c, SignOp sgn, bool *overflow = 0) const;
+  template <typename T>
+  inline wide_int mod_round (const T &c, SignOp sgn, bool *overflow = 0) const;
+  
+  /* Shifting, rotating and extracting.  */
+  HOST_WIDE_INT extract_to_hwi (int offset, int width) const;
+  
+  template <typename T>
+  inline wide_int lshift (const T &c, unsigned int bitsize = 0,
+			  unsigned int precision = 0,
+			  ShiftOp z = NONE) const;
+
+  template <typename T>
+  inline wide_int lrotate (const T &c) const;
+  inline wide_int lrotate (unsigned HOST_WIDE_INT y) const;
+
+  template <typename T>
+  inline wide_int rshift (const T &c, SignOp sgn, 
+			  unsigned int bitsize = 0,
+			  ShiftOp z = NONE) const;
+  template <typename T>
+  inline wide_int rshiftu (const T &c, unsigned int bitsize = 0,
+			   ShiftOp z = NONE) const;
+  template <typename T>
+  inline wide_int rshifts (const T &c, unsigned int bitsize = 0,
+			   ShiftOp z = NONE) const;
+
+  template <typename T>
+  inline wide_int rrotate (const T &c) const;
+  inline wide_int rrotate (unsigned HOST_WIDE_INT y) const;
+
+  char *dump (char* buf) const;
+ private:
+
+  /* 
+   * Internal versions that do the work if the values do not fit in a
+   * HWI.
+   */ 
+
+  /* Comparisons */
+  bool eq_p_large (const HOST_WIDE_INT *, unsigned int) const;
+  static bool lts_p_large (const HOST_WIDE_INT *, unsigned int, 
+			   const HOST_WIDE_INT *, unsigned int,
+			   unsigned int);
+  int cmps_large (const HOST_WIDE_INT *, unsigned int) const;
+  static bool ltu_p_large (const HOST_WIDE_INT *, unsigned int, 
+			   const HOST_WIDE_INT *, unsigned int);
+  int cmpu_large (const HOST_WIDE_INT *, unsigned int) const;
+
+  /* Logicals.  */
+  wide_int and_large (const HOST_WIDE_INT *, unsigned int) const;
+  wide_int and_not_large (const HOST_WIDE_INT *, unsigned int) const;
+  wide_int or_large (const HOST_WIDE_INT *, unsigned int) const;
+  wide_int or_not_large (const HOST_WIDE_INT *, unsigned int) const;
+  wide_int xor_large (const HOST_WIDE_INT *, unsigned int) const;
+
+  /* Arithmetic */
+  wide_int add_large (const HOST_WIDE_INT *, unsigned int) const;
+  wide_int sub_large (const HOST_WIDE_INT *, unsigned int) const;
+
+  wide_int lshift_large (unsigned int cnt, unsigned int res_prec) const;
+  wide_int rshiftu_large (unsigned int cnt) const;
+  wide_int rshifts_large (unsigned int cnt) const;
+
+  static wide_int
+  mul_internal (bool high, bool full, 
+		const wide_int *op1, const HOST_WIDE_INT *op2, unsigned int op2len,
+		SignOp sgn,  bool *overflow, bool needs_overflow);
+  static void
+  divmod_internal_2 (unsigned HOST_HALF_WIDE_INT *b_quotient, 
+		     unsigned HOST_HALF_WIDE_INT *b_remainder,
+		     unsigned HOST_HALF_WIDE_INT *b_dividend, 
+		     unsigned HOST_HALF_WIDE_INT *b_divisor, 
+		     int m, int n);
+  static wide_int
+  divmod_internal (bool compute_quotient, 
+		   const wide_int *dividend, const HOST_WIDE_INT *, unsigned int,
+		   SignOp sgn, wide_int *remainder,
+		   bool compute_remainder, 
+		   bool *overflow);
+
+
+  /* Private utility routines.  */
+  wide_int decompress (unsigned int target, unsigned int precision) const;
+  void canonize ();
+  static inline int trunc_shift (const HOST_WIDE_INT *cnt, unsigned int len, 
+				 unsigned int bitsize, ShiftOp z);
+
+  template <typename T>
+  static inline bool
+  top_bit_set (T x) {
+    return (x >> (sizeof (x)*8 - 1)) != 0;
+  }
+
+  /* The following template and its overrides are used for the second
+     operand of binary functions.  They access the value and package
+     it so that the operation can be done.  These have been
+     implemented so that pointer copying is done from the rep of the
+     second operand rather than actual data copying.  This is safe
+     even for garbage collected objects since the value is immediately
+     thrown away.
+
+     The first two templates match all integers.  */
+
+  template <typename T>
+  static inline const HOST_WIDE_INT* 
+  to_shwi2 (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p, const T& x)
+  {
+    s[0] = x;
+    if (~(T)0 < (T)0
+	|| sizeof (T) < sizeof (HOST_WIDE_INT)
+	|| ! top_bit_set (x))
+      {
+	*l = 1;
+      }
+    else
+      {
+	s[1] = 0;
+	*l = 2;	  
+      }
+    *p = 0;
+    return s;
+  }
+};
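
To make the encoding choice in the generic to_shwi2 above concrete, here is a
small standalone sketch (not part of the patch; plain C++ with int64_t standing
in for HOST_WIDE_INT and a hypothetical encode_len helper): an unsigned value
that is as wide as a HWI and has its top bit set gets a second, zero block so
the sign-extended representation still reads back as a non-negative number.

  #include <cstdint>
  #include <cstdio>

  /* Standalone illustration of the length choice made by the generic
     to_shwi2; int64_t stands in for HOST_WIDE_INT.  */
  template <typename T>
  static unsigned int
  encode_len (int64_t s[2], T x)
  {
    s[0] = x;
    if (~(T) 0 < (T) 0                        /* T is a signed type, or  */
        || sizeof (T) < sizeof (int64_t)      /* narrower than a HWI, or  */
        || (x >> (sizeof (x) * 8 - 1)) == 0)  /* its top bit is clear.  */
      return 1;
    s[1] = 0;      /* Unsigned, HWI-wide, top bit set: add a zero block.  */
    return 2;
  }

  int
  main ()
  {
    int64_t s[2];
    printf ("%u\n", encode_len (s, (uint64_t) 1 << 63));  /* prints 2 */
    printf ("%u\n", encode_len (s, (int64_t) -1));        /* prints 1 */
    return 0;
  }
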
+
+template <>
+inline const HOST_WIDE_INT*
+wide_int::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
+		    unsigned int *l, unsigned int *p, const wide_int &y)
+{
+  *l = y.len;
+  *p = y.precision;
+  return y.val;
+}
+
+  
+/* This overload may just return the pointer to the value and set
+   the length.  */
+template<>
+inline const HOST_WIDE_INT*
+wide_int::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
+		    unsigned int *l, unsigned int *p, const const_tree& tcst)
+{
+  /* This is rather complex now; in the future, when the rep for tree
+     cst is changed, it will be just a pointer and len copy.  */
+  tree type = TREE_TYPE (tcst);
+  unsigned int prec = TYPE_PRECISION (type);
+
+  if (prec > HOST_BITS_PER_WIDE_INT)
+    *l = 2;
+  else
+    {
+      if (prec == HOST_BITS_PER_WIDE_INT
+	  && TYPE_UNSIGNED (type)
+	  && (HOST_WIDE_INT)TREE_INT_CST_LOW (tcst) < 0)
+	*l = 2;
+      else
+	*l = 1;
+    }
+
+  *p = prec;
+  return (const HOST_WIDE_INT*)&TREE_INT_CST_LOW (tcst);
+}
+
+/* This is used to bundle an rtx and a mode together so that the pair
+   can be used as the second operand of a wide int expression.  If we
+   ever put modes into rtx integer constants, this should go away and
+   then just pass an rtx in.  */
+typedef std::pair<rtx, enum machine_mode> rtx_mode_t;
+
+/* There should logically be an overload for rtl here, but it cannot
+   be here because of circular include issues.  It is in rtl.h.  */
+template<>
+inline const HOST_WIDE_INT*
+wide_int::to_shwi2 (HOST_WIDE_INT *s ATTRIBUTE_UNUSED,
+		    unsigned int *l, unsigned int *p,
+		    const rtx_mode_t& rp);
+
+/* Insert a 1 bit into 0 at BITPOS producing a number with precision
+   taken from MODE.  */
+
+wide_int
+wide_int::set_bit_in_zero (unsigned int bitpos, enum machine_mode mode)
+{
+  return wide_int::set_bit_in_zero (bitpos, GET_MODE_PRECISION (mode));
+}
+
+/* Insert a 1 bit into 0 at BITPOS producing a number with precision
+   taken from TYPE.  */
+
+wide_int
+wide_int::set_bit_in_zero (unsigned int bitpos, const_tree type)
+{
+  return wide_int::set_bit_in_zero (bitpos, TYPE_PRECISION (type));
+}
+
+/* Return a result mask where the lower WIDTH bits are ones and the
+   bits above that up to the precision are zeros.  The result is
+   inverted if NEGATE is true.  The result is made with precision
+   taken from MODE.  */
+
+wide_int
+wide_int::mask (unsigned int width, bool negate, enum machine_mode mode)
+{
+  return wide_int::mask (width, negate, GET_MODE_PRECISION (mode));
+}
+
+/* Return a result mask where the lower WIDTH bits are ones and the
+   bits above that up to the precision are zeros.  The result is
+   inverted if NEGATE is true.  The result is made with precision
+   taken from TYPE.  */
+
+wide_int
+wide_int::mask (unsigned int width, bool negate, const_tree type)
+{
+  return wide_int::mask (width, negate, TYPE_PRECISION (type));
+}
+
+/* Return a result mask of WIDTH ones starting at START and the bits
+   above that up to the precision are zeros.  The result is inverted
+   if NEGATE is true.  The result is made with precision taken from
+   MODE.  */
+
+wide_int
+wide_int::shifted_mask (unsigned int start, unsigned int width, 
+			bool negate, enum machine_mode mode)
+{
+  return wide_int::shifted_mask (start, width, negate,
+				 GET_MODE_PRECISION (mode));
+}
+
+/* Return a result mask of WIDTH ones starting at START and the bits
+   above that up to the precision are zeros.  The result is inverted
+   if NEGATE is true.  The result is made with precision taken from
+   TYPE.  */
+
+wide_int
+wide_int::shifted_mask (unsigned int start, unsigned int width, 
+			bool negate, const_tree type)
+{
+  return wide_int::shifted_mask (start, width, negate,
+				 TYPE_PRECISION (type));
+}
+
+/* Produce 0 or -1 that is the smear of the sign bit.  */
+
+HOST_WIDE_INT
+wide_int::sign_mask () const
+{
+  int i = len - 1;
+  if (precision < HOST_BITS_PER_WIDE_INT)
+    return ((val[0] << (HOST_BITS_PER_WIDE_INT - precision))
+	    >> (HOST_BITS_PER_WIDE_INT - 1));
+
+  /* VRP appears to be badly broken and this is a very ugly fix.  */
+  if (i >= 0)
+    return val[i] >> (HOST_BITS_PER_WIDE_INT - 1);
+
+  gcc_unreachable ();
+#if 0
+  return val[len - 1] >> (HOST_BITS_PER_WIDE_INT - 1);
+#endif
+}
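
For a precision below one HWI, sign_mask extracts bit PRECISION - 1 by moving
it up into the HWI sign position and arithmetic-shifting it back across the
whole word.  A standalone sketch of that sub-HWI case (not part of the patch;
int64_t for HOST_WIDE_INT, with the left shift done on the unsigned type to
stay well defined):

  #include <cstdint>
  #include <cassert>

  /* Smear bit PREC - 1 of V across a 64-bit word, as sign_mask does
     when precision < HOST_BITS_PER_WIDE_INT.  The right shift of a
     negative value is arithmetic on the usual targets.  */
  static int64_t
  sign_mask_1 (int64_t v, unsigned int prec)
  {
    return (int64_t) ((uint64_t) v << (64 - prec)) >> 63;
  }

  int
  main ()
  {
    assert (sign_mask_1 (0x80, 8) == -1);  /* bit 7 set: all ones.  */
    assert (sign_mask_1 (0x7f, 8) == 0);   /* bit 7 clear: zero.  */
    return 0;
  }
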
+
+/* Conversions */
+
+/* Convert OP0 into a wide_int with the precision and signedness taken
+   from TYPE.  */
+
+wide_int
+wide_int::from_hwi (HOST_WIDE_INT op0, const_tree type)
+{
+  unsigned int prec = TYPE_PRECISION (type);
+
+  if (TYPE_UNSIGNED (type))
+    return wide_int::from_uhwi (op0, prec);
+  else
+    return wide_int::from_shwi (op0, prec);
+}
+
+/* Convert signed OP0 into a wide_int with the precision taken from
+   MODE.  */
+
+wide_int
+wide_int::from_shwi (HOST_WIDE_INT op0, enum machine_mode mode)
+{
+  unsigned int prec = GET_MODE_PRECISION (mode);
+
+  return wide_int::from_shwi (op0, prec);
+}
+
+/* Convert unsigned OP0 into a wide_int with the precision taken from
+   MODE.  */
+
+wide_int
+wide_int::from_uhwi (unsigned HOST_WIDE_INT op0, enum machine_mode mode)
+{
+  unsigned int prec = GET_MODE_PRECISION (mode);
+
+  return wide_int::from_uhwi (op0, prec);
+}
+
+/* Return THIS as a signed HOST_WIDE_INT.  If THIS does not fit in
+   PREC, the information is lost. */
+
+HOST_WIDE_INT 
+wide_int::to_shwi (unsigned int prec) const
+{
+  HOST_WIDE_INT result;
+
+  if (prec == 0)
+    prec = precision;
+
+  if (prec < HOST_BITS_PER_WIDE_INT)
+    result = sext_hwi (val[0], prec);
+  else
+    result = val[0];
+
+  return result;
+}
+
+/* Return THIS as an unsigned HOST_WIDE_INT.  If THIS does not fit in
+   PREC, the information is lost. */
+
+unsigned HOST_WIDE_INT 
+wide_int::to_uhwi (unsigned int prec) const
+{
+  HOST_WIDE_INT result;
+
+  if (prec == 0)
+    prec = precision;
+
+  if (prec < HOST_BITS_PER_WIDE_INT)
+    result = zext_hwi (val[0], prec);
+  else
+    result = val[0];
+
+  return result;
+}
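
The two conversions differ only in how a sub-HWI value is extended out to a
full HWI: to_shwi sign-extends from bit PREC - 1, to_uhwi zero-extends.  As a
standalone sketch (not part of the patch; sext64/zext64 are local stand-ins for
the sext_hwi/zext_hwi helpers used above, with PREC assumed to be below 64 as
in those call sites):

  #include <cstdint>
  #include <cstdio>

  /* Local stand-ins for sext_hwi / zext_hwi on a 64-bit HWI;
     PREC is assumed to be in the range 1..63.  */
  static int64_t
  sext64 (int64_t v, unsigned int prec)
  {
    return (int64_t) ((uint64_t) v << (64 - prec)) >> (64 - prec);
  }

  static uint64_t
  zext64 (int64_t v, unsigned int prec)
  {
    return (uint64_t) v & (((uint64_t) 1 << prec) - 1);
  }

  int
  main ()
  {
    /* The same 8-bit pattern 0xff reads back as -1 signed, 255 unsigned.  */
    printf ("%lld\n", (long long) sext64 (0xff, 8));           /* -1 */
    printf ("%llu\n", (unsigned long long) zext64 (0xff, 8));  /* 255 */
    return 0;
  }
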
+
+
+/* Min and Max value helpers.  */
+
+/* Produce the largest number that is represented in MODE.  The
+   precision is taken from MODE.  SGN must be SIGNED or
+   UNSIGNED.  */
+
+wide_int
+wide_int::max_value (enum machine_mode mode, SignOp sgn)
+{
+  unsigned int prec = GET_MODE_PRECISION (mode);
+  return max_value (prec, sgn, prec);
+}
+
+/* Produce the largest number that is represented in TYPE. The
+   precision and sign are taken from TYPE.  */
+
+wide_int
+wide_int::max_value (const_tree type)
+{
+  unsigned int prec = TYPE_PRECISION (type);
+  return max_value (prec, TYPE_UNSIGNED (type) ? UNSIGNED : SIGNED, prec);
+}
+
+/* Produce the smallest number that is represented in MODE.  The
+   precision is taken from MODE.  SGN must be SIGNED or UNSIGNED.  */
+
+wide_int
+wide_int::min_value (enum machine_mode mode, SignOp sgn)
+{
+  unsigned int prec = GET_MODE_PRECISION (mode);
+  return min_value (prec, sgn, prec);
+}
+
+/* Produce the smallest number that is represented in TYPE. The
+   precision and sign are taken from TYPE.  */
+
+wide_int
+wide_int::min_value (const_tree type)
+{
+  unsigned int prec = TYPE_PRECISION (type);
+  return min_value (prec, TYPE_UNSIGNED (type) ? UNSIGNED : SIGNED, prec);
+}
+
+
+/* Small constants.  */
+
+/* Return a wide int of -1 with precision PREC.  */
+
+wide_int
+wide_int::minus_one (unsigned int prec)
+{
+  return wide_int::from_shwi (-1, prec);
+}
+
+/* Return a wide int of 0 with precision PREC.  */
+
+wide_int
+wide_int::zero (unsigned int prec)
+{
+  return wide_int::from_shwi (0, prec);
+}
+
+/* Return a wide int of 1 with precision PREC.  */
+
+wide_int
+wide_int::one (unsigned int prec)
+{
+  return wide_int::from_shwi (1, prec);
+}
+
+/* Return a wide int of 2 with precision PREC.  */
+
+wide_int
+wide_int::two (unsigned int prec)
+{
+  return wide_int::from_shwi (2, prec);
+}
+
+/* Public accessors for the interior of a wide int.  */
+
+/* Get the number of host wide ints actually represented within the
+   wide int.  */
+
+unsigned short
+wide_int::get_len () const
+{
+  return len;
+}
+
+/* Get precision of the value represented within the wide int.  */
+
+unsigned int
+wide_int::get_precision () const
+{
+  return precision;
+}
+
+/* Get a particular element of the wide int.  */
+
+HOST_WIDE_INT
+wide_int::elt (unsigned int i) const
+{
+  return i >= len ? sign_mask () : val[i];
+}
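
Since trailing blocks that are pure sign copies are not stored, elt
reconstructs them from the sign smear rather than reading past VAL.  A
hypothetical standalone sketch (not part of the patch; it ignores the sub-HWI
precision adjustment that sign_mask makes):

  #include <cstdint>
  #include <cassert>

  /* Sketch of elt () over the compressed representation: blocks at or
     beyond LEN are implicit sign copies of block LEN - 1.  */
  struct mini_wide
  {
    int64_t val[4];
    unsigned int len;

    int64_t elt (unsigned int i) const
    { return i >= len ? val[len - 1] >> 63 : val[i]; }
  };

  int
  main ()
  {
    /* A 256-bit -1 is stored as a single -1 block...  */
    mini_wide m = { { -1 }, 1 };
    assert (m.elt (0) == -1);
    assert (m.elt (3) == -1);  /* ...and the implicit blocks read back as -1.  */
    return 0;
  }
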
+
+/* Return true if THIS is -1.  */
+
+bool
+wide_int::minus_one_p () const
+{
+  return len == 1 && val[0] == (HOST_WIDE_INT)-1;
+}
+
+/* Return true if THIS is 0.  */
+
+bool
+wide_int::zero_p () const
+{
+  return len == 1 && val[0] == 0;
+}
+
+/* Return true if THIS is 1.  */
+
+bool
+wide_int::one_p () const
+{
+  return len == 1 && val[0] == 1;
+}
+
+/* Return true if THIS is negative.  */
+
+bool
+wide_int::neg_p () const
+{
+  return sign_mask () != 0;
+}
+
+/*
+ * Comparisons.  Note that only equality is an operator.  The other
+ * comparisons cannot be operators since they are inherently signed or
+ * unsigned and C++ has no such operators.
+ */
+
+/* Return true if THIS == OP1.  */
+
+template <typename T>
+bool
+wide_int::operator == (const T &c) const
+{
+  bool result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  if (precision < HOST_BITS_PER_WIDE_INT)
+    {
+      unsigned HOST_WIDE_INT mask = ((HOST_WIDE_INT)1 << precision) - 1;
+      result = (val[0] & mask) == (s[0] & mask);
+    }
+  else if (precision == HOST_BITS_PER_WIDE_INT)
+      result = val[0] == s[0];
+  else
+    result = eq_p_large (s, cl);
+
+  if (result)
+    gcc_assert (len == cl);
+
+  return result;
+}
+
+/* Return true if THIS is not equal to OP1. */ 
+
+template <typename T>
+bool
+wide_int::operator != (const T &c) const
+{
+  return !(*this == c);
+}  
+
+/* Return true if THIS is less than OP1.  Signedness is indicated by
+   OP.  */
+
+template <typename T>
+bool
+wide_int::lt_p (const T &c, SignOp op) const
+{
+  if (op == SIGNED)
+    return lts_p (c);
+  else
+    return ltu_p (c);
+}  
+
+/* Return true if THIS < OP1 using signed comparisons.  */
+
+template <typename T>
+bool
+wide_int::lts_p (const T &c) const
+{
+  bool result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    /* The values are already logically sign extended.  */
+    result = val[0] < s[0];
+  else
+    result = lts_p_large (val, len, s, cl, precision);
+  return result;
+}
+
+/* Return true if THIS < OP1 using unsigned comparisons.  */
+
+template <typename T>
+bool
+wide_int::ltu_p (const T &c) const
+{
+  unsigned HOST_WIDE_INT x0;
+  unsigned HOST_WIDE_INT x1;
+  bool result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	{
+	  x0 = zext_hwi (val[0], precision);
+	  x1 = zext_hwi (s[0], precision);
+	}
+      else
+	{
+	  x0 = val[0];
+	  x1 = s[0];
+	}
+
+      result = x0 < x1;
+    }
+  else
+    result = ltu_p_large (val, len, s, cl);
+  return result;
+}
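
For a sub-HWI precision the blocks are kept sign-extended, so ltu_p has to
re-read both operands zero-extended to the precision before comparing;
otherwise an 8-bit 0xff, stored as -1, would compare below 1.  A standalone
sketch of that fixed 8-bit case (not part of the patch; ltu8 is a hypothetical
stand-in):

  #include <cstdint>
  #include <cassert>

  /* Why ltu_p zero-extends for sub-HWI precisions: the 8-bit value
     0xff is stored sign-extended as -1, but as an unsigned 8-bit
     number it is 255 and is not less than 1.  */
  static bool
  ltu8 (int64_t a, int64_t b)
  {
    uint64_t x0 = (uint64_t) a & 0xff;  /* zext to the 8-bit precision */
    uint64_t x1 = (uint64_t) b & 0xff;
    return x0 < x1;
  }

  int
  main ()
  {
    assert (!ltu8 (-1, 1));  /* 255 < 1 is false.  */
    assert (-1 < 1);         /* A signed compare would say true.  */
    return 0;
  }
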
+
+/* Return true if THIS is greater than OP1.  Signedness is indicated
+   by OP.  */
+
+template <typename T>
+bool
+wide_int::gt_p (const T &c, SignOp op) const
+{
+  if (op == SIGNED)
+    return gts_p (c);
+  else
+    return gtu_p (c);
+}  
+
+/* Return true if THIS > OP1 using signed comparisons.  */
+
+template <typename T>
+bool
+wide_int::gts_p (const T &c) const
+{
+  bool result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      /* The values are already logically sign extended.  */
+      result = val[0] > s[0];
+    }
+  else
+    /* Reverse the parms and use lts.  */
+    result = lts_p_large (s, cl, val, len, precision);
+  return result;
+}
+
+/* Return true if THIS > OP1 using unsigned comparisons.  */
+
+template <typename T>
+bool
+wide_int::gtu_p (const T &c) const
+{
+  unsigned HOST_WIDE_INT x0;
+  unsigned HOST_WIDE_INT x1;
+  bool result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	{
+	  x0 = zext_hwi (val[0], precision);
+	  x1 = zext_hwi (s[0], precision);
+	}
+      else
+	{
+	  x0 = val[0];
+	  x1 = s[0];
+	}
+
+      result = x0 > x1;
+    }
+  else 
+    /* Reverse the parms and use ltu.  */
+    result = ltu_p_large (s, cl, val, len);
+  return result;
+}
+
+
+/* Return -1, 0 or 1 depending on how THIS compares with OP1.
+   Signedness is indicated by OP.  */
+
+template <typename T>
+int
+wide_int::cmp (const T &c, SignOp op) const
+{
+  if (op == SIGNED)
+    return cmps (c);
+  else
+    return cmpu (c);
+}  
+
+
+/* Returns -1 if THIS < OP1, 0 if THIS == OP1 and 1 if THIS > OP1 using
+   signed compares.  */
+
+template <typename T>
+int
+wide_int::cmps (const T &c) const
+{
+  int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      /* The values are already logically sign extended.  */
+      if (val[0] < s[0])
+	result = -1;
+      else if (val[0] > s[0])
+	result = 1;
+      else 
+	result = 0;
+    }
+  else
+    result = cmps_large (s, cl);
+  return result;
+}
+
+/* Returns -1 if THIS < OP1, 0 if THIS == OP1 and 1 if THIS > OP1 using
+   unsigned compares.  */
+
+template <typename T>
+int
+wide_int::cmpu (const T &c) const
+{
+  unsigned HOST_WIDE_INT x0;
+  unsigned HOST_WIDE_INT x1;
+  int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	{
+	  x0 = zext_hwi (val[0], precision);
+	  x1 = zext_hwi (s[0], precision);
+	}
+      else
+	{
+	  x0 = val[0];
+	  x1 = s[0];
+	}
+
+      if (x0 < x1)
+	result = -1;
+      else if (x0 == x1)
+	result = 0;
+      else
+	result = 1;
+    }
+  else
+    result = cmpu_large (s, cl);
+  return result;
+}
+
+
+/* Return the signed or unsigned min of THIS and OP1. */
+
+template <typename T>
+wide_int
+wide_int::min (const T &c, SignOp sgn) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  if (sgn == SIGNED)
+    return lts_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+  else
+    return ltu_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+}  
+
+/* Return the signed or unsigned min of THIS and OP1. */
+
+wide_int
+wide_int::min (const wide_int &op1, SignOp sgn) const
+{
+  if (sgn == SIGNED)
+    return lts_p (op1) ? (*this) : op1;
+  else
+    return ltu_p (op1) ? (*this) : op1;
+}  
+
+/* Return the signed or unsigned max of THIS and OP1. */
+
+template <typename T>
+wide_int
+wide_int::max (const T &c, SignOp sgn) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  if (sgn == SIGNED)
+    return gts_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+  else
+    return gtu_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+}  
+
+/* Return the signed or unsigned max of THIS and OP1. */
+
+wide_int
+wide_int::max (const wide_int &op1, SignOp sgn) const
+{
+  if (sgn == SIGNED)
+    return gts_p (op1) ? (*this) : op1;
+  else
+    return gtu_p (op1) ? (*this) : op1;
+}  
+
+/* Return the signed min of THIS and OP1. */
+
+template <typename T>
+wide_int
+wide_int::smin (const T &c) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  return lts_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+}  
+
+/* Return the signed min of THIS and OP1. */
+
+wide_int
+wide_int::smin (const wide_int &op1) const
+{
+  return lts_p (op1) ? (*this) : op1;
+}  
+
+/* Return the signed max of THIS and OP1. */
+
+template <typename T>
+wide_int
+wide_int::smax (const T &c) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  return gts_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+}  
+
+/* Return the signed max of THIS and OP1. */
+
+wide_int
+wide_int::smax (const wide_int &op1) const
+{
+  return gts_p (op1) ? (*this) : op1;
+}  
+
+/* Return the unsigned min of THIS and OP1. */
+
+template <typename T>
+wide_int
+wide_int::umin (const T &c) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  return ltu_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+}  
+
+/* Return the unsigned min of THIS and OP1. */
+
+wide_int
+wide_int::umin (const wide_int &op1) const
+{
+  return ltu_p (op1) ? (*this) : op1;
+}  
+
+/* Return the unsigned max of THIS and OP1. */
+
+template <typename T>
+wide_int
+wide_int::umax (const T &c) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  return gtu_p (c) ? (*this) : wide_int::from_array (s, cl, precision, false);
+}  
+
+/* Return the unsigned max of THIS and OP1. */
+
+wide_int
+wide_int::umax (const wide_int &op1) const
+{
+  return gtu_p (op1) ? (*this) : op1;
+}  
+
+/* Return true if THIS has the sign bit set to 1 and all other bits are
+   zero.  */
+
+bool
+wide_int::only_sign_bit_p () const
+{
+  return only_sign_bit_p (precision);
+}
+
+/* Return true if THIS fits in a HOST_WIDE_INT with no loss of
+   precision.  */
+
+bool
+wide_int::fits_shwi_p () const
+{
+  return len == 1;
+}
+
+/* Return true if THIS fits in an unsigned HOST_WIDE_INT with no loss
+   of precision.  */
+
+bool
+wide_int::fits_uhwi_p () const
+{
+  return len == 1 
+    || (len == 2 && val[1] == 0);
+}
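
The second block only ever appears for values whose HWI sign bit is set, so a
two-block value still fits an unsigned HWI exactly when that extra block is the
zero block the encoding added.  A standalone sketch over the raw blocks (not
part of the patch; fits_uhwi is a local stand-in):

  #include <cstdint>
  #include <cassert>

  /* Local stand-in for fits_uhwi_p over the raw blocks.  */
  static bool
  fits_uhwi (const int64_t *val, unsigned int len)
  {
    return len == 1 || (len == 2 && val[1] == 0);
  }

  int
  main ()
  {
    int64_t umax[2] = { -1, 0 };  /* encoding of 0xffffffffffffffff */
    int64_t big[2] = { 0, 1 };    /* 2^64: one bit too wide */
    assert (fits_uhwi (umax, 2));
    assert (!fits_uhwi (big, 2));
    return 0;
  }
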
+
+/* Return THIS extended to PREC.  The signedness of the extension is
+   specified by Z.  */
+
+wide_int 
+wide_int::ext (unsigned int prec, SignOp z) const
+{
+  if (z == UNSIGNED)
+    return zext (prec);
+  else
+    return sext (prec);
+}
+
+/* Return THIS forced to the size PREC.  This is sign extended if
+   needed. */
+
+wide_int 
+wide_int::sforce_to_size (unsigned int prec) const
+{
+  return force_to_size (prec, SIGNED);
+}
+
+/* Return THIS forced to the size PREC.  This is zero extended if
+   needed. */
+
+wide_int 
+wide_int::zforce_to_size (unsigned int prec) const
+{
+  return force_to_size (prec, UNSIGNED);
+}
+
+/*
+ * Logicals.
+ */
+
+/* Return THIS & OP1.  */
+
+template <typename T>
+wide_int
+wide_int::operator & (const T &c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+  
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] & s[0];
+    }
+  else
+    result = and_large (s, cl);
+  return result;
+}
+
+
+/* Return THIS & ~OP1.  */
+
+template <typename T>
+wide_int
+wide_int::and_not (const T &c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] & ~s[0];
+    }
+  else
+    result = and_not_large (s, cl);
+  return result;
+}
+
+
+/* Return the bitwise complement of THIS.  */
+
+wide_int
+wide_int::operator ~ () const
+{
+  wide_int result;
+  int l0 = len - 1;
+
+  result.len = len;
+  result.precision = precision;
+
+  while (l0 >= 0)
+    {
+      result.val[l0] = ~val[l0];
+      l0--;
+    }
+  return result;
+}
+
+/* Return THIS | OP1.  */
+
+template <typename T>
+wide_int
+wide_int::operator | (const T &c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] | s[0];
+    }
+  else
+    result = or_large (s, cl);
+  return result;
+}
+
+
+/* Return THIS | ~OP1.  */
+
+template <typename T>
+wide_int
+wide_int::or_not (const T &c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+  
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] | ~s[0];
+    }
+  else
+    result = or_not_large (s, cl);
+  return result;
+}
+
+
+/* Return THIS ^ OP1.  */
+
+template <typename T>
+wide_int
+wide_int::operator ^ (const T &c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+  
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] ^ s[0];
+    }
+  else
+    result = xor_large (s, cl);
+  return result;
+}
+
+/*
+ * Integer arithmetic
+ */
+
+/* Return THIS + OP1.  */
+
+template <typename T>
+wide_int
+wide_int::operator + (const T &c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+  
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] + s[0];
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	result.val[0] = sext_hwi (result.val[0], precision);
+    }
+  else
+    result = add_large (s, cl);
+  return result;
+}
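
In the single-block fast path the sum wraps inside a HWI and, for a sub-HWI
precision, is pushed back into canonical sign-extended form with sext_hwi.  A
standalone sketch of an 8-bit addition (not part of the patch; add8 is a
hypothetical stand-in):

  #include <cstdint>
  #include <cstdio>

  /* Sketch of the sub-HWI fast path of operator +: add in a full HWI,
     then re-canonicalize to the 8-bit precision by sign extension.  */
  static int64_t
  add8 (int64_t a, int64_t b)
  {
    int64_t r = a + b;
    return (int64_t) ((uint64_t) r << 56) >> 56;  /* like sext_hwi (r, 8) */
  }

  int
  main ()
  {
    printf ("%lld\n", (long long) add8 (100, 100));  /* -56: 200 wrapped to 8 bits */
    printf ("%lld\n", (long long) add8 (-1, 1));     /* 0 */
    return 0;
  }
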
+
+/* Multiply THIS and OP1.  The result is the same precision as the
+   operands, so there is no reason for signed or unsigned
+   versions.  */
+
+template <typename T>
+wide_int
+wide_int::operator * (const T &c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+  bool overflow = false;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] * s[0];
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	result.val[0] = sext_hwi (result.val[0], precision);
+    }
+  else
+    result = mul_internal (false, false, this, s, cl, UNSIGNED, &overflow, false);
+  return result;
+}
+
+/* Multiply THIS and OP1.  The signedness is specified with SGN.
+   OVERFLOW is set true if the result overflows.  */
+
+template <typename T>
+wide_int 
+wide_int::mul (const T &c, SignOp sgn, bool *overflow) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  return mul_internal (false, false, this, s, cl, sgn, overflow, true);
+}
+
+/* Signed multiply THIS and OP1.  The result is the same precision as
+   the operands.  OVERFLOW is set true if the result overflows.  */
+
+template <typename T>
+wide_int
+wide_int::smul (const T &c, bool *overflow) const
+{
+  return mul (c, SIGNED, overflow);
+}
+
+/* Unsigned multiply THIS and OP1.  The result is the same precision
+   as the operands.  OVERFLOW is set true if the result overflows.  */
+
+template <typename T>
+wide_int
+wide_int::umul (const T &c, bool *overflow) const
+{
+  return mul (c, UNSIGNED, overflow);
+}
+
+/* Multiply THIS and OP1.  The result is twice the precision of the
+   operands.  The signedness is specified with SGN.  */
+
+template <typename T>
+wide_int
+wide_int::mul_full (const T &c, SignOp sgn) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  return mul_internal (false, true, this, s, cl, sgn, 0, false);
+}
+
+/* Signed multiply THIS and OP1.  The result is twice the precision as
+   the operands.  */
+
+template <typename T>
+wide_int
+wide_int::smul_full (const T &c) const
+{
+  return mul_full (c, SIGNED);
+}
+
+/* Unsigned multiply THIS and OP1.  The result is twice the precision
+   as the operands.  */
+
+template <typename T>
+wide_int
+wide_int::umul_full (const T &c) const
+{
+  return mul_full (c, UNSIGNED);
+}
+
+/* Multiply THIS and OP1 and return the high part of that result.  The
+   result is the same precision as the operands.  The signedness is
+   specified with SGN.  */
+
+template <typename T>
+wide_int
+wide_int::mul_high (const T &c, SignOp sgn) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  return mul_internal (true, false, this, s, cl, sgn, 0, false);
+}
+
+/* Negate THIS.  */
+
+wide_int
+wide_int::neg () const
+{
+  wide_int z = wide_int::from_shwi (0, precision);
+  return z - *this;
+}
+
+/* Negate THIS.  Set overflow if the value cannot be negated.  */
+
+wide_int
+wide_int::neg (bool *overflow) const
+{
+  wide_int z = wide_int::from_shwi (0, precision);
+  if (only_sign_bit_p ())
+    *overflow = true;
+
+  return z - *this;
+}
+
+/* Return THIS - OP1.  */
+
+template <typename T>
+wide_int
+wide_int::operator - (const T& c) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+  
+  if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.len = 1;
+      result.precision = precision;
+      result.val[0] = val[0] - s[0];
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	result.val[0] = sext_hwi (result.val[0], precision);
+    }
+  else
+    result = sub_large (s, cl);
+  return result;
+}
+
+
+
+/* Divide DIVISOR into THIS.  The result is the same size as the
+   operands.  The sign is specified in SGN.  The output is truncated.
+   Overflow is set to true if the result overflows, otherwise it is
+   not set.  */
+template <typename T>
+wide_int
+wide_int::div_trunc (const T &c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+  
+  return divmod_internal (true, this, s, cl, sgn, 
+			  &remainder, false, overflow);
+}
+
+/* Signed divide with truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::sdiv_trunc (const T &c) const
+{
+  return div_trunc (c, SIGNED);
+}
+
+/* Unsigned divide with truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::udiv_trunc (const T &c) const
+{
+  return div_trunc (c, UNSIGNED);
+}
+
+/* Divide DIVISOR into THIS.  The result is the same size as the
+   operands.  The sign is specified in SGN.  The output is floor
+   truncated.  Overflow is set to true if the result overflows,
+   otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::div_floor (const T &c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  wide_int quotient;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  quotient = divmod_internal (true, this, s, cl, sgn, 
+			      &remainder, true, overflow);
+  if (sgn == SIGNED && quotient.neg_p () && !remainder.zero_p ())
+    return quotient - 1;
+  return quotient;
+}
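
The adjustment after the truncating divide is what turns it into a floor: when
the quotient is negative and a remainder is left over, the quotient is stepped
down by one.  A standalone sketch of that adjustment on plain 64-bit integers
(not part of the patch; div_floor64 is a hypothetical stand-in mirroring the
quotient test above):

  #include <cstdint>
  #include <cassert>

  /* Mirror of the adjustment above: C division truncates toward zero,
     so a negative quotient with a remainder is stepped down by one.  */
  static int64_t
  div_floor64 (int64_t a, int64_t b)
  {
    int64_t q = a / b;
    int64_t r = a % b;
    if (q < 0 && r != 0)
      q -= 1;
    return q;
  }

  int
  main ()
  {
    assert (-7 / 2 == -3);               /* truncating division */
    assert (div_floor64 (-7, 2) == -4);  /* floor division */
    assert (div_floor64 (7, 2) == 3);
    return 0;
  }
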
+
+/* Unsigned divide with floor truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::udiv_floor (const T &c) const
+{
+  bool overflow;
+
+  return div_floor (c, UNSIGNED, &overflow);
+}
+
+/* Signed divide with floor truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::sdiv_floor (const T &c) const
+{
+  bool overflow;
+
+  return div_floor (c, SIGNED, &overflow);
+}
+
+/* Divide DIVISOR into THIS.  The result is the same size as the
+   operands.  The sign is specified in SGN.  The output is ceil
+   truncated.  Overflow is set to true if the result overflows,
+   otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::div_ceil (const T &c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  wide_int quotient;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  quotient = divmod_internal (true, this, s, cl, sgn, 
+			      &remainder, true, overflow);
+
+  if (!remainder.zero_p ())
+    {
+      if (sgn == UNSIGNED || quotient.neg_p ())
+	return quotient;
+      else
+	return quotient + 1;
+    }
+  return quotient;
+}
+
+/* Divide DIVISOR into THIS.  The result is the same size as the
+   operands.  The sign is specified in SGN.  The output is round
+   truncated.  Overflow is set to true if the result overflows,
+   otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::div_round (const T &c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  wide_int quotient;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  quotient = divmod_internal (true, this, s, cl, sgn, 
+			      &remainder, true, overflow);
+  if (!remainder.zero_p ())
+    {
+      wide_int divisor = wide_int::from_array (s, cl, precision);
+      if (sgn == SIGNED)
+	{
+	  wide_int p_remainder = remainder.neg_p () ? remainder.neg () : remainder;
+	  wide_int p_divisor = divisor.neg_p () ? divisor.neg () : divisor;
+	  p_divisor = p_divisor.rshiftu_large (1);
+	  
+	  if (p_divisor.gts_p (p_remainder)) 
+	    {
+	      if (quotient.neg_p ())
+		return quotient - 1;
+	      else 
+		return quotient + 1;
+	    }
+	}
+      else
+	{
+	  wide_int p_divisor = divisor.rshiftu_large (1);
+	  if (p_divisor.gtu_p (remainder))
+	    return quotient + 1;
+	}
+    }
+  return quotient;
+}
+
+/* Divide DIVISOR into THIS producing both the quotient and remainder.
+   The result is the same size as the operands.  The sign is specified
+   in SGN.  The output is truncated.  */
+
+template <typename T>
+wide_int
+wide_int::divmod_trunc (const T &c, wide_int *remainder, SignOp sgn) const
+{
+  bool overflow = false;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  return divmod_internal (true, this, s, cl, sgn, 
+			  remainder, true, &overflow);
+}
+
+/* Signed divide/mod with truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::sdivmod_trunc (const T &c, wide_int *mod) const
+{
+  return divmod_trunc (c, mod, SIGNED);
+}
+
+/* Unsigned divide/mod with truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::udivmod_trunc (const T &c, wide_int *mod) const
+{
+  return divmod_trunc (c, mod, UNSIGNED);
+}
+
+/* Divide DIVISOR into THIS.  The remainder is also produced in
+   REMAINDER.  The result is the same size as the operands.  The sign
+   is specified in SGN.  The output is floor truncated.  */
+
+template <typename T>
+wide_int
+wide_int::divmod_floor (const T &c, wide_int *remainder, SignOp sgn) const
+{
+  wide_int quotient;
+  bool overflow = false;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+
+  quotient = divmod_internal (true, this, s, cl, sgn, 
+			      remainder, true, &overflow);
+  if (sgn == SIGNED && quotient.neg_p () && !(*remainder).zero_p ())
+    {
+      *remainder = *remainder - wide_int::from_array (s, cl, precision);
+      return quotient - 1;
+    }
+  return quotient;
+}
+
+/* Signed divide/mod with floor truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::sdivmod_floor (const T &c, wide_int *mod) const
+{
+  return divmod_floor (c, mod, SIGNED);
+}
+
+/* Divide DIVISOR into THIS producing the remainder.  The result is
+   the same size as the operands.  The sign is specified in SGN.  The
+   output is truncated.  Overflow is set to true if the result
+   overflows, otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::mod_trunc (const T &c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  divmod_internal (true, this, s, cl, sgn, 
+			  &remainder, true, overflow);
+  return remainder;
+}
+
+/* Signed mod with truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::smod_trunc (const T &c) const
+{
+  return mod_trunc (c, SIGNED);
+}
+
+/* Unsigned mod with truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::umod_trunc (const T &c) const
+{
+  return mod_trunc (c, UNSIGNED);
+}
+
+/* Divide DIVISOR into THIS producing the remainder.  The result is
+   the same size as the operands.  The sign is specified in SGN.  The
+   output is floor truncated.  Overflow is set to true if the result
+   overflows, otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::mod_floor (const T &c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  wide_int quotient;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  quotient = divmod_internal (true, this, s, cl, sgn, 
+			      &remainder, true, overflow);
+
+  if (sgn == SIGNED && quotient.neg_p () && !remainder.zero_p ())
+    return remainder - wide_int::from_array (s, cl, precision);
+  return remainder;
+}
+
+/* Unsigned mod with floor truncation of result.  */
+
+template <typename T>
+wide_int
+wide_int::umod_floor (const T &c) const
+{
+  bool overflow;
+
+  return mod_floor (c, UNSIGNED, &overflow);
+}
+
+/* Divide DIVISOR into THIS producing the remainder.  The result is the
+   same size as the operands.  The sign is specified in SGN.  The
+   output is ceil truncated.  Overflow is set to true if the result
+   overflows, otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::mod_ceil (const T &c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  wide_int quotient;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  quotient = divmod_internal (true, this, s, cl, sgn, 
+			      &remainder, true, overflow);
+
+  if (!remainder.zero_p ())
+    {
+      if (sgn == UNSIGNED || quotient.neg_p ())
+	return remainder;
+      else
+	return remainder - wide_int::from_array (s, cl, precision);
+    }
+  return remainder;
+}
+
+/* Divide DIVISOR into THIS producing the remainder.  The result is
+   the same size as the operands.  The sign is specified in SGN.  The
+   output is round truncated.  Overflow is set to true if the result
+   overflows, otherwise it is not set.  */
+
+template <typename T>
+wide_int
+wide_int::mod_round (const T &c, SignOp sgn, bool *overflow) const
+{
+  wide_int remainder;
+  wide_int quotient;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+  gcc_checking_assert (prec == 0 || prec == precision);
+
+  quotient = divmod_internal (true, this, s, cl, sgn, 
+			      &remainder, true, overflow);
+
+  if (!remainder.zero_p ())
+    {
+      wide_int divisor = wide_int::from_array (s, cl, precision);
+      if (sgn == SIGNED)
+	{
+	  wide_int p_remainder = remainder.neg_p () ? remainder.neg () : remainder;
+	  wide_int p_divisor = divisor.neg_p () ? divisor.neg () : divisor;
+	  p_divisor = p_divisor.rshiftu_large (1);
+	  
+	  if (p_divisor.gts_p (p_remainder)) 
+	    {
+	      if (quotient.neg_p ())
+		return remainder + divisor;
+	      else 
+		return remainder - divisor;
+	    }
+	}
+      else
+	{
+	  wide_int p_divisor = divisor.rshiftu_large (1);
+	  if (p_divisor.gtu_p (remainder))
+	    return remainder - divisor;
+	}
+    }
+  return remainder;
+}
+
+
+/* If SHIFT_COUNT_TRUNCATED is defined, truncate CNT.   
+
+   At first glance, the shift truncation code does not look right.
+   Shifts (and rotates) are done according to the precision of the
+   mode but the shift count is truncated according to the bitsize of
+   the mode.  This is how real hardware works (Knuth's MIX machine is
+   the only known exception to this rule, but it was never real).
+
+   On an ideal machine, like Knuth's MIX machine, a shift count is a
+   word long and all of the bits of that word are examined to compute
+   the shift amount.  But on real hardware, especially on machines
+   with fast (single cycle) shifts, that takes too long.  On these
+   machines, the amount of time to perform a shift dictates the cycle
+   time of the machine, so corners are cut to keep this fast.  A
+   comparison of an entire 64-bit word would take something like 6
+   gate delays before the shifting can even start.
+
+   So real hardware only looks at a small part of the shift amount.
+   On IBM machines, this tends to be 1 bit more than what is necessary
+   to encode the shift amount.  The rest of the world looks at only
+   the minimum number of bits.  This means that only 3 gate delays are
+   necessary to set up the shifter.
+
+   On the other hand, right shifts and rotates must be according to
+   the precision or the operation does not make any sense. 
+
+   This function is called in two contexts.  If OP == TRUNC, this
+   function provides a count that matches the semantics of the target
+   machine depending on the value of SHIFT_COUNT_TRUNCATED.  Note that
+   if SHIFT_COUNT_TRUNCATED is not defined, this function may produce
+   -1 as a value if the shift amount is greater than the bitsize of
+   the mode.  -1 is a surrogate for a very large amount.
+
+   If OP == NONE, then this function always truncates the shift value
+   to the bitsize because this shifting operation is a function that
+   is internal to GCC.  */
+
+inline int
+wide_int::trunc_shift (const HOST_WIDE_INT *cnt, unsigned int len ATTRIBUTE_UNUSED,
+		       unsigned int bitsize, ShiftOp trunc_op)
+{
+  if (trunc_op == TRUNC)
+    {
+      gcc_checking_assert (bitsize != 0);
+#ifdef SHIFT_COUNT_TRUNCATED
+      return cnt[0] & (bitsize - 1);
+#else
+      if (cnt[0] < bitsize && cnt[0] >= 0 && len == 1)
+	return cnt[0];
+      else 
+	return -1;
+#endif
+    }
+  else if (bitsize == 0)
+    return cnt[0];
+  else
+    return cnt[0] & (bitsize - 1);
+}
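
The two behaviours the comment describes can be seen in a standalone sketch
(not part of the patch; the target macro is modeled by a plain flag and the
bitsize fixed at 32): with truncation the count is simply masked, without it an
out-of-range count becomes -1, the surrogate for "very large".

  #include <cstdio>

  /* Standalone sketch of the TRUNC behaviour for a 32-bit bitsize;
     SHIFT_COUNT_TRUNCATED is modeled by the boolean argument.  */
  static int
  trunc_shift_32 (long cnt, bool shift_count_truncated)
  {
    if (shift_count_truncated)
      return cnt & 31;                                /* hardware-style masking */
    return (cnt >= 0 && cnt < 32) ? (int) cnt : -1;   /* -1: "very large" */
  }

  int
  main ()
  {
    printf ("%d\n", trunc_shift_32 (33, true));   /* 1 */
    printf ("%d\n", trunc_shift_32 (33, false));  /* -1 */
    return 0;
  }
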
+
+/* Left shift THIS by C.  See the comments at trunc_shift for how to
+   set TRUNC_OP.  Since this is used internally, it has the ability to
+   specify the BITSIZE and PRECISION independently.  This is useful
+   when inserting a small value into a larger one.  */
+
+template <typename T>
+wide_int
+wide_int::lshift (const T &c, unsigned int bitsize, 
+		  unsigned int res_prec, ShiftOp trunc_op) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+  HOST_WIDE_INT shift;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+
+  if (res_prec == 0)
+    res_prec = precision;
+
+  shift = trunc_shift (s, cl, bitsize, trunc_op);
+  if (shift == -1)
+    result = wide_int::zero (precision);
+  else if (shift == 0 && res_prec == precision)
+    result = *this;
+  /* Handle the simple case quickly.   */
+  else if (res_prec <= HOST_BITS_PER_WIDE_INT)
+    {
+      result.precision = res_prec;
+      result.len = 1;
+      result.val[0] = val[0] << shift;
+    }
+  else
+    result = lshift_large (shift, res_prec);
+  return result;
+}
+
+/* Rotate THIS left by Y within its precision.  */
+
+template <typename T>
+wide_int
+wide_int::lrotate (const T &c) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+
+  return lrotate ((unsigned HOST_WIDE_INT)s[0]);
+}
+
+
+/* Rotate THIS left by CNT within its precision.  */
+
+wide_int
+wide_int::lrotate (unsigned HOST_WIDE_INT cnt) const
+{
+  wide_int left, right, result;
+
+  left = lshift (cnt);
+  right = rshiftu (precision - cnt);
+  result = left | right;
+
+  return result;
+}
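
The rotate really is just the OR of the two complementary shifts within the
precision.  A standalone 8-bit sketch (not part of the patch; lrotate8 is a
hypothetical stand-in, and masking the count to the precision is an assumption
of the sketch):

  #include <cstdint>
  #include <cassert>

  /* Left rotate within an 8-bit precision: left shift OR'd with the
     complementary logical right shift, truncated back to 8 bits.  */
  static uint8_t
  lrotate8 (uint8_t x, unsigned int cnt)
  {
    cnt &= 7;    /* sketch assumption: keep the count in range */
    return (uint8_t) ((x << cnt) | (x >> (8 - cnt)));
  }

  int
  main ()
  {
    assert (lrotate8 (0x81, 1) == 0x03);  /* 1000 0001 -> 0000 0011 */
    assert (lrotate8 (0x2d, 4) == 0xd2);  /* 0010 1101 -> 1101 0010 */
    return 0;
  }
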
+
+
+/* Right shift THIS by Y.  SGN indicates the sign.  TRUNC_OP indicates the
+   truncation option.  */
+
+template <typename T>
+wide_int
+wide_int::rshift (const T &c, SignOp sgn, 
+		  unsigned int bitsize, ShiftOp trunc_op) const
+{
+  if (sgn == UNSIGNED)
+    return rshiftu (c, bitsize, trunc_op);
+  else
+    return rshifts (c, bitsize, trunc_op);
+}
+
+
+/* Unsigned right shift THIS by C.  See the comments at trunc_shift
+   for how to set TRUNC_OP.  */
+
+template <typename T>
+wide_int
+wide_int::rshiftu (const T &c, unsigned int bitsize,
+		   ShiftOp trunc_op) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+  HOST_WIDE_INT shift;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+
+  shift = trunc_shift (s, cl, bitsize, trunc_op);
+  
+  if (shift == 0)
+    result = copy ();
+  else if (shift == -1)
+    result = wide_int::zero (precision);
+  else if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      /* Handle the simple case quickly.   */
+      unsigned HOST_WIDE_INT x = val[0];
+
+      result.precision = precision;
+      result.len = 1;
+
+      if (precision < HOST_BITS_PER_WIDE_INT)
+	x = zext_hwi (x, precision);
+
+      result.val[0] = x >> shift;
+    }
+  else 
+    result = rshiftu_large (shift);
+  return result;
+}
+
+/* Signed right shift THIS by C.  See the comments at trunc_shift for
+   how to set TRUNC_OP.  */
+
+template <typename T>
+wide_int
+wide_int::rshifts (const T &c, unsigned int bitsize,
+		   ShiftOp trunc_op) const
+{
+  wide_int result;
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+  HOST_WIDE_INT shift;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+
+  shift = trunc_shift (s, cl, bitsize, trunc_op);
+  
+  if (shift == 0)
+    result = copy ();
+  else if (shift == -1)
+    result = wide_int::zero (precision);
+  else if (precision <= HOST_BITS_PER_WIDE_INT)
+    {
+      /* Handle the simple case quickly.   */
+      HOST_WIDE_INT x = val[0];
+
+      result.precision = precision;
+      result.len = 1;
+      result.val[0] = x >> shift;
+    }
+  else 
+    result = rshifts_large (shift);
+  return result;
+}
+
+/* Rotate THIS right by Y within its precision.  */
+
+
+template <typename T>
+wide_int
+wide_int::rrotate (const T &c) const
+{
+  HOST_WIDE_INT ws[WIDE_INT_MAX_ELTS];
+  const HOST_WIDE_INT *s;
+  unsigned int cl;
+  unsigned int prec;
+
+  s = to_shwi2 (ws, &cl, &prec, c);
+
+  return rrotate ((unsigned HOST_WIDE_INT) s[0]);
+}
+
+
+/* Rotate THIS right by CNT within its precision.  */
+
+wide_int
+wide_int::rrotate (unsigned HOST_WIDE_INT cnt) const
+{
+  wide_int left, right, result;
+
+  left = lshift (precision - cnt);
+  right = rshiftu (cnt);
+  result = left | right;
+
+  return result;
+}
+
+/* tree related routines.  */
+
+extern tree wide_int_to_tree (tree type, const wide_int &cst);
+extern tree wide_int_to_infinite_tree (tree type, const wide_int &cst, 
+				       unsigned int prec);
+extern tree force_fit_type_wide (tree, const wide_int &, int, bool);
+
+/* real related routines.  */
+extern wide_int real_to_integer (const REAL_VALUE_TYPE *, bool *, int);
+extern wide_int decimal_real_to_integer (const REAL_VALUE_TYPE *, bool *, int);
+
+
+/* Conversion to and from GMP integer representations.  */
+
+void mpz_set_wide_int (mpz_t, wide_int, bool);
+wide_int mpz_get_wide_int (const_tree, mpz_t, bool);
+#endif /* GENERATOR FILE */
+
+#endif /* WIDE_INT_H */

Attachment: p4-7.clog
Description: Text document

