This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [PATCH][RFC] Bit CCP and pointer alignment propagation


On Fri, 30 Jul 2010, Paolo Bonzini wrote:

> > + 	CONSTANT	->   V_i has been found to hold a constant
> > + 			    value C.
> 
> "Some bits of V_i have been found to..."

Done.

> Or just remove/rewrite the head comment and move that comment here:
> 
> > + /* Possible lattice values.  */
> > + typedef enum
> > + {
> > +   UNINITIALIZED,
> > +   UNDEFINED,
> > +   CONSTANT,
> > +   VARYING
> > + } ccp_lattice_t;
> 
> > + 	  val.bit_lattice_val
> > + 	    = double_int_and
> > + 	        (double_int_ior (r1val.bit_lattice_val,
> > + 				 r2val.bit_lattice_val),
> > + 		 double_int_and (double_int_ior (r1val.value,
> > + 						 r1val.bit_lattice_val),
> > + 				 double_int_ior (r2val.value,
> > + 						 r2val.bit_lattice_val)));
> > + 	  if (!double_int_minus_one_p (val.bit_lattice_val))
> > + 	    val.lattice_val = CONSTANT;
> 
> In other places of this function you just set it to CONSTANT without the if.
> You can just set the lattice_val to CONSTANT here:
> 
> > +   bit_prop_value_t val = { VARYING, { -1, -1 }, { 0, 0 } };
> 
> and change it to VARYING ("if minus one then VARYING") at the end of the
> function.  If no computation is done in the function, the bit_lattice_val will
> stay -1.

True.  I changed it to an initial { CONSTANT, { -1, -1 }, { 0, 0 } } and
drop to VARYING at the end.  So I don't need to set the lattice value
at all.
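[Editor's note: the combination quoted above appears to be the BIT_AND_EXPR case of the lattice transfer function: a result bit stays unknown only where it is unknown in one operand and not already forced to zero by a known-zero bit in the other.  A minimal stand-alone model of that formula (the `bits_t` type and `bit_and_known` name are invented here for illustration; a set mask bit means "varying"):]

```c
#include <stdint.h>

/* Toy model of the patch's lattice element: VALUE holds the known bit
   values, MASK has a 1 for every bit whose value is unknown.  */
typedef struct { uint64_t value; uint64_t mask; } bits_t;

/* Mirror of the quoted BIT_AND_EXPR combination: a result bit is
   unknown iff it is unknown in either operand (m1 | m2) and could
   still be 1 in both operands ((v | m) covers "possibly one").  */
static bits_t
bit_and_known (bits_t a, bits_t b)
{
  bits_t r;
  r.mask = (a.mask | b.mask)
	   & (a.value | a.mask)
	   & (b.value | b.mask);
  /* Keep unknown value bits canonically zero.  */
  r.value = a.value & b.value & ~r.mask;
  return r;
}
```

So a known-zero bit in one operand yields a known-zero result bit even when the other operand's bit is varying.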

> > +     case LSHIFT_EXPR:
> > +     case RSHIFT_EXPR:
> > +       if (r2val.lattice_val == CONSTANT
> > + 	  && double_int_zero_p (r2val.bit_lattice_val))
> 
> Even if some bits are varying, the result will have at least r2val.value &
> ~r2val.bit_lattice_val bits shifted in.  So you can do something like
> 
>       if (r2val.lattice_val != CONSTANT)
>         break;
>       if (double_int_zero_p (r2val.bit_lattice_val)) {
>         in_mask = r1val.bit_lattice_val;
>         in_value = r1val.value;
>       } else {
>         in_mask = double_int_minus_one;
>         in_value = double_int_minus_one;
>       }
> 
>       shift = r2val.value & ~r2val.bit_lattice_val;
>       if (SHIFT_COUNT_TRUNCATED)
>         shift &= GET_MODE_BITSIZE (TYPE_MODE (type)) - 1;
> 
>       val.lattice_val = CONSTANT;
>       val.bit_lattice_val
> 	= double_int_lshift (in_mask, shift,
> 			     TYPE_PRECISION (type), false);
>       val.value = double_int_lshift (in_value, shift,
> 				     TYPE_PRECISION (type), false);

Hmm.  I don't quite understand.  Are you saying that only the
lower bits of r2val are important if SHIFT_COUNT_TRUNCATED?
At least we need to know the sign of the shift amount
to be able to tell if we're shifting in zeros from the right
or ones from the left.
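[Editor's note: for the fully-known-count part of the suggestion, the transfer function is straightforward -- shift value and mask alike, and the zeros shifted in from the right are known bits.  A toy 8-bit sketch (the `bits8_t` type and `lshift_known` name are made up; SHIFT_COUNT_TRUNCATED is modelled by masking the count with bitsize - 1):]

```c
#include <stdint.h>

/* Toy lattice element over an 8-bit type: a set MASK bit is unknown.  */
typedef struct { uint8_t value; uint8_t mask; } bits8_t;

/* Left shift by a fully known count: both the known values and the
   unknown-bit mask move left together; vacated low bits are known
   zeros.  With shift-count truncation the count is reduced modulo
   the mode bitsize first.  */
static bits8_t
lshift_known (bits8_t a, unsigned count, int shift_count_truncated)
{
  bits8_t r;
  if (shift_count_truncated)
    count &= 8 - 1;	/* stand-in for GET_MODE_BITSIZE - 1 */
  r.value = (uint8_t) (a.value << count);
  r.mask = (uint8_t) (a.mask << count);
  return r;
}
```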

> > + 	  else if (shift < 0)
> > + 	    {
> > + 	      val.lattice_val = CONSTANT;
> > + 	      val.bit_lattice_val
> > + 		= double_int_rshift (r1val.bit_lattice_val,
> > + 				     -shift, TYPE_PRECISION (type), true);
> > + 	      val.value = double_int_rshift (r1val.value, shift,
> > + 					     TYPE_PRECISION (type), true);
> > + 	    }
> 
> Here shifted in bits are varying for signed types (unless the sign bit is
> constant, but that's not handled here either).

That should be handled fine by using an arithmetic shift for both
the lattice and the value.  So for a varying lattice "sign" bit we
shift in varying bits.  Hm, but indeed for unsigned constants we shift
in the sign bit - fixed to

          else if (shift < 0)
            {
              val.bit_lattice_val
                = double_int_rshift (r1val.bit_lattice_val,
                                     -shift, TYPE_PRECISION (type), true);
              val.value = double_int_rshift (r1val.value, -shift,
                                             TYPE_PRECISION (type), !uns);

(the typo of passing 'shift' instead of '-shift' is fixed as well)
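[Editor's note: the fix can be modelled in a few lines -- the mask is always shifted arithmetically, so an unknown sign bit replicates into the vacated positions, while the value replicates its sign bit only for signed types.  An 8-bit sketch with invented names (`bits8_t`, `rshift_known`), assuming the usual arithmetic behaviour of `>>` on negative signed values:]

```c
#include <stdint.h>

/* Toy lattice element over an 8-bit type: a set MASK bit is unknown.  */
typedef struct { uint8_t value; uint8_t mask; } bits8_t;

/* Right shift by a known positive count.  The unknown-bit MASK is
   shifted arithmetically unconditionally, so a varying sign bit
   shifts in varying bits; the VALUE shifts in copies of its sign bit
   only for signed types, otherwise known zeros.  */
static bits8_t
rshift_known (bits8_t a, unsigned count, int is_unsigned)
{
  bits8_t r;
  r.mask = (uint8_t) ((int8_t) a.mask >> count);	 /* arithmetic */
  if (is_unsigned)
    r.value = (uint8_t) (a.value >> count);		 /* logical */
  else
    r.value = (uint8_t) ((int8_t) a.value >> count);	 /* arithmetic */
  return r;
}
```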

> > + 	  unsigned r1ctz = 0, r2ctz = 0;
> > + 	  while (r1ctz < HOST_BITS_PER_WIDE_INT
> > + 		 && !(r1val.bit_lattice_val.low & (1 << r1ctz))
> > + 		 && !(r1val.value.low & (1 << r1ctz)))
> > + 	    r1ctz++;
> > + 	  while (r2ctz < HOST_BITS_PER_WIDE_INT
> > + 		 && !(r2val.bit_lattice_val.low & (1 << r2ctz))
> > + 		 && !(r2val.value.low & (1 << r2ctz)))
> > + 	    r2ctz++;
> 
> This is just a ctz on (v | m), no?  This makes it easier to track high bits as
> well.

Yes.  Probably worth adding double_int_ctz.
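[Editor's note: the simplification works because a trailing bit is known zero iff it is neither set in the value nor unknown, so counting trailing zeros of (value | mask) gives the number of known trailing zero bits directly.  A portable loop stands in for the hypothetical double_int_ctz:]

```c
#include <stdint.h>

/* Number of known trailing zero bits of a lattice element:
   ctz (value | mask), capped at the precision.  A bit contributes
   iff it is neither known-one (value) nor varying (mask).  */
static unsigned
known_trailing_zeros (uint64_t value, uint64_t mask, unsigned prec)
{
  uint64_t bits = value | mask;
  unsigned n = 0;
  while (n < prec && !(bits & ((uint64_t) 1 << n)))
    n++;
  return n;
}
```

Using the full 64-bit word also tracks high bits, which the two low-word loops above cannot.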

> I'm sure that you can optimize the addition, I'll think about it.

I thought about it as well and decided to be safe for now ;)
But iterating something with full-width operations should be possible.
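[Editor's note: one classic non-iterating, full-width formulation -- not necessarily what either author had in mind -- adds once with all unknown bits taken as 0 (minimal carries) and once with them taken as 1 (maximal carries); a result bit is known only if both input bits and its carry-in are known, i.e. the two sums agree there.  Sketch with an invented `bits_t` type:]

```c
#include <stdint.h>

/* Lattice element: a set MASK bit is unknown.  */
typedef struct { uint64_t value; uint64_t mask; } bits_t;

/* Addition on partially known bits in O(1) full-width operations.  */
static bits_t
add_known (bits_t a, bits_t b)
{
  /* Sum with unknown bits as zero: carry-ins minimised.  */
  uint64_t lo = a.value + b.value;
  /* Sum with unknown bits as one: carry-ins maximised.  */
  uint64_t hi = (a.value | a.mask) + (b.value | b.mask);
  bits_t r;
  /* Unknown where an input bit is unknown or the carry-in differs
     between the two extreme sums.  */
  r.mask = a.mask | b.mask | (lo ^ hi);
  r.value = lo & ~r.mask;
  return r;
}
```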

Thanks for the review,
Richard.

