


Re: [PATCH][RFC] Bit CCP and pointer alignment propagation


On 07/30/2010 02:39 PM, Richard Guenther wrote:
>>> +     case LSHIFT_EXPR:
>>> +     case RSHIFT_EXPR:
>>> +       if (r2val.lattice_val == CONSTANT
>>> + 	   && double_int_zero_p (r2val.bit_lattice_val))

>> Even if some bits are varying, the result will have at least
>> r2val.value & ~r2val.bit_lattice_val (the known-one bits of the
>> shift count) bits shifted in.  So you can do something like

>>        if (r2val.lattice_val != CONSTANT)
>>          break;
>>        if (double_int_zero_p (r2val.bit_lattice_val)) {
>>          /* Shift count fully known.  */
>>          in_mask = r1val.bit_lattice_val;
>>          in_value = r1val.value;
>>        } else {
>>          /* Shift count only partially known: pretend the input is
>>             all-varying; the shift below still yields known low bits.  */
>>          in_mask = double_int_minus_one;
>>          in_value = double_int_minus_one;
>>        }
>>
>>        /* We shift by at least the known-one bits of the count.  */
>>        shift = r2val.value & ~r2val.bit_lattice_val;
>>        if (SHIFT_COUNT_TRUNCATED)
>>          shift &= GET_MODE_BITSIZE (TYPE_MODE (type)) - 1;
>>
>>        val.lattice_val = CONSTANT;
>>        val.bit_lattice_val
>> 	 = double_int_lshift (in_mask, shift,
>> 			      TYPE_PRECISION (type), false);
>>        val.value = double_int_lshift (in_value, shift,
>> 				       TYPE_PRECISION (type), false);

> Hmm.  I don't quite understand.  Are you saying that only the lower
> bits of r2val are important if SHIFT_COUNT_TRUNCATED?  At least we
> need to know the sign of the shift amount to be able to tell if we're
> shifting in zeros from the right or ones from the left.

Well, yes. :)


I wonder if using negative shift counts is just a useless complication, since that's what got me confused. If you just make "shift" a positive shift count and test the tree code rather than the sign of shift, the code I gave above to handle SHIFT_COUNT_TRUNCATED magically starts to work.
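Concretely, something like this (just an untested sketch stitching the pieces above together, not the patch itself; "code" here names the tree code being handled, "uns" the signedness flag that comes up later in this thread):

      if (r2val.lattice_val != CONSTANT)
	break;
      if (double_int_zero_p (r2val.bit_lattice_val))
	{
	  in_mask = r1val.bit_lattice_val;
	  in_value = r1val.value;
	}
      else
	{
	  in_mask = double_int_minus_one;
	  in_value = double_int_minus_one;
	}

      /* Non-negative by construction: the known-one bits of the count.  */
      shift = r2val.value & ~r2val.bit_lattice_val;
      if (SHIFT_COUNT_TRUNCATED)
	shift &= GET_MODE_BITSIZE (TYPE_MODE (type)) - 1;

      val.lattice_val = CONSTANT;
      if (code == LSHIFT_EXPR)
	{
	  val.bit_lattice_val
	    = double_int_lshift (in_mask, shift, TYPE_PRECISION (type), false);
	  val.value
	    = double_int_lshift (in_value, shift, TYPE_PRECISION (type), false);
	}
      else /* RSHIFT_EXPR */
	{
	  /* The right "arith" flag here is discussed below.  */
	  val.bit_lattice_val
	    = double_int_rshift (in_mask, shift, TYPE_PRECISION (type), !uns);
	  val.value
	    = double_int_rshift (in_value, shift, TYPE_PRECISION (type), !uns);
	}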

>>> + 	  else if (shift < 0)
>>> + 	    {
>>> + 	      val.lattice_val = CONSTANT;
>>> + 	      val.bit_lattice_val
>>> + 		= double_int_rshift (r1val.bit_lattice_val,
>>> + 				     -shift, TYPE_PRECISION (type), true);
>>> + 	      val.value = double_int_rshift (r1val.value, -shift,
>>> + 					     TYPE_PRECISION (type), true);
>>> + 	    }

>> Here shifted in bits are varying for signed types (unless the sign
>> bit is constant, but that's not handled here either).
>>
>> That should be handled fine by using an arithmetic shift for both
>> the lattice and the value.

> Aha, right.  So for varying lattice "sign" bit we shift in varying.
> Hm, but indeed for unsigned constants we shift in the sign bit -
> fixed to
>
>            else if (shift < 0)
>              {
>                val.bit_lattice_val
>                  = double_int_rshift (r1val.bit_lattice_val,
>                                       -shift, TYPE_PRECISION (type), true);

The last argument should be !uns here too, since for unsigned values the shifted-in bits are constant (zero). But it's much more clever than I thought: if the topmost bit of the lattice is constant, the newly computed lattice is also constant. Very nice.


This unfortunately wouldn't work if r1val.bit_lattice_val is changed to in_mask as in my example above. To handle that you could use

   msb_constant = uns || !double_bit_set_p (r1val.bit_lattice_val,
                                            TYPE_PRECISION (type) - 1)

and then you pass !msb_constant to this right shift. I'll let you decide whether it's overkill, but in any case the current clever code needs documentation.
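As a toy illustration of that cleverness (an editorial example, standalone C rather than double_int; a set mask bit means "varying"):

  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* Top four bits known to be 0110, low four bits varying.  */
    int8_t value = 0x60, mask = 0x0f;
    /* Shift value and mask together.  The shifted-in mask bits
       replicate the mask's top bit, 0 here, so the new top bits stay
       known; a varying sign (mask top bit 1) would shift in "varying"
       on targets where >> of a negative int8_t is arithmetic.  */
    int8_t v2 = value >> 2, m2 = mask >> 2;
    printf ("value %02x mask %02x\n", (uint8_t) v2, (uint8_t) m2);
    /* Prints: value 18 mask 03 - the two shifted-in bits are known.  */
    return 0;
  }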

Just remember to add testcases covering shifting of unsigned values, positive signed values, and negative signed values. Also file a missing optimization bug for e.g. (x << (y | 8)) & 255 if you don't include my suggestions.
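For the record, the reasoning behind that example (f is just an illustrative name): y | 8 has bit 3 known set, so the shift count is at least 8, hence the low 8 bits of x << (y | 8) are known zero and the masking with 255 leaves nothing:

  unsigned
  f (unsigned x, unsigned y)
  {
    /* The shift count (y | 8) is at least 8; for any valid count
       (< 32) the low 8 bits of the result are zero, so this should
       fold to "return 0".  */
    return (x << (y | 8)) & 255;
  }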

>>> + 	  unsigned r1ctz = 0, r2ctz = 0;
>>> + 	  while (r1ctz < HOST_BITS_PER_WIDE_INT
>>> + 		 && !(r1val.bit_lattice_val.low & (1 << r1ctz))
>>> + 		 && !(r1val.value.low & (1 << r1ctz)))
>>> + 	    r1ctz++;
>>> + 	  while (r2ctz < HOST_BITS_PER_WIDE_INT
>>> + 		 && !(r2val.bit_lattice_val.low & (1 << r2ctz))
>>> + 		 && !(r2val.value.low & (1 << r2ctz)))
>>> + 	    r2ctz++;

>> This is just a ctz on (v | m), no?  This makes it easier to track
>> high bits as well.

> Yes.  Probably worth adding double_int_ctz.

Agreed.
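One possible shape for it (a hypothetical sketch using double_int's low/high fields, not an existing function at the time of writing):

  /* Count trailing zeros of VAL; 2 * HOST_BITS_PER_WIDE_INT if VAL
     is zero.  */
  static unsigned
  double_int_ctz (double_int val)
  {
    unsigned HOST_WIDE_INT w = val.low;
    unsigned bits = 0;
    if (w == 0)
      {
	w = val.high;
	bits = HOST_BITS_PER_WIDE_INT;
	if (w == 0)
	  return 2 * HOST_BITS_PER_WIDE_INT;
      }
    while (!(w & 1))
      {
	w >>= 1;
	bits++;
      }
    return bits;
  }

The two loops above then collapse to one such ctz of value | bit_lattice_val per operand.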


Paolo

