This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [PATCH][RFC] Bit CCP and pointer alignment propagation


+ 	CONSTANT	->   V_i has been found to hold a constant
+ 			    value C.

"Some bits of V_i have been found to..."


Or just remove/rewrite the head comment and move that comment here:

+ /* Possible lattice values.  */
+ typedef enum
+ {
+   UNINITIALIZED,
+   UNDEFINED,
+   CONSTANT,
+   VARYING
+ } ccp_lattice_t;
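For concreteness, here is a runnable uint64_t model of the representation the patch uses (illustrative only, not the actual double_int code; it assumes bit_lattice_val is a mask whose set bits mean "this bit is varying"):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the lattice: each value is a pair (value, mask) where a set
   mask bit means that bit is varying and a clear mask bit means the bit
   equals the corresponding value bit.  mask == 0 is a fully known
   constant; mask == -1 carries no information at all (VARYING).  */
struct bit_value { uint64_t value, mask; };

/* Bitwise AND on this lattice, mirroring the double_int expression
   quoted below: a result bit is varying iff it is varying in some
   operand and no operand has it known to be zero.  */
static struct bit_value
bit_and (struct bit_value a, struct bit_value b)
{
  struct bit_value r;
  r.mask = (a.mask | b.mask) & (a.value | a.mask) & (b.value | b.mask);
  /* Canonicalize: varying bits of the value are kept at zero.  */
  r.value = a.value & b.value & ~r.mask;
  return r;
}
```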

+ 	  val.bit_lattice_val
+ 	    = double_int_and
+ 	        (double_int_ior (r1val.bit_lattice_val,
+ 				 r2val.bit_lattice_val),
+ 		 double_int_and (double_int_ior (r1val.value,
+ 						 r1val.bit_lattice_val),
+ 				 double_int_ior (r2val.value,
+ 						 r2val.bit_lattice_val)));
+ 	  if (!double_int_minus_one_p (val.bit_lattice_val))
+ 	    val.lattice_val = CONSTANT;

In other places in this function you just set it to CONSTANT without the if. You can just set lattice_val to CONSTANT here too:


+ bit_prop_value_t val = { VARYING, { -1, -1 }, { 0, 0 } };

and change it to VARYING ("if minus one then VARYING") at the end of the function. If no computation is done in the function, the bit_lattice_val will stay -1.
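The suggested control flow can be sketched like this (illustrative names, uint64_t in place of double_int): every case sets lattice_val to CONSTANT unconditionally, and a single check at the end canonicalizes an all-ones mask back to VARYING.

```c
#include <assert.h>
#include <stdint.h>

enum lat { UNINITIALIZED, UNDEFINED, CONSTANT, VARYING };

struct bit_value { uint64_t value, mask; };
struct result { enum lat lattice_val; struct bit_value v; };

/* "If minus one then VARYING" at the end of the function: a result
   whose mask is all ones carries no bit information, so demote it.
   If no case did any computation, the mask stayed at -1 and the
   result falls back to VARYING here.  */
static struct result
finish (struct result r)
{
  if (r.lattice_val == CONSTANT && r.v.mask == UINT64_MAX)
    r.lattice_val = VARYING;
  return r;
}
```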


+     case LSHIFT_EXPR:
+     case RSHIFT_EXPR:
+       if (r2val.lattice_val == CONSTANT
+ 	  && double_int_zero_p (r2val.bit_lattice_val))

Even if some bits of the shift count are varying, the count is at least r2val.value & ~r2val.bit_lattice_val, so the result will have at least that many zero bits shifted in. So you can do something like


      if (r2val.lattice_val != CONSTANT)
        break;
      if (double_int_zero_p (r2val.bit_lattice_val)) {
        in_mask = r1val.bit_lattice_val;
        in_value = r1val.value;
      } else {
        in_mask = double_int_minus_one;
        in_value = double_int_minus_one;
      }

      shift = r2val.value & ~r2val.bit_lattice_val;
      if (SHIFT_COUNT_TRUNCATED)
        shift &= GET_MODE_BITSIZE (TYPE_MODE (type)) - 1;

      val.lattice_val = CONSTANT;
      val.bit_lattice_val
	= double_int_lshift (in_mask, shift,
			     TYPE_PRECISION (type), false);
      val.value = double_int_lshift (in_value, shift,
				     TYPE_PRECISION (type), false);

+ 	  else if (shift < 0)
+ 	    {
+ 	      val.lattice_val = CONSTANT;
+ 	      val.bit_lattice_val
+ 		= double_int_rshift (r1val.bit_lattice_val,
+ 				     -shift, TYPE_PRECISION (type), true);
+ 	      val.value = double_int_rshift (r1val.value, shift,
+ 					     TYPE_PRECISION (type), true);
+ 	    }

Here the shifted-in bits are varying for signed types (unless the sign bit is constant, but that case isn't handled here either).
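To illustrate the point (uint64_t model, 64-bit precision assumed, not the patch's code): an arithmetic right shift copies the sign bit into the vacated positions, so those positions are known only when the sign bit itself is known. Shifting the mask arithmetically expresses exactly that.

```c
#include <assert.h>
#include <stdint.h>

struct bit_value { uint64_t value, mask; };

/* Signed (arithmetic) right shift on the (value, mask) pair.  */
static struct bit_value
signed_rshift (struct bit_value a, unsigned shift)
{
  struct bit_value r;
  /* Arithmetic shift of the mask marks the vacated high bits varying
     exactly when the sign bit is varying (mask bit 63 set).  */
  r.mask = (uint64_t) ((int64_t) a.mask >> shift);
  /* Arithmetic shift of the value replicates a known sign bit;
     varying bits of the value are kept at zero.  */
  r.value = (uint64_t) ((int64_t) a.value >> shift) & ~r.mask;
  return r;
}
```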


+ 	  unsigned r1ctz = 0, r2ctz = 0;
+ 	  while (r1ctz < HOST_BITS_PER_WIDE_INT
+ 		 && !(r1val.bit_lattice_val.low & (1 << r1ctz))
+ 		 && !(r1val.value.low & (1 << r1ctz)))
+ 	    r1ctz++;
+ 	  while (r2ctz < HOST_BITS_PER_WIDE_INT
+ 		 && !(r2val.bit_lattice_val.low & (1 << r2ctz))
+ 		 && !(r2val.value.low & (1 << r2ctz)))
+ 	    r2ctz++;

This is just a ctz on (v | m), no? That would also make it easier to track the high bits.
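That is, the loops count trailing bits that are known zero, i.e. zero in both the value and the varying-bit mask. A sketch using the GCC builtin (uint64_t model, names illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Number of trailing bits known to be zero: a bit is known zero when
   both its value bit and its varying-bit mask bit are clear, so this
   is just the count of trailing zeros of (value | mask).  */
static unsigned
known_trailing_zeros (uint64_t value, uint64_t mask)
{
  uint64_t known = value | mask;
  /* __builtin_ctzll is undefined on zero, so handle the all-known-zero
     case (a constant 0) separately.  */
  return known ? (unsigned) __builtin_ctzll (known) : 64;
}
```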


I'm sure that you can optimize the addition, I'll think about it.

Paolo
