[Bug tree-optimization/18031] OR of a bitfield and a constant is not optimized at tree level

steven at gcc dot gnu dot org <gcc-bugzilla@gcc.gnu.org>
Thu Apr 27 20:46:00 GMT 2006



------- Comment #5 from steven at gcc dot gnu dot org  2006-04-27 20:46 -------
So I asked myself: why are we not catching this in VRP?  I know we can derive
ranges from types, so why don't we derive a [0, 1] range from the bitfield load?

It turns out that we make _all_ loads VARYING right away, so we end up with:


Value ranges after VRP:

b_1: ~[0B, 0B]  EQUIVALENCES: { b_2 } (1 elements)
b_2: VARYING
D.1882_3: VARYING
D.1883_4: [0, 1]  EQUIVALENCES: { } (0 elements)
D.1884_5: [0, +INF]  EQUIVALENCES: { } (0 elements)
D.1885_6: [0, 127]  EQUIVALENCES: { } (0 elements)
D.1886_7: [0, +INF]  EQUIVALENCES: { } (0 elements)


ior (b)
{
  <unnamed type> D.1886;
  unsigned char D.1885;
  signed char D.1884;
  signed char D.1883;
  <unnamed type> D.1882;

<bb 2>:
  D.1882_3 = b_2->bit;
  D.1883_4 = (signed char) D.1882_3;
  D.1884_5 = D.1883_4 | 1;
  D.1885_6 = (unsigned char) D.1884_5;
  D.1886_7 = (<unnamed type>) D.1885_6;
  b_2->bit = D.1886_7;
  return;

}


where, at least so it seems to me, we could derive something like
D.1882_3: [0, 1] from the bitfield load, and correspondingly tighter
ranges for everything computed from it.
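
For reference, a minimal sketch of the kind of testcase this GIMPLE
presumably comes from (the exact field declaration is my assumption,
inferred from the dump above rather than copied from the PR):

/* Hypothetical reconstruction.  A 1-bit unsigned bitfield can only
   hold 0 or 1, which is where the [0, 1] range for the load would
   come from.  */
struct B
{
  unsigned char bit : 1;
};

void
ior (struct B *b)
{
  /* If VRP knew the load of b->bit is in [0, 1], it could prove
     that (b->bit | 1) is always 1 and fold this read-modify-write
     to the plain store b->bit = 1.  */
  b->bit = b->bit | 1;
}

With that range, the load, both casts, and the IOR all become dead,
and only the constant store remains.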


-- 

steven at gcc dot gnu dot org changed:

           What              |Removed                 |Added
----------------------------------------------------------------------------
   Last reconfirmed date     |2006-02-18 18:24:49     |2006-04-27 20:46:04


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=18031


