This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.



[Bug target/67351] Missed optimisation on 64-bit field compared to 32-bit


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67351

--- Comment #4 from Andrew Pinski <pinskia at gcc dot gnu.org> ---
(In reply to Uroš Bizjak from comment #3)
> (In reply to Uroš Bizjak from comment #2)
> > (In reply to Allan Jensen from comment #0)
> > 
> > > GCC will expand and detect field setting on 32-bit integers, but for some
> > > reason misses the opportunity on 64-bit.
> > 
> > The immediates for 64-bit logic insns are limited to sign-extended 32-bit
> > values, so this probably prevents combine from merging several insns into one.
> 
> One example is:
> 
> (insn 8 6 9 2 (parallel [
>             (set (reg:DI 100)
>                 (lshiftrt:DI (reg/v:DI 98 [ a ])
>                     (const_int 48 [0x30])))
>             (clobber (reg:CC 17 flags))
>         ]) test.cpp:63 538 {*lshrdi3_1}
>      (expr_list:REG_UNUSED (reg:CC 17 flags)
>         (nil)))
> (insn 9 8 10 2 (parallel [
>             (set (reg:DI 101)
>                 (ashift:DI (reg:DI 100)
>                     (const_int 48 [0x30])))
>             (clobber (reg:CC 17 flags))
>         ]) test.cpp:63 504 {*ashldi3_1}
>      (expr_list:REG_DEAD (reg:DI 100)
>         (expr_list:REG_UNUSED (reg:CC 17 flags)
>             (nil))))
> 
> combine tries to:
> 
> Trying 8 -> 9:
> Failed to match this instruction:
> (parallel [
>         (set (reg:DI 101)
>             (and:DI (reg/v:DI 98 [ a ])
>                 (const_int -281474976710656 [0xffff000000000000])))
>         (clobber (reg:CC 17 flags))
>     ])
> 
> However, the tree optimizers pass the following sequence to expand:
> 
>   a = giveMe64 ();
>   a$rgba_5 = MEM[(struct MyRgba64 *)&a];
>   _6 = a$rgba_5 >> 16;
>   _7 = a$rgba_5 >> 48;
>   _8 = _7 << 48;
>   _10 = _6 << 16;
>   _11 = _10 & 4294967295;
>   _13 = a$rgba_5 & 65535;
>   _15 = _13 | 264913582817280;
>   _16 = _8 | _15;
>   _14 = _11 | _16;
>   MEM[(struct MyRgba64 *)&D.2451] = _14;
>   return D.2451;
> 
> Richi, can these shifts be converted to equivalent masking in tree
> optimizers?


They should be, or at least Naveen's patches should handle them.  There is an
open bug filed for a >> N << N and one filed for a << N >> N already (I filed
it).
