[Bug target/100056] [9/10/11 Regression] orr + lsl vs. [us]bfiz

luc.vanoostenryck at gmail dot com gcc-bugzilla@gcc.gnu.org
Tue Apr 13 15:34:04 GMT 2021


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100056

--- Comment #4 from Luc Van Oostenryck <luc.vanoostenryck at gmail dot com> ---
(In reply to Jakub Jelinek from comment #3)
> Created attachment 50583 [details]
> gcc11-pr100056.patch
> 
> Untested fix.

Mmmm, that works fine for the cases I had, but not in
more general cases. I think the constraint on the AND
may be too tight. For example, changing things slightly to
use a smaller mask:
    int or_lsl_u3(unsigned i) {
        i &= 7;
        return i | (i << 11);
    }

still gives:
    or_lsl_u3:
        and     w1, w0, 7
        ubfiz   w0, w0, 11, 3
        orr     w0, w0, w1
        ret

while GCC8 gave the expected:
    or_lsl_u3:
        and     w0, w0, 7
        orr     w0, w0, w0, lsl 11
        ret

In fact, I would tend to think that the AND part should be
removed from your split pattern (some kind of zero-extension
seems to be needed to reproduce the problem, but that's all).
