[Bug tree-optimization/93040] gcc doesn't optimize unaligned accesses to a 16-bit value on the x86 as well as it does a 32-bit value (or clang)

pinskia at gcc dot gnu.org gcc-bugzilla@gcc.gnu.org
Sun Dec 22 01:06:00 GMT 2019


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93040

Andrew Pinski <pinskia at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2019-12-22
            Version|unknown                     |10.0
     Ever confirmed|0                           |1

--- Comment #2 from Andrew Pinski <pinskia at gcc dot gnu.org> ---
If we rewrite get_unaligned_16 slightly, like this:
    /* Assemble a 16-bit little-endian value from two byte loads, so the
       pointer does not need to be aligned.  */
    unsigned short get_unaligned_16_1 (unsigned char *p)
    {
        unsigned short t0 = p[0];   /* low byte */
        unsigned short t1 = p[1];   /* high byte */
        t1 <<= 8;
        return t0 | t1;
    }

GCC is then able to detect it as a nop (a plain unaligned 16-bit load) on
little-endian targets, or as a bswap on big-endian targets.
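
For comparison, here is a minimal sketch of what the unrewritten
get_unaligned_16 presumably looks like; the name comes from the bug and
the rewrite above, but this exact body is an assumption rather than a
copy from the report. The shift and OR happen in a single expression,
with the byte operands promoted to int:

    /* Hypothetical reconstruction (assumption): the same little-endian
       16-bit load written as one expression, so the bytes are promoted
       to int before the shift and OR.  */
    unsigned short get_unaligned_16 (unsigned char *p)
    {
        return p[0] | (p[1] << 8);
    }

If that is indeed the original shape, the only source-level difference in
the rewrite is that the intermediate values are kept in unsigned short,
and per this comment that small change is enough for the nop/bswap
detection to fire in the 16-bit case.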

