[Bug tree-optimization/96135] [9/10/11 regression] bswap not detected by bswap pass, unexpected results between optimization levels

rguenth at gcc dot gnu.org gcc-bugzilla@gcc.gnu.org
Thu Mar 11 09:07:45 GMT 2021


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96135

--- Comment #2 from Richard Biener <rguenth at gcc dot gnu.org> ---
Created attachment 50361
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=50361&action=edit
WIP

On current trunk at -O3 f() again works via store-merging / vectorizing:

-  _21 = {_3, _4, _5, _6, _7, _8, _9, _10};
+  bswapsrc_22 = (long unsigned int) i_2(D);
+  bswapdst_19 = __builtin_bswap64 (bswapsrc_22);
+  _21 = VIEW_CONVERT_EXPR<vector(8) char>(bswapdst_19);
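
For reference, the source pattern at play is a manual byte reversal through a
char buffer.  A minimal sketch of that shape (assuming f() takes a uint64_t;
this is illustrative only, not the exact testcase from the PR):

#include <cstdint>
#include <cstring>

/* Byte-reverse a 64-bit value by hand; the bswap pass (or the
   vectorizer plus store-merging) should recognize this as a single
   __builtin_bswap64.  */
std::uint64_t f (std::uint64_t i)
{
  unsigned char src[8], dst[8];
  std::memcpy (src, &i, sizeof i);
  for (int k = 0; k < 8; ++k)
    dst[k] = src[7 - k];   /* reverse the byte order */
  std::uint64_t out;
  std::memcpy (&out, dst, sizeof out);
  return out;
}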

but g() does not, because init_symbolic_number rejects non-integral types.
Fixing that generates:

_Z1gd:
.LFB2:
        .cfi_startproc
        movq    %xmm0, %rax
        bswap   %rax
        ret

but with -m32 there is the issue that we bswap only the lower part, since
vectorizing produced two vector CTORs.  So we'd need to use a BIT_FIELD_REF
to extract the integer representation.
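
For reference again, g() presumably has the same byte-reversal body but takes
a double, so the bytes are loaded from a non-integral value.  A sketch under
that assumption (reusing the headers from the sketch above; again not the
PR's exact testcase):

/* Same manual byte reversal, but the source of the bytes is a
   non-integral (double) value, which is what init_symbolic_number
   has to be taught to accept.  */
std::uint64_t g (double d)
{
  unsigned char src[8], dst[8];
  std::memcpy (src, &d, sizeof d);
  for (int k = 0; k < 8; ++k)
    dst[k] = src[7 - k];   /* reverse the byte order */
  std::uint64_t out;
  std::memcpy (&out, dst, sizeof out);
  return out;
}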

WIP patch attached.

