[Bug target/68263] New: Vector "*mov<mode>_internal" fails to handle misaligned load/store from reload

From: hjl.tools at gmail dot com (gcc-bugzilla@gcc.gnu.org)
Date: Mon Nov 9 22:22:00 GMT 2015


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68263

            Bug ID: 68263
           Summary: Vector "*mov<mode>_internal" fails to handle
                    misaligned load/store from reload
           Product: gcc
           Version: 6.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: target
          Assignee: unassigned at gcc dot gnu.org
          Reporter: hjl.tools at gmail dot com
                CC: julia.koval at intel dot com, ubizjak at gmail dot com
  Target Milestone: ---
            Target: x86

On Linux/ia32 with SSE2 enabled, r230050 gave:

FAIL: gcc.target/i386/iamcu/test_passing_floats.c execution,  -O3 -g 

Reload generates

(insn 153 7 9 2 (set (mem/c:V4SF (plus:SI (reg/f:SI 7 sp)
                (const_int 16 [0x10])) [7 %sfp+-16 S16 A32])
        (reg:V4SF 22 xmm1 [88])) gcc.target/i386/iamcu/test_passing_floats.c:49 1210 {*movv4sf_internal}
     (nil))

in test_floats_on_stack.  But sse.md has

(define_insn "*mov<mode>_internal"
  [(set (match_operand:VMOVE 0 "nonimmediate_operand"               "=v,v ,m")
        (match_operand:VMOVE 1 "nonimmediate_or_sse_const_operand"  "C ,vm,v"))]
  "TARGET_SSE
   && (register_operand (operands[0], <MODE>mode)
       || register_operand (operands[1], <MODE>mode))"
{
...
       {
        case MODE_V16SF:
        case MODE_V8SF:
        case MODE_V4SF:
          if (TARGET_AVX
              && (misaligned_operand (operands[0], <MODE>mode)
                  || misaligned_operand (operands[1], <MODE>mode)))
            return "vmovups\t{%1, %0|%0, %1}";
          else
            return "%vmovaps\t{%1, %0|%0, %1}";

        case MODE_V8DF:
        case MODE_V4DF:
        case MODE_V2DF:
          if (TARGET_AVX
              && (misaligned_operand (operands[0], <MODE>mode)
                  || misaligned_operand (operands[1], <MODE>mode)))
            return "vmovupd\t{%1, %0|%0, %1}";
          else
            return "%vmovapd\t{%1, %0|%0, %1}";

        case MODE_OI:
        case MODE_TI:
          if (TARGET_AVX
              && (misaligned_operand (operands[0], <MODE>mode)
                  || misaligned_operand (operands[1], <MODE>mode)))
            return TARGET_AVX512VL ? "vmovdqu64\t{%1, %0|%0, %1}"
                                   : "vmovdqu\t{%1, %0|%0, %1}";
          else
            return TARGET_AVX512VL ? "vmovdqa64\t{%1, %0|%0, %1}"
                                   : "%vmovdqa\t{%1, %0|%0, %1}";

Misaligned loads/stores are only handled for AVX, not for SSE: without
TARGET_AVX the pattern unconditionally emits the aligned forms (%vmovaps,
%vmovapd, %vmovdqa), which fault on the misaligned stack slot reload
generated above.

