This is the mail archive of the mailing list for the GCC project.


Re: unwind info for epilogues

On Fri, May 29, 2009 at 05:42:12PM -0700, Richard Henderson wrote:
> The other large change from the previous patch is the ability to have
> the eh_return epilogue from _Unwind_Resume (and friends) marked
> properly.  This required the addition of an EH_RETURN rtx, so that
> the middle-end could recognize when epilogue expansion should happen,
> rather than the add-hoc unspecs that ports had been using.  As it
> happens, only i386 and bfin implement eh_return via special epilogues;
> most ports only need to overwrite one or more registers before using
> a normal epilogue.
> Tested on x86_64, i686; committed.

Thanks.  Just a nit.
For the following test case (compiled with -O2 -fasynchronous-unwind-tables):
void bar (void);
void bar2 (int, int, int, int);
void bar3 (int, int, int, int, int, int, int);
long foo (int x, int y)
{
  long a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p;
  asm volatile ("" : "=rm" (a), "=rm" (b), "=rm" (c), "=rm" (d), "=rm" (e),
		     "=rm" (f), "=rm" (g), "=rm" (h), "=rm" (i), "=rm" (j),
		     "=rm" (k), "=rm" (l), "=rm" (m), "=rm" (n), "=rm" (o),
		     "=rm" (p));
  bar ();
  bar2 (0, 1, 2, 3);
  bar ();
  bar3 (0, 1, 2, 3, 4, 5, 6);
  bar ();
  bar2 (0, 1, 2, 3);
  return a + b + c + d + e + f + g + h + i + j + k + l + m + n + o + p;
}
On x86_64 we get:
        movq    112(%rsp), %rbp
        .cfi_restore 6
        movq    120(%rsp), %r12
        .cfi_restore 12
        addq    %rbx, %rax
        movq    104(%rsp), %rbx
        .cfi_restore 3
        addq    %r13, %rax
        movq    128(%rsp), %r13
        .cfi_restore 13
        addq    %r14, %rax
        movq    136(%rsp), %r14
        .cfi_restore 14
        addq    %r15, %rax
        addq    16(%rsp), %rax
        movq    144(%rsp), %r15
        .cfi_restore 15
        addq    24(%rsp), %rax
        addq    32(%rsp), %rax
        addq    40(%rsp), %rax
        addq    48(%rsp), %rax
        addq    56(%rsp), %rax
        addq    64(%rsp), %rax
        addq    72(%rsp), %rax
        addq    80(%rsp), %rax
        addq    88(%rsp), %rax
        addq    $152, %rsp
        .cfi_def_cfa_offset 8
Couldn't we avoid the .cfi_restore directives altogether on x86_64
in this case?
If the target has a red zone and all the saved registers still fall
within the red zone after the stack pointer is adjusted up, the unwinders
can IMHO use the stack slots just as well as the registers.  Here the
saved registers sit at 104(%rsp) through 144(%rsp) before the
addq $152, %rsp, i.e. at -48(%rsp) through -8(%rsp) afterwards, well
within the 128-byte red zone.  If the registers weren't saved within
the red zone, or the target doesn't have one (such as i386):
        movl    -32(%ebp), %eax
        addl    -28(%ebp), %eax
        addl    -36(%ebp), %eax
        addl    %edi, %eax
        movl    -4(%ebp), %edi
        .cfi_restore 7
        addl    %esi, %eax
        movl    -8(%ebp), %esi
        .cfi_restore 6
        addl    %ebx, %eax
        movl    -12(%ebp), %ebx
        .cfi_restore 3
        addl    -40(%ebp), %eax
        addl    -44(%ebp), %eax
        addl    -48(%ebp), %eax
        addl    -52(%ebp), %eax
        addl    -56(%ebp), %eax
        addl    -60(%ebp), %eax
        addl    -64(%ebp), %eax
        addl    -68(%ebp), %eax
        addl    -72(%ebp), %eax
        addl    -76(%ebp), %eax
        movl    %ebp, %esp
        .cfi_def_cfa_register 4
        popl    %ebp
        .cfi_restore 5
        .cfi_def_cfa_offset 4
then couldn't the .cfi_restore directives simply be moved down to the
movl %ebp, %esp instruction?  The stack slots still contain the saved
register contents until movl %ebp, %esp is executed, so the unwind info
stays accurate.  This would save at least a couple of
DW_CFA_advance_loc* opcodes.
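To illustrate what I mean (a hand-written sketch of the epilogue tail,
not actual compiler output), the restores could be batched at the stack
adjustment, so only one advance is needed before the whole group:

        movl    -4(%ebp), %edi
        movl    -8(%ebp), %esi
        movl    -12(%ebp), %ebx
        movl    %ebp, %esp
        .cfi_restore 7          # %edi slot dead from here on
        .cfi_restore 6          # %esi
        .cfi_restore 3          # %ebx
        .cfi_def_cfa_register 4
        popl    %ebp
        .cfi_restore 5
        .cfi_def_cfa_offset 4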

