This is the mail archive of the mailing list for the GCC project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [PATCH,i386] fix PR 11001


> The fix is to ensure that the registers are available before generating
> the instructions.  Note that the code is not optimal: in the memset
> case, for instance, if we choose an inlining strategy requiring 'rep
> stosl' and then discover that the necessary registers are not available,
> we generate a full call to 'memset' rather than generating an inline
> copy loop.  I don't see this as a serious defect; if you are using
> register globals on the x86, you deserve a performance penalty.

> Index: gcc/config/i386/i386.c
> ===================================================================
> --- gcc/config/i386/i386.c	(revision 128981)
> +++ gcc/config/i386/i386.c	(working copy)
> @@ -15286,6 +15286,13 @@ ix86_expand_movmem (rtx dst, rtx src, rt
>        break;
>      }
> +  /* Can't use this if the user has appropriated ecx, esi, or edi.  */
> +  if ((alg == rep_prefix_1_byte
> +       || alg == rep_prefix_4_byte
> +       || alg == rep_prefix_8_byte)
> +      && (global_regs[2] || global_regs[4] || global_regs[5]))
> +    return 0;
> +

I think that you should put this check into decide_alg().  There you
can decide between a copy loop and a libcall, also taking into account
the optimize_size flag as well as TARGET_INLINE_ALL_STRINGOPS.

Please note that the rep_prefix_* algorithms are used for larger
blocks.  Perhaps in your case we should scan the algorithm table
backwards until we hit another non-rep_prefix algorithm.  If we find
none, we should default to a loop algorithm when optimizing for size,
or to a libcall otherwise.

