This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


[PING]: [RFC] Tighten memory type assumption in RTL combiner pass.


Ping. Segher, do you have any comments from your side?

regards,
Venkat.

On 14 January 2015 at 16:57, Venkataramanan Kumar
<venkataramanan.kumar@linaro.org> wrote:
> Hi all,
>
> While debugging the GCC combiner pass with the test case in PR63949
> (ref https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63949), I came across
> the code below.
>
> This code in "make_compound_operation" assumes that every PLUS and
> MINUS RTX in a scalar integer mode is part of a memory address (MEM)
> and tries to optimize based on that assumption.
>
> /* Select the code to be used in recursive calls.  Once we are inside an
>       address, we stay there.  If we have a comparison, set to COMPARE,
>       but once inside, go back to our default of SET.  */
>
>    next_code = (code == MEM ? MEM
>                 : ((code == PLUS || code == MINUS)
>                    && SCALAR_INT_MODE_P (mode)) ? MEM
>                 : ((code == COMPARE || COMPARISON_P (x))
>                    && XEXP (x, 1) == const0_rtx) ? COMPARE
>                 : in_code == COMPARE ? SET : in_code);
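>
> To make the MEM case concrete, here is a small illustration of my own
> (assuming a 4-byte int on a 64-bit target, so the exact modes and
> constants may differ elsewhere).  For an ordinary array access the
> scaled index inside the address is canonically a MULT, which is why
> recursing with next_code == MEM makes sense there:
>
>   /* Illustration only, not from the PR: a[i] loads from roughly
>        (mem:SI (plus:DI (mult:DI (reg:DI i) (const_int 4)) (reg:DI a)))
>      -- inside a MEM the scaled index is MULT, not ASHIFT.  */
>   int
>   load_elem (int *a, long i)
>   {
>     return a[i];
>   }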
>
> next_code is passed as in_code in the recursive calls to
> "make_compound_operation", and based on that the shift pattern is
> converted into a MULT pattern:
>
> case ASHIFT:
>       /* Convert shifts by constants into multiplications if inside
>          an address.  */
>       if (in_code == MEM && CONST_INT_P (XEXP (x, 1))
>           && INTVAL (XEXP (x, 1)) < HOST_BITS_PER_WIDE_INT
>           && INTVAL (XEXP (x, 1)) >= 0
>           && SCALAR_INT_MODE_P (mode))
>         {
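>
> Roughly, the body of this case then rewrites (ashift X (const_int N))
> into (mult X (const_int 1 << N)), since that is the canonical form for
> a scaled index inside an address.  A minimal stand-alone sketch of the
> identity it relies on (my own code, not combine.c):
>
>   #include <assert.h>
>
>   int
>   main (void)
>   {
>     /* Shifting left by N is the same as multiplying by 1 << N,
>        e.g. (ashift x 3) <=> (mult x 8).  */
>     for (unsigned long x = 0; x < 64; x++)
>       for (int n = 0; n < 4; n++)
>         assert ((x << n) == x * (1UL << n));
>     return 0;
>   }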
>
> Now I tried to tighten this further by adding a check that in_code is
> also MEM.  I am not sure whether this is the right thing to do, but the
> existing assumption about MEM seems very relaxed.
>
> diff --git a/gcc/combine.c b/gcc/combine.c
> index 101cf35..1353f54 100644
> --- a/gcc/combine.c
> +++ b/gcc/combine.c
> @@ -7696,7 +7696,8 @@ make_compound_operation (rtx x, enum rtx_code in_code)
>
>    next_code = (code == MEM ? MEM
>                : ((code == PLUS || code == MINUS)
> -                 && SCALAR_INT_MODE_P (mode)) ? MEM
> +                 && SCALAR_INT_MODE_P (mode)
> +                 && (in_code == MEM)) ? MEM
>                : ((code == COMPARE || COMPARISON_P (x))
>                   && XEXP (x, 1) == const0_rtx) ? COMPARE
>                : in_code == COMPARE ? SET : in_code);
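>
> With the patch, the MULT form is only used when we really are inside a
> MEM.  As a reduced illustration of my own (not the exact PR63949 test
> case), in plain arithmetic like the function below there is no address
> at all, yet before the patch make_compound_operation still recursed
> into the PLUS with next_code == MEM and turned the shift into a MULT:
>
>   /* No memory access here: (plus (ashift x 3) base) is pure
>      arithmetic, but it was still treated as an address shape.  */
>   long
>   add_scaled (long base, long x)
>   {
>     return base + (x << 3);
>   }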
>
>
> This passed bootstrap on x86_64, and the GCC tests show no regressions.
> On AArch64 it also passed bootstrap and the test case from the PR now
> passes, but a few tests failed (they no longer generate adds and subs),
> because the AArch64 backend only has patterns for the extended adds and
> subs that are based on multiplication.
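>
> The failing cases are along these lines (again my own reduced example,
> with the instruction I would expect; the actual tests may differ):
>
>   /* Expected to become a single extended add with shift on AArch64,
>      e.g. "add x0, x0, w1, sxtw 2", which today is matched via the
>      mult-based patterns produced by the combiner.  */
>   long
>   add_ext_scaled (long a, int b)
>   {
>     return a + ((long) b << 2);
>   }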
>
> If this change is correct, then I may need to add shift-based patterns
> to the AArch64 backend.  I am not sure about other targets either.
>
> I would appreciate further comments/help on this.
>
> I am looking to get it fixed in stage 1.
>
> regards,
> Venkat.

