This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


RE: [PATCH]Fix computation of offset in ivopt



> -----Original Message-----
> From: gcc-patches-owner@gcc.gnu.org [mailto:gcc-patches-
> owner@gcc.gnu.org] On Behalf Of bin.cheng
> Sent: Friday, September 27, 2013 1:07 PM
> To: 'Richard Biener'
> Cc: GCC Patches
> Subject: RE: [PATCH]Fix computation of offset in ivopt
> 
> 
> 
> > -----Original Message-----
> > From: Richard Biener [mailto:richard.guenther@gmail.com]
> > Sent: Tuesday, September 24, 2013 6:31 PM
> > To: Bin Cheng
> > Cc: GCC Patches
> > Subject: Re: [PATCH]Fix computation of offset in ivopt
> >
> > On Tue, Sep 24, 2013 at 11:13 AM, bin.cheng <bin.cheng@arm.com> wrote:
> >
> > +       field = TREE_OPERAND (expr, 1);
> > +       if (DECL_FIELD_BIT_OFFSET (field)
> > +           && cst_and_fits_in_hwi (DECL_FIELD_BIT_OFFSET (field)))
> > +         boffset = int_cst_value (DECL_FIELD_BIT_OFFSET (field));
> > +
> > +       tmp = component_ref_field_offset (expr);
> > +       if (top_compref
> > +           && cst_and_fits_in_hwi (tmp))
> > +         {
> > +           /* Strip the component reference completely.  */
> > +           op0 = TREE_OPERAND (expr, 0);
> > +           op0 = strip_offset_1 (op0, inside_addr, top_compref, &off0);
> > +           *offset = off0 + int_cst_value (tmp) + boffset / BITS_PER_UNIT;
> > +           return op0;
> > +         }
> >
> > the failure paths seem mangled, that is, if cst_and_fits_in_hwi is
> > false for either offset part you may end up doing half accounting
> > and not stripping.
> >
> > Btw, DECL_FIELD_BIT_OFFSET is always non-NULL.  I suggest to rewrite
> > to
> >
> >      if (!inside_addr)
> >        return orig_expr;
> >
> >      tmp = component_ref_field_offset (expr);
> >      field = TREE_OPERAND (expr, 1);
> >      if (top_compref
> >          && cst_and_fits_in_hwi (tmp)
> >          && cst_and_fits_in_hwi (DECL_FIELD_BIT_OFFSET (field)))
> >         {
> >           ...
> >         }
> Will be refined.
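
Something along these lines, presumably (just a sketch combining the two
quoted fragments, with boffset declared locally for the illustration; not
the final patch):

    case COMPONENT_REF:
      if (!inside_addr)
        return orig_expr;

      tmp = component_ref_field_offset (expr);
      field = TREE_OPERAND (expr, 1);
      if (top_compref
          && cst_and_fits_in_hwi (tmp)
          && cst_and_fits_in_hwi (DECL_FIELD_BIT_OFFSET (field)))
        {
          HOST_WIDE_INT boffset
            = int_cst_value (DECL_FIELD_BIT_OFFSET (field));

          /* Strip the component reference completely, accounting for
             both the byte offset and the bit offset of the field.  */
          op0 = TREE_OPERAND (expr, 0);
          op0 = strip_offset_1 (op0, inside_addr, top_compref, &off0);
          *offset = off0 + int_cst_value (tmp) + boffset / BITS_PER_UNIT;
          return op0;
        }
      return orig_expr;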
> 
> >
> > note that this doesn't really handle overflows correctly as
> >
> > +           *offset = off0 + int_cst_value (tmp) + boffset / BITS_PER_UNIT;
> >
> > may still overflow.
> Since it's "unsigned + signed + signed", the signed operands are converted
> to unsigned by the usual implicit conversions, so the overflow would only
> happen when off0 is a huge number and tmp/boffset is a large positive
> number, right?  Do I need to check whether off0 is larger than the
> overflowed result?  There is also a signed->unsigned problem here, see
> below.
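
To make the conversion concrete, a small standalone example (host-side C
with made-up values, only to show the wrap-around behaviour of the unsigned
addition):

    #include <stdio.h>

    int main (void)
    {
      unsigned long long off0 = 32;   /* offset accumulated so far (unsigned)  */
      long long tmp = -16;            /* a negative field offset (signed)      */

      /* Usual arithmetic conversions turn tmp into an unsigned value, so the
         addition wraps modulo 2^64 instead of overflowing in the signed
         sense; the low bits still encode the value we want.  */
      unsigned long long sum = off0 + tmp;

      printf ("%llu\n", sum);               /* prints 16  */
      printf ("%lld\n", (long long) sum);   /* also 16 when read back as signed  */
      return 0;
    }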
> 
> >
> > @@ -4133,6 +4142,9 @@ get_computation_cost_at (struct ivopts_data *data,
> >          bitmap_clear (*depends_on);
> >      }
> >
> > +  /* Sign-extend offset if utype has lower precision than HOST_WIDE_INT.  */
> > +  offset = sext_hwi (offset, TYPE_PRECISION (utype));
> > +
> >
> > offset is computed elsewhere in difference_cost and the bug to me
> > seems that it is unsigned.  sign-extending it here is odd at least
> > (and the extension should probably happen at sizetype precision,
> > not that of utype).
> I agree.  The root cause is in strip_offset_1, in which offset is computed.
> Every time offset is computed in this function with a signed operand (like
> "int_cst_value (tmp)" above), we need to take care of the possible
> negative-number problem.  Taking this case as an example, we need to make
> the change below:
> 
>   case INTEGER_CST:
>       //.......
>       *offset = int_cst_value (expr);
> 
> change to
> 
>   case INTEGER_CST:
>       //.......
>       *offset = sext_hwi (int_cst_value (expr), type);
> 
> and
> 
>   case MULT_EXPR:
>       //.......
>       *offset = off0 * int_cst_value (op1);
> 
> change to
> 
>   case MULT_EXPR:
>       //.......
>       HOST_WIDE_INT xxx = (HOST_WIDE_INT) off0 * int_cst_value (op1);
>       *offset = sext_hwi (xxx, type);
> 
> Any comments?
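
For reference, the sign extension being discussed amounts to something like
the following standalone model (my_sext is only an illustration of extending
the low prec bits; it is not the helper from hwint.h):

    #include <stdio.h>

    /* Sign-extend the low PREC bits of SRC to the full 64-bit width.
       Relies on arithmetic right shift of negative signed values
       (implementation-defined, but universal in practice).  */
    static long long
    my_sext (long long src, unsigned int prec)
    {
      unsigned int bits = 8 * sizeof (long long);
      if (prec >= bits)
        return src;
      unsigned int shift = bits - prec;
      return (long long) ((unsigned long long) src << shift) >> shift;
    }

    int main (void)
    {
      long long raw = 0xFFFC;                 /* -4 when seen as a 16-bit value  */
      printf ("%lld\n", my_sext (raw, 16));   /* prints -4     */
      printf ("%lld\n", my_sext (raw, 32));   /* prints 65532  */
      return 0;
    }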

On second thought, I guess we can compute a signed offset in strip_offset_1
and sign-extend it in strip_offset; that way we don't need to change every
computation of offset in that function.
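
Roughly like this (a sketch of the idea only, not the actual patch; the
precision choice follows the review comment above about sizetype, and the
real strip_offset in tree-ssa-loop-ivopts.c may end up looking different):

    static tree
    strip_offset (tree expr, unsigned HOST_WIDE_INT *offset)
    {
      HOST_WIDE_INT off;

      /* strip_offset_1 accumulates the offset as a signed HOST_WIDE_INT;
         sign-extend it once here instead of at every computation site.  */
      tree core = strip_offset_1 (expr, false, false, &off);
      *offset = sext_hwi (off, TYPE_PRECISION (sizetype));
      return core;
    }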

Thanks.
bin



