
Re: Continue strict-volatile-bitfields fixes


Hi!

Ping.

On Wed, 09 May 2012 10:01:55 +0800, I wrote:
> On Fri, 27 Apr 2012 10:29:06 +0200, Jakub Jelinek <jakub@redhat.com> wrote:
> > On Fri, Apr 27, 2012 at 12:42:41PM +0800, Thomas Schwinge wrote:
> > > > > GET_MODE_BITSIZE (lmode) (8 bits).  (With the current sources, lmode is
> > > > > VOIDmode.)
> > > > >
> > > > > Is emitting "BIT_FIELD_REF <*common, 32, 0> & 255" wrong in this case,
> > > > > or should a later optimization pass be able to figure out that
> > > > > "BIT_FIELD_REF <*common, 32, 0> & 255" is in fact the same as
> > > > > common->code, and then be able to conflate these?  Any suggestions
> > > > > where/how to tackle this?
> > > > 
> > > > The BIT_FIELD_REF is somewhat of a red herring.  It is created by fold-const.c
> > > > in optimize_bit_field_compare, code that I think should be removed completely.
> > > > Or it needs to be made aware of strict-volatile bitfield and C++ memory model
> > > > details.
> > 
> > I'd actually very much prefer the latter, just disable
> > optimize_bit_field_compare for strict-volatile bitfield mode and when
> > avoiding load data races in C++ memory model (that isn't going to be
> > default, right?).  This optimization is useful, and it is solely about
> > loads, so even C++ memory model usually shouldn't care.
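
(As an illustration, a minimal sketch of the construct under discussion;
the layout is inferred from the BIT_FIELD_REF above, and the struct and
function names are made up for this example.)

struct common_like
{
  unsigned int code : 8;	/* 8-bit field at bit offset 0 of a
				   32-bit underlying word.  */
  unsigned int rest : 24;	/* Remainder of the word (assumed).  */
};

int
check_code (struct common_like *common)
{
  /* optimize_bit_field_compare in fold-const.c may fold this comparison
     into a full-word load masked down to the low byte, that is,
     BIT_FIELD_REF <*common, 32, 0> & 255 == 42.  */
  return common->code == 42;
}
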
> 
> I can't comment on the C++ memory model bits, but I have now tested the
> following patch (fixes the issue for SH, no regressions for ARM, x86):
> 
> gcc/
> 	* fold-const.c (optimize_bit_field_compare): Abort early in the strict
> 	volatile bitfields case.
> 
> Index: fold-const.c
> ===================================================================
> --- fold-const.c	(revision 186856)
> +++ fold-const.c	(working copy)
> @@ -3342,6 +3342,11 @@ optimize_bit_field_compare (location_t loc, enum t
>    tree mask;
>    tree offset;
>  
> +  /* In the strict volatile bitfields case, doing code changes here may prevent
> +     other optimizations, in particular in a SLOW_BYTE_ACCESS setting.  */
> +  if (flag_strict_volatile_bitfields > 0)
> +    return 0;
> +
>    /* Get all the information about the extractions being done.  If the bit size
>       if the same as the size of the underlying object, we aren't doing an
>       extraction at all and so can do nothing.  We also don't want to


Regards,
 Thomas


