[C++0x] contiguous bitfields race implementation
Aldy Hernandez
aldyh@redhat.com
Tue Aug 9 20:53:00 GMT 2011
> ok, so now you do this only for the first field in a bitfield group. But you
> do it for _all_ bitfield groups in a struct, not only for the interesting one.
>
> May I suggest to split the loop into two, first searching the first field
> in the bitfield group that contains fld and then in a separate loop computing
> the bitwidth?
Excellent idea. Done! Now there are at most two calls to
get_inner_reference, and in many cases, only one.
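For anyone following along, the two-pass structure being discussed could look roughly like this simplified model. The `field_info` representation and function names here are hypothetical illustrations, not the actual patch, which walks the record's TYPE_FIELDS trees:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical flattened view of a record's field list; the real code
// walks TYPE_FIELDS and tests DECL_BIT_FIELD on each FIELD_DECL.
struct field_info {
    bool is_bitfield;
    unsigned bit_size;   // width in bits (only meaningful for bitfields)
};

// Bit width of the bitfield group containing fields[target], computed
// in two passes: first find the first field of the group, then sum the
// widths until a non-bitfield member ends the group.
unsigned group_bit_width(const std::vector<field_info>& fields, std::size_t target) {
    assert(fields[target].is_bitfield);

    // Pass 1: walk backwards to the first field of the group.
    std::size_t first = target;
    while (first > 0 && fields[first - 1].is_bitfield)
        --first;

    // Pass 2: accumulate widths until the group ends.
    unsigned width = 0;
    for (std::size_t i = first; i < fields.size() && fields[i].is_bitfield; ++i)
        width += fields[i].bit_size;
    return width;
}
```

With this split, only the group containing the interesting field is ever measured, rather than every group in the struct.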
> Backing up, considering one of my earlier questions. What is *offset
> supposed to be relative to? The docs say sth like "relative to INNERDECL",
> but the code doesn't contain a reference to INNERDECL anymore.
Sorry, I see your confusion. The comments at the top were completely
out of date. I have simplified and rewritten them accordingly. I am
attaching get_bit_range() with these and other changes you suggested.
See if it makes sense now.
> Now we come to that padding thing. What's the C++ memory model
> semantic for re-used tail padding? Consider
Andrew addressed this elsewhere.
> There is too much get_inner_reference and tree folding stuff in this
> patch (which makes it expensive given that the algorithm is still
> inherently quadratic). You can rely on the bitfield group advancing
> by integer-cst bits (but the start offset may be non-constant, so
> may the size of the underlying record).
Now there are only two tree folding calls (apart from
get_inner_reference), and the common case has very simple arithmetic
tuples. I see no clear way of removing the last call to
get_inner_reference(), as the padding after the field can only be
calculated by calling get_inner_reference() on the subsequent field.
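In a simplified flattened model, the padding after the group's last field is just the gap between the end of that field and the start of the subsequent field (or the end of the record), which is why the subsequent field's position is needed at all. The `layout` struct below is a hypothetical stand-in for the positions the real code obtains from get_inner_reference:

```cpp
#include <cassert>

// Hypothetical flattened view of a field's placement; the real code
// recovers these positions from the trees via get_inner_reference.
struct layout {
    unsigned field_bit_offset;  // start of the field within the record
    unsigned field_bit_size;    // width of the field
    unsigned next_bit_offset;   // start of the subsequent field, or the
                                // record size in bits if there is none
};

// Bits of padding between the end of this field and the next one.
unsigned padding_after(const layout& l) {
    return l.next_bit_offset - (l.field_bit_offset + l.field_bit_size);
}
```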
> Now seeing all this - and considering that this is purely C++ frontend
> semantics. Why can't the C++ frontend itself constrain accesses
> according to the required semantics? It could simply create
> BIT_FIELD_REF<MEM_REF<&containing_record,
> byte-offset-to-start-of-group>, bit-size, bit-offset> for all bitfield
> references (with a proper
> type for the MEM_REF, specifying the size of the group). That would
> also avoid issues during tree optimization and would at least allow
> optimizing the bitfield accesses according to the desired C++ semantics.
Andrew addressed this as well. Could you respond to his email if you
think it is unsatisfactory?
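The lowering sketched in the quote above effectively confines every bitfield store to a read-modify-write of the containing group only. A rough C++ model of such a group-confined store, assuming a 32-bit group for illustration (not the frontend's actual emitted code):

```cpp
#include <cassert>
#include <cstdint>

// Store `value` into bits [bit_offset, bit_offset + bit_size) of a
// bitfield group, touching only that group's storage -- the moral
// equivalent of a store through
//   BIT_FIELD_REF <MEM_REF <&rec, byte-offset-of-group>, size, offset>
// where the MEM_REF's type bounds the access to the group.
void store_bits(std::uint32_t* group, unsigned bit_offset,
                unsigned bit_size, std::uint32_t value) {
    std::uint32_t field = (bit_size == 32) ? ~0u : ((1u << bit_size) - 1u);
    std::uint32_t mask = field << bit_offset;
    *group = (*group & ~mask) | ((value << bit_offset) & mask);
}
```

Because the read-modify-write never widens beyond the group, members in neighboring memory locations are never rewritten, which is exactly the race-freedom the memory model requires.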
a