[PATCH][RFC][1/2] Bitfield lowering, add BIT_FIELD_EXPR
Richard Henderson
rth@redhat.com
Thu Jun 16 17:18:00 GMT 2011
On 06/16/2011 04:35 AM, Richard Guenther wrote:
>
> This is a (possible) prerequisite for the bitfield lowering patch,
> taken from the old mem-ref branch. It introduces BIT_FIELD_EXPR
> which can be used to do bitfield composition.
> BIT_FIELD_EXPR <a, b, C1, C2> is equivalent to computing
> a & ~(((1 << C1) - 1) << C2) | ((b << C2) & (((1 << C1) - 1) << C2)), thus
> inserting b of width C1 at the bitfield position C2 in a, returning
> the new value. This allows translating
> BIT_FIELD_REF <a, C1, C2> = b;
> to
> a = BIT_FIELD_EXPR <a, b, C1, C2>;
> which avoids partial definitions of a (thus, BIT_FIELD_EXPR is
> similar to COMPLEX_EXPR). BIT_FIELD_EXPR is supposed to work
> on registers only.
It's not immediately obvious from the name BIT_FIELD_EXPR
what it does. Does it insert or extract a field?
I suppose you're using BIT_FIELD_REF for extraction?
I think this would be clearer with a name like DEPOSIT_EXPR,
similar to the ia64 deposit instruction.
Or INSV_EXPR, if we want to retain existing GCC terminology
(although the insv pattern itself uses an in-out first operand,
leading to a comment in the ia64 md file about insv being less
optimal than it could be).
> *************** expand_expr_real_1 (tree exp, rtx target
> *** 8680,8685 ****
> --- 8680,8708 ----
>
> return expand_constructor (exp, target, modifier, false);
>
> + case BIT_FIELD_EXPR:
> + {
> + unsigned bitpos = (unsigned) TREE_INT_CST_LOW (TREE_OPERAND (exp, 3));
> + unsigned bitsize = (unsigned) TREE_INT_CST_LOW (TREE_OPERAND (exp, 2));
> + tree bits, mask;
> + if (BYTES_BIG_ENDIAN)
> + bitpos = TYPE_PRECISION (type) - bitsize - bitpos;
> + /* build a mask to mask/clear the bits in the word. */
> + mask = build_bit_mask (type, bitsize, bitpos);
> + /* extend the bits to the word type, shift them to the right
> + place and mask the bits. */
> + bits = fold_convert (type, TREE_OPERAND (exp, 1));
> + bits = fold_build2 (BIT_AND_EXPR, type,
> + fold_build2 (LSHIFT_EXPR, type,
> + bits, size_int (bitpos)), mask);
> + /* switch to clear mask and do the composition. */
> + mask = fold_build1 (BIT_NOT_EXPR, type, mask);
> + return expand_normal (fold_build2 (BIT_IOR_EXPR, type,
> + fold_build2 (BIT_AND_EXPR, type,
> + TREE_OPERAND (exp, 0), mask),
> + bits));
Surely you should implement this with store_bit_field, even if
you have to introduce a copy to a temporary for correctness.
Otherwise you'll have to rely on combine to put this back
together into the backend's insv pattern.
> + /* Bit-field insertion needs several shift and mask operations. */
> + case BIT_FIELD_EXPR:
> + return 3;
... depending on the target, of course.
r~