[PATCH] bswap: Improve perform_symbolic_merge [PR103376]
Richard Biener <rguenther@suse.de>
Thu Nov 25 08:21:37 GMT 2021
On Thu, 25 Nov 2021, Jakub Jelinek wrote:
> On Wed, Nov 24, 2021 at 09:45:16AM +0100, Richard Biener wrote:
> > > Thinking more about it, perhaps we could do more for BIT_XOR_EXPR.
> > > We could allow the masked1 == masked2 case for it, but we would need to
> > > do something different from the
> > > n->n = n1->n | n2->n;
> > > we do on all the bytes together.
> > > In particular, for masked1 == masked2 we could allow it when masked1 != 0
> > > (for 0 both variants are the same anyway) and masked1 != 0xff: we would
> > > need to clear the corresponding n->n byte instead of setting it to the
> > > input, as x ^ x = 0 (whereas if we don't know what the operands are, the
> > > result is unknown as well).  Now, for plus it is much harder, because for
> > > non-zero operands not only do we not know what the result byte is, a
> > > carry can modify the upper byte as well.  So perhaps, when the current
> > > byte has masked1 && masked2 set, only mark the resulting byte 0xff
> > > (unknown) if the byte above it is 0 in both operands, and mark that
> > > upper byte 0xff too.
> > > Also, even for | we could, instead of returning NULL, just set the
> > > resulting byte to 0xff when the two bytes differ; perhaps it will be
> > > masked off later on.
> > > Ok to handle that incrementally?
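
To make the quoted per-byte rules concrete, here is a minimal standalone
model in plain C (the helper name merge_marker_byte is hypothetical; the
real logic lives in perform_symbolic_merge, where each marker byte of
n->n holds a 1-based source-byte index, 0 for a known-zero byte, or
0xff, i.e. MARKER_BYTE_UNKNOWN):

#include <assert.h>
#include <stdint.h>

#define MARKER_BYTE_UNKNOWN 0xff

enum merge_op { IOR, XOR, PLUS };

/* Merge two marker bytes; returns the merged marker, or -1 when the
   merge has to punt.  */
static int
merge_marker_byte (enum merge_op code, uint8_t m1, uint8_t m2)
{
  /* If at least one byte is 0, 0 | x == 0 ^ x == 0 + x == x.  */
  if (m1 == 0 || m2 == 0)
    return m1 | m2;
  /* + can carry into upper bytes, just punt.  */
  if (code == PLUS)
    return -1;
  if (m1 == m2)
    {
      /* x | x is still x.  */
      if (code == IOR)
	return m1;
      /* x ^ x is 0, unless x is itself unknown.  */
      if (code == XOR && m1 != MARKER_BYTE_UNKNOWN)
	return 0;
    }
  /* Conflicting or unknown bytes: unknown, maybe masked off later.  */
  return MARKER_BYTE_UNKNOWN;
}

int
main (void)
{
  assert (merge_marker_byte (XOR, 0x02, 0x02) == 0);	/* x ^ x  */
  assert (merge_marker_byte (XOR, 0xff, 0xff) == 0xff);	/* unknowns  */
  assert (merge_marker_byte (IOR, 0x01, 0x02) == 0xff);	/* was a punt  */
  assert (merge_marker_byte (PLUS, 0x01, 0x02) == -1);	/* still punts  */
  return 0;
}
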
> >
> > Not sure if it is worth the trouble - the XOR handling sounds
> > straightforward at least.  But sure, the merging routine could
> > simply be conservatively correct here.
>
> This patch implements that (except that for + it still punts, as
> before, whenever both operand bytes are non-zero).
>
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?
OK if you can add a testcase that exercises this "feature".
Thanks,
Richard.
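
One shape such a testcase could take (purely illustrative, not
necessarily what was committed for PR103376) is a ^ whose operands
share the provenance of their low bytes, so the new BIT_XOR_EXPR
handling can record those bytes as known zeros instead of punting:

typedef unsigned int u32;

/* Illustrative sketch, not necessarily the committed test.  Bytes 0
   and 1 of a and b have identical provenance, so for a ^ b the new
   BIT_XOR_EXPR handling records them as known-zero marker bytes;
   bytes 2 and 3 come from a alone.  */
u32
f (const unsigned char *p)
{
  u32 a = (u32) p[0] | ((u32) p[1] << 8)
	  | ((u32) p[2] << 16) | ((u32) p[3] << 24);
  u32 b = (u32) p[0] | ((u32) p[1] << 8);
  return a ^ b;
}

Whether this exact shape gets simplified further depends on the rest of
the pass; the point is only that perform_symbolic_merge no longer
returns NULL when it sees the ^.
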
> 2021-11-25 Jakub Jelinek <jakub@redhat.com>
>
> PR tree-optimization/103376
> * gimple-ssa-store-merging.c (perform_symbolic_merge): For
> BIT_IOR_EXPR, if masked1 && masked2 && masked1 != masked2, don't
> punt, but set the corresponding result byte to MARKER_BYTE_UNKNOWN.
> For BIT_XOR_EXPR likewise, and if masked1 == masked2 and the byte
> isn't MARKER_BYTE_UNKNOWN, set the corresponding result byte to 0.
>
> --- gcc/gimple-ssa-store-merging.c.jj 2021-11-24 09:54:37.684365460 +0100
> +++ gcc/gimple-ssa-store-merging.c 2021-11-24 11:18:54.422226266 +0100
> @@ -556,6 +556,7 @@ perform_symbolic_merge (gimple *source_s
> n->bytepos = n_start->bytepos;
> n->type = n_start->type;
> size = TYPE_PRECISION (n->type) / BITS_PER_UNIT;
> + uint64_t res_n = n1->n | n2->n;
>
> for (i = 0, mask = MARKER_MASK; i < size; i++, mask <<= BITS_PER_MARKER)
> {
> @@ -563,12 +564,33 @@ perform_symbolic_merge (gimple *source_s
>
> masked1 = n1->n & mask;
> masked2 = n2->n & mask;
> - /* For BIT_XOR_EXPR or PLUS_EXPR, at least one of masked1 and masked2
> - has to be 0, for BIT_IOR_EXPR x | x is still x. */
> - if (masked1 && masked2 && (code != BIT_IOR_EXPR || masked1 != masked2))
> - return NULL;
> + /* If at least one byte is 0, all of 0 | x == 0 ^ x == 0 + x == x. */
> + if (masked1 && masked2)
> + {
> + /* + can carry into upper bits, just punt. */
> + if (code == PLUS_EXPR)
> + return NULL;
> + /* x | x is still x. */
> + if (code == BIT_IOR_EXPR && masked1 == masked2)
> + continue;
> + if (code == BIT_XOR_EXPR)
> + {
> + /* x ^ x is 0, but MARKER_BYTE_UNKNOWN stands for
> + unknown values and unknown ^ unknown is unknown. */
> + if (masked1 == masked2
> + && masked1 != ((uint64_t) MARKER_BYTE_UNKNOWN
> + << i * BITS_PER_MARKER))
> + {
> + res_n &= ~mask;
> + continue;
> + }
> + }
> + /* Otherwise set the byte to unknown, it might still be
> + later masked off. */
> + res_n |= mask;
> + }
> }
> - n->n = n1->n | n2->n;
> + n->n = res_n;
> n->n_ops = n1->n_ops + n2->n_ops;
>
> return source_stmt;
>
>
> Jakub
>
>
--
Richard Biener <rguenther@suse.de>
SUSE Software Solutions Germany GmbH, Maxfeldstrasse 5, 90409 Nuernberg,
Germany; GF: Ivo Totev; HRB 36809 (AG Nuernberg)
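
For the BIT_IOR_EXPR side of the change, a self-contained model of the
patched loop (assuming the pass's constants: BITS_PER_MARKER is 8 and
MARKER_MASK covers one marker byte) shows a conflicting byte becoming
unknown instead of failing the whole merge:

#include <stdint.h>
#include <stdio.h>

#define BITS_PER_MARKER 8
#define MARKER_MASK 0xff

int
main (void)
{
  uint64_t n1 = 0x04030201;	/* 4-byte little-endian load: bytes 1..4  */
  uint64_t n2 = 0x00000002;	/* source byte 2 sitting in byte 0  */
  uint64_t res_n = n1 | n2;
  for (int i = 0; i < 4; i++)
    {
      uint64_t mask = (uint64_t) MARKER_MASK << i * BITS_PER_MARKER;
      uint64_t masked1 = n1 & mask;
      uint64_t masked2 = n2 & mask;
      /* BIT_IOR_EXPR: equal bytes stay (x | x is x); conflicting
	 bytes become unknown instead of punting.  */
      if (masked1 && masked2 && masked1 != masked2)
	res_n |= mask;
    }
  /* Prints 0x40302ff: byte 0 is now MARKER_BYTE_UNKNOWN.  */
  printf ("%#llx\n", (unsigned long long) res_n);
  return 0;
}

Under the old code this merge returned NULL outright; now only byte 0
is marked unknown, and a later mask may still salvage the remaining
bytes.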