[Bug middle-end/93582] [10 Regression] -Warray-bounds gives error: array subscript 0 is outside array bounds of struct E[1]

cvs-commit at gcc dot gnu.org gcc-bugzilla@gcc.gnu.org
Tue Mar 3 10:26:00 GMT 2020


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93582

--- Comment #38 from CVS Commits <cvs-commit at gcc dot gnu.org> ---
The master branch has been updated by Jakub Jelinek <jakub@gcc.gnu.org>:

https://gcc.gnu.org/g:b07e4e7c7520ca3e798f514dec0711eea2c027be

commit r10-6994-gb07e4e7c7520ca3e798f514dec0711eea2c027be
Author: Jakub Jelinek <jakub@redhat.com>
Date:   Tue Mar 3 11:24:33 2020 +0100

    sccvn: Improve handling of load masked with integer constant [PR93582]

    As mentioned in the PR and discussed on IRC, the following patch fixes
    the originally reported issue.
    Because of the premature bitfield comparison -> BIT_FIELD_REF
    optimization, we have there:
      s$s4_19 = 0;
      s.s4 = s$s4_19;
      _10 = BIT_FIELD_REF <s, 8, 0>;
      _13 = _10 & 8;
    and no other fields of s are initialized.  If they were all initialized
    with constants, then my earlier PR93582 bitfield handling patches would
    handle it already, but if at least one bit we ignore after the
    BIT_AND_EXPR masking is not initialized or is initialized earlier to a
    non-constant, we aren't able to look through it until combine, which is
    too late for the warnings on the dead code.
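The situation above can be sketched in C.  This is a hypothetical minimal example in the spirit of the report, not the actual pr93582 testcase; the field names and widths are assumptions chosen so that s4 occupies bit 3 of the byte on little-endian, matching the `& 8` mask shown:

```c
#include <assert.h>

/* Hypothetical sketch (not the actual pr93582 testcase): a struct of
   bitfields where only s4 is stored to.  The s.s4 test below is turned
   early into a byte-wide BIT_FIELD_REF load masked with 8 (the bit s4
   occupies on little-endian), so value numbering has to see through
   the mask to fold the dead branch away.  */
struct S
{
  unsigned char s1 : 3;
  unsigned char s4 : 1;
  unsigned char s5 : 4;
};

int
foo (void)
{
  struct S s;
  s.s4 = 0;            /* the only field that is initialized */
  if (s.s4)            /* becomes (BIT_FIELD_REF <s, 8, 0> & 8) != 0 */
    return 1;          /* dead code a bogus warning could fire on */
  return 0;
}
```

Without looking through the mask, the load of the containing byte depends on the uninitialized bits of s, even though the masked result is fully determined by the single store.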
    This patch handles BIT_AND_EXPR where the first operand is an SSA_NAME
    initialized with a memory load and the second operand is an INTEGER_CST,
    by trying a partial def lookup after pushing the ranges of 0 bits in the
    mask as artificial initializers.  In the above case on little-endian, we
    push an offset 0 size 3 {} partial def and an offset 4 size 4 one (the
    result is unsigned char) and then perform normal partial def handling.
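The decomposition of the mask into zero-bit ranges can be sketched as follows.  This is an illustrative helper, not GCC's implementation; the function name and interface are made up for the example:

```c
#include <assert.h>

/* Illustrative sketch (not GCC's code): enumerate the runs of zero
   bits in an 8-bit mask as (offset, size) pairs counted from the
   least significant bit, mirroring the artificial {} partial defs
   the patch pushes.  Returns the number of ranges found.  */
static int
zero_ranges (unsigned char mask, unsigned offs[8], unsigned sizes[8])
{
  int n = 0;
  unsigned bit = 0;
  while (bit < 8)
    {
      if (!((mask >> bit) & 1))
        {
          unsigned start = bit;
          while (bit < 8 && !((mask >> bit) & 1))
            bit++;
          offs[n] = start;
          sizes[n] = bit - start;
          n++;
        }
      else
        bit++;
    }
  return n;
}
```

For mask 8 (binary 00001000) this yields exactly the two ranges from the text: offset 0 size 3 and offset 4 size 4.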
    My initial version of the patch failed miserably during bootstrap,
    because data->finish (...) called vn_reference_lookup_or_insert_for_pieces,
    which I believe tried to remember the masked value rather than the real
    one for the reference, or, for a failed lookup, visit_reference_op_load
    called vn_reference_insert.  The following version makes sure we call
    neither of those functions in the masked case, as we don't know anything
    better about the reference than whatever was discovered when the load
    stmt was visited; on failure the patch just calls vn_nary_op_insert_stmt
    with the lhs (apparently calling it with the INTEGER_CST doesn't work).

    2020-03-03  Jakub Jelinek  <jakub@redhat.com>

        PR tree-optimization/93582
        * tree-ssa-sccvn.h (vn_reference_lookup): Add mask argument.
        * tree-ssa-sccvn.c (struct vn_walk_cb_data): Add mask and masked_result
        members, initialize them in the constructor and if mask is non-NULL,
        artificially push_partial_def {} for the portions of the mask that
        contain zeros.
        (vn_walk_cb_data::finish): If mask is non-NULL, set masked_result to
        val and return (void *)-1.  Formatting fix.
        (vn_reference_lookup_pieces): Adjust vn_walk_cb_data initialization.
        Formatting fix.
        (vn_reference_lookup): Add mask argument.  If non-NULL, don't call
        fully_constant_vn_reference_p nor vn_reference_lookup_1 and return
        data.masked_result.
        (visit_nary_op): Handle BIT_AND_EXPR of a memory load and INTEGER_CST
        mask.
        (visit_stmt): Formatting fix.

        * gcc.dg/tree-ssa/pr93582-10.c: New test.
        * gcc.dg/pr93582.c: New test.
        * gcc.c-torture/execute/pr93582.c: New test.
