This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.



[Bug middle-end/82365] stack locations are consolidated if noreturn function is on the path


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82365

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org,
                   |                            |matz at gcc dot gnu.org,
                   |                            |rguenth at gcc dot gnu.org

--- Comment #4 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
(In reply to Andrew Pinski from comment #2)
> Basically someone needs to look into add_scope_conflicts (in gimplecfg.c) to
> see the scope conflict is being added.

??  Implying clobber stmts for all variables after noreturn calls wouldn't
change anything.  The problem is that all the variables are addressable/escape
and all the fortify_panic () calls are threaded.  So, on the coalesced
fortify_panic () call the stack conflict code thinks all 4 of the escaped info
variables can be live during that call.
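
For reference, a minimal sketch of the kind of code involved (an assumed
shape, not the PR's actual testcase): each scope has its own buffer and its
own guard reaching the noreturn fortify_panic () call, and once those calls
are threaded/cross-jumped into one shared call site, the conflict code sees
all four buffers live there, so none of the stack slots can be shared:

extern void fortify_panic (const char *) __attribute__ ((__noreturn__));
extern unsigned long fill (char *, unsigned long);

void
f (void)
{
  {
    char a[128];
    if (fill (a, sizeof a) > sizeof a)
      fortify_panic (__func__);
  }
  {
    char b[128];
    if (fill (b, sizeof b) > sizeof b)
      fortify_panic (__func__);
  }
  {
    char c[128];
    if (fill (c, sizeof c) > sizeof c)
      fortify_panic (__func__);
  }
  {
    char d[128];
    if (fill (d, sizeof d) > sizeof d)
      fortify_panic (__func__);
  }
}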

The current add_scope_conflicts is a fairly simple algorithm.  First we walk
the basic blocks: whenever we see any kind of reference to a stack variable
under consideration, we set its bit in a bitmap, and whenever we see a clobber
stmt for such a variable, we clear it; at the start of a bb we OR into the
bitmap the bitmaps from all the predecessor bbs and remember the bitmap at the
end of the bb, and we keep propagating until no further changes happen.
Finally we walk all bbs again: on the first non-debug non-clobber stmt we add
conflicts for all currently live vars at that point, and then for any stmt we
add conflicts for the vars directly referenced in that stmt.
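
As a toy model of the first, propagation part of that (plain C with word-sized
bitmasks, only to illustrate the dataflow; the real pass processes refs and
clobbers in statement order within each bb and uses GCC bitmaps):

#include <stdbool.h>

struct bb
{
  unsigned npred;
  unsigned pred[4];   /* indices of predecessor bbs */
  unsigned refs;      /* stack vars referenced in this bb */
  unsigned clobbers;  /* stack vars clobbered in this bb */
  unsigned live_out;  /* result: vars live at the end of this bb */
};

/* OR the predecessors' live-out sets into each bb's working set, add the
   vars the bb references, drop the ones it clobbers, and iterate until
   nothing changes any more.  */
static void
propagate (struct bb *bbs, unsigned n)
{
  bool changed = true;
  while (changed)
    {
      changed = false;
      for (unsigned i = 0; i < n; i++)
        {
          unsigned work = 0;
          for (unsigned j = 0; j < bbs[i].npred; j++)
            work |= bbs[bbs[i].pred[j]].live_out;
          work = (work | bbs[i].refs) & ~bbs[i].clobbers;
          if (work != bbs[i].live_out)
            {
              bbs[i].live_out = work;
              changed = true;
            }
        }
    }
}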

Now, for this testcase to be optimized, one step would be to be more
selective about which stmts we do this on:
          if (for_conflict
              && visit == visit_op)
            {
              /* If this is the first real instruction in this BB we need
                 to add conflicts for everything live at this point now.
                 Unlike classical liveness for named objects we can't
                 rely on seeing a def/use of the names we're interested in.
                 There might merely be indirect loads/stores.  We'd not add any
                 conflicts for such partitions.  */
              bitmap_iterator bi;
              unsigned i;
              EXECUTE_IF_SET_IN_BITMAP (work, 0, i, bi)
                {
                  struct stack_var *a = &stack_vars[i];
                  if (!a->conflicts)
                    a->conflicts = BITMAP_ALLOC (&stack_var_bitmap_obstack);
                  bitmap_ior_into (a->conflicts, work);
                }
              visit = visit_conflict;
            }
(well, obviously for for_conflict we'd need to do visit = visit_conflict; from
the first stmt and use some bool to track the first "indirect" stmt instead).
I think it needs to be done solely on statements that could have any indirect
loads/stores (so calls, loads, stores, asm stmts, anything else?), but not,
say, on arithmetic with only SSA_NAME operands etc.; there we care only about
explicitly referenced vars.
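
Something along these lines could work as the test (a sketch only; the
function name and placement are made up here, not existing GCC code, and it
conservatively treats any stmt with a virtual operand as possibly indirect):

/* Return true if STMT could load from or store to memory it does not
   reference explicitly, i.e. through a pointer.  */
static bool
stmt_may_access_indirectly (gimple *stmt)
{
  if (is_gimple_debug (stmt) || gimple_clobber_p (stmt))
    return false;
  /* Calls and asms can touch anything whose address escaped.  */
  if (is_gimple_call (stmt) || gimple_code (stmt) == GIMPLE_ASM)
    return true;
  /* Anything else with a virtual operand reads or writes memory; whether
     the access is really indirect would still need a look at the operands,
     so this over-approximates.  */
  return gimple_vuse (stmt) != NULL_TREE || gimple_vdef (stmt) != NULL_TREE;
}

The EXECUTE_IF_SET_IN_BITMAP loop above would then run on every stmt for which
such a predicate is true, rather than only on the first non-debug stmt of the
bb.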

This wouldn't help here because fortify_panic is a call.

Another step could be to track which of the live variables have their address
taken in another set of bitmaps (don't track just whether a var is addressable,
that is the TREE_ADDRESSABLE bit after all, but whether its address has been
taken at this point).  For statements where the var has been seen live already
but its address has not yet been taken, we wouldn't need to record conflicts on
possibly indirect stmts; for those, only explicit references would imply a
conflict (in particular, indirect stmts would only imply conflicts between the
addressable work set and the addressable work set).  I bet a clobber should
still clear both bitmaps: even if the address is live across a clobber, any
dereference through it would be invalid.  The last argument of
walk_stmt_load_store_addr_ops is the callback for operands that had their
address taken, so I think we could use that.  We'd need to take care of either
handling all the possibly indirect stmts the above way, or tracking separately
which vars were marked addressable since the last possibly indirect stmt.
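
As a sketch of that second idea (again made-up names, not existing code: it
mirrors the current visit_op callback in cfgexpand.c, but is meant to be
passed as the visit_addr argument of walk_stmt_load_store_addr_ops and to
record the var in a second, address-taken bitmap):

struct scope_conflict_data
{
  bitmap live;        /* vars seen live since their last clobber */
  bitmap addr_taken;  /* subset of those whose address has been taken */
};

static bool
visit_addr_taken (gimple *, tree op, tree, void *data)
{
  struct scope_conflict_data *d = (struct scope_conflict_data *) data;
  op = get_base_address (op);
  if (op && DECL_P (op) && DECL_RTL_IF_SET (op) == pc_rtx)
    {
      size_t *v = decl_to_stack_part->get (op);
      if (v)
        {
          bitmap_set_bit (d->live, *v);
          bitmap_set_bit (d->addr_taken, *v);
        }
    }
  return false;
}

The load/store callbacks would keep updating only d->live, and on possibly
indirect stmts conflicts would be recorded just among the bits set in
d->addr_taken.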

Richard/Micha, thoughts on that?
