This is the mail archive of the mailing list for the GCC project.


Re: Sharing stack slots

(Richard Kenner) writes:

> We have a lot of code to do it, but I'm wondering if we really should.
> There's a kludge in expr.c which says:
>       /* If the address of the structure varies, then it might be on
>          the stack.  And, stack slots may be shared across scopes.
>          So, two different structures, of different types, can end up
>          at the same location.  We will give the structures alias set
>          zero; here we must be careful not to give non-zero alias sets
>          to their fields.  */
> That's a real efficiency hit.
> Moreover, one of the ideas behind the MEM tracking I'm doing is to record
> which decl a MEM is for, with the idea that if two MEMs are in different
> decls, they can't alias one another.  But, of course, they can if they
> are in different scopes and overlap each other.  But this is
> an important optimization, and I'm told particularly important on IA64.
> On the other hand, overlapping large objects that are in blocks of
> different scope are also quite important, especially in code
> that's generated by some program (and somewhat in Ada expanded code).
> We've been playing around with this compromise for a while, but I think we
> need to deal with it in a less ad-hoc manner now.
> Any thoughts?


- We can't stop sharing stack slots.  That would cause an explosion in
  the amount of stack space used.
- If we knew which decls mapped to which stack slots, then we could
  still perform the optimisation, because it would be possible to
  determine which decls correspond to overlapping MEMs.
- With that information, it might also be possible to improve on the
  alias set given to a structure on the stack...

- Geoffrey Keating <>
