Re: Bootstrap failure due to GC / PCH memory corruption


Zack Weinberg <zack@codesourcery.com> writes:

> Ulrich Weigand <weigand@i1.informatik.uni-erlangen.de> writes:
> 
> > I'm not sure how to best fix this problem: either by creating
> > multiple ptes per order in ggc_pch_read so that the assumption
> > of one multi-page object per pte remains true, or else by
> > adapting compute_inverse to work in all cases without relying
> > on that assumption.  (The 'easy fix' of just removing the if
> > doesn't work because the search for the inverse won't terminate
> > for large object sizes.  I haven't investigated in detail why
> > this is so.)
> 
> My suggestion is that you change ggc_pch_read to create multiple ptes
> per order; in fact, that you create precisely the same ptes that
> ggc_alloc would have if it had allocated everything from scratch.
> What it's doing right now is going to cause other problems with some
> things I have in mind for faster allocation and marking.
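
For illustration, a minimal sketch of that idea, with made-up structure
and field names rather than ggc-page.c's real ones; it assumes objects of
a large order sit page-aligned and back to back in the PCH image, and it
builds one entry per object, each spanning exactly the pages that object
occupies, just as a fresh allocation would have:

    #include <stddef.h>
    #include <stdlib.h>

    /* All names below are hypothetical, for illustration only.  */
    struct pch_pte
    {
      char *page;                /* first page covered by this entry */
      size_t npages;             /* pages spanned by the one object  */
      unsigned order;            /* size class of the object         */
      struct pch_pte *next;
    };

    /* Build one entry per object of a large order, mirroring what a
       fresh allocation of each object would have created.  */
    static struct pch_pte *
    make_ptes_for_order (char *base, size_t nobjects, size_t object_size,
                         size_t page_size, unsigned order)
    {
      size_t pages_per_object = (object_size + page_size - 1) / page_size;
      struct pch_pte *head = NULL;

      for (size_t i = 0; i < nobjects; i++)
        {
          struct pch_pte *pte = calloc (1, sizeof *pte);
          if (!pte)
            abort ();
          pte->page = base + i * pages_per_object * page_size;
          pte->npages = pages_per_object;
          pte->order = order;
          pte->next = head;
          head = pte;
        }
      return head;
    }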

This will affect compiler speed significantly.  I recommend you fix
compute_inverse instead.
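
For reference, the pair compute_inverse is after satisfies
offset / size == (offset >> shift) * mult (mod 2^32) whenever offset is
an exact multiple of size.  A minimal sketch of one approach that always
terminates (not the actual ggc-page.c code; 32-bit arithmetic assumed for
concreteness): strip the powers of two from the size, then invert the odd
remainder by Newton iteration, which doubles the number of correct low
bits per step instead of searching linearly.

    #include <stdint.h>

    static void
    invert_size (uint32_t size, uint32_t *mult, unsigned *shift)
    {
      unsigned e = 0;
      while ((size & 1) == 0)   /* factor out powers of two */
        {
          e++;
          size >>= 1;
        }

      /* Newton iteration mod 2^32: the seed is correct to 3 bits
         (n*n == 1 mod 8 for any odd n) and each step doubles the
         number of correct bits, so at most four steps are needed.  */
      uint32_t inv = size;
      while (inv * size != 1)
        inv *= 2 - inv * size;

      *mult = inv;
      *shift = e;
    }

With such a pair, turning a byte offset into an object index needs no
division at all: index = (offset >> shift) * mult.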

> I also suggest that you find out what these objects are that are
> larger than a page.  To first order there shouldn't be any.  The only
> thing I've ever seen allocate more than a page at once from GC arena
> was a giant dense switch statement.

There are many such objects; the identifier hash table, for instance, is
at least 200k.

-- 
- Geoffrey Keating <geoffk@geoffk.org>

