[PATCH/RFC/PR28071]: New scheduler dependencies lists.

Daniel Berlin dberlin@dberlin.org
Thu Apr 12 16:40:00 GMT 2007


On 4/12/07, Maxim Kuvyrkov <mkuvyrkov@ispras.ru> wrote:
> Daniel Berlin wrote:
> > On 4/12/07, Maxim Kuvyrkov <mkuvyrkov@ispras.ru> wrote:
> >> Jan Hubicka wrote:
> >> > Hi,
> >> > the memory tester was jammed for the last week, but looking at the results
> >> > for the PR28071 -O3 compilation at:
> >> > http://www.suse.de/~gcctest/memory/graphs/index.html
> >> > it looks like while this patch successfully addressed the compilation-time
> >> > issues, the overall memory consumption increased noticeably (100MB, or 20%).
> >>
> >> Hi,
> >>
> >> I tested my patch on the PR28071 testcase on x86_64 and got the following
> >> numbers:
> >>
> >> Revisions of interest: r121493 (the one before my patch), r121494 (the
> >> one after my patch).
> >>
> >> configure: --enable-languages=c --enable-checking=release
> >> --enable-gather-detailed-mem-stats --disable-bootstrap
> >>
> >> maxmem.sh:
> >> r121493: 701M
> >> r121494: 440M
> >>
> >> -fmem-report 'Total Allocated':
> >> r121493: 565973016
> >> r121494: 337328408
> >>
> >> -fmem-report 'Total Overhead':
> >> r121493: 91480975
> >> r121494: 34319823
> >>
> >> >
> >> > I think it might be caused just by suboptimal allocation (as the lists
> >> > with an extra pointer should not consume more memory just because the RTL
> >> > containers did need an extra 64 bits too).
> >> >
> >> > Looking at the code:
> >> >> +/* Allocate deps_list.
> >> >> +
> >> >> +   If ON_OBSTACK_P is true, allocate the list on the obstack.  This is
> >> >> +   done for INSN_FORW_DEPS lists because they should live till the end
> >> >> +   of scheduling.
> >> >> +
> >> >> +   INSN_BACK_DEPS and INSN_RESOLVED_BACK_DEPS lists are allocated on
> >> >> +   the free store and are being freed in haifa-sched.c: schedule_insn ().  */
> >> >> +static deps_list_t
> >> >> +alloc_deps_list (bool on_obstack_p)
> >> >> +{
> >> >> +  if (on_obstack_p)
> >> >> +    return obstack_alloc (dl_obstack, sizeof (struct _deps_list));
> >> >> +  else
> >> >> +    return xmalloc (sizeof (struct _deps_list));
> >> >> +}
> >> >
> >> > This seems to be an obvious candidate for alloc pools instead of
> >> > this obstack/xmalloc hybrid scheme. ...
> >>
> >> I'm now testing a patch that, among other things, changes the obstacks into
> >> alloc_pools, but on this testcase it gives exactly the same results as the
> >> current obstack implementation.
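
For reference, a minimal sketch of what an alloc_pool-based version of
alloc_deps_list could look like, using the create_alloc_pool/pool_alloc/
pool_free interface from gcc/alloc-pool.h.  The pool variable name, the
init helper and the 512 elements-per-block figure below are illustrative
guesses, not the actual patch:

  #include "alloc-pool.h"

  /* One pool serving all deps_list headers; freed elements are recycled
     through the pool's free list instead of going back to malloc.  */
  static alloc_pool dl_pool;

  static void
  init_deps_list_pool (void)
  {
    dl_pool = create_alloc_pool ("deps_list",
                                 sizeof (struct _deps_list), 512);
  }

  static deps_list_t
  alloc_deps_list (void)
  {
    return pool_alloc (dl_pool);
  }

  static void
  free_deps_list (deps_list_t l)
  {
    pool_free (dl_pool, l);
  }

With a single fixed-size-element pool like this there would also be no
need for the ON_OBSTACK_P flag in the sketch above.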
> >
> > What does the pool say is the number of outstanding bytes when all the
> > dep lists are allocated?
> >
> > (if you just gdb it and print the pool structure, it's in there)
>
> {name = 0xb1f7bb "dep_link", elts_per_block = 554, free_list =
> 0x1d4439a8, elts_allocated = 382260, elts_free = 415, blocks_allocated =
> 690, block_list = 0x1d4412d0, block_size = 13304, elt_size = 24}
>
> Size1 = 690 * 13304 = 9179760
>
> {name = 0xb1f7c4 "dep_node", elts_per_block = 2770, free_list =
> 0x1de0d998, elts_allocated = 3492970, elts_free = 537, blocks_allocated
> = 1261, block_list = 0x1de042d0, block_size = 199448, elt_size = 72}
>
> Size2 = 1261 * 199448 = 251503928
>
> Size = Size1 + Size2 = ~260M
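
A side note on the arithmetic: blocks_allocated * block_size is the total
memory a pool is holding, while the "outstanding" figure asked about above
would be (elts_allocated - elts_free) * elt_size, i.e. the bytes sitting in
live elements.  Here is a small sketch of computing both from the fields gdb
printed, assuming the usual meaning of those fields in gcc/alloc-pool.h;
report_pool is a made-up helper, not part of GCC:

  #include <stdio.h>

  static void
  report_pool (const char *name, size_t blocks_allocated, size_t block_size,
               size_t elts_allocated, size_t elts_free, size_t elt_size)
  {
    size_t total = blocks_allocated * block_size;          /* held by the pool */
    size_t live = (elts_allocated - elts_free) * elt_size; /* in live elements */
    printf ("%s: %zu bytes held, %zu bytes live\n", name, total, live);
  }

  /* report_pool ("dep_link", 690, 13304, 382260, 415, 24);
       -> 9179760 held, 9164280 live
     report_pool ("dep_node", 1261, 199448, 3492970, 537, 72);
       -> 251503928 held, 251455176 live  */

With only a few hundred elements free in each pool, the two figures are
nearly identical here.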
>
> But why are these numbers important?  maxmem.sh still returns a total of
> 440M.

Because it's good to know how big *your* structures are?

PS 260 meg is a *lot* for a bunch of dependence structures.  You have
3.5 million of them at 72 bytes each :(


