This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


Re: [PATCH/RFC/PR28071]: New scheduler dependencies lists.


On 4/12/07, Maxim Kuvyrkov <mkuvyrkov@ispras.ru> wrote:
Jan Hubicka wrote:
> Hi,
> the memory tester jammed for last week, but looking at the results
> on the PR28071 -O3 compilation at:
> http://www.suse.de/~gcctest/memory/graphs/index.html
> it looks like, while this patch successfully addressed the compilation
> time issues, the overall memory consumption increased noticeably (100 MB,
> or 20%).

Hi,

I tested my patch on the PR28071 testcase on x86_64 and got the following
numbers:

Revisions of interest: r121493 (the one before my patch), r121494 (the
one after my patch).

configure: --enable-languages=c --enable-checking=release
--enable-gather-detailed-mem-stats --disable-bootstrap

maxmem.sh:
r121493: 701M
r121494: 440M

-fmem-report 'Total Allocated':
r121493: 565973016
r121494: 337328408

-fmem-report 'Total Overhead':
r121493: 91480975
r121494: 34319823

>
> I think it might be caused just by suboptimal allocation (as the lists
> with the extra pointer should not consume more memory just because the RTL
> containers needed an extra 64 bits too).
>
> Looking at the code:
>> +/* Allocate deps_list.
>> +
>> +   If ON_OBSTACK_P is true, allocate the list on the obstack.  This is done for
>> +   INSN_FORW_DEPS lists because they should live till the end of scheduling.
>> +
>> +   INSN_BACK_DEPS and INSN_RESOLVED_BACK_DEPS lists are allocated on the free
>> +   store and are being freed in haifa-sched.c: schedule_insn ().  */
>> +static deps_list_t
>> +alloc_deps_list (bool on_obstack_p)
>> +{
>> +  if (on_obstack_p)
>> +    return obstack_alloc (dl_obstack, sizeof (struct _deps_list));
>> +  else
>> +    return xmalloc (sizeof (struct _deps_list));
>> +}
>
> This seems to be an obvious candidate for allocpools instead of
> this obstack/xmalloc hybrid scheme. ...

I'm now testing a patch that, among other things, changes the obstacks into
alloc_pools, but on this testcase it gives exactly the same results as the
current obstack implementation.
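
For reference, a minimal sketch of what the pool-based variant might look
like, assuming the alloc-pool.h API (create_alloc_pool / pool_alloc /
pool_free / free_alloc_pool); the pool name, the elements-per-block figure
and the dl_pool variable are illustrative only, not taken from the actual
patch:

/* Sketch: a single pool holds every deps_list node, so the short-lived
   INSN_BACK_DEPS lists and the long-lived INSN_FORW_DEPS lists share one
   allocator instead of the obstack/xmalloc split.  */
#include "alloc-pool.h"

static alloc_pool dl_pool;

static void
init_deps_lists (void)
{
  /* 512 elements per block is an arbitrary figure for the example.  */
  dl_pool = create_alloc_pool ("deps_list", sizeof (struct _deps_list), 512);
}

static deps_list_t
alloc_deps_list (void)
{
  return (deps_list_t) pool_alloc (dl_pool);
}

static void
free_deps_list (deps_list_t l)
{
  pool_free (dl_pool, l);
}

static void
finish_deps_lists (void)
{
  free_alloc_pool (dl_pool);
}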

What does the pool report as the number of outstanding bytes once all the dep lists have been allocated?

(If you just run it under gdb and print the pool structure, it's in there.)
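
Something along these lines should show it, assuming the pool variable in
your patch ends up being called dl_pool (an illustrative name) and using the
per-pool counters alloc-pool.h keeps (elts_allocated, elts_free, elt_size);
breaking at schedule_block is just one convenient point where the
dependencies for the region have already been computed:

(gdb) break schedule_block
(gdb) run
(gdb) print *dl_pool
(gdb) print (dl_pool->elts_allocated - dl_pool->elts_free) * dl_pool->elt_size

The last expression gives the outstanding bytes: live elements times element
size.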

