This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [PATCH/RFC/PR28071]: New scheduler dependencies lists.
- From: Maxim Kuvyrkov <mkuvyrkov at ispras dot ru>
- To: Jan Hubicka <hubicka at ucw dot cz>
- Cc: Vladimir Makarov <vmakarov at redhat dot com>, James E Wilson <wilson at specifix dot com>, Ayal Zaks <zaks at il dot ibm dot com>, Andrey Belevantsev <abel at ispras dot ru>, Revital1 Eres <ERES at il dot ibm dot com>, gcc-patches at gcc dot gnu dot org
- Date: Thu, 12 Apr 2007 19:10:29 +0400
- Subject: Re: [PATCH/RFC/PR28071]: New scheduler dependencies lists.
- References: <45B5BC60.8000809@ispras.ru> <20070204214957.GE9870@atrey.karlin.mff.cuni.cz>
Jan Hubicka wrote:
> Hi,
> the memory tester was jammed for the last week, but looking at the results
> of the PR28071 -O3 compilation at:
> http://www.suse.de/~gcctest/memory/graphs/index.html
> it looks like while this patch successfully addressed the compilation
> time issues, the overall memory consumption increased noticeably (100MB,
> or 20%).
Hi,
I tested my patch on the PR28071 testcase on x86_64 and got the following
numbers:
Revisions of interest: r121493 (the one before my patch), r121494 (the
one after my patch).
configure: --enable-languages=c --enable-checking=release
--enable-gather-detailed-mem-stats --disable-bootstrap
maxmem.sh:
r121493: 701M
r121494: 440M
-fmem-report 'Total Allocated':
r121493: 565973016
r121494: 337328408
-fmem-report 'Total Overhead':
r121493: 91480975
r121494: 34319823
> I think it might be caused just by suboptimal allocation (as the lists
> with the extra pointer should not consume more memory just because the
> RTL containers needed an extra 64 bits too).
> Looking at the code:
> +/* Allocate deps_list.
> +
> +   If ON_OBSTACK_P is true, allocate the list on the obstack.  This is done for
> +   INSN_FORW_DEPS lists because they should live till the end of scheduling.
> +
> +   INSN_BACK_DEPS and INSN_RESOLVED_BACK_DEPS lists are allocated on the free
> +   store and are being freed in haifa-sched.c: schedule_insn ().  */
> +static deps_list_t
> +alloc_deps_list (bool on_obstack_p)
> +{
> +  if (on_obstack_p)
> +    return obstack_alloc (dl_obstack, sizeof (struct _deps_list));
> +  else
> +    return xmalloc (sizeof (struct _deps_list));
> +}
> This seems to be an obvious candidate for allocpools instead of
> this obstack/xmalloc hybrid scheme. ...
I'm now testing a patch that, among other things, changes the obstacks into
alloc_pools, but on this testcase it gives exactly the same results as the
current obstack implementation.  So that is not the problem.
Thanks,
Maxim