This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [PATCH][6/n] tree LIM TLC


On Tue, Mar 12, 2013 at 4:33 PM, Richard Biener wrote:
> On Tue, 12 Mar 2013, Steven Bosscher wrote:

>> I suppose this renders my LIM patch obsolete.
>
> Not really - it's still
>
>  tree loop invariant motion: 588.31 (78%) usr
>
> so limiting the O(n^2) dependence testing is a good thing.  But I
> can take it over from here and implement that on top of my patches
> if you like.

That'd be good; let's keep it in one hand, as one patch set.


>> Did you also look at the memory footprint?
>
> Yeah, unfortunately processing outermost loops separately doesn't
> reduce peak memory consumption.  I'll look into getting rid of the
> all-refs bitmaps, but I'm not there yet.

A few more ideas (though probably not with as much impact):

Is it possible to use a bitmap_head for the (now merged)
dep_loop/indep_loop, instead of bitmap? Likewise for a few other
bitmaps, especially the vectors of bitmaps.
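Since bitmap is just a pointer to a bitmap_head, embedding the head in
the struct saves a pointer plus a separate allocation per ref.  Untested
sketch, with field names from memory (a pass-local bitmap obstack would
work just as well as the default one):

  struct mem_ref
  {
    /* ... other fields unchanged ...  */
    bitmap_head dep_loop;	/* was: bitmap dep_loop; */
  };

  /* BITMAP_ALLOC becomes an in-place initialization ...  */
  bitmap_initialize (&ref->dep_loop, &bitmap_default_obstack);

  /* ... and users pass the address of the head where they used to
     pass the pointer.  */
  bitmap_set_bit (&ref->dep_loop, i);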

Put "struct depend" in an alloc pool. (Also allows one to wipe them
all out in free_lim_aux_data.)
Likewise for "struct mem_ref".
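With the current alloc-pool.h API that would look something like this
(untested; the pool name and block size are just placeholders):

  static alloc_pool depend_pool;

  /* At pass setup.  */
  depend_pool = create_alloc_pool ("lim depend", sizeof (struct depend), 64);

  /* Replacing the individual allocations.  */
  struct depend *dep = (struct depend *) pool_alloc (depend_pool);

  /* One call at the end of the pass then releases every struct depend
     at once, instead of walking and freeing the per-statement lists.  */
  free_alloc_pool (depend_pool);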

Use a shared mem_ref for the error_mark_node case (and hoist the
MEM_ANALYZABLE checks in refs_independent_p above the bitmap tests).
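I.e. something like this at the top of refs_independent_p (untested,
and the bitmap field names are quoted from memory, so they may not
match your reworked version):

  if (ref1 == ref2)
    return true;

  /* Unanalyzable refs depend on everything, so answer that before the
     bitmap lookups; then nothing ever needs to be recorded for them,
     and with a single shared mem_ref for the error_mark_node case its
     dep/indep bitmaps can go away entirely.  */
  if (!MEM_ANALYZABLE (ref1) || !MEM_ANALYZABLE (ref2))
    return false;

  if (bitmap_bit_p (ref1->indep_ref, ref2->id))
    return true;
  if (bitmap_bit_p (ref1->dep_ref, ref2->id))
    return false;

  /* ... the real dependence test and result caching, unchanged ...  */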

Use nameless temps instead of lsm_tmp_name_add.
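If the name only matters for dump readability, a NULL prefix should do,
e.g. (untested; "type" stands for the type of the stored value, however
you get at it after your rework):

  /* Let the middle-end number the temporary instead of building a
     name string via lsm_tmp_name_add.  */
  tmp_var = create_tmp_reg (type, NULL);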


> Currently the testcase peaks at 1.7GB for me (after LIM, then
> it gets worse with DSE and IRA).  And I only tested -O1 so far.

Try my DSE patch (corrected version attached).

What are you using now to measure per-pass memory usage? I'm still
using my old hack (also attached), but it's not quite optimal.

Ciao!
Steven

Attachment: PR39326_RTLDSE.diff
Description: Binary data

Attachment: passes_memstat.diff
Description: Binary data

