Fix two more memory leaks in threader
Marek Polacek
polacek@redhat.com
Mon Jul 20 14:38:00 GMT 2015
On Mon, Jul 20, 2015 at 03:19:06PM +0100, James Greenhalgh wrote:
> On Wed, May 20, 2015 at 05:36:25PM +0100, Jeff Law wrote:
> >
> > These fix the remaining leaks in the threader that I'm aware of. We
> > failed to properly clean up when we had to cancel certain jump threading
> > opportunities, so thankfully this wasn't a big leak.
>
> Hi Jeff,
>
> I don't have a reduced testcase to go on, but by inspection this patch
> looks wrong to me...
>
> > diff --git a/gcc/tree-ssa-threadupdate.c b/gcc/tree-ssa-threadupdate.c
> > index c5b78a4..4bccad0 100644
> > --- a/gcc/tree-ssa-threadupdate.c
> > +++ b/gcc/tree-ssa-threadupdate.c
> > @@ -2159,9 +2159,16 @@ mark_threaded_blocks (bitmap threaded_blocks)
> >        {
> >          /* Attach the path to the starting edge if none is yet recorded.  */
> >          if ((*path)[0]->e->aux == NULL)
> > -          (*path)[0]->e->aux = path;
> > -        else if (dump_file && (dump_flags & TDF_DETAILS))
> > -          dump_jump_thread_path (dump_file, *path, false);
> > +          {
> > +            (*path)[0]->e->aux = path;
> > +          }
> > +        else
> > +          {
> > +            paths.unordered_remove (i);
>
> Here we are part-way through iterating over PATHS. With unordered_remove
> we are going to move the last element of the vector into position 'i'.
> We'll then move on to look at 'i + 1' and so never look at the element we
> just moved.
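To make the failure mode concrete, here's a minimal, self-contained sketch.
It is not GCC code: std::vector and the odd-value test stand in for the
paths vector and the "needs cancelling" condition, and the helper only
mimics what vec::unordered_remove does (swap the last element into the
vacated slot):

  #include <cstdio>
  #include <vector>

  /* Stand-in for vec::unordered_remove: overwrite slot IX with the last
     element and shrink the vector.  O(1), but it reorders the tail.  */
  static void
  unordered_remove (std::vector<int> &v, size_t ix)
  {
    v[ix] = v.back ();
    v.pop_back ();
  }

  int
  main ()
  {
    /* Pretend every odd entry is a jump-thread path we want to cancel.  */
    std::vector<int> paths = { 0, 1, 2, 3, 5 };

    for (size_t i = 0; i < paths.size (); i++)
      if (paths[i] & 1)
        unordered_remove (paths, i);  /* The old tail lands in slot I...  */

    /* ...but the loop then advances to I + 1, so the moved element is
       never examined again.  This prints "0 5 2": the odd 5 escaped
       cancellation, exactly the skipped-element effect described above.  */
    for (int p : paths)
      printf ("%d ", p);
    printf ("\n");
  }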
>
> This manifests as a lower number of cancelled jump-threads, and in
> turn some extra jumps threaded - some of which may no longer be safe.
>
> For a particular workload we've talked about before in relation to
> jump-threading, dom1 ends up cancelling and threading these edges with
> your patch applied:
>
> Cancelling jump thread: (28, 32) incoming edge; (32, 36) joiner; (36, 61) normal;
> Cancelling jump thread: (31, 7) incoming edge; (7, 92) joiner; (92, 8) normal;
> Cancelling jump thread: (63, 39) incoming edge; (39, 89) joiner; (89, 40) normal;
> Cancelling jump thread: (67, 68) incoming edge; (68, 69) joiner; (69, 93) normal;
> Cancelling jump thread: (4, 32) incoming edge; (32, 36) joiner; (36, 64) normal;
> Threaded jump 30 --> 28 to 299
> Threaded jump 91 --> 28 to 300
> Threaded jump 35 --> 36 to 302
> Threaded jump 88 --> 60 to 305
> Threaded jump 62 --> 60 to 306
> Threaded jump 32 --> 36 to 304
>
> Reverting the patch we get these edges cancelled and threaded:
>
> Cancelling jump thread: (28, 32) incoming edge; (32, 36) joiner; (36, 61) normal;
> Cancelling jump thread: (31, 7) incoming edge; (7, 92) joiner; (92, 8) normal;
> Cancelling jump thread: (63, 39) incoming edge; (39, 89) joiner; (89, 40) normal;
> Cancelling jump thread: (67, 68) incoming edge; (68, 69) joiner; (69, 93) normal;
> Cancelling jump thread: (4, 32) incoming edge; (32, 36) joiner; (36, 64) normal;
> Cancelling jump thread: (4, 29) incoming edge; (29, 91) joiner; (91, 30) normal; (30, 31) nocopy;
> Cancelling jump thread: (32, 36) incoming edge; (36, 64) joiner; (64, 68) normal;
> Cancelling jump thread: (36, 61) incoming edge; (61, 88) joiner; (88, 62) normal; (62, 63) nocopy;
> Cancelling jump thread: (64, 68) incoming edge; (68, 69) joiner; (69, 93) normal;
> Threaded jump 30 --> 28 to 299
> Threaded jump 91 --> 28 to 300
> Threaded jump 35 --> 36 to 302
> Threaded jump 88 --> 60 to 303
> Threaded jump 62 --> 60 to 304
>
> Note the extra thread of 32 --> 36 to 304 with this patch applied.
>
> I think we either want to defer the unordered_remove until we're done
> processing all the vector elements, or make sure to look at element 'i'
> again after we've moved something new into it.
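Either variant could look roughly like the sketch below. Again, this is
only an illustration against the same std::vector stand-in for
unordered_remove as above, not the actual tree-ssa-threadupdate.c code,
and the function names are invented:

  #include <vector>

  /* Same stand-in for vec::unordered_remove as in the earlier sketch.  */
  static void
  unordered_remove (std::vector<int> &v, size_t ix)
  {
    v[ix] = v.back ();
    v.pop_back ();
  }

  /* Variant 1: don't advance the index after a removal, so the element
     that was swapped into slot I is examined on the next iteration.  */
  static void
  cancel_revisiting_slot (std::vector<int> &paths)
  {
    for (size_t i = 0; i < paths.size ();)
      {
        if (paths[i] & 1)               /* "Needs cancelling" stand-in.  */
          unordered_remove (paths, i);  /* Stay at I; re-check the new occupant.  */
        else
          i++;
      }
  }

  /* Variant 2: record the doomed slots during the scan and remove them
     only after it finishes, highest index first, so the scan itself never
     sees a reordered vector.  */
  static void
  cancel_deferred (std::vector<int> &paths)
  {
    std::vector<size_t> doomed;
    for (size_t i = 0; i < paths.size (); i++)
      if (paths[i] & 1)
        doomed.push_back (i);

    while (!doomed.empty ())
      {
        unordered_remove (paths, doomed.back ());
        doomed.pop_back ();
      }
  }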
>
> A testcase would need to expose at least two threads which we later want
> to cancel, one of which ends up at the end of the vector of threading
> opportunities. We should then see only the first of those threads
> actually get cancelled, and the second skipped over. Reproducing these
> conditions is quite tough, which has stopped me from finding a useful
> example for the list; I'll be sure to follow up on this thread if I do
> get to one.
Yes, there's something wrong with this patch; see PR66372.
I suspect the problem you describe above is exactly what's going on in that PR.
Marek