[Bug middle-end/34400] [4.3 regression] bad interaction between DF and SJLJ exceptions

zadeck at naturalbridge dot com gcc-bugzilla@gcc.gnu.org
Sun Dec 16 13:56:00 GMT 2007



------- Comment #29 from zadeck at naturalbridge dot com  2007-12-16 13:56 -------
Subject: Re: [4.3 regression] bad interaction between DF and SJLJ exceptions

steven at gcc dot gnu dot org wrote:
> ------- Comment #28 from steven at gcc dot gnu dot org  2007-12-16 12:01 -------
> Created an attachment (id=14778)
>  --> (http://gcc.gnu.org/bugzilla/attachment.cgi?id=14778&action=view)
> Change worklist solver to double queue algorithm
>
> I re-read Cooper, Harvey and Kennedy, who wrote a nice paper about various work
> list based dataflow solvers.
>
> The attached patch modifies df_worklist_dataflow to a double queue algorithm,
> while retaining the property that basic blocks are added to the queue in RPO
> order.  This approach gives me a speedup comparable to (but slightly less than)
> the hybrid search algorithm.  This is not really surprising because the double
> queue work list algorithm also completes full iterations over the CFG before
> considering the second queue.
>
> Seongbae, what are your ideas about the double queue approach (or maybe other
> algorithms suggested by Harvey)?
>
>
>   
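To make the scheduling discipline being discussed concrete, here is a
minimal, self-contained sketch of a double-queue worklist solver run on
a toy forward union problem.  This is not the attached patch and it
does not touch the df_worklist_dataflow interfaces; the hard-coded CFG,
the GEN sets and solve_block are all invented for illustration.  What
it shows is only the property described above: the current queue is
drained in reverse postorder, and blocks that become dirty during the
sweep wait on the pending queue until the next full sweep.

#include <stdbool.h>
#include <stdio.h>

#define N_BLOCKS 4

/* Toy CFG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3, and a back edge 3 -> 1.  */
static const int n_succs[N_BLOCKS] = { 2, 1, 1, 1 };
static const int succs[N_BLOCKS][2] = { { 1, 2 }, { 3 }, { 3 }, { 1 } };
static const int n_preds[N_BLOCKS] = { 0, 2, 1, 2 };
static const int preds[N_BLOCKS][2] = { { 0 }, { 0, 3 }, { 0 }, { 1, 2 } };

/* One valid reverse postorder of that CFG.  */
static const int rpo[N_BLOCKS] = { 0, 2, 1, 3 };

/* Toy forward problem: OUT[b] = GEN[b] | union of OUT over preds.  */
static const unsigned gen[N_BLOCKS] = { 0x1, 0x2, 0x4, 0x8 };
static unsigned out[N_BLOCKS];

/* Recompute OUT[BB]; return true if it changed.  */
static bool
solve_block (int bb)
{
  unsigned in = 0;
  for (int i = 0; i < n_preds[bb]; i++)
    in |= out[preds[bb][i]];
  unsigned new_out = gen[bb] | in;
  if (new_out == out[bb])
    return false;
  out[bb] = new_out;
  return true;
}

/* Double-queue scheduler: drain CURRENT in RPO; blocks dirtied in the
   meantime sit on PENDING until the next full sweep.  */
static void
double_queue_solve (void)
{
  bool current[N_BLOCKS], pending[N_BLOCKS];
  for (int i = 0; i < N_BLOCKS; i++)
    pending[i] = true;

  for (;;)
    {
      /* Swap: the pending blocks become the next RPO sweep.  */
      bool any = false;
      for (int i = 0; i < N_BLOCKS; i++)
        {
          current[i] = pending[i];
          pending[i] = false;
          any |= current[i];
        }
      if (!any)
        break;

      /* One full sweep over the dirty blocks in reverse postorder.  */
      for (int i = 0; i < N_BLOCKS; i++)
        {
          int bb = rpo[i];
          if (!current[bb])
            continue;
          current[bb] = false;
          if (solve_block (bb))
            for (int s = 0; s < n_succs[bb]; s++)
              {
                int succ = succs[bb][s];
                /* A successor still ahead of us in this sweep will see
                   the new value anyway; anything else waits for the
                   next sweep.  */
                if (!current[succ])
                  pending[succ] = true;
              }
        }
    }
}

int
main (void)
{
  double_queue_solve ();
  for (int bb = 0; bb < N_BLOCKS; bb++)
    printf ("OUT[%d] = 0x%x\n", bb, out[bb]);
  return 0;
}

The only knob of interest here is where a dirtied block is queued;
everything else is the same as a single-worklist solver.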
The bottom line here is that all of these are heuristics.  There are no
tight time bounds here.  Each of these will have good cases and each
will have bad cases.  So the only way anyone can talk about "best" is
in terms of the set of graphs used for the measurements.

Seongbae chose a technique that tends to work very well when the
programs are "normal", at the expense of being quite bad on fully
connected programs.  The class of techniques used in hybrid search, and
the one you are looking at, tends to do better on bad graphs but will
be a little slower on the "normal" cases.

If you can come up with some fast heuristic test to distinguish the
different cases, you can select between them.  Of course you only have
to do the test once per function, because the chances that optimization
will change its outcome are vanishingly small.  This gives you a little
more room in terms of implementation cost.

I think that I would vote for the "any bad back edges" plan to
distinguish the two, because I think that Seongbae's technique was
designed to work well on that class of graphs.  But Seongbae may have
other thoughts, especially if his technique requires only shallow
maximum nesting to really shine.
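
For what it is worth, here is a sketch of how the once-per-function
selection could be wired up.  Everything in it is hypothetical: the
toy_cfg structure, the names, and in particular the reading of a "bad"
back edge, which is approximated here as any edge that goes backwards
in the reverse-postorder numbering (a real test would want to filter
out ordinary natural-loop back edges).  The only point illustrated is
that the test is cheap, is computed the first time it is needed, and
its cached answer then steers every dataflow problem on that function
to one solver or the other.

#include <stdio.h>

enum solver { SOLVER_DEFAULT, SOLVER_DOUBLE_QUEUE };

struct toy_cfg
{
  int n_blocks;
  int n_edges;
  const int (*edges)[2];    /* edges[i][0] -> edges[i][1] */
  const int *rpo_number;    /* rpo_number[bb] = position of bb in RPO */
  int cached_solver;        /* -1 until decided, then an enum solver */
};

/* Decide once per function which worklist scheme to use; the answer is
   cached on the CFG so later dataflow problems reuse it.  */
static enum solver
choose_solver (struct toy_cfg *cfg)
{
  if (cfg->cached_solver >= 0)
    return (enum solver) cfg->cached_solver;

  /* Count edges that go backwards in the RPO numbering.  A real test
     would want to single out the "bad" back edges, e.g. those that do
     not simply close a natural loop.  */
  int retreating = 0;
  for (int i = 0; i < cfg->n_edges; i++)
    {
      int src = cfg->edges[i][0], dst = cfg->edges[i][1];
      if (cfg->rpo_number[dst] <= cfg->rpo_number[src])
        retreating++;
    }

  /* The threshold is arbitrary; densely back-connected graphs get the
     double-queue solver, everything else the default one.  */
  enum solver s = (retreating * 8 > cfg->n_blocks
                   ? SOLVER_DOUBLE_QUEUE : SOLVER_DEFAULT);
  cfg->cached_solver = s;
  return s;
}

int
main (void)
{
  /* 0 -> 1 -> 2 -> 3 with a single back edge 3 -> 1.  */
  static const int edges[][2] = { { 0, 1 }, { 1, 2 }, { 2, 3 }, { 3, 1 } };
  static const int rpo_number[] = { 0, 1, 2, 3 };
  struct toy_cfg cfg = { 4, 4, edges, rpo_number, -1 };

  printf ("chosen solver: %d\n", (int) choose_solver (&cfg));
  return 0;
}

Whether the right predicate is "any bad back edge at all", as suggested
above, or some density measure is exactly the open question; the
caching structure would be the same either way.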

Kenny


-- 


http://gcc.gnu.org/bugzilla/show_bug.cgi?id=34400


