Re: RFC: Memory SSA. A redesign of the virtual operand scheme.
- From: Diego Novillo <dnovillo at redhat dot com>
- To: Andrew Pinski <pinskia at physics dot uc dot edu>
- Cc: gcc at gcc dot gnu dot org
- Date: Wed, 08 Mar 2006 15:41:31 -0500
- Subject: Re: RFC: Memory SSA. A redesign of the virtual operand scheme.
- References: <200603082005.k28K56sk026414@earth.phy.uc.edu>
On 03/08/06 15:05, Andrew Pinski wrote:
> Is there a reason not to tune the aliasing analysis to return more liberal
> results instead of changing the representation?
Yes, as I tried to explain on IRC, alias analysis is an undecidable
problem. It is impossible in general to compute an exact points-to set
for a given pointer. Even if you reduce the may-alias sets to a
minimum, you can still end up with tens or hundreds of elements in the
sets for some pointers.
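A minimal sketch of why exactness is out of reach (hypothetical code, not
from any test case):

    int a, b;

    void
    f (int cond)
    {
      int *p = cond ? &a : &b;  /* Points-to set of p is { a, b }.  */
      *p = 1;                   /* May store into a or into b; the choice
                                   depends on runtime data, so a static
                                   analysis must keep both in the set.  */
    }

In real code the conditions feeding the pointer are rarely this obvious,
and the sets grow accordingly.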
Even if your alias sets are small, the current representation does not
scale well when the code has many pointer dereferences involving the
alias sets. Suppose that you have a perfectly small may-alias set with
3 elements and 50,000 IL instructions that make a load/store through the
pointer. This will generate 150,000 virtual operands. 66% of those are
redundant: a single operand per statement would convey the same
information if the whole set were represented by one name.
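To see where the 150,000 comes from, consider one such store (a sketch
only; the dump syntax below is approximate, from memory):

    /* Assume the may-alias set of p is { a, b, c }.  */
    *p = x;

    /* Today, every store through p carries one virtual definition
       per member of the alias set, roughly:

         # a_5 = V_MAY_DEF <a_4>
         # b_7 = V_MAY_DEF <b_6>
         # c_9 = V_MAY_DEF <c_8>
         *p = x;

       Multiply 3 operands by 50,000 statements and you get the
       150,000 figure above.  */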
Currently, we try to avoid this explosion with a grouping heuristic that
inverts the relation between alias tags and aliased symbols (that's when
you see SMTs appearing in virtual operands).
That reduces the number of virtual operands significantly, but causes
even more loss of aliasing precision (distinct members of the alias set
start affecting each other).
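Schematically, grouping collapses the three operands above into one
(again, names and syntax approximate):

    /* After grouping, a, b and c are represented by a single
       symbol memory tag, say SMT.4:

         # SMT.4_5 = V_MAY_DEF <SMT.4_4>
         *p = x;

       Every load/store of any member now uses or defines SMT.4,
       so accesses to distinct members appear to conflict.  */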
As an experiment, try disabling the grouping and the .GLOBAL_VAR
heuristic and see the results.
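If memory serves, both heuristics can be effectively disabled by raising
their --param limits, along these lines (parameter names from memory;
check the --param list for your tree before relying on them):

    gcc -O2 --param max-aliased-vops=9999999 \
            --param global-var-threshold=9999999 foo.c

On pointer-heavy input you should see the virtual operand counts and
memory usage climb.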
> I can look at the other testcases you mention and find more cases where
> aliasing analysis is very conservative and file the bugs for them. This
> seems like a better plan than just changing the representation to work
> around the fact that the may-alias sets are big.
Sure, this is helpful. But don't confuse an optimization problem with
the representation problem. The current representation does not scale
well when the may-alias sets are large or when there are many pointer
dereferences in the code. We cannot analyze the problem away (that's
the undecidable part), so we need to fix the representation to avoid
the scaling issues.
The proposal does *nothing* to alleviate our precision problems. It
does two main things: (a) Provides a mechanism for plugging in
precision-enhancing analyses, (b) Provides better scaling for
pointer-heavy code and/or imprecise aliasing results. I also have other
goals in mind, like helping passes that introduce new pointers (ivopts
or the vectorizer, for instance), which I think are easier to implement
with this design.
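To illustrate the scaling point with the same store as before (a sketch
only; syntax illustrative):

    /* With memory factored into a single name, the store defines
       one variable no matter how big p's alias set is:

         # .MEM_5 = VDEF <.MEM_4>
         *p = x;

       Precision then comes from whatever analysis partitions .MEM,
       which is exactly the pluggable part in (a).  */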
Adding more precise analyses is great, but they come with a cost. As we
make them more sophisticated, we may even reach a point where we turn
some of them off at lower optimization levels. Even then, the loss of
precision should not translate into increased memory usage because of an
explosion of may-aliases.
I strongly encourage you to read up on alias analysis. Since you keep
coming back to the same confused point, I guess that I have not been
able to explain sufficiently why the two things are orthogonal. The
references I included in the text should be a good starting point.