This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.


[Bug c++/66962] [concepts] overloaded function causing memory blow-up and ICE


https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66962

--- Comment #14 from Andrew Sutton <andrew.n.sutton at gmail dot com> ---
Created attachment 36054
  --> https://gcc.gnu.org/bugzilla/attachment.cgi?id=36054&action=edit
Optimize constraint decomposition by contraction

The problem isn't strictly related to decomposition -- it's the normalization
prior to decomposition that's causing the problems. Whenever a low-level
concept contains a disjunction, that OR can be instantiated into the
normalized constraints many, many times; I think I measured 20 occurrences in
the reduced example. I think there's actually worst-case 2^N growth in the
space required to solve the problem, where N is the number of disjunctions in
the normalized constraint.
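
To make the shape of the problem concrete, here is a minimal sketch -- the
concepts are hypothetical and written in C++20 syntax rather than the Concepts
TS syntax this bug uses -- of a disjunction buried in a low-level concept that
gets re-instantiated at every use once the constraints are normalized:

#include <type_traits>

// Hypothetical atomic predicates standing in for real requirements.
template<typename T> concept P = std::is_copy_constructible_v<T>;
template<typename T> concept Q = std::is_move_constructible_v<T>;
template<typename T> concept R = std::is_destructible_v<T>;

// The low-level concept containing the disjunction.
template<typename T> concept Low = P<T> || Q<T>;

// Normalization inlines Low<T> at every use, so each of these pulls
// another copy of P<T> || Q<T> into the normalized constraint.
template<typename T> concept Mid1 = Low<T> && R<T>;
template<typename T> concept Mid2 = Low<T> && P<T>;
template<typename T> concept High = Mid1<T> && Mid2<T> && Low<T>;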

Currently, we might accumulate a term list during decomposition like so:

P || Q, P || Q, P, R, P || Q

And then we recursively decompose each term in that list. Each || turns into a
new branch, so the three disjunctions here eventually generate 8 different
term lists, none of which is semantically different from any other; they're
just permutations of the same terms.
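
A small, self-contained model of that branching (nothing here is GCC
internals; the types and names are made up for illustration, but each
disjunction doubles the number of leaves, which is the 2^N growth above):

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

struct Term {
    std::string name;             // e.g. "P", "R", or "P||Q"
    bool is_disjunction = false;
    std::string lhs, rhs;         // operands, used only for disjunctions
};

// Naive decomposition: the first disjunction splits the list into two
// branches, one per operand, and each branch is decomposed recursively.
void decompose(std::vector<Term> terms, std::vector<std::vector<Term>>& leaves)
{
    for (std::size_t i = 0; i < terms.size(); ++i) {
        if (terms[i].is_disjunction) {
            std::vector<Term> left = terms, right = terms;
            left[i] = Term{terms[i].lhs};
            right[i] = Term{terms[i].rhs};
            decompose(left, leaves);
            decompose(right, leaves);
            return;
        }
    }
    leaves.push_back(terms);      // no disjunctions left: one final list
}

int main()
{
    Term p_or_q{"P||Q", true, "P", "Q"};
    // The term list from above: P||Q, P||Q, P, R, P||Q.
    std::vector<Term> input{p_or_q, p_or_q, {"P"}, {"R"}, p_or_q};
    std::vector<std::vector<Term>> leaves;
    decompose(input, leaves);
    std::cout << leaves.size() << " term lists\n";   // prints "8 term lists"
}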

The patch helps reduce the space by refusing to add duplicate expressions to a
term list during decomposition. So instead of generating the term list above,
we just have:

P || Q, P, R

Only one branch is required, instead of three.
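
The idea, sketched very roughly (hypothetical helper names, not the actual
GCC code in the attachment):

#include <algorithm>
#include <string>
#include <vector>

// Stand-in for the compiler's check that two constraint expressions are
// the same; strings are used only to keep the sketch self-contained.
bool same_expression(const std::string& a, const std::string& b)
{
    return a == b;
}

// Only append a term if an equivalent expression isn't already present,
// so "P||Q, P||Q, P, R, P||Q" contracts to "P||Q, P, R".
void add_term(std::vector<std::string>& terms, const std::string& t)
{
    auto dup = std::find_if(terms.begin(), terms.end(),
                            [&](const std::string& u) {
                                return same_expression(t, u);
                            });
    if (dup == terms.end())
        terms.push_back(t);
}

Note that the duplicate check here is a linear scan over the term list.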

There is still complexity induced by the use of disjunctions; this just
manages it a little better. It could be improved further by turning the term
list into a term set, by way of hashing or ordering, so that the duplicate
check isn't a linear scan.
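
For example (again just a sketch with made-up names), pairing the term list
with a hashed set of the expressions it already contains would make the
duplicate check constant time on average:

#include <string>
#include <unordered_set>
#include <vector>

struct TermList {
    std::vector<std::string> terms;          // keeps decomposition order
    std::unordered_set<std::string> seen;    // fast membership test

    // insert() returns {iterator, bool}; the bool is true only when the
    // expression wasn't already in the set.
    void add(const std::string& t)
    {
        if (seen.insert(t).second)
            terms.push_back(t);
    }
};

In the compiler the key would have to be a structural hash (or a canonical
ordering) of the constraint expression rather than a string, but the effect
is the same.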

