I noticed this when looking at the compile time / memory usage of PR 8361, and I noticed that OVERLOAD is
currently a linked list of which we only use about two fields per node, which seems like a waste of a full
tree. I started to change it to be a VEC but ran into problems, as it looks like we can hold a pointer into
the middle of the linked list, but I did not check for sure.
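For readers who want to see the shape of the structure being discussed, here is a minimal sketch of how an overload set is walked in the C++ front end, using the OVL_CURRENT/OVL_NEXT accessors from cp/cp-tree.h (the helper name ovl_contains_p is made up for illustration). Note that only the function and chain fields of each node are touched; the rest of the tree node is dead weight:

/* Sketch: walk an overload set looking for a particular decl.  Only
   OVL_CURRENT (the decl at this node) and OVL_NEXT (the rest of the
   chain) are needed; the type, flags, etc. go unused.  */
static bool
ovl_contains_p (tree ovl, tree fn)
{
  for (; ovl; ovl = OVL_NEXT (ovl))
    if (OVL_CURRENT (ovl) == fn)
      return true;
  return false;
}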
I added some statistics to the code that chains onto the OVL and got the following interesting result for PR 12850:
average OVL length: 57.498998
So we have an average length of 57, which is long and shows that we are using too much memory
because of this.
PR 8361 also has similar interesting results:
average OVL length: 44.709989
But I think my counting may be wrong, as we can have a DECL at the end of the chain.
Though fixing that still gives the same number.
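For reference, a sketch of the kind of instrumentation that could produce the averages above; the counter names and the hook point are hypothetical, not actual GCC code:

/* Hypothetical instrumentation: call this each time a decl is chained
   onto an overload set; divide at the end of compilation.  */
static unsigned long ovl_chain_events;
static unsigned long ovl_total_length;

static void
record_ovl_length (tree ovl)
{
  unsigned long len = 0;
  /* Stop before a trailing bare DECL so it is not miscounted,
     per the correction above.  */
  for (; ovl && TREE_CODE (ovl) == OVERLOAD; ovl = OVL_CHAIN (ovl))
    len++;
  ovl_chain_events++;
  ovl_total_length += len;
}

/* average OVL length = (double) ovl_total_length / ovl_chain_events */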
Subject: Re: New: OVERLOAD should not be a linked list of trees
"pinskia at gcc dot gnu dot org" <firstname.lastname@example.org> writes:
| I noticed this when looking at the compile time / memory usage of
| PR 8361, and I noticed that OVERLOAD is currently a linked list of
| which we only use about two fields per node, which seems like a
| waste of a full tree. I started to change it to be a VEC but ran
| into problems, as it looks like we can hold a pointer into the
| middle of the linked list, but I did not check for sure.
A VEC is good for collections that do not expand often -- like a template
parameter list after parsing, or the member functions after a class definition.
Consequently, a vector is not a good fit for representing overload
sets in C++ -- they are dynamic entities that grow as the translation
unit is being compiled.
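To illustrate why growth is the problem, here is a generic growable-array sketch in plain C (deliberately not GCC's VEC API): every push may reallocate and move the whole buffer, which is exactly what code holding pointers into the middle of an overload chain cannot tolerate.

#include <stdlib.h>

struct vec { void **data; size_t len, cap; };

/* Append ELT, doubling the buffer when full; returns -1 on failure.
   Note that realloc may move the buffer, invalidating any interior
   pointers held elsewhere.  */
static int
vec_push (struct vec *v, void *elt)
{
  if (v->len == v->cap)
    {
      size_t ncap = v->cap ? 2 * v->cap : 8;
      void **nd = realloc (v->data, ncap * sizeof *nd);
      if (!nd)
        return -1;
      v->data = nd;
      v->cap = ncap;
    }
  v->data[v->len++] = elt;
  return 0;
}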
Getting rid of TREE_LIST does not necessarily mean mechanical
replacement with a VEC.
A plain singly linked list should be sufficient.
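As a concrete sketch of that alternative, a front-end specific node could be as small as two words (the names below are hypothetical, not existing GCC types):

/* Hypothetical minimal overload-set node: a plain singly linked list
   carrying nothing but the decl and the link -- 16 bytes on an LP64
   host.  */
struct ovl_entry
{
  tree fn;                 /* the overloaded FUNCTION_DECL */
  struct ovl_entry *next;  /* next overload, or NULL */
};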
I would imagine that in the real world, there is either a rather small number
of overloads of a name (fewer than five) or very many (more than 20 or 30).
Most code I've seen doesn't use many overloads (falling into the first class),
but there are a few cases in libstdc++, especially in the streams and strings
libraries, that use very many overloads. I don't know if knowledge of such
overload-set size statistics helps any in finding an appropriate data structure.
If you look at both PR 8361 and PR 12850, they both average more than 40 OVERLOADs. Those are both
real code, so I don't know why people think this concern is stupid. Also, a linked list whose nodes still
carry extra unused fields is wasteful: an OVERLOAD needs at most a pointer and a boolean, and the type is
always the same, so even the type field is wasted.
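To make the "a pointer and a boolean" claim concrete, here is a hypothetical packed representation; since tree nodes are at least 4-byte aligned, the flag (something like OVL_USED) can live in the low bit of the pointer. None of these names exist in GCC:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical two-word overload node with the boolean packed into
   the low bit of the decl pointer.  */
struct packed_ovl
{
  uintptr_t fn_and_flag;    /* FUNCTION_DECL pointer | flag bit */
  struct packed_ovl *next;  /* next overload, or NULL */
};

static inline tree
packed_ovl_fn (const struct packed_ovl *o)
{
  return (tree) (o->fn_and_flag & ~(uintptr_t) 1);
}

static inline bool
packed_ovl_flag (const struct packed_ovl *o)
{
  return (o->fn_and_flag & 1) != 0;
}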
Can you measure how much memory all the overload nodes take in the big
testcases? Theoretically, an OVERLOAD could take 8 bytes or so (on 32-bit
systems). So we currently waste more than 100 bytes per node, but if there
are not that many of them it is not important.
PR 12850 has the numbers.
Trees were refactored, and we currently have:
struct tree_base {
  ENUM_BITFIELD (tree_code) code : 16;
  /* 48 bits of various bitfield flags */
  union tree_ann_d *ann;
}; /* So on a 64-bit host this is 128 bits = 16 bytes */

struct tree_common {
  struct tree_base base; /* 16 bytes */
  tree chain;            /* 8 bytes */
  tree type;             /* 8 bytes */
}; /* So on a 64-bit host, tree_common is 32 bytes */

struct tree_overload {
  struct tree_common common; /* 32 bytes */
  tree function;             /* 8 bytes */
};
So in total, 40 bytes per tree_overload. If a singly linked list really is sufficient, then the ideal tree_overload would be 16 bytes (two pointers). In reality, there are a few flags used on tree_overload, so two pointers is not enough; let's say we use 4 bytes for those. Then we have an overhead of ((40 - (16+4)) / (16+4)) * 100% = 100%.
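A trivial stand-alone check of that arithmetic, assuming the LP64 sizes shown above:

#include <stdio.h>

int
main (void)
{
  const double actual = 40.0;       /* sizeof (struct tree_overload) */
  const double ideal = 16.0 + 4.0;  /* two pointers + ~4 bytes of flags */
  /* Prints "overhead: 100%".  */
  printf ("overhead: %.0f%%\n", 100.0 * (actual - ideal) / ideal);
  return 0;
}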
In any normal engineering project, 100% overhead would be a reason for distress.
However, as long as the G++ front end is comfortable using trees instead of front-end specific structures in the parser, we can't do any better at this point. The overhead is much smaller than it was in the past: OVERLOADs were about 3% of all memory before the tree refactoring, so it will be much less now. And as Gaby already said: if there are not that many tree_overload objects, then it is not very important to squeeze out every last bit of overhead.
Therefore, closing as FIXED by Dan's refactoring.