Why are RB-trees used to implement std::set?
Mon May 11 11:41:00 GMT 2015
I'm curious as to why libstdc++ uses an RB-tree to implement
std::set (details here
and here: https://github.com/gcc-mirror/gcc/blob/master/libstdc++-v3/include/bits/stl_tree.h),
when there are faster alternatives.
I'm mainly curious because, of all the people I've asked, not one
has given an answer in favor of RB-trees other than "they're
already popular" or "they're easy to implement".
If you'd like more details, here's a link to my question on
, which nobody has answered yet either.
TL;DR: Using a variant of a B-tree (or an (a,b)-tree) should be faster, and
everyone I've asked seems to suggest so, which makes me wonder what the
reason for picking RB-trees was. This is not a question about their
theoretical complexity, but about real-world behavior on real hardware with
respect to CPU caches, etc.
More information about the Gcc-help mailing list